Why they rebel peacefully: On the violence-reducing effects of a positive attitude towards democracy
Under the impression of Europe’s drift into Nazism and Stalinism in the first half of the 20th century, social psychological research has focused strongly on the dangers inherent in people’s attachment to a political system. The dissertation at hand contributes to a more differentiated perspective by examining violence-reducing aspects of political system attachment in four consecutive steps: First, it highlights attachment to a social group as a resource for violence prevention at the intergroup level. The results suggest that group attachment fosters self-control, a well-known protective factor against violence. Second, it demonstrates violence-reducing influences of attachment at the societal level. The findings indicate that attachment to a democracy facilitates peaceful and prevents violent protest tendencies. Third, it introduces the concept of political loyalty, defined as a positive attitude towards democracy, in order to clarify the different approaches to political system attachment. A set of three studies shows the reliability and validity of a newly developed political loyalty questionnaire that distinguishes between affective and cognitive aspects. Finally, the dissertation differentiates the earlier findings on protest tendencies using the concept of political loyalty. A set of two experiments shows that affective rather than cognitive aspects of political loyalty instigate peaceful protest tendencies and prevent violent ones. Implications of this dissertation for political engagement and peacebuilding, as well as avenues for future research, are discussed.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveals the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
The visualization of relational data is at the heart of information visualization. The prevalence of visual representations for this kind of data is based on many real-world examples spread over many application domains: protein-protein interaction networks in the field of bioinformatics, hyperlinked documents in the World Wide Web, call graphs in software systems, or co-author networks are just four instances of a rich source of relational datasets. The most common visual metaphor for this kind of data is the node-link approach, which typically suffers from visual clutter caused by many edge crossings. Many sophisticated algorithms have been developed to lay out a graph efficiently and with respect to a list of aesthetic graph drawing criteria. Relations between objects normally change over time, and visualizing these dynamics poses an additional challenge for graph visualization researchers. Applying the same layout algorithms for static graphs to intermediate states of dynamic graphs is one strategy to compute layouts for an animated graph sequence that shows the dynamics. The major drawback of this approach is the high cognitive effort required of a viewer of the animation to preserve the mental map. To tackle this problem, a sophisticated layout algorithm has to inspect the whole graph sequence and compute a layout with as few changes as possible between subsequent graphs. The main contribution and ultimate goal of this thesis is the visualization of dynamic compound weighted directed multigraphs as a static image that aims at visual clutter reduction and at mental map preservation. To achieve this goal, we use a radial space-filling visual metaphor to represent the dynamics in relational data. As a side effect, the obtained pictures are aesthetically appealing. In this thesis we first describe static graph visualizations for rule sets obtained by extracting knowledge from software archives under version control.
In a different work we apply animated node-link diagrams to code-developer relationships to show the dynamics in software systems. An underestimated visualization paradigm is the radial representation of data. Though this paradigm has a history reaching back to centuries-old statistical graphics, little effort has been made to fully explore its benefits. We evaluated a Cartesian and a radial variant of a visualization technique for visually encoding transaction sequences and dynamic compound digraphs with both an eye-tracking and an online study. We found some interesting phenomena, beyond the fact that even laypeople in graph theory can understand the novel approach in a short time and apply it to datasets. The thesis concludes with an aesthetic dimensions framework for dynamic graph drawing, future work, and currently open issues.
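The radial metaphor described above can be made concrete with a toy layout that places nodes on concentric rings by their graph-theoretic distance from a root. This is a minimal sketch for intuition only, not the thesis's space-filling algorithm; the adjacency-list input format and evenly spaced ring angles are assumptions of the sketch.

```python
import math
from collections import deque

def radial_layout(adj, root):
    """Toy radial layout: ring index = BFS depth from `root`,
    nodes on each ring spread at even angles."""
    depth = {root: 0}
    order = [root]
    q = deque([root])
    while q:  # breadth-first search assigns each node its ring
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                order.append(v)
                q.append(v)
    rings = {}
    for v in order:
        rings.setdefault(depth[v], []).append(v)
    pos = {}
    for d, nodes in rings.items():
        for i, v in enumerate(nodes):
            angle = 2 * math.pi * i / len(nodes)
            pos[v] = (d * math.cos(angle), d * math.sin(angle))
    return pos
```

A real dynamic-graph variant would additionally keep each node's angle stable across time steps to preserve the viewer's mental map.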
While humans find it easy to process visual information from the real world, machines struggle with this task due to the unstructured and complex nature of the information. Computer vision (CV) is the branch of artificial intelligence that attempts to automatically analyze, interpret, and extract such information. Recent CV approaches mainly use deep learning (DL) due to its very high accuracy. DL extracts useful features from unstructured images in a training dataset in order to use them for specific real-world tasks. However, DL requires a large number of parameters, considerable computational power, and meaningful training data, which can be noisy, sparse, and incomplete for specific domains. Furthermore, DL tends to learn correlations from the training data that do not occur in reality, making deep neural networks (DNNs) poorly generalizable and error-prone.
Therefore, the field of visual transfer learning is seeking methods that are less dependent on training data and thus more applicable in a constantly changing world. One idea is to enrich DL with prior knowledge. Knowledge graphs (KGs) serve as a powerful tool for this purpose because they can formalize and organize prior knowledge based on an underlying ontological schema. They support symbolic operations such as logic, rules, and reasoning, and can be created, adapted, and interpreted by domain experts. Due to the abstraction potential of symbols, KGs provide good prerequisites for generalizing their knowledge. To take advantage of the generalization properties of KGs and the ability of DL to learn from large-scale unstructured data, attempts have long been made to combine explicit graph representations and implicit vector representations. With the recent development of knowledge graph embedding methods, where a graph is transferred into a vector space, new perspectives for a combination in vector space are opening up.
In this work, we attempt to combine prior knowledge from a KG with DL to improve visual transfer learning using the following steps: First, we explore the potential benefits of using prior knowledge encoded in a KG for DL-based visual transfer learning. Second, we investigate approaches that already combine KG and DL and create a categorization based on their general idea of knowledge integration. Third, we propose a novel method for the specific category of using the knowledge graph as a trainer, where a DNN is trained to adapt to a representation given by prior knowledge of a KG. Fourth, we extend the proposed method by extracting relevant context in the form of a subgraph of the KG to investigate the relationship between prior knowledge and performance on a specific CV task. In summary, this work provides deep insights into the combination of KG and DL, with the goal of making DL approaches more generalizable, more efficient, and more interpretable through prior knowledge.
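The "knowledge graph as a trainer" category can be sketched in a few lines: instead of one-hot labels, a model is trained to map inputs onto fixed class prototypes taken from a KG embedding space, and classification becomes nearest-prototype lookup. Everything below is synthetic; the random prototypes stand in for real KG embeddings (e.g. produced by a method like TransE), the "image features" are random clusters, and the linear projection is a stand-in for a DNN.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, emb_dim, feat_dim, n_per_class = 4, 8, 16, 50
prototypes = rng.normal(size=(n_classes, emb_dim))   # stand-in KG embeddings per class
centers = rng.normal(size=(n_classes, feat_dim))     # one feature cluster per class
X = np.vstack([centers[c] + 0.1 * rng.normal(size=(n_per_class, feat_dim))
               for c in range(n_classes)])           # synthetic "image features"
y = np.repeat(np.arange(n_classes), n_per_class)

# Gradient descent on a linear projection W: features -> embedding space,
# minimizing the mean squared distance to each sample's class prototype.
# The targets are fixed by the (hypothetical) KG, not learned.
W = np.zeros((emb_dim, feat_dim))
for _ in range(300):
    err = X @ W.T - prototypes[y]
    W -= 0.05 * (err.T @ X) / len(X)

# Classify by nearest prototype in the embedding space.
emb = X @ W.T
pred = np.argmin(((emb[:, None, :] - prototypes[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Because the target space is shared with the KG, unseen classes with known KG embeddings could in principle be classified the same way, which is the appeal of this family of methods for transfer learning.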
In order to investigate the psychobiological consequences of acute stress under laboratory conditions, a wide range of methods for socially evaluative stress induction has been developed. The present dissertation evaluates a virtual reality (VR)-based adaptation of one of the most widely used of these methods, the Trier Social Stress Test (TSST). In the three empirical studies collected in this dissertation, we aimed to examine the efficacy and possible areas of application of this well-established psychosocial stressor in a virtual environment. We found that the TSST-VR reliably elicits activation of the major stress effector systems in the human body, albeit in a slightly less pronounced way than the original paradigm. Moreover, the experience of presence is discussed as one potential factor of influence in the origin of the psychophysiological stress response. Lastly, we present a usage scenario for the TSST-VR in which we employed the method to investigate the effects of acute stress on emotion recognition performance. We conclude that, due to its advantages concerning versatility, standardization, and economy of administration, the paradigm harbors enormous potential not only for psychobiological research but also for other applications such as clinical practice. Future studies should further explore the underlying effect mechanisms of stress in the virtual realm and the implementation of VR-based paradigms in different fields of application.
External capital plays an important role in financing entrepreneurial ventures, due to limited internal capital sources. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose which of these companies to invest in. VCs not only finance companies at their early stages but also entrepreneurial companies at their later stages, when the companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. It uses both qualitative and quantitative research methods to answer how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied at different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as to the different types of information available. The decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures: later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures and that therefore constitutes new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of the decision criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) the value-added of products/services for customers, and (3) the management team's track record, demonstrating differences when compared to decision-making studies in the context of early-stage ventures.
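The "relative importance" reported by a conjoint study is conventionally derived from the part-worth utilities estimated for each attribute level: an attribute's importance is its utility range divided by the sum of all ranges. The sketch below illustrates only that computation; the attribute names mirror the criteria above, but the utility numbers are invented for illustration.

```python
# Hypothetical part-worth utilities per attribute level, as a conjoint
# analysis might estimate them; all numbers are made up.
part_worths = {
    "revenue growth": {"low": -0.9, "medium": 0.1, "high": 0.8},
    "value-added of product": {"low": -0.5, "high": 0.5},
    "team track record": {"weak": -0.3, "strong": 0.3},
}

def relative_importance(pw):
    """Importance of an attribute = its utility range / sum of all ranges."""
    ranges = {a: max(lv.values()) - min(lv.values()) for a, lv in pw.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

importance = relative_importance(part_worths)  # e.g. revenue growth ≈ 0.52
```

The importances sum to one by construction, which is why conjoint results are typically reported as percentages.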
The characteristics of a venture are not the only influence on the decision to invest; indirect factors, such as individual characteristics of the investor or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience affect individual decision-making behavior. This study also examined whether the goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. In particular, it investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present; in the initial screening, they tend to focus more on the profitability of a later-stage venture. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors, and that CVCs favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
The digital progress of recent decades rests to a large extent on the innovative power of young, emerging companies. While these companies are united by their high degree of innovativeness, they simultaneously face a high demand for financial resources in order to put their planned innovation and growth goals into practice. Since these companies can often show few or no assets, revenues, or profitability, raising external capital is frequently difficult or even impossible. Out of this circumstance, the business model of risk financing, so-called "venture capital", emerged in the middle of the twentieth century. Venture capitalists invest in promising young companies, support them in their growth, and sell their shares after a fixed period, ideally at a multiple of their original value. Numerous young companies apply for investments from these venture capitalists, but only a very small number actually receive them. To identify the most promising companies, the investors screen the applications against various criteria, so that numerous companies are already eliminated from the pool of potential investment targets in the first step of the application phase. Prior research discusses which criteria move investors to invest. Building on this, this dissertation pursues the goal of gaining a deeper understanding of which factors influence investors' decision-making. In particular, it examines how personal characteristics of the investors, as well as of the founders, affect the investment decision. These investigations are complemented by an analysis of the effect of founders' digital presence on venture capitalists' decision-making.
As its second goal, this dissertation seeks insight into the effects of a successful investment on the founder. In total, this dissertation comprises four studies, which are described in more detail below.
Chapter 2 examines how certain human capital characteristics of investors affect their decision-making behavior. Based on preliminary interviews and literature reviews, a total of seven criteria were identified that venture capital investors use in their decision-making. Subsequently, 229 investors took part in a conjoint experiment, which showed how important each criterion is in the decision. Of particular interest is how the importance of the criteria differs depending on the investors' human capital characteristics. It can be shown that the importance of the criteria varies with the investors' educational background and experience. For example, investors with a higher educational degree and investors with entrepreneurial experience place significantly more weight on the international scalability of the companies. The importance of the criteria also differs depending on the field of education: investors trained in the natural sciences, for instance, place a much stronger focus on the value-added of the product or service. Moreover, investors with more investment experience rate the experience of the management team as considerably more important than investors with less investment experience. These results enable founders to target their applications for venture capital financing more precisely, for example by analyzing the professional background of potential investors and adapting the application documents accordingly, such as by placing stronger emphasis on particularly relevant criteria.
The study presented in Chapter 3 uses data from the same conjoint experiment as Chapter 2 but focuses on the difference between investors from the USA and investors from continental Europe. To this end, subsamples were created in which 128 experiment participants are located in the USA and 302 in continental Europe. The analysis shows that US investors, compared to investors in continental Europe, place a significantly stronger focus on the revenue growth of the companies, while continental European investors place a much stronger focus on international scalability. To better interpret the results of the analysis, they were subsequently discussed with four American and seven European investors. The European investors confirmed the importance of high international scalability, given the sometimes small size of European countries and the resulting pressure to scale internationally quickly in order to achieve satisfactory growth rates. The comparatively weaker focus on revenue growth in Europe was attributed to a lack of resources for rapid expansion, while the strong focus of US investors on revenue growth was explained by the higher tendency toward IPOs in the USA, where high revenues serve as value drivers. The results of this chapter enable founders to align their applications more closely with the most important criteria of potential investors, thereby increasing the probability of a successful investment decision.
Furthermore, the results of this chapter offer investors who participate in cross-border syndicated investments the opportunity to better understand the preferences of the other investors and to better align investment criteria with potential partners.
Chapter 4 examines whether certain character traits of the so-called Schumpeterian entrepreneur influence the probability of a second venture capital investment. For this purpose, messages posted by founders on Twitter were used, together with information on investment rounds available on the platform Crunchbase. In total, more than two million tweets from 3,313 founders were analyzed with the help of text analysis software. The results of the study suggest that some traits typical of Schumpeterian founders increase the chances of a further investment, while others have no or negative effects. Founders who display strong optimism and their entrepreneurial vision on Twitter increase their chances of a second venture capital financing, whereas an excessive striving for achievement reduces them. These results are of high practical relevance for founders seeking venture capital, who can thereby manage their virtual appearance ("digital identity") more deliberately in order to increase the probability of a further investment.
Finally, Chapter 5 examines how founders' digital identity changes after they have received a successful venture capital investment. For this purpose, both Twitter data and Crunchbase data collected for the study in Chapter 4 were used. Using text analysis and panel data regressions, the tweets of 2,094 founders before and after receipt of the investment were examined. It can be shown that receiving a venture capital investment increases the founders' self-confidence, positive emotions, professionalization, and leadership qualities. At the same time, however, the authenticity of the messages written by the founders decreases. Using interaction effects, it can further be shown that the increase in self-confidence is positively moderated by the reputation of the investor, while the amount of the investment negatively moderates authenticity. These findings allow investors to better understand the founders' development process after a successful investment, putting them in a position to better monitor their founders' activities on social media platforms and, if necessary, to support their adjustment.
The studies of this dissertation presented in Chapters 2 to 5 thus contribute to a better understanding of decision-making in the venture capital process. They extend the current state of research with findings concerning the influence of the characteristics of both the investors and the founders, and they also show how the investment can affect the founder himself or herself. The implications of the results, as well as limitations and opportunities for future research, are described in more detail in Chapter 6. Since the methods and data used in this dissertation have only recently been applied, or indeed become available, in the context of venture capital research, the dissertation offers a basis for further research.
The present thesis addresses the validity of Binge Eating Disorder (BED), as well as underlying mechanisms of BED, from three different angles. Three studies provide data discriminating obesity with BED from obesity without BED (NBED). Study 1 demonstrates differences between obese individuals with and without BED regarding eating in the natural environment, psychiatric comorbidity, negative affect, and self-reported tendencies in eating behavior. Evidence for possible psychological mechanisms explaining the increased intake of BED individuals in the natural environment was provided by analyzing associations of negative affect, emotional eating, restrained eating, and caloric intake in obese BED individuals compared to NBED controls. Study 2 demonstrated stress-induced changes in the eating behavior of obese individuals with BED. The impact of a psychosocial stressor, the Trier Social Stress Test (TSST; Kirschbaum, Pirke, & Hellhammer, 1993), on behavioral patterns of eating behavior in the laboratory was investigated. Special attention was given to stress-induced changes in variables that reflect mechanisms of appetite regulation in obese BED individuals compared to controls. To further explore the mechanisms by which stress might trigger binge eating, Study 3 investigated differences in stress-induced cortisol secretion after a socially evaluated cold pressor test (SECPT; Schwabe, Haddad, & Schachinger, 2008) in obese BED as compared to obese NBED individuals.
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In this dissertation, I address these two issues in a series of ten studies. First, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Second, I present newly developed and empirically tested pictures that specifically assess the implicit autonomy motive, and I examine their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of these studies solidify the foundation of the OMT and of how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
The reduction of information that results from summarizing model time series with aggregating statistical performance measures is very high compared to the amount of information one would like to draw from them for model identification and calibration purposes. It is well known that this loss imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty, as essentially different model realizations with almost identical performance measures (e.g. r² or RMSE) can be generated. In three consecutive studies, the present work proposes an alternative approach to hydrological model evaluation based on the application of Self-Organizing Maps (SOM; Kohonen, 2001). The Self-Organizing Map is a type of artificial neural network and unsupervised learning algorithm that is used for clustering, visualization, and abstraction of multidimensional data. It maps vectorial input data items with similar patterns onto contiguous locations of a discrete low-dimensional grid of neurons. The iterative training of the SOM causes the neurons to form a discrete, data-compressed representation of the high-dimensional input data. Using appropriate visualization techniques, information on distributions, patterns, and relationships in complex data sets can be extracted. Irrespective of their potential, SOM applications have earned very little attention in hydrological modelling compared to other artificial neural network techniques. Therefore, the aim of the present work is to demonstrate that the application of Self-Organizing Maps has very high potential to address fundamental issues of model evaluation: it is shown that the clustering and classification of model time series by means of SOM can provide useful insights into model behaviour.
In combination with the diagnostic properties of Signature Indices (Gupta et al., 2008; Yilmaz et al., 2008) SOM provides a novel tool for interpreting the model parameters in the hydrological context and identifying parameter sets that simultaneously meet multiple objectives, even if the corresponding model realizations belong to different models. Moreover, the presented studies and reviews also encourage further studies on the application of SOM in hydrological modelling.
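The SOM training loop described above (best-matching unit, shrinking neighbourhood, decaying learning rate) can be sketched in a few lines of NumPy. This is a minimal generic SOM for intuition, not the hydrological setup of the thesis; grid size, decay schedules, and the toy two-cluster data are all assumptions of the sketch.

```python
import numpy as np

def train_som(data, grid_h=5, grid_w=5, iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal Self-Organizing Map (Kohonen) on data of shape (n, dim)."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # grid position of each neuron
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)  # shrinking neighbourhood radius
        x = data[rng.integers(len(data))]
        # best-matching unit: the neuron whose weight vector is closest to x
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                               (grid_h, grid_w))
        # pull the BMU and, more weakly, its grid neighbours toward the sample
        d2 = ((coords - np.array(bmu, float)) ** 2).sum(-1)
        weights += lr * np.exp(-d2 / (2 * sigma ** 2))[..., None] * (x - weights)
    return weights

def quantization_error(weights, data):
    """Mean distance from each sample to its best-matching neuron."""
    flat = weights.reshape(-1, weights.shape[-1])
    return np.mean([np.min(np.linalg.norm(flat - x, axis=-1)) for x in data])

# Two well-separated clusters as toy stand-ins for model time series features.
rng = np.random.default_rng(1)
data = np.vstack([0.1 + 0.05 * rng.random((50, 3)),
                  0.9 - 0.05 * rng.random((50, 3))])
init = np.random.default_rng(0).random((5, 5, 3))  # same init as train_som(seed=0)
som = train_som(data)
qe_before = quantization_error(init, data)
qe_after = quantization_error(som, data)
```

Because neighbouring neurons are updated together, similar inputs end up on contiguous grid locations, which is what makes the trained map usable for the clustering and visual inspection described above.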
In recent years, the study of dynamical systems has developed into a central research area in mathematics. Indeed, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated into other scientific fields, e.g. physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x ∈ X at time zero, its state at time n ∈ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x ∈ X as the time n increases. On the one hand, it is possible that there exist certain states x ∈ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for n→∞. On the other hand, it might be the case that the system runs unstably for some initial states x ∈ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In the case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (g∘f^n) on D, where D is an open subset of F_f with f(D)⊂ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences.
In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (g∘f^n) on the set D becomes quite as chaotic as the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves into the theory of universal families and hypercyclic operators, which has developed into a branch of research in its own right. In general, for topological spaces X, Y and a family {T_i: i ∈ I} of continuous functions T_i:X→Y, an element x ∈ X is called universal for the family {T_i: i ∈ I} if the set {T_i(x): i ∈ I} is dense in Y. If X is a topological vector space and T is a continuous linear operator on X, a vector x ∈ X is called hypercyclic for T if it is universal for the family {T^n: n ∈ N}. Thus, roughly speaking, universality and hypercyclicity can be described as follows: there exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)⊂ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)→H(D), C_f(g):=g∘f|_D, which is called the composition operator with symbol f. The transform C_f is a continuous linear operator on the Fréchet space H(D). In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D ⊂ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim is to find functions g ∈ H(D) which are hypercyclic for C_f.
Indeed, for each hypercyclic function g for C_f, the set of compositions {g∘f^n|_D: n ∈ N} is dense in H(D), so that the sequence of compositions (g∘f^n|_D) is "maximally divergent", meaning that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This kind of behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation or shows some "wandering behaviour" on each component of F_f. In a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
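The Fatou/Julia dichotomy for iterates can be illustrated with a minimal numerical sketch. It assumes the polynomial f(z) = z², whose Julia set is the unit circle; the dissertation itself is concerned with entire functions in general, so this is only a toy instance of the iteration f^n(x):

```python
import cmath

def iterate(f, z, n):
    """Return the n-th iterate f^n(z)."""
    for _ in range(n):
        z = f(z)
    return z

f = lambda z: z * z  # a non-linear entire function; its Julia set is the unit circle

z_inside = 0.5 + 0.3j               # |z| < 1: lies in the Fatou set
z_on_circle = cmath.exp(1j * 1.0)   # |z| = 1: lies on the Julia set

orbit_inside = abs(iterate(f, z_inside, 20))        # iterates converge to the fixed point 0
orbit_on_circle = abs(iterate(f, z_on_circle, 20))  # modulus stays 1; the argument doubles each step
```

On the Fatou point the orbit collapses stably onto an equilibrium, while on the Julia set the angle-doubling makes nearby initial states separate rapidly, which is the "wild" behaviour the abstract refers to.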
Chapter 2: Using data from the German Socio-Economic Panel, this study examines the relationship between immigrant residential segregation and immigrants' satisfaction with the neighborhood. The estimates show that immigrants living in segregated areas are less satisfied with the neighborhood. This is consistent with the hypothesis that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Our result holds true even when controlling for other influences such as household income and quality of the dwelling. It also holds true in fixed effects estimates that account for unobserved time-invariant influences. Chapter 3: Using survey data from the German Socio-Economic Panel, this study shows that immigrants living in segregated residential areas are more likely to report discrimination because of their ethnic background. This applies both to segregated areas where most neighbors are immigrants from the same country of origin as the surveyed person and to segregated areas where most neighbors are immigrants from other countries of origin. The results suggest that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Chapter 4: Using data from the German Socio-Economic Panel (SOEP) and administrative data from 1996 to 2009, I investigate whether right-wing extremism of German residents is affected by the ethnic concentration of foreigners living in the same residential area. My results show a positive but insignificant relationship between ethnic concentration at the county level and the probability of extreme right-wing voting behavior for West Germany. However, due to potential endogeneity issues, I additionally instrument the share of foreigners in a county with the share of foreigners in each federal state (following an approach of Dustmann/Preston 2001).
I find evidence for the interethnic contact theory, which predicts a negative relationship between the foreigners' share and right-wing voting. Moreover, I analyze the moderating role of education and the influence of cultural traits on this relationship. Chapter 5: Using data from the Socio-Economic Panel from 1998 to 2009 and administrative data on regional ethnic diversity, I show that ethnic diversity significantly inhibits people's political interest and participation in political organizations in West Germany. People seem to withdraw from political participation if exposed to more ethnic diversity, which is particularly relevant with respect to the ongoing integration process of the European Union and the increasing transfer of legislative power from the national to the European level. The results are robust if an instrumental variable strategy suggested by Dustmann and Preston (2001) is used to take into account that ethnic diversity measured on a local spatial level could be endogenous due to residential sorting. Interestingly, participation in non-political organizations is positively affected by ethnic diversity if selection bias is corrected for.
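The instrumental-variable logic used in Chapters 4 and 5 can be sketched on synthetic data. All numbers below are invented; only the two-stage least squares mechanics mirror the strategy of instrumenting an endogenous local share with a higher-level (state) share:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic illustration: the county-level share (x) is endogenous because it
# correlates with an unobserved confounder (u); the state-level share (z) is
# used as an instrument, in the spirit of Dustmann/Preston (2001).
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = -0.3 * x + 0.7 * u + rng.normal(size=n)   # outcome; true effect is -0.3

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

ols = np.linalg.lstsq(X, y, rcond=None)[0][1]       # biased toward zero/upward by u
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # first stage: fitted x from z
iv = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0][1]
```

Because the instrument is unrelated to the confounder, the second-stage slope recovers the true effect of about -0.3, while the naive OLS slope is pulled upward by the omitted variable.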
Internet interventions have gained popularity and the idea is to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of internet interventions.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to model change from screening to registration (pretreatment) and early change (registration to week four of treatment). Three early change patterns were identified; two were characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicated that early change should be closely monitored in internet interventions, as early change may be an important indicator of treatment outcome.
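The piecewise idea behind such a model can be sketched for a single trajectory. The symptom scores below are invented, and a real Growth Mixture Model would additionally estimate latent classes across participants; this only shows the two-slope structure (pretreatment vs. early change):

```python
import numpy as np

# Weeks relative to registration (t = 0); negative values are pretreatment.
t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([22.0, 21.5, 21.0, 18.0, 15.5, 13.0, 10.0])  # invented symptom scores

# Piecewise-linear design: intercept, pretreatment slope, and an additional
# slope that switches on at registration (the "knot" at t = 0).
X = np.column_stack([np.ones_like(t), t, np.maximum(t, 0.0)])
intercept, pre_slope, extra_slope = np.linalg.lstsq(X, y, rcond=None)[0]
early_slope = pre_slope + extra_slope   # slope during the first four weeks
```

Here the fit separates a shallow pretreatment decline from a much steeper early-change decline, which is exactly the contrast the piecewise model is designed to capture.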
Study II (Chapter 6) picked up on the idea of analyzing change patterns in internet interventions and extended it by using the Muthen-Roy model to identify change-dropout patterns. A slightly larger sample of the dataset from Study I was analyzed (N = 483). Four change-dropout patterns emerged; high risk of dropout was associated with rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated with both deterioration and a good enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome in different participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on participant group. Also, using specific measures of adherence, such as adherence to treatment components, may be crucial to detect change mechanisms in internet interventions. Fostering adherence to treatment components in participants may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.
In the modeling context, non-linearities and uncertainty go hand in hand. In fact, the utility function's curvature determines the degree of risk-aversion. This concept is exploited in the first article of this thesis, which incorporates uncertainty into a small-scale DSGE model. More specifically, this is done by a second-order approximation, with the derivation carried out in great detail and the more formal aspects carefully discussed. Moreover, the consequences of this method for calibrating the equilibrium condition are discussed. The second article of the thesis considers the essential model part of the first paper and focuses on the (forward-looking) data needed to meet the model's requirements. A large number of uncertainty measures are utilized to explain a possible approximation bias. The last article keeps to the same topic but uses statistical distributions instead of actual data. In addition, theoretical (model) and calibrated (data) parameters are used to produce more general statements. In this way, several relationships are revealed with regard to a biased interpretation of this class of models. In this dissertation, the respective approaches are explained in full detail, as is how they build on each other.
In summary, the question remains whether the exact interpretation of model equations should play a role in macroeconomics. If we answer this in the affirmative, this work shows to what extent practical use of these models can lead to biased results.
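The link between curvature and risk-aversion that the first article exploits can be made explicit with a standard second-order expansion (a textbook derivation, not taken from the thesis itself):

```latex
% Second-order expansion of expected utility around mean consumption \bar{c},
% using \mathbb{E}[c-\bar{c}] = 0 and \operatorname{Var}(c) = \sigma^2:
\mathbb{E}[u(c)] \;\approx\; u(\bar{c}) + u'(\bar{c})\,\mathbb{E}[c-\bar{c}]
  + \tfrac{1}{2}\,u''(\bar{c})\,\mathbb{E}\!\left[(c-\bar{c})^{2}\right]
  \;=\; u(\bar{c}) + \tfrac{1}{2}\,u''(\bar{c})\,\sigma^{2}.
% Since u'' < 0 for concave utility, uncertainty (\sigma^2 > 0) lowers expected
% utility. For CRRA utility u(c) = c^{1-\gamma}/(1-\gamma), relative risk
% aversion is -c\,u''(c)/u'(c) = \gamma: the curvature directly encodes it.
```

This is why a second-order approximation is the minimal order at which uncertainty affects the model's equilibrium conditions at all: a first-order (certainty-equivalent) approximation drops the σ² term entirely.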
Cortisol exhibits a typical ultradian and circadian rhythm, and disturbances in its secretory pattern have been described in stress-related pathology. The aim of this thesis was to dissect the underlying structure of cortisol pulsatility and to develop tools to investigate the effects of this pulsatility on immune cell trafficking and the responsiveness of the neuroendocrine system and GR target genes to stress. Deconvolution modeling was set up as a tool for investigating the pulsatile secretion underlying the ultradian cortisol rhythm. This further allowed us to investigate the role of single cortisol pulses in immune cell trafficking and the role of induced cortisol pulses in the kinetics of expression of GR target genes. The development of these three tools will allow future work to induce single cortisol pulses and to investigate their significance for health and disease.
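The forward model underlying such deconvolution analyses can be sketched as secretory pulses convolved with an elimination kernel; deconvolution then means recovering the pulses from the measured concentration. Pulse spacing, amplitudes and the half-life below are invented for illustration, not values from the thesis:

```python
import numpy as np

dt = 1.0                                    # time step in minutes
t = np.arange(0, 24 * 60, dt)               # one day
half_life = 70.0                            # illustrative elimination half-life (min)
k = np.log(2) / half_life                   # first-order elimination rate

# Ultradian pulsatility: one instantaneous secretory burst every 90 minutes.
pulse_times = np.arange(0, 24 * 60, 90)
secretion = np.zeros_like(t)
secretion[np.searchsorted(t, pulse_times)] = 10.0

# Observed concentration = secretion convolved with exponential elimination.
kernel = np.exp(-k * t)
concentration = np.convolve(secretion, kernel)[: len(t)] * dt
```

The resulting sawtooth-like profile (sharp rises at each pulse, exponential decay between pulses) is the ultradian structure that the deconvolution tool decomposes back into discrete secretory events.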
This thesis presents a study of tsunami deposits created by the 2004 Indian Ocean tsunami at the Thai Andaman coast. The outcomes of the study are the characteristics of the tsunami deposit for a paleo-tsunami database, the identification of major sediment layers in the tsunami deposit, and the reconstruction of tsunami run-ups from the characteristics of the tsunami deposit for a coastal development program. The investigations of the tsunami deposit were made almost 3 years after the event. Field investigations characterize the tsunami deposit as a distinct sediment layer of gray sand, variable in thickness, deposited with an erosional basis on a pre-existing soil. The best location for the observation of the recent tsunami deposit is the area located about 50-200 m inland from the coastline. In most cases, the deposit layer is normally graded. In some cases, the deposit contains rip-up clasts of muddy soils and/or organic matter. The tsunami deposits are compared with three deposits from coastal sub-environments. The mean grain-size and standard deviation of the deposits show that the shoreface deposits are fine to very fine sand, poorly to moderately well sorted; the swash zone deposits are coarse to fine sand, poorly to well sorted; the berm/dune deposits are medium to fine sand, poorly to well sorted; and the tsunami deposits are coarse to very fine sand, poorly to moderately well sorted. The plots of deposit mean grain-size versus sorting indicate that the tsunami deposits are composed of shoreface deposits, swash zone deposits and berm/dune deposits as well. The vertical variation of the texture of the tsunami deposit shows that the mean grain-size fines upward and landward. The analysis and interpretation of the run-up numbers from the characteristics of the tsunami deposits yield three run-ups for the 2004 Indian Ocean tsunami at the Thai Andaman coast. This corresponds to field observations from eye-witness reports and local people's affirmations.
Total deposition is the major transportation pattern of onshore tsunami sediments; the sediments must fine in the direction of transport. In general, the major origins of the sediment are the swash zone and berm/dune zone, where coarse to medium sand is the significant material; the minor origin of tsunami sediment is the shoreface, where the significant material is fine to very fine sand. Only in an area with a flat-slope shoreface is the shoreface the major origin of tsunami sediment. The thicknesses, mean grain-sizes, and standard deviations of the tsunami deposits are used to evaluate the influence of coastal morphology on the sediment characteristics. The evaluations show that the tsunami-affected areas were attacked by waves of variable energy. Wave energies at the areas directly affected by the tsunami wave are higher than at the indirectly affected areas. Tsunami wave energy is highly dissipated at an area with a steep-slope shoreface. In the same way, tsunami run-up energy is highly dissipated at an area with a steep onshore slope. A channel parallel to the coastline decreases the run-up velocity and slightly dissipates run-up energy. Roads and ponds highly influence the characteristics of the tsunami deposit and the tsunami run-up: a road obstructs the run-up flow and dissipates run-up energy, and a pond decreases run-up velocity and dissipates run-up energy. The characteristics of the tsunami deposit can be interpreted to reconstruct the characteristics of the tsunami run-up, such as run-up height and flow velocity. The model of Soulsby et al. (2007) is applied to reconstruct the tsunami run-up at the study areas. The input parameters are sediment grain-size and sediment inundation distance. Ao Kheuy beach and Khuk Khak beach, Phang Nga province, Thailand, are the areas selected for reconstructing the tsunami run-up. The evaluated run-up heights are 4.2-4.9 m at Ao Kheuy beach and 5.4-9.4 m at Khuk Khak beach.
The evaluated run-up velocities are 12.8-19.2 m/s (maximum) and 0.2-1.9 m/s (mean) at the coastline and onshore, respectively. Hence, a reasonably good agreement between the evaluated and observed run-up is found. Tsunami run-up height and velocity can be used for coastal development and risk management in the tsunami-affected areas. The case studies from the Thai Andaman coast suggest that the area from the coastline to about 70-140 m inland was flooded by the high-velocity (high-energy) run-ups, and that those run-up energies were dissipated there. That area ought to be a non-residential area or a physical protection construction area (flood barrier, forest planting, etc.).
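The grain-size statistics used above to distinguish the sub-environments (mean grain-size and sorting on the phi scale, phi = -log2 of the diameter in mm) can be sketched as simple weighted moment measures. The sieve data below are invented; the study itself may use the graphic measures of Folk and Ward rather than moments:

```python
import numpy as np

# Invented sieve analysis: midpoint diameters (mm) and retained weights (g).
d_mm = np.array([1.0, 0.5, 0.25, 0.125, 0.063])
weight = np.array([5.0, 20.0, 40.0, 25.0, 10.0])

phi = -np.log2(d_mm)                 # phi scale: larger phi = finer grains
w = weight / weight.sum()            # weight fractions

mean_phi = np.sum(w * phi)                              # ~2.1 phi -> fine sand
sorting = np.sqrt(np.sum(w * (phi - mean_phi) ** 2))    # std. dev. in phi units
```

On the conventional verbal scale, a sorting value around 1 phi corresponds to a poorly sorted sediment, which is why plots of mean grain-size versus sorting can separate tsunami deposits (mixed sources, poorer sorting) from well-sorted beach sub-environments.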
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as state equation, which is marked by exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, a reduction of the number of state variables can be achieved by employing a reduced order modelling technique. Using the snapshot proper orthogonal decomposition (POD) method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation. In fact, a small number of POD basis functions often suffices to obtain a satisfactory level of accuracy in the reduced order solution. However, the small number of degrees of freedom in a POD based reduced order model also constitutes its main weakness for optimal control purposes. Since a single reduced order model is based on the solution of the Navier-Stokes equations for a specified control, it might be an inadequate model when the control (and consequently also the actual corresponding flow behaviour) is altered, implying that the range of validity of a reduced order model is, in general, limited. Thus, one is likely to encounter unreliable reduced order solutions when the control problem is solved on the basis of one single reduced order model. To escape this dilemma, we propose to use a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD based reduced order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced order models during the optimization process, enabling the reduced order models to represent the flow dynamics as altered by the control.
In fact, a rigorous convergence theory for the TRPOD method is obtained, which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves a considerable amount of computational work during the control problem solution, since the original state equation has to be solved only when the model function in the trust-region framework is updated. The optimization process itself is based entirely on reduced order information.
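The snapshot POD step itself reduces to a singular value decomposition of a snapshot matrix. A minimal sketch, with a synthetic low-rank "flow" standing in for Navier-Stokes snapshots:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 30                       # state dimension, number of snapshots

# Synthetic snapshot data with an exact two-dimensional structure.
modes_true = rng.normal(size=(n, 2))
coeffs = rng.normal(size=(2, m))
snapshots = modes_true @ coeffs      # each column is one state snapshot

# Snapshot POD: the leading left singular vectors are the POD basis.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2
basis = U[:, :r]                     # reduced basis of dimension r << n

# Galerkin-style projection of the snapshots onto the reduced basis.
reconstruction = basis @ (basis.T @ snapshots)
rel_error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

Because the synthetic data are exactly rank two, two POD modes reconstruct the snapshots essentially to machine precision; for real flow data one instead truncates where the singular values drop below a tolerance, which is the "small number of basis functions" trade-off discussed above.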
The complicated human alternative GR promoter region plays a pivotal role in the regulation of GR levels. In this thesis, both genomic and environmental factors linked with GR expression are covered. This research showed that GR promoters were susceptible to silencing by methylation and that the activity of the individual promoters was also modulated by SNPs. E2F1 is a major driver of the expression of GR 1F transcripts, and single CpG dinucleotide methylation cannot mediate the inhibition of transcription in vitro. Also, the GR first exons and 3' splice variants (GRα and GR-P) are expressed throughout the human brain with no region-specific alternative first exon usage. These data mirrored the consistently low levels of methylation in the brain and the observed homogeneity throughout the studied regions. Taken together, the research presented in this thesis explored several layers of complexity in GR transcriptional regulation.
Mobile computing poses different requirements on middleware than more traditional desktop systems interconnected by fixed networks. Not only do the characteristics of mobile network technologies, such as lower bandwidth and unreliability, demand customized support. The devices employed in mobile settings are also usually less powerful than their desktop counterparts: slow processors, a fairly limited amount of memory, and smaller displays are typical properties of mobile equipment, again requiring special treatment. Furthermore, user mobility results in additional requirements on appropriate middleware support. As opposed to the quite static environments dominating the world of desktop computing, dynamic aspects gain more importance. Suitable strategies and techniques for exploring the environment, e.g. in order to discover services available locally, are only one example. Managing resources in a fault-tolerant manner and reducing the impact ill-behaved clients have on system stability define yet another exemplary prerequisite. Most state-of-the-art middleware has been designed for use in the realm of static, resource-rich environments and hence is not immediately applicable in mobile settings as set forth above. The work described throughout this thesis aims at investigating the suitability of different middleware technologies with regard to application design, development, and deployment in the context of mobile networks. Mostly based upon prototypes, shortcomings of those technologies are identified, and possible solutions are proposed and evaluated where appropriate. Besides tailoring middleware to specific communication and device characteristics, the cellular structure of current mobile networks may and shall be exploited in favor of more scalable and robust systems. Hence, an additional topic considered within this thesis is to point out and investigate suitable approaches permitting to benefit from such cellular infrastructures.
In particular, a system architecture for the development of applications in the context of mobile networks will be proposed. An evaluation of this architecture employing mobile agents as flexible, network-side representatives for mobile terminals is performed, again based upon a prototype application. In summary, this thesis aims at providing several complementary approaches regarding middleware support tailored for mobile, cellular networks, a field considered to be of rising importance in a world where mobile communication and particularly data services emerge rapidly, augmenting the globally interconnected, wired Internet.
Physically-based distributed rainfall-runoff models, as the standard analysis tools for hydrological processes, have been used to simulate the water system in detail, including spatial patterns and temporal dynamics of hydrological variables and processes (Davison et al., 2015; Ek and Holtslag, 2004). In general, catchment models are parameterized with spatial information on soil, vegetation and topography. However, traditional approaches for evaluation of the hydrological model performance are usually motivated with respect to discharge data alone. This may cloud model realism and hamper understanding of the catchment behavior. It is necessary to evaluate the model performance with respect to internal hydrological processes within the catchment area as well as other components of the water balance, rather than runoff discharge at the catchment outlet only. In particular, a considerable amount of the dynamics in a catchment occurs in processes related to the interactions of water, soil and vegetation. Evapotranspiration, for instance, is one of those key interactive elements, and the parameterization of soil and vegetation in water balance modeling strongly influences the simulation of evapotranspiration. Specifically, to parameterize the water flow in the unsaturated soil zone, the functional relationships that describe the soil water retention and hydraulic conductivity characteristics are important. To define these functional relationships, Pedo-Transfer Functions (PTFs) are commonly used in hydrological modeling. Choosing the appropriate PTFs for the region under investigation is a crucial task in estimating the soil hydraulic parameters, but this choice in a hydrological model is often made arbitrarily and without evaluating the spatial and temporal patterns of evapotranspiration, soil moisture, and the distribution and intensity of runoff processes.
This may ultimately lead to implausible modeling results and possibly to incorrect decisions in regional water management. Therefore, the use of reliable evaluation approaches is continually required to analyze the dynamics of the current interactive hydrological processes and predict the future changes in the water cycle, which eventually contributes to sustainable environmental planning and decisions in water management.
Remarkable endeavors have been made in the development of modelling tools that provide insights into current and future hydrological patterns at different scales and their impacts on water resources and climate change (Doell et al., 2014; Wood et al., 2011). However, there is a need to consider a proper balance between parameter identifiability and the model's ability to realistically represent the response of the natural system. Tackling this issue entails investigation of additional information, which usually has to be elaborately assembled, for instance by mapping the dominant runoff generation processes in the intended area, or by retrieving the spatial patterns of soil moisture and evapotranspiration using remote sensing methods, and evaluating at a scale commensurate with the hydrological model (Koch et al., 2022; Zink et al., 2018). The present work therefore aims to give insights into the modeling approaches to simulate the water balance and to improve the soil and vegetation parameterization scheme in the hydrological model, with a view to producing more reliable spatial and temporal patterns of evapotranspiration and runoff processes in the catchment.
An important contribution to the overall body of work is a book chapter included among the publications. The book chapter provides a comprehensive overview of the topic and valuable insights into understanding the water balance and its estimation methods.
Moreover, the first paper aimed to evaluate the hydrological model behavior with respect to the contribution of various sources of information. To do so, a multi-criteria evaluation metric including soft and hard data was used to define constraints on outputs of the 1-D hydrological model WaSiM-ETH. Applying this evaluation metric, we could identify the optimal soil and vegetation parameter sets that resulted in a “behavioral” forest stand water balance model. It was found that even when simulations of transpiration and soil water content are consistent with measured data, the dominant runoff generation processes or the total water balance might still be wrongly calculated. Therefore, only an evaluation scheme that draws on different sources of data and embraces an understanding of the local controls of water loss through soil and plants allowed us to exclude the unrealistic modeling outputs. The results suggested that we may need to question the generally accepted soil parameterization procedures that apply default parameter sets.
The second paper attempts to tackle the aforementioned model evaluation hindrance by moving down to a small-scale catchment (in Bavaria). Here, a methodology was introduced to analyze the sensitivity of the catchment water balance model to the choice of Pedo-Transfer Functions (PTFs). By varying the underlying PTFs in a calibrated and validated model, we could determine the resulting effects on the spatial distribution of soil hydraulic properties, the total water balance at the catchment outlet, and the spatial and temporal variation of the runoff components. Results revealed that the water distribution in the hydrologic system differs significantly amongst the various PTFs. Moreover, the simulations of water balance components showed high sensitivity to the spatial distribution of soil hydraulic properties. Therefore, it was suggested that the choice of PTFs in hydrological modeling should be carefully tested by checking whether the spatio-temporal distribution of simulated evapotranspiration and runoff generation processes is reasonably represented.
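How much the choice of PTF can matter is easy to illustrate: two hypothetical PTFs assign different van Genuchten retention parameters to the same soil, and the resulting water contents at a field-capacity-like pressure head diverge. All parameter values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Van Genuchten retention curve: water content theta at pressure head h (cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Two made-up "PTFs" that map the same soil texture to different parameters.
ptf_a = dict(theta_r=0.05, theta_s=0.43, alpha=0.036, n=1.56)
ptf_b = dict(theta_r=0.07, theta_s=0.41, alpha=0.020, n=1.35)

h = 330.0                               # pressure head (cm), often used for field capacity
fc_a = van_genuchten(h, **ptf_a)
fc_b = van_genuchten(h, **ptf_b)
difference = abs(fc_a - fc_b)           # same soil, clearly different water content
```

A difference of several volume percent in plant-available water, propagated over a whole catchment, is exactly the kind of PTF-induced divergence in the water balance that the sensitivity analysis quantifies.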
To fulfill the previous studies' suggestions, the third paper then aims to focus on evaluating the hydrological model through improving the spatial representation of dominant runoff processes. It was implemented in a mesoscale catchment in southwestern Germany using the hydrological model WaSiM-ETH. To deal with the issue of inadequate spatial observations for rigorous spatial model evaluation, we made use of a reference soil hydrologic map available for the study area to discern the expected dominant runoff processes across a wide range of hydrological conditions. The model was parameterized by applying 11 PTFs and run with multiple synthetic rainfall events. To compare the simulated spatial patterns to the patterns derived from the digital soil map, a multiple-component spatial performance metric (SPAEF) was applied. The simulated dominant runoff processes (DRPs) showed a large variability with regard to land use, topography, applied rainfall rates, and the different PTFs, which highly influence the rapid runoff generation under wet conditions.
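The SPAtial EFficiency metric (SPAEF) combines three components: correlation between the patterns (alpha), a variability term (beta, ratio of coefficients of variation) and the histogram overlap of the z-scored fields (gamma). The sketch below follows the formula as commonly given by Koch et al. (2018); the two fields are synthetic:

```python
import numpy as np

def spaef(sim, obs, bins=20):
    """SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2); 1 is a perfect match."""
    alpha = np.corrcoef(sim, obs)[0, 1]                       # pattern correlation
    beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # CV ratio
    z_sim = (sim - np.mean(sim)) / np.std(sim)
    z_obs = (obs - np.mean(obs)) / np.std(obs)
    lo = min(z_sim.min(), z_obs.min())
    hi = max(z_sim.max(), z_obs.max())
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_sim, h_obs).sum() / h_obs.sum()      # histogram intersection
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(2)
obs = rng.normal(5.0, 1.0, size=10_000)       # synthetic "observed" pattern (flattened map)
perfect = spaef(obs, obs)                      # identical patterns -> SPAEF = 1
noisy = spaef(obs + rng.normal(0.0, 1.0, size=obs.size), obs)
```

Because the three components penalize different pattern errors (location, variability, distribution shape), SPAEF is stricter than a plain correlation when comparing simulated runoff-process maps against a reference soil map.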
The three published manuscripts converged on model evaluation viewpoints that ultimately attain behavioral model outputs. This was done by obtaining information about the internal hydrological processes that lead to certain model behaviors, and also about the function and sensitivity of some of the soil and vegetation parameters that may primarily influence those internal processes in a catchment. Accordingly, using this understanding of model reactions, and by setting multiple evaluation criteria, it was possible to identify which parameterization could lead to a behavioral model realization. This work will, in fact, contribute to solving some of the issues (e.g., spatial variability and modeling methods) identified among the 23 unsolved problems in hydrology in the 21st century (Blöschl et al., 2019). The results obtained in the present work encourage further investigation toward a comprehensive model calibration procedure considering multiple data sources simultaneously. This will enable new perspectives on the current parameter estimation methods, which, in essence, focus on reproducing plausible (spatio-temporal) dynamics of the other hydrological processes within the watershed.
This study aims to estimate the cotton yield at the field and regional level via the APSIM/OZCOT crop model, using an optimization-based recalibration approach based on a state variable of the cotton canopy, the leaf area index (LAI), derived from atmospherically corrected Landsat-8 OLI remote sensing images in 2014. First, local and global sensitivity analysis approaches were employed to test the sensitivity of cultivar, soil and agronomic parameters to the dynamics of the LAI. From these sensitivity analyses, a series of sensitive parameters were obtained. Then, the APSIM/OZCOT crop model was calibrated with observations over a two-year span (2006-2007) at the Aksu station, combined with these sensitive cultivar parameters and the current understanding of cotton cultivar parameters. Third, the relationship between the observed in-situ LAI and synchronous perpendicular vegetation indices derived from six Landsat-8 OLI images covering the entire growth stage was modelled to generate LAI maps in time and space. Finally, Particle Swarm Optimization (PSO) and a general-purpose optimization approach (based on the Nelder-Mead algorithm) were used to recalibrate four sensitive agronomic parameters (row spacing, sowing density per row, irrigation amount and total fertilization) by minimizing the root-mean-square error (RMSE) between the simulated LAI from the APSIM/OZCOT model and the retrieved LAI from Landsat-8 OLI remote sensing images. After the recalibration, the best simulated results compared with the observed cotton yield were obtained. The results showed that: (1) FRUDD, FLAI and DDISQ were the major cultivar parameters suitable for calibrating the cotton cultivar. (2) After the calibration, the simulated LAI performed well, with an RMSE and mean absolute error (MAE) of 0.45 and 0.33, respectively, in 2006 and 0.46 and 0.41, respectively, in 2007.
The coefficient of determination between the observed and simulated LAI was 0.83 and 0.97 in 2006 and 2007, respectively. The Pearson correlation coefficient was 0.913 and 0.988 in 2006 and 2007, respectively, with a significant positive correlation between the simulated and observed LAI. The difference between the observed and simulated yield was 776.72 kg/ha and 259.98 kg/ha in 2006 and 2007, respectively. (3) Cotton cultivation in 2014 was mapped using three Landsat-8 OLI images - DOY 136 (May), DOY 168 (June) and DOY 200 (July) - based on the phenological differences between cotton and other vegetation types. (4) The yield estimation after the assimilation closely approximated the field-observed values, and the coefficient of determination was as high as 0.82 after recalibration of the APSIM/OZCOT model for ten cotton fields. The difference between the observed and assimilated yields for the ten fields ranged from 18.2 to 939.7 kg/ha. The RMSE and MAE between the assimilated and observed yield were 417.5 and 303.1 kg/ha, respectively. These findings provide scientific evidence for the feasibility of coupling remote sensing and the APSIM/OZCOT model at the field level. (5) When upscaling from the field level to the regional level, the assimilation algorithm and scheme are both especially important. Although the PSO method is very efficient, its computational cost is the shortcoming of the assimilation strategy on a regional scale. Comparisons between PSO and the general-purpose optimization method (based on the Nelder-Mead algorithm) were made in terms of the RMSE, the LAI curve and the computational time. The general-purpose optimization method (based on the Nelder-Mead algorithm) was used for the regional assimilation between remote sensing and the APSIM/OZCOT model. Meanwhile, the basic unit for regional assimilation was also determined as the cotton field rather than the pixel.
Moreover, the crop growth simulation was divided into two phases (vegetative growth and reproductive growth) for regional assimilation. (6) The regional assimilation at the vegetative growth stage between the remote-sensing-derived and APSIM/OZCOT-simulated LAI was implemented by adjusting two parameters: row spacing and sowing density per row. The results showed that the sowing density of cotton was higher in the southern part than in the northern part of the study area. This spatial pattern of cotton density is consistent with the land reclamation history from 2001 to 2013: early-reclaimed cotton fields are mainly located in the southern part, while recently reclaimed fields are located in the northern part. Poor soil quality, a lack of irrigation facilities and the woodland belts around cotton fields in the northern part explain the low cotton density there. Row spacing, in turn, was larger in the northern part than in the southern part, owing to the different agronomic modes of military and private companies. (7) The irrigation and fertilization amounts were used as the key parameters to be adjusted in the regional assimilation during the reproductive growth period. The results showed that irrigation per application ranged from 58.14 to 89.99 mm in the study area, with higher amounts in the northern part and lower amounts in the southern part. The application of urea fertilization ranged from 500.35 to 1598.59 kg/ha, with lower amounts in the northern part and higher amounts in the southern part. The heavier fertilization in the southern study area aims to increase boll weight and boll number in pursuit of higher cotton yields. The RMSE of the second assimilation mostly fell in the range of 0.4-0.6 m²/m². The estimated cotton yield ranged from 1489 to 8895 kg/ha.
The estimated yield was also higher in the southern part than in the northern part of the study area.
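The recalibration step described above, minimizing the RMSE between model-simulated and satellite-retrieved LAI with the Nelder-Mead algorithm, can be sketched as follows. The logistic LAI curve below is a toy stand-in for the APSIM/OZCOT simulator, and all parameter values are illustrative assumptions; SciPy's general-purpose optimizer plays the role of the general-purpose optimization approach named in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for an APSIM/OZCOT run: a logistic LAI curve whose peak
# scales with sowing density and irrigation (illustrative, not the real model)
def simulate_lai(days, density, irrigation):
    peak = 0.04 * density + 0.02 * irrigation
    return peak / (1.0 + np.exp(-0.08 * (days - 90.0)))

days = np.arange(30, 180, 16)   # acquisition dates (DOY), roughly a Landsat revisit
true_params = (60.0, 80.0)      # "unknown" density (plants/m^2) and irrigation (mm)
retrieved_lai = simulate_lai(days, *true_params)  # plays the role of Landsat-derived LAI

def rmse(params):
    sim = simulate_lai(days, *params)
    return float(np.sqrt(np.mean((sim - retrieved_lai) ** 2)))

# Nelder-Mead recalibration of the two agronomic parameters
result = minimize(rmse, x0=[40.0, 50.0], method="Nelder-Mead")
print(result.x, result.fun)
```

In the real assimilation scheme the objective would be evaluated per cotton field over all image dates, and four agronomic parameters would be adjusted instead of two.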
The first chapter, "ECOWAS' capability and potential to overcome constraints to growth and poverty reduction of its member states", discusses the analysis of economic and social barriers to economic growth, one of the main elements of development and poverty-reduction strategies in developing countries. This form of country-specific analysis of growth constraints was introduced, after the failure of the Washington Consensus development strategy generalized across all countries, in particular by the "Growth Diagnostics" approach of the Harvard professors Hausmann, Rodrik and Velasco. So far, however, the focus has been purely on country-specific analyses and strategy development. This thesis extends the discussion to the regional level by comparing country-specific growth constraints with regional growth constraints, using the Economic Community of West African States (ECOWAS) as an example. This is done by surveying the country-specific growth constraints already identified in studies and strategies for the individual countries, and by evaluating the regional strategies of ECOWAS. In addition, it is examined to what extent measurable results in tackling growth constraints are also achieved at the regional level. It turns out that, despite the economic and social diversity of the region, ECOWAS lists most of the growth constraints identified at the country level and, beyond that, even contributes measurable results towards changing the status quo. Extending the Growth Diagnostics approach to the regional level, together with the comparative element of country-specific and regional growth constraints, proves to be a practicable way of reviewing development strategies at the regional level and developing them further in a subsidiary manner.
The second chapter, "Simplifying evaluation of potential causalities in development projects using Qualitative Comparative Analysis (QCA)", discusses Qualitative Comparative Analysis (QCA) as an evaluation method for development cooperation projects. The focus lies on adequately measuring and clearly communicating the impact of development cooperation. This contributes to the intensive debate on how the impact of aid in developing countries can be measured and how lessons for further projects can be drawn from it. By applying QCA to a dataset of German development cooperation in Senegal, the method is used for the first time in development-cooperation practice. The focus is on testing specific programme theories, i.e. assumptions about particular relationships between the resources deployed, external circumstances and project outcomes during project implementation. While such programme theories are contained in most project outlines of German development cooperation, very few of them are ever tested. This thesis shows QCA to be an efficient method for such testing: an unambiguous confirmation or falsification of these theories is possible with it. Moreover, the results of the two simpler forms of QCA, crisp-set and multi-value QCA, can be communicated in an easily comprehensible way. Furthermore, the thesis shows that QCA also enables the further development of a programme theory, although this refinement is only efficient to a limited extent and depends strongly on the available data and data structure. The thesis thus demonstrates the potential of QCA, particularly for testing programme theories, and exemplifies its practical application for possible replication.
The third and final chapter of the dissertation, "The regional trade dynamics of Turkey: a panel data gravity model", analyses Turkish trade in order to trace the changes of recent decades and to discuss the extent to which Turkey, as a rising emerging economy, is detaching itself from existing trade structures. This work contributes to the discussion of the shifting power constellations caused by the economic catch-up of emerging economies. For Turkey, this discussion is additionally interesting because it takes into account the question of whether Turkey is turning away from the Western world, North America and Europe. Using dummy variables for different regions in a gravity model, Turkish trade data are analysed first in aggregate and then by sector, and changes across different periods of Turkish foreign trade are examined. The results show that Turkish trade relations exhibit both a regionalisation and a diversification of trading partners. However, this is not accompanied by a turn away from Western trading partners.
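The chapter's estimation strategy, a gravity model with regional dummy variables, can be sketched on synthetic data. All variable names, coefficients and the plain OLS fit below are illustrative assumptions, not the dissertation's Turkish dataset or its panel specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic bilateral trade data (illustrative, not the Turkish dataset)
log_gdp_product = rng.normal(10.0, 1.0, n)           # log(GDP_i * GDP_j)
log_distance = rng.normal(7.0, 0.5, n)               # log of bilateral distance
region_dummy = rng.integers(0, 2, n).astype(float)   # 1 if partner lies in a given region

# True gravity equation: trade rises with GDP, falls with distance,
# and is shifted upward for partners in the dummy region
beta_true = np.array([1.0, 0.8, -1.1, 0.5])          # intercept, GDP, distance, region
X = np.column_stack([np.ones(n), log_gdp_product, log_distance, region_dummy])
log_trade = X @ beta_true + rng.normal(0.0, 0.1, n)

# OLS estimate of the gravity equation
beta_hat, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print(beta_hat)
```

A positive estimated coefficient on the region dummy is what "regionalisation" means in this setup: trade with partners in that region exceeds what size and distance alone would predict.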
Global food security poses large challenges to a fast-changing human society and has been a key topic for scientists, agriculturists, and policy makers in the 21st century. The United Nations projects a total world population of 9.15 billion in 2050 and defines the provision of food security as the second of the UN Sustainable Development Goals. As the capacities of both land and water resources are finite and locally heavily overused, reducing agriculture's environmental impact while meeting the increasing food demand of a constantly growing population is one of the greatest challenges of our century. Therefore, a multifaceted solution is required, including approaches that use geospatial data to optimize agricultural food production.
The availability of precise and up-to-date information on vegetation parameters is mandatory to fulfill the requirements of agricultural applications. Direct field measurements of such vegetation parameters are expensive and time-consuming. In contrast, remote sensing offers a variety of techniques for cost-effective and non-destructive retrieval of vegetation parameters. Although not widely used, hyperspectral thermal infrared (TIR) remote sensing has been shown to be a valuable addition to existing remote sensing techniques for the retrieval of vegetation parameters.
This thesis examined the potential of TIR imaging spectroscopy as an important contribution to addressing the growing challenge of food security. The main scientific question concerned the extraction of vegetation parameters from imaging TIR spectroscopy. To this end, two studies demonstrated the ability to extract vegetation-related parameters from leaf emissivity spectra: (i) the discrimination of eight plant species based on their emissivity spectra and (ii) the detection of drought stress in potato plants using temperature measures and emissivity spectra.
The datasets used in these studies were collected with the Telops Hyper-Cam LW, a novel imaging spectrometer. Since this FTIR spectrometer presents some particularities, special attention was paid to the development of dedicated experimental data acquisition setups and data processing chains. The latter include data preprocessing and the development of algorithms for extracting precise surface temperatures, reproducible emissivity spectra and, ultimately, vegetation parameters.
The spectrometer’s versatility allows the collection of airborne imaging spectroscopy datasets. Since the general availability of airborne TIR spectrometers is limited, the preprocessing and
data extraction methods are underexplored compared to those of reflective remote sensing. This applies especially to atmospheric correction (AC) and temperature and emissivity separation (TES) algorithms. Therefore, we implemented a powerful simulation environment for the development of preprocessing algorithms for airborne hyperspectral TIR image data. This simulation tool is designed in a modular way and covers the image data acquisition and processing chain from surface temperature and emissivity to the final at-sensor radiance data. It includes a series of available algorithms for TES and AC as well as combined AC and TES approaches. Using this simulator, one of the most promising algorithms for the preprocessing of airborne TIR data, ARTEMISS, was significantly optimized: the retrieval error of the atmospheric water vapor during the atmospheric characterization was reduced, and this improvement in atmospheric characterization accuracy substantially enhanced the subsequent retrieval of surface temperatures and surface emissivities.
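The core of such a simulation chain, a Planck-law forward model from surface temperature and emissivity to radiance, followed by a TES step, can be sketched as follows. This is a minimal sketch that neglects the atmosphere entirely; the normalized-emissivity approach and the assumed maximum emissivity of 0.98 are illustrative textbook choices, not the ARTEMISS algorithm used in the thesis.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wl_m, temp_k):
    # Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * K * temp_k))

def brightness_temp(wl_m, rad):
    # Inverse of the Planck function: the temperature implied by a radiance
    return (H * C / (wl_m * K)) / np.log1p(2 * H * C**2 / (wl_m**5 * rad))

wl = np.linspace(8e-6, 12e-6, 40)    # LWIR channel centres in metres
true_temp = 300.0                    # surface temperature in K
true_emis = 0.96 + 0.02 * np.sin(np.linspace(0, np.pi, 40))  # toy emissivity spectrum

# Forward model: at-surface radiance (atmospheric transmission/emission neglected)
radiance = true_emis * planck(wl, true_temp)

# Simple normalized-emissivity TES: assume a maximum emissivity of 0.98,
# take the temperature from the channel with the highest implied brightness
# temperature, then divide out the Planck curve to recover the spectrum
t_est = np.max(brightness_temp(wl, radiance / 0.98))
emis_est = radiance / planck(wl, t_est)
print(t_est, emis_est[:3])
```

In a full simulator, an atmospheric radiative-transfer term would be added between the surface and the sensor, which is precisely what makes the combined AC and TES problem ill-posed and worth simulating.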
Although the potential of hyperspectral TIR applications in ecology, agriculture, and biodiversity has been impressively demonstrated, a substantial contribution to the global provision of food security requires the retrieval of vegetation-related parameters with global coverage, high spatial resolution and high revisit frequencies.
Emerging from the findings of this thesis, the spectral configuration of a spaceborne TIR spectrometer concept was developed. The sensor's spectral configuration aims at the retrieval of precise land surface temperatures and land surface emissivity spectra. Complemented with additional characteristics, i.e. short revisit times and a high spatial resolution, this sensor would potentially allow the retrieval of valuable vegetation parameters needed for agricultural optimization. The technical feasibility of such a sensor concept underlines its potential contribution to the multifaceted solution required for achieving the challenging goal of guaranteeing global food security in a world of increasing population.
In conclusion, thermal remote sensing, and more precisely hyperspectral thermal remote sensing, has been presented as a valuable technique for a variety of applications contributing to the ultimate goal of global food security.
The influence of affect on vocal parameters has been well investigated in speech portrayed by actors, but little is known about affect expression in more natural or authentic speech behavior. This is partly due to the difficulty of generating speech samples that represent authentic expression of speaker affect. The present work investigates the influence of speaker affect on the vocal fundamental frequency (F0) in comparatively authentic speech samples. Three well-documented psychophysiological research methods were applied for the induction of affective states in German native speakers in order to obtain speech samples with authentic affect expression: the Cold Pressor Test (CPT), the Stroop Color-Word Test (SCWT) and the presentation of slides from the International Affective Picture System (IAPS). The results reported here show that the influence of affect on F0 is differentially modulated by psychophysiological processes as well as socio-cultural influences. They also indicate that this approach may be useful for future research and for gaining a deeper understanding of authentic vocal affect expression. Moreover, F0 may constitute an additional non-invasive, easy-to-obtain measure for the established psychophysiological research methodology.
The vision of a future information and communication society has prompted leading politicians in the United States, the European Union and Japan to influence or even lead the economic and social transition in the context of an active technology policy. The technological development of society, however, is a product of a complex interplay of technological, economic and socio-political constraints. These constraints limit political decision-making and implementation abilities. Moreover, facts and information change continuously during a paradigmatic technological, economic and social shift, which further limits political decision-making abilities. This study compares political decision-making to promote computer-mediated communications in the Triad since the beginning of the 1980s, on four levels: the development of a political vision, the long-term aims and strategies, technology policy (e.g. the promotion of technological development and competition policy) and regulatory policy (e.g. universal access, protection of privacy and intellectual property). While technology policy tends to be uncontroversial, regulatory policy during a paradigmatic shift is difficult and lengthy. Nevertheless, the inclusion of interest groups that rise during this paradigmatic shift and that are close to the technologies and their societal consequences helps decision-making processes. In this context, politics in the United States has been more successful than in the European Union and especially Japan. Although this study predates the rise of eCommerce over the Internet, it addresses many of the themes underlying it. Many of these themes remain politically unsettled, on national, supranational and especially international levels. For example, for encryption and secure payments, which are necessary for eCommerce, no international standards yet exist. The issue of taxation has hardly been opened for discussion.
In sum, this study not only offers a historical overview of the development of the Internet, but also discusses issues of continuing present concern.
Reptiles belong to a taxonomic group characterized by increasing worldwide population declines. However, it has not been until comparatively recent years that public interest in these taxa has increased, and conservation measures are starting to show results. While many factors contribute to these declines, environmental pollution, especially in the form of pesticides, has increased strongly in the last few decades and is nowadays considered a main driver of reptile diversity loss. In light of the above, and given that reptiles are extremely underrepresented in ecotoxicological studies on the effects of plant protection products, this thesis aims to study the impacts of pesticide exposure in reptiles, using the Common wall lizard (Podarcis muralis) as a model species. In a first approach, I evaluated the risk of pesticide exposure for reptile species within the European Union, as a means to detect species with above-average exposure probabilities and especially sensitive reptile orders. While helpful for detecting species at risk, a risk evaluation is only the first step towards addressing this problem. It is thus indispensable to identify effects of pesticide exposure in wildlife. For this, the use of enzymatic biomarkers has become a popular method to study sub-individual responses and gain information on the mode of action of chemicals. However, current methodologies are very invasive. Thus, in a second step, I explored the use of buccal swabs as a minimally invasive method to detect changes in enzymatic biomarker activity in reptiles, as an indicator of pesticide uptake and effects at the sub-individual level. Finally, the last part of this thesis focuses on field data regarding pesticide exposure and its effects on reptile wildlife. Here, a method to determine pesticide residues in food items of the Common wall lizard was established, as a means to generate data for future dietary risk assessments.
Subsequently, a field study was conducted with the aim of describing actual effects of pesticide exposure on reptile populations at different levels.
As a target for condemnation, the thematic prevalence of racism in African American novels of satire is not surprising. In order to confront this vice in its shifting manifestations, however, the African American satirist has to employ special techniques. This thesis examines some of these devices as they occur in George Schuyler's Black No More, Charles Wright's The Wig, and Percival Everett's Erasure. Given the reciprocity of target and technique in the satiric context, close attention is paid to how the authors under study locate and interrogate racism in their narratives. In this respect, the significance of anti-essentialist Marxist criticism in Schuyler's Black No More and the author's portrayal of the society of his time as capitalist machinery is examined. While Schuyler is concerned with exposing the general socioeconomic workings of the 1920s from a Marxist perspective, Wright offers the reader insight into how this oppressive machinery psychologically manipulates and corrupts the individual in the historic context of Lyndon B. Johnson's political vision of the Great Society. Everett then elaborates on the epistemological concern traceable in Wright's work and addresses the role media representation plays in manufacturing images and rigid categories that shape systematic racism. As such, the present study not only highlights the versatility of satire as a rhetorical secret weapon and thus ventures toward the idiosyncrasies of the African American novel of satire, it also makes an effort to trace the ever-changing face of racial discrimination.
The startle response in psychophysiological research: modulating effects of contextual parameters
(2013)
Startle reactions are fast, reflexive, and defensive responses which protect the body from injury in the face of imminent danger. The underlying reflex is basic and can be found in many species. Even though it consists of only a few synapses located in the brain stem, the startle reflex offers a valuable research method for human affective, cognitive, and psychological research. This is because of moderating effects of higher mental processes such as attention and emotion on the response magnitude: affective foreground stimulation and directed attention are validated paradigms in startle-related research. This work presents findings from three independent research studies that deal with (1) the application of the established "affective modulation of startle" paradigm to the novel setting of attractiveness and human mating preferences, (2) the question of how different components of the startle response are affected by a physiological stressor and (3) how startle stimuli affect visual attention towards emotional stimuli. While the first two studies treat the startle response as a dependent variable by measuring its response magnitude, the third study uses startle stimuli as an experimental manipulation and investigates its potential effects on a behavioural measure. The first chapter of this thesis describes the basic mechanisms of the startle response as well as the body of research that sets the foundation of startle research in psychophysiology. It provides the rationale for the presented studies, and offers a short summary of the obtained results. Chapters two to four represent primary research articles that are published or in press. At the beginning of each chapter the contribution of all authors is explained. The references for all chapters are listed at the end of this thesis.
The overall scope of this thesis is to show how the human startle response is modulated by a variety of factors, such as the attractiveness of a potential mating partner or the exposure to a stressor. In conclusion, the magnitude of the startle response can serve as a measure for such psychological states and processes. Beyond the involuntary, physiological startle reflex, startle stimuli also affect intentional behavioural responses, which we could demonstrate for eye movements in a visual attention paradigm.
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of a L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction is comprised of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as ‘target-ing’ construction) or a to-infinitival complement (‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often choose a complement type different from that of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012), as illustrated in (3), and because it is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing them with multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. the target-to and target-ing constructions) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency does not only lead to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which increasingly resembles the form-meaning mappings of native speakers.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
Startups are essential agents for the evolution of economies and the creative destruction of established market conditions for the benefit of a more effective and efficient economy. Their significance is manifested in their drive for innovation and technological advancement, their creation of new jobs, their contribution to economic growth, and their impact on increased competition and market efficiency. By reason of their attributes of newness and smallness, startups often experience limitations in accessing external financial resources. Extant research on entrepreneurial finance examines the capital structure of startups, various funding tools, financing environments in certain regions, and investor selection criteria, among other topics. My dissertation contributes to this research area by examining venture debt, an increasingly important funding instrument. Prior research on venture debt has only investigated the business model of venture debt, the concept of venture debt, the selection criteria of venture debt providers, and the role of patents in the venture debt provider's selection process. Based on qualitative and quantitative methods, the dissertation outlines the emergence of venture debt in Europe as well as the impact of venture debt on startups, in order to develop a better understanding of this instrument.
The results of the qualitative studies indicate that venture debt was formed on the basis of a 'Kirznerian' entrepreneurial opportunity and that venture debt affects startups both positively and negatively in their development via different impact mechanisms.
Based on these results, the dissertation analyzes the empirical impact of venture debt on a startup’s ability to acquire additional financial resources as well as the role of the reputation of venture debt providers. The results suggest that venture debt increases the likelihood of acquiring additional financial resources via subsequent funding rounds and trade sales. In addition, a higher venture debt provider reputation increases the likelihood of acquiring additional financial resources via IPOs.
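An analysis of how a binary event such as a subsequent funding round depends on venture debt use and provider reputation is typically run as a logistic regression. The sketch below is a minimal illustration on synthetic data; the variable names, effect sizes and the plain gradient-ascent fit are illustrative assumptions, not the dissertation's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Synthetic startup data (illustrative assumptions only)
venture_debt = rng.integers(0, 2, n).astype(float)   # 1 if the startup used venture debt
lender_reputation = rng.normal(0.0, 1.0, n)          # standardised reputation score

# True data-generating model: venture debt raises the log-odds of a follow-on event
logits = -0.5 + 0.9 * venture_debt + 0.4 * lender_reputation
follow_on = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Logistic regression fitted by plain gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), venture_debt, lender_reputation])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted event probabilities
    beta += 0.1 * X.T @ (follow_on - p) / n    # average score (gradient) step
print(beta)
```

A positive estimated coefficient on the venture-debt indicator corresponds to the reported finding that venture debt increases the likelihood of acquiring additional financial resources.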
Attitudes are "the most distinctive and indispensable concept in contemporary social psychology" (Allport, 1935, p. 798). This outstanding position of the attitude concept in social cognitive research is reflected not only in the innumerable studies focusing on this concept but also in the huge number of theoretical approaches that have been put forth since then. Yet, it is still an open question what attitudes actually are. That is, the question of how attitude objects are represented in memory has still not been unequivocally answered (e.g., Barsalou, 1999; Gawronski, 2007; Pratkanis, 1989, Chapter 4). In particular, researchers strongly differ with respect to their assumptions on the content, format and structural nature of attitude representations (Ferguson & Fukukura, 2012). This prevailing uncertainty about what actually constitutes our likes and dislikes is closely intertwined with the question of which processes result in the formation of these representations. In recent years, this issue has mainly been addressed in evaluative conditioning (EC) research. In a standard EC paradigm, a neutral stimulus (conditioned stimulus, CS) is repeatedly paired with an affective stimulus (unconditioned stimulus, US). The pairing of stimuli then typically results in changes in the evaluation of the CS corresponding to the evaluative response of the US (De Houwer, Baeyens, & Field, 2005). This experimental approach to the formation of attitudes has primarily been concerned with the question of how the representations underlying our attitudes are formed. However, which processes operate on the formation of such an attitude representation is not yet understood (Jones, Olson, & Fazio, 2010; Walther, Nagengast, & Trasselli, 2005). Indeed, there are several ideas on how CS-US pairs might be encoded in memory.
Notwithstanding the importance of these theoretical ideas, looking at the existing empirical work within the research area of EC (for reviews see Hofmann, De Houwer, Perugini, Baeyens, & Crombez, 2010; De Houwer, Thomas, & Baeyens, 2001) leaves one with the impression that scientists have skipped the basic processes. Basic processes here refer especially to the attentional processes involved in the encoding of CSs and USs as well as the relation between them. Against the background of this huge gap in current research on attitude formation, the focus of this thesis will be to highlight the contribution of selective attention processes to a better understanding of the representation underlying our likes and dislikes. In particular, the present thesis considers the role of selective attention processes for the solution of the representation issue from three different perspectives. Before illustrating these different perspectives, Chapter 1 is meant to envision the omnipresence of the representation problem in current theoretical as well as empirical work on evaluative conditioning. Likewise, it emphasizes the critical role of selective attention processes for the representation question in classical conditioning and how this knowledge might be used to put forth the uniqueness of evaluative conditioning as compared to classical conditioning. Chapter 2 then considers the differential influence of attentional resources and goal-directed attention on attitude learning. The primary objective of the presented experiment was to investigate whether attentional resources and goal-directed attention exert their influence on EC via changes in the encoding of CS-US relations in memory (i.e., contingency memory). Taking the findings from this experiment into account, Chapter 3 focuses on the selective processing of the US relative to the CS.
In particular, the two experiments presented in this chapter were meant to explore the moderating influence of the selective processing of the US in its relation to the CS on EC. In Chapter 4 the important role of the encoding of the US in relation to the CS, as outlined in Chapter 3, is illuminated in the context of different retrieval processes. Against the background of the findings from the two presented experiments, the interplay between the encoding of CS-US contingencies and the moderation of EC via different retrieval processes will be discussed. Finally, a general discussion of the findings, their theoretical implications and future research lines will be outlined in Chapter 5.
Phase-amplitude cross-frequency coupling is a mechanism thought to facilitate communication between neuronal ensembles. The mechanism could underlie the implementation of complex cognitive processes, like executive functions, in the brain. This thesis contributes to answering the question whether phase-amplitude cross-frequency coupling, assessed via electroencephalography (EEG), is a mechanism by which executive functioning is implemented in the brain, and whether an assumed performance effect of stress on executive functioning is reflected in phase-amplitude coupling strength. A large body of studies shows that stress can influence executive functioning, in essence having detrimental effects. In two independent studies, each comprising two core executive function tasks (flexibility and behavioural inhibition as well as cognitive inhibition and working memory), beta-gamma phase-amplitude coupling was robustly detected in the left and right prefrontal hemispheres. No systematic pattern of coupling strength modulation by either task demands or acute stress was detected. Beta-gamma coupling might also be present in more basic attention processes. This is the first investigation of the relationship between stress, executive functions and phase-amplitude coupling; many aspects therefore remain unexplored, for example studying phase precision instead of coupling strength as an indicator of phase-amplitude coupling modulations. Furthermore, data were analysed in source space (independent component analysis); comparability to sensor space has still to be determined. These as well as other aspects should be investigated, given the promising finding of very robust and strong beta-gamma coupling for all executive functions. Additionally, this thesis tested the performance of two widely used phase-amplitude coupling measures (mean vector length and modulation index). Both measures are specific and sensitive to coupling strength and coupling width.
The simulation study also drew attention to several confounding factors, which influence phase-amplitude coupling measures (e.g., data length and multimodality).
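The mean vector length measure evaluated in the simulation study can be sketched in a few lines. The following is a minimal illustration, not the thesis's implementation: it band-pass filters a signal, extracts beta phase and gamma amplitude via the Hilbert transform, and combines them into a Canolty-style mean vector length. The filter bands, frequencies, and synthetic signal are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass filter
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mean_vector_length(x, fs, phase_band, amp_band):
    """Canolty-style MVL: |mean(A_amp(t) * exp(i * phi_phase(t)))|."""
    phi = np.angle(hilbert(bandpass(x, phase_band[0], phase_band[1], fs)))
    amp = np.abs(hilbert(bandpass(x, amp_band[0], amp_band[1], fs)))
    return np.abs(np.mean(amp * np.exp(1j * phi)))

# Synthetic check: a 60 Hz oscillation whose amplitude follows the 20 Hz
# (beta) phase, versus an unmodulated 60 Hz oscillation.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
beta = np.sin(2 * np.pi * 20 * t)
coupled = beta + (1 + beta) * np.sin(2 * np.pi * 60 * t) \
    + 0.2 * rng.standard_normal(t.size)
uncoupled = beta + np.sin(2 * np.pi * 60 * t) \
    + 0.2 * rng.standard_normal(t.size)

# The amplitude band must be wide enough to keep the modulation sidebands
# (60 +/- 20 Hz), otherwise the coupling is filtered away.
mvl_coupled = mean_vector_length(coupled, fs, (15, 25), (35, 85))
mvl_uncoupled = mean_vector_length(uncoupled, fs, (15, 25), (35, 85))
```

In line with the simulation study's caveats, estimates like this depend on data length and band choices, so coupled and uncoupled signals should always be compared under identical settings.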
The Role of Dopamine and Acetylcholine as Modulators of Selective Attention and Response Speed
(2015)
The principles of top-down and bottom-up processing are essential to cognitive psychology. At their broadest, most general definition, they denote that processing can be driven either by the salience of the stimulus input or by individual goals and strategies. Selective top-down attention, specifically, consists in the deliberate prioritizing of stimuli that are deemed goal-relevant, while selective bottom-up attention relies on the automatic allocation of attention to salient stimuli (Connor, Egeth, & Yantis, 2004; Schneider, Schote, Meyer, & Frings, 2014). Variations within neurotransmitter systems can modulate cognitive performance in a domain-specific fashion (Greenwood, Fossella, & Parasuraman, 2005). Noudoost and Moore (2011a) proposed that the influence of the dopaminergic neurotransmitter system on selective top-down attention might be greater than the influence of this system on selective bottom-up attention; likewise, they assumed that the cholinergic neurotransmitter system might be more important for selective bottom-up than top-down attention. To test this hypothesis, naturally occurring variations within the two neurotransmitter systems were assessed. Five polymorphisms were selected: two of the dopaminergic system (the COMT Val158Met polymorphism and the DAT1 polymorphism) and three of the cholinergic system (the CHRNA4 rs1044396 polymorphism, the CHRNA5 rs3841324 polymorphism, and the CHRNA5 rs16969968 polymorphism). It was tested whether these polymorphisms modulated performance in tasks of selective top-down attention (a Stroop task and a negative priming task) and in a task of selective bottom-up attention (a Posner cueing task). Indeed, the dopaminergic polymorphisms influenced selective top-down attention, but exerted no effects on bottom-up attention. This aligned with the hypothesis proposed by Noudoost and Moore (2011a). In contrast, the cholinergic polymorphisms were not found to modulate selective bottom-up attention.
The three cholinergic polymorphisms, however, affected general response speed in the Stroop task, the negative priming task, and the Posner cueing task (irrespective of attentional processing). In sum, the findings of this study provide strong indications that the dopaminergic system modulates selective top-down attention, while the cholinergic system is highly relevant for the general speed of information processing.
The role of cortisol and cortisol dynamics in patients after aneurysmal subarachnoid hemorrhage
(2011)
Spontaneous aneurysmal subarachnoid hemorrhage (SAH) is a form of stroke which constitutes a severe trauma to the brain and often leads to serious long-term medical and psychosocial sequelae which persist for years after the acute event. Recently, adrenocorticotrophic hormone deficiency has been identified as one possible consequence of the bleeding and is assumed to occur in around 20% of all survivors. Additionally, a number of studies report a high prevalence of post-SAH symptoms such as lack of initiative, fatigue, loss of concentration, impaired quality of life and psychiatric symptoms such as depression. The overlap between these symptoms and those of patients with untreated partial or complete hypopituitarism led to the suggestion that neuroendocrine dysregulations may contribute to the psychosocial sequelae of SAH. Therefore, one of the aims of this work is to gain insights into the influence of neuroendocrine dysfunction on quality of life and the prevalence of psychiatric sequelae in SAH patients. Additionally, as data on cortisol dynamics after SAH are scarce, diurnal cortisol profiles are investigated in patients in the acute and chronic phase, as well as the cortisol awakening response and feedback sensitivity in the chronic phase after SAH. As a result, it can be shown that some SAH patients exhibit lower serum cortisol levels but at the same time a higher cortisol awakening response in saliva than healthy controls. Also, patients in the chronic phase after SAH have a stable diurnal cortisol rhythm, while there are disturbances in around 50% of all patients in the acute phase, leading to the conclusion that a single baseline measurement of cortisol is of no substantial use for diagnosing cortisol dysregulations in the acute phase after SAH.
It is assumed that in SAH patients endocrine changes occur over time and that a combination of adrenal exhaustion and a subsequent downregulation of corticosteroid binding globulin may be the most probable causes for the dissociation of serum cortisol concentrations and salivary cortisol profiles in the investigated SAH patients. These changes may be an emergency response after SAH and, as elevated free cortisol levels are connected to a better psychosocial outcome in patients in the chronic phase after SAH, this reaction may even be adaptive.
The stress hormone cortisol, as the end-product of the hypothalamic-pituitary-adrenal (HPA) axis, has been found to play a crucial role in the release of aggressive behavior (Kruk et al., 2004; Böhnke et al., 2010). In order to further explore potential mechanisms underlying the relationship between stress and aggression, such as changes in (social) information processing, we conducted two experimental studies that are presented in this thesis. In both studies, acute stress was induced by means of the Socially Evaluated Cold Pressor Test (SECP) developed by Schwabe et al. (2008). Stressed participants were classified as either cortisol responders or nonresponders depending on their rise in cortisol following the stressor. Moreover, basal HPA axis activity was measured prior to the experimental sessions and EEG was recorded throughout the experiments. The first study dealt with the influence of acute stress on cognitive control processes. 41 healthy male participants were assigned to either the stress condition or the non-stressful control procedure of the SECP. Before as well as after the stress induction, all participants performed a cued task-switching paradigm in order to measure cognitive control processes. Results revealed a significant influence of acute and basal cortisol levels, respectively, on the motor preparation of the upcoming behavioral response, which was reflected in changes in the magnitude of the terminal Contingent Negative Variation (CNV). In the second study, the effect of acute stress and subsequent social provocation on approach-avoidance motivation was examined. 72 healthy students (36 males, 36 females) took part in the study. They performed an approach-avoidance task, using emotional facial expressions as stimuli, before as well as after the experimental manipulation of acute stress (again via the SECP) and social provocation realized by means of the Taylor Aggression Paradigm (Taylor, 1967).
In addition to salivary cortisol, testosterone samples were collected at several points in time during the experimental session. Results indicated a positive relationship between acute testosterone levels and the motivation to approach social threat stimuli in highly provoked cortisol responders. Similar results were found when the testosterone-to-cortisol ratio at baseline was taken into account instead of acute testosterone levels. Moreover, brain activity during the approach-avoidance task was significantly influenced by acute stress and social provocation, as reflected in reductions of early (P2) as well as later (P3) ERP components in highly provoked cortisol responders. This may indicate a less accurate, rapid processing of socially relevant stimuli due to an acute increase in cortisol and subsequent social provocation. In conclusion, the two studies presented in this thesis provide evidence for significant changes in information processing due to acute stress, basal cortisol levels and social provocation, suggesting an enhanced preparation for a rapid behavioral response in the sense of a fight-or-flight reaction. These results support the model of Kruk et al. (2004), which proposes a mediating role of changed information processing in the stress-aggression link.
Stress has been considered one of the most relevant factors promoting aggressive behavior. Animal and human pharmacological studies revealed the stress hormones corticosterone in rodents and cortisol in humans to constitute a particularly important neuroendocrine determinant in facilitating aggression and beyond that, presumably, in its continuation and escalation. Moreover, cortisol-induced alterations of social information processing, as well as of cognitive control processes, have been hypothesized as possible influencing factors in the stress-aggression link. So far, the immediate impact of a preceding stressor, and thereby a stress-induced rise in cortisol, on aggressive behavior as well as on higher-order cognitive control processes and social information processing in this context has gone mostly unheeded. The present thesis aimed to extend the existing findings on stress and aggression in this regard. For this purpose, two psychophysiological studies with healthy adults were carried out, both using the socially evaluated cold pressor test as an acute stress induction. In addition to behavioral data and subjective reports, event-related potentials were measured, and acute levels of salivary cortisol were collected, on the basis of which stressed participants were divided into cortisol responders and nonresponders. Study 1 examined the impact of acute stress-induced cortisol increase on inhibitory control and its neural correlates. 41 male participants were randomly assigned to the stress procedure or to a non-stressful control condition. Beforehand and afterwards, participants performed a Go/Nogo task with visual letters to measure response inhibition. The effect of acute stress-induced cortisol increase on covert and overt aggressive behavior and on the processing of provoking stimuli within the aggressive encounter was investigated in study 2.
Moreover, this experiment examined the combined impact of stress and aggression on ensuing affective information processing. 71 male and female participants were either exposed to the stress or to the control condition. Following this, half of each group received high or low levels of provocation during the Taylor Aggression Paradigm. At the end of the experiment, a passive viewing paradigm with affective pictures depicting positive, negative, or aggressive scenes with either humans or objects was realized. The results revealed that men were not affected by a stress-induced rise in cortisol on a behavioral level, showing neither impaired response inhibition nor enhanced aggressive behavior. In contrast, women showed enhanced overt and covert aggressive behavior under a surge of endogenous cortisol, confirming previous results, albeit only in the case of high provocation and only up to the level of the control group. Unlike this rather moderate impact on behavior, cortisol showed a distinct impact on the neural correlates of information processing throughout inhibitory control, aggression-eliciting stimuli, and emotional pictures for both men and women. Here, a stress-induced increase in cortisol resulted in enhanced N2 amplitudes to Go stimuli, whereas P2 amplitudes to both stimulus types and N2 amplitudes to Nogo stimuli remained unchanged, indicating an overcorrection and increased caution of the response activation in favor of successful inhibitory control. The processing of aggression-eliciting stimuli during the aggressive encounter was altered by stress in a complex manner that differed between women and men. Under increased cortisol levels, the frontal or parietal P3 amplitude patterns were either diminished or reversed in the case of high provocation compared to the control group and to cortisol-nonresponders, indicating a desensitization towards aggression-eliciting stimuli in men, but a more elaborate processing of these stimuli in women.
Moreover, stress-induced cortisol and provocation jointly altered subsequent affective information processing at early as well as later stages of the information processing stream. Again, increased levels of cortisol led to oppositely directed amplitudes in the case of high provocation relative to the control group and cortisol-nonresponders, with enhanced N2 amplitudes in men and reduced P3 and LPP amplitudes in men and women for all affective pictures, suggesting initially enhanced emotional reactivity in men, but subsequently reduced motivational attention and enhanced emotion regulation in both men and women. These findings confirm the relevance of HPA activity in the elicitation and persistence of human aggressive behavior. Moreover, they reveal the significance of compensatory and emotion-regulatory strategies and mechanisms in response to stress and provocation, underscoring the relevance of social information and cognitive control processes. Still, more research is needed to clarify the conditions which lead to the facilitation of aggression and the compensatory mechanisms by which it is prevented.
This work is concerned with arbitrage bounds for prices of contingent claims under transaction costs, but disregarding other conceivable market frictions. Assumptions on the underlying market are kept as weak as is convenient for the deduction of meaningful results that make good economic sense. In discrete time we also allow for underlying price processes with uncountable state space. In continuous time the underlying price process is modeled by a semimartingale. For the most part we could avoid any stronger assumptions. The main problems with which we deal in this work are the modelling of (proportional) transaction costs, Fundamental Theorems of Asset Pricing under transaction costs, dual characterizations of arbitrage bounds under transaction costs, quantile hedging under transaction costs, and alternatives to the Black-Scholes model in continuous time under transaction costs. The results apply to stock and currency markets.
The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The type of these estimates corresponds to the classical Berry-Esseen theorem in the special case that all random variables are identically Bernoulli distributed. In this case we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and in this new situation we also present an upper bound.
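The quantity estimated in this work, the supremum distance between the binomial and the standardized normal distribution function, is straightforward to evaluate numerically. The sketch below is an illustration, not the thesis's proof or its optimal constant: it computes that distance at the jump points of the binomial CDF and compares it with the classical Berry-Esseen bound for i.i.d. Bernoulli variables, using the widely cited constant 0.4748; the values of n, p, and the constant are assumptions for demonstration.

```python
import numpy as np
from scipy.stats import binom, norm

def sup_distance(n, p):
    """sup_x |F_n(x) - Phi(x)| for the standardized Binomial(n, p).

    F_n is a step function, so the supremum is attained at a jump point,
    approached either from the left or from the right.
    """
    k = np.arange(n + 1)
    z = (k - n * p) / np.sqrt(n * p * (1 - p))  # standardized jump points
    phi = norm.cdf(z)
    right = binom.cdf(k, n, p)                  # CDF value just after each jump
    left = np.concatenate(([0.0], right[:-1]))  # CDF value just before each jump
    return max(np.max(np.abs(right - phi)), np.max(np.abs(left - phi)))

def berry_esseen_bound(n, p, c=0.4748):
    """C * rho / (sigma^3 * sqrt(n)) with rho = E|X - p|^3 for Bernoulli(p)."""
    rho = p * (1 - p) * (p ** 2 + (1 - p) ** 2)
    sigma3 = (p * (1 - p)) ** 1.5
    return c * rho / (sigma3 * np.sqrt(n))

d100 = sup_distance(100, 0.3)
b100 = berry_esseen_bound(100, 0.3)
```

The observed distance sits below the bound and shrinks roughly like 1/sqrt(n), which is exactly the behaviour the Berry-Esseen-type estimates in the thesis make precise.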
The economic growth theory analyses which factors affect economic growth and tries to analyze how it can last. A popular neoclassical growth model is the Ramsey-Cass-Koopmans model, which aims to determine how much of its income a nation or an economy should save in order to maximize its welfare. In this thesis, we present and analyze an extended capital accumulation equation of a spatial version of the Ramsey model, balancing diffusive and agglomerative effects. We model the capital mobility in space via a nonlocal diffusion operator which allows for jumps of the capital stock from one location to another. Moreover, this operator smooths out heterogeneities in the factor distributions more slowly, which generates a more realistic behavior of capital flows. In addition to that, we introduce an endogenous productivity-production operator which depends on time and on the capital distribution in space. This operator models the technological progress of the economy. The resulting mathematical model is an optimal control problem under a semilinear parabolic integro-differential equation with initial and volume constraints, which are a nonlocal analog to local boundary conditions, and box-constraints on the state and the control variables. In this thesis, we consider this problem on a bounded and an unbounded spatial domain, in both cases with a finite time horizon. We derive existence results of weak solutions for the capital accumulation equations in both settings and we prove the existence of a Ramsey equilibrium in the unbounded case. Moreover, we solve the optimal control problem numerically and discuss the results in the economic context.
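The nonlocal diffusion operator at the heart of such a model admits a very compact discretization. The sketch below is a hypothetical illustration on a one-dimensional grid, not the thesis's setup (the kernel, grid, and step sizes are assumptions): it applies L u(x) = ∫ (u(y) − u(x)) γ(x, y) dy and shows that, for a symmetric kernel, the operator conserves total capital while smoothing out heterogeneity.

```python
import numpy as np

def nonlocal_diffusion(u, x, gamma):
    """Midpoint discretization of L u(x_i) = sum_j (u_j - u_i) * gamma_ij * h."""
    h = x[1] - x[0]
    G = gamma(x[:, None], x[None, :])    # kernel matrix gamma(x_i, x_j)
    return h * (G @ u - G.sum(axis=1) * u)

# Hypothetical symmetric kernel letting capital "jump" between distant locations
gamma = lambda x, y: np.exp(-np.abs(x - y))

x = np.linspace(0.0, 1.0, 101)
k0 = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # heterogeneous capital distribution
dt = 0.1
k1 = k0 + dt * nonlocal_diffusion(k0, x, gamma)  # one explicit Euler step
```

Because the kernel couples every pair of grid points, capital redistributes globally in a single step, in contrast to a local Laplacian, where information propagates only between neighbouring points.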
THE NONLOCAL NEUMANN PROBLEM
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. By doing so, mass at a point in space does not only interact with an arbitrarily small neighborhood surrounding it, but it can also interact with mass far away. Thus, mass jumping from one point to another is also a possibility we can consider in our models. So, if we consider a region in space, this region interacts in a local model at most with its closure, while in a nonlocal model this region may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems the enforcement of boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions this may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in a local model, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation where the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we take a look at a system of coupled Neumann problems and analyze the difference between a local coupled Neumann problem and a nonlocal one. Furthermore, we let our Neumann problem be the state equation of an optimal control problem, which we then study. We also add a time component to our Neumann problem and analyze this nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it. We analyze what happens if we consider a family of nonlocal models in which the interaction range shrinks so that, in the limit, mass at a point in space only interacts with an arbitrarily small neighborhood surrounding it.
Surveys are commonly tailored to produce estimates of aggregate statistics with a desired level of precision. This may lead to very small sample sizes for subpopulations of interest, defined geographically or by content, which are not incorporated into the survey design. We refer to subpopulations where the sample size is too small to provide direct estimates with adequate precision as small areas or small domains. Despite the small sample sizes, reliable small area estimates are needed for economic and political decision making. Hence, model-based estimation techniques are used which increase the effective sample size by borrowing strength from other areas to provide accurate information for small areas. The paragraph above introduced small area estimation as a field of survey statistics where two conflicting philosophies of statistical inference meet: the design-based and the model-based approach. While the first approach is well suited for the precise estimation of aggregate statistics, the latter approach furnishes reliable small area estimates. In most applications, estimates for both large and small domains based on the same sample are needed. This poses a challenge to the survey planner, as the sampling design has to reflect different and potentially conflicting requirements simultaneously. In order to enable efficient design-based estimates for large domains, the sampling design should incorporate information related to the variables of interest. This may be achieved using stratification or sampling with unequal probabilities. Many model-based small area techniques require an ignorable sampling design such that after conditioning on the covariates the variable of interest does not contain further information about the sample membership. If this condition is not fulfilled, biased model-based estimates may result, as the model which holds for the sample is different from the one valid for the population. 
Hence, an optimisation of the sampling design without investigating the implications for model-based approaches will not be sufficient. Analogously, disregarding the design altogether and focussing only on the model is prone to failure as well. Instead, a profound knowledge of the interplay between the sample design and statistical modelling is a prerequisite for implementing an effective small area estimation strategy. In this work, we concentrate on two approaches to address this conflict. Our first approach takes the sampling design as given and can be used after the sample has been collected. It amounts to incorporating the survey design into the small area model to avoid biases stemming from informative sampling. Thus, once a model is validated for the sample, we know that it holds for the population as well. We derive such a procedure under a lognormal mixed model, which is a popular choice when the support of the dependent variable is limited to positive values. Besides, we propose a three-pillar strategy to select the additional variable accounting for the design, based on a graphical examination of the relationship, a comparison of the predictive accuracy of the choices, and a check of the normality assumptions.

Our second approach to deal with the conflict is based on the notion that the design should allow applying a wide variety of analyses using the sample data. Thus, if the use of model-based estimation strategies can be anticipated before the sample is drawn, this should be reflected in the design. The same applies for the estimation of national statistics using design-based approaches. Therefore, we propose to construct the design such that the sampling mechanism is non-informative but allows for precise design-based estimates at an aggregate level.
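The thesis's lognormal mixed model is not reproduced here, but the basic "borrowing strength" idea behind model-based small area estimation can be sketched with a standard area-level composite estimator of Fay-Herriot type: each direct estimate is shrunk towards a regression-synthetic prediction, with more shrinkage where the direct estimate is imprecise. All data, variances, and the covariate below are assumptions for illustration, and the variance estimator is a deliberately simple moment version.

```python
import numpy as np

def area_level_shrinkage(y, X, D):
    """Composite (Fay-Herriot-type) estimator: shrink each direct estimate
    y_i towards the synthetic regression prediction x_i' beta, weighting by
    the sampling variance D_i and a moment estimate of the between-area
    variance sigma_v^2."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    # Simple moment estimator of sigma_v^2, truncated at zero
    sigma_v2 = max(0.0, (resid @ resid - D.sum()) / (n - p))
    gamma = sigma_v2 / (sigma_v2 + D)   # shrinkage weights in [0, 1]
    return gamma * y + (1 - gamma) * (X @ beta), gamma

# Hypothetical data: 8 areas, one covariate, unequal sampling variances
x = np.arange(1.0, 9.0)
y = 2.0 * x + np.array([3.0, -3.0, 3.0, -3.0, 3.0, -3.0, 3.0, -3.0])
X = np.column_stack([np.ones(8), x])
D = np.array([0.5, 1.0, 1.5, 2.0, 0.5, 1.0, 1.5, 2.0])

estimates, gamma = area_level_shrinkage(y, X, D)
```

The weight gamma_i moves towards 1 for areas with precise direct estimates (small D_i) and towards 0 for areas where the model prediction must carry more of the load, which is precisely why an informative design that biases the model is so damaging.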
The fragmentation of landscapes has an important impact on the conservation of biodiversity. Genetic diversity is an important factor for a population's viability and is influenced by the landscape structure. However, different species with differing ecological demands react rather differently to the same landscape pattern. To address this feature, we studied ten xerothermophilous butterfly species with differing habitat requirements (habitat specialists with low dispersal power in contrast to habitat generalists with low dispersal power and habitat generalists with higher dispersal power). We analysed allozyme loci for about 10 populations (approx. 40 individuals each) of each species in a western German study region with adjoining areas in Luxembourg and north-eastern France. The genetic diversity and genetic differentiation between local populations are discussed under conservation genetic aspects. For generalists we detected a more or less panmictic structure, and for species with lower abundance and sedentary behaviour the effect of isolation by distance. On the other hand, the isolation of specialists was mostly reflected by strong genetic differentiation patterns between the investigated populations. Parameters of genetic diversity were mostly significantly higher in generalists compared to specialists. Substructures within populations, as a response to low intrapatch migration, low population densities and high population fluctuations, could be shown as well. Aspects of landscape history (the historical distribution of habitats resulting from the presence of limestone areas) and the changes in extensive sheep pasturing and the loss of potential habitats in the last few decades (recent fragmentation) are discussed in light of the genetic data set obtained for the ten butterflies.
The present dissertation was developed to emphasize the importance of self-regulatory abilities and to derive novel opportunities to empower self-regulation. From the perspective of PSI (Personality Systems Interactions) theory (Kuhl, 2001), interindividual differences in self-regulation (action vs. state orientation) and their underlying mechanisms are examined in detail. Based on these insights, target-oriented interventions are derived, developed, and scientifically evaluated. The present work comprises a total of four studies which, on the one hand, highlight the advantages of good self-regulation (e.g., enacting difficult intentions under demands; its relation to prosocial power motive enactment and well-being). On the other hand, mental contrasting (Oettingen et al., 2001), an established self-regulation method, is examined from a PSI perspective and evaluated as a method to support individuals who struggle with self-regulatory deficits. Further, derived from the assumptions of PSI theory, I developed and evaluated a novel method (affective shifting) that aims to support individuals in overcoming self-regulatory deficits. Affective shifting thereby supports the decisive changes in positive affect required for successful intention enactment (Baumann & Scheffer, 2010). The results of the present dissertation show that self-regulated changes between high and low positive affect are crucial for efficient intention enactment and that methods such as mental contrasting and affective shifting can empower self-regulation to help individuals successfully close the gap between intention and action.
As an interface between an individual and its environment, the skin is a major site of direct exposure to exogenous substances. Once absorbed, these substances may interact with different biomolecules within the skin. The aryl hydrocarbon receptor (AhR) signaling pathway is one mechanism whereby the skin responds to exposures, predominantly through the induction or upregulation of metabolizing enzymes. One known physiological role of the AhR in many tissues is its involvement in the control of cell cycle progression. In skin, almost nothing is known about this physiological function. Moreover, the question whether frequently used naturally occurring phenolic derivatives like eugenol and isoeugenol act on the AhR within the skin has rarely been studied so far. Eugenol and isoeugenol are referred to as fragrances due to their odour. The ubiquitous distribution of eugenol and isoeugenol results in almost unavoidable contact with these substances in our daily lives. Despite this fact, their molecular mechanisms of action in skin are poorly understood. There is evidence supporting the hypothesis that these substances may act on the AhR. On the one hand, eugenol has been shown to induce cytochrome P450 1A1 (CYP1A1), a well-known target gene of the AhR. On the other hand, their known anti-proliferative properties might also be mediated by the AhR, based on its physiological function. In order to test this hypothesis, it was investigated whether eugenol and isoeugenol act on the AhR signaling pathway in skin cells. Results revealed that both eugenol and isoeugenol do act on the AhR signaling pathway in skin cells. Both substances caused the translocation of the AhR into the nucleus, induced the expression of the well-known AhR target genes CYP1A1 and AhR repressor (AhRR), and affected cell cycle progression.
Both substances caused an AhR-dependent cell cycle arrest in skin cells, modulated protein levels of several cell cycle regulatory proteins, inhibited DNA synthesis and thereby reduced cell numbers. The comparison of wild-type cells to AhR knockdown cells revealed an influence of the AhR on cell cycle progression in skin cells even in the absence of exogenous ligands. AhR knockdown cells exhibited a slower progression through the cell cycle, caused by an accumulation of cells in the G0/G1 phase of the cell cycle and a decreased DNA synthesis rate. The modulation of cell cycle regulatory proteins involved in the transition from the G0/G1 to the S phase of the cell cycle was altered in AhR knockdown cells as well. To conclude, both eugenol and isoeugenol were able to act on the AhR signaling pathway in skin cells. Their molecular mechanisms of action are similar to those of classical AhR ligands, although their structural characteristics differ strongly from those of these ligands. In the absence of exogenous ligands, the AhR promotes cell cycle progression in many tissues, and this knowledge could be extended to skin-derived cells within the scope of this thesis.
This thesis consists of four highly related chapters examining China’s rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises in the 1970s was a success and led to its ascent as the world’s largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China represented almost 60% of the global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China the MFN tariff status, Chinese-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms’ decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent’s decision whether to deter entry, the production choice of firms, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: firstly, a free market Cournot competition; secondly, a situation in which the government determines technology adoption rates; thirdly, a scenario in which the government controls both technology and production; and finally, a scenario where the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes that there are two exporting firms in two different countries that sell a product to a third country. 
We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 first investigates a scenario in which only one government offers a fixed-cost subsidy, followed by an analysis of the case in which both governments simultaneously provide financial support. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China’s leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China’s remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. The analysis identifies and discusses several opportunities that Chinese aluminium producers took advantage of. The first set of opportunities arose during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities began at about the same time, when China opened its economy in 1978. The substantial demand for aluminium in China is influenced by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001; both events contributed to a surge in Chinese aluminium consumption. Internally, China’s investment-led growth model further boosted its aluminium demand. Additional factors specific to China, such as low labor costs and the abundance of coal as an energy source, gave Chinese firms competitive advantages over international players. A further window of opportunity arose from Chinese government policies, including phasing out old technology, providing subsidies, and gradually opening the economy to enhance domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, that compete in the same market. We assume that the two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent acts in stage zero, where it can decide whether to deter the newcomer’s entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to partially or completely adopt the latest technology. Our results suggest the following: Firstly, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Secondly, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost parameter to the new technology’s fixed-cost parameter is sufficiently high. Thirdly, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we derive the optimal new-technology adoption rate for the newcomer.
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and those preferred by a benevolent government upon firms’ market entry. To address the question of whether the technology choices of firms and government coincide, we analyze several scenarios that differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. In particular, in situations with a small number of firms interested in entering the market, greater government influence tends to lead to higher technology adoption rates. Conversely, in scenarios with a larger number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms is highest when the government plays no role.
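The gap between privately and socially optimal adoption rates can be illustrated numerically. The sketch below assumes linear demand, a marginal cost that falls linearly in the adoption rate, and a quadratic fixed cost of adoption — all hypothetical functional forms chosen for illustration, not the specification used in the thesis:

```python
import numpy as np

def cournot_profit(theta, n, a=100.0, b=1.0, c0=40.0, d=20.0, f=200.0):
    """Per-firm profit in a symmetric n-firm Cournot market with linear
    demand P = a - b*Q. The adoption rate theta in [0, 1] lowers marginal
    cost (c0 - d*theta) but incurs a fixed cost f*theta^2/2 (hypothetical)."""
    c = c0 - d * theta                   # marginal cost under partial adoption
    q = (a - c) / (b * (n + 1))          # symmetric Cournot quantity per firm
    return b * q**2 - f * theta**2 / 2   # profit net of the technology fixed cost

def welfare(theta, n, a=100.0, b=1.0, c0=40.0, d=20.0, f=200.0):
    """Consumer surplus plus total profits when all firms share one theta."""
    c = c0 - d * theta
    q = (a - c) / (b * (n + 1))
    Q = n * q
    return b * Q**2 / 2 + n * (b * q**2 - f * theta**2 / 2)

# Compare the firm-optimal and welfare-optimal adoption rates on a grid.
grid = np.linspace(0.0, 1.0, 1001)
for n in (2, 10):
    theta_firm = grid[np.argmax([cournot_profit(t, n) for t in grid])]
    theta_gov = grid[np.argmax([welfare(t, n) for t in grid])]
    print(f"n={n}: firm-optimal theta={theta_firm:.3f}, welfare-optimal theta={theta_gov:.3f}")
```

With these illustrative parameters the two optima coincide at the corner for a small market but diverge as the number of firms grows, showing why the extent of government control over adoption matters.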
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter the market may be seen as a favorable policy choice by governments around the world thanks to its ability to enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter uses the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms’ production levels, technological innovation, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, then analyze a scenario where just one government provides a financial subsidy for its domestic firm, and finally consider a situation where both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aiming to maximize social welfare by providing fixed-cost subsidies to their respective firms find themselves in a Chicken game. Regarding technological innovation, subsidies lead to an increased technology adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate occurs when a firm does not receive a fixed-cost subsidy but its competitor does.
Furthermore, global welfare benefits most when both exporting countries grant fixed-cost subsidies, and it is higher when only one country subsidizes than when no country provides subsidies.
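The Chicken-game structure of the subsidy decision can be made concrete with a small payoff matrix. The numbers below are purely illustrative ordinal payoffs with the Chicken ordering (unilateral subsidy best for the subsidizing government, mutual subsidy worst), not values derived from the model:

```python
from itertools import product

# Hypothetical national-welfare payoffs (home, foreign) with the ordinal
# structure of a Chicken game; the numbers are illustrative only.
ACTIONS = ("subsidize", "abstain")
PAYOFFS = {
    ("subsidize", "subsidize"): (0, 0),
    ("subsidize", "abstain"):   (3, 1),
    ("abstain",  "subsidize"):  (1, 3),
    ("abstain",  "abstain"):    (2, 2),
}

def pure_nash_equilibria(payoffs, actions):
    """Enumerate pure-strategy Nash equilibria of a 2x2 game by checking
    that each player's action is a best response to the other's."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        u1, u2 = payoffs[(a, b)]
        best1 = all(u1 >= payoffs[(a2, b)][0] for a2 in actions)
        best2 = all(u2 >= payoffs[(a, b2)][1] for b2 in actions)
        if best1 and best2:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(PAYOFFS, ACTIONS))
```

The enumeration yields the two asymmetric equilibria in which exactly one government subsidizes — the hallmark of a Chicken game.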
The skin is continuously challenged by environmental antigens that may penetrate it and elicit skin sensitization, which can develop into allergic contact dermatitis. Medical treatment for allergic contact dermatitis is limited: in fact, only acute symptoms can be treated, and secondary prevention of the disease requires lifelong avoidance of the allergen(s). Therefore, screening the sensitization potential of substances used in commercially available products is indispensable to prevent such diseases. Risk assessment is currently deduced predominantly from data obtained by the murine local lymph node assay, but in view of legislative initiatives such as REACH (Registration, Evaluation, Authorisation of Chemicals) and the 7th Amendment to the Cosmetics Directive (2003/15/EC), there is a need to develop methods that provide the same information without the use of animals. A number of promising in silico and in vitro approaches are being developed to address this need. In vitro test systems using the response of dendritic cells, the key players in the elicitation process of contact dermatitis, are established. However, although these novel methods for hazard identification might find application in screening, it is not clear whether they are useful for risk assessment and risk management, i.e. for predicting allergic potency. Therefore, it was investigated whether, on the one hand, dendritic cells generated in vitro from primary blood monocytes (MoDC) and, on the other hand, a continuous monocytic cell line, THP-1 cells, suggested as a dendritic cell surrogate, react to a presumably weak allergen. Ascaridol, predicted as one of the possible causes of tea tree oil contact dermatitis, was studied, and its effects in these two in vitro skin sensitization models were explored.
Thus, the surface expression of CD86, HLA-DR, CD54, and CD40, which are known activation markers in both in vitro models, was measured via flow cytometry. For MoDC, an augmented CD86 and HLA-DR surface expression in comparison to untreated cells was determined after 24 h of exposure to ascaridol. An increased CD54 and CD40 surface expression was found only in some donors. After a long-term incubation of 96 h, ascaridol-treated MoDC still up-regulated CD86, and additionally an augmented CD40 expression was measured in all studied donors. An enhanced CD54 expression was determined in 50 percent of all investigated donors. Furthermore, CD80, CD83, and CD209 protein expression were up-regulated in MoDC after 96 h of ascaridol incubation. In addition, after 24 h ascaridol-treated MoDC showed an increased capacity to take up antigens, whereas after 96 h this capacity was lost and antigen uptake was reduced in comparison to non-treated MoDC. Moreover, the cytokine release of ascaridol-treated MoDC was measured after 24 h. Tumor necrosis factor (TNF)-alpha, interleukin (IL)-1beta, and IL-6 secretion were determined in some donors. Furthermore, IL-8 release was clearly increased after 24 h of ascaridol treatment. By the same token, THP-1 cells were analyzed for several activation markers after ascaridol treatment. We found a response pattern similar to that measured in MoDC: ascaridol induced CD86 as well as CD54 expression after 24 h of incubation. Additionally, the impact of ascaridol on the phosphorylation of p38 mitogen-activated protein kinase, which others had shown to be involved in the increased expression of activation markers like CD86, was studied via Western blot analysis. A phosphorylation of p38 was determined after 15 min of ascaridol stimulation. Moreover, an augmented CD40 and HLA-DR surface expression was measured in a dose-dependent manner after 24 h of ascaridol treatment.
Similar to MoDC, an enhanced IL-8 secretion after ascaridol stimulation was also observed in THP-1 cells. Hence, it was shown for the first time that ascaridol has immuno-modulating effects. The data obtained from both in vitro systems, MoDC and THP-1 cells, identified ascaridol as a sensitizer. Although significant challenges for potency assessment remain in both systems, ascaridol is presumed to be a weak sensitizer. Interestingly, ascaridol treatment of THP-1 cells also resulted in an increased expression of CD184 and CCR2, two chemokine receptors expressed on monocytes. These data therefore encouraged the exploration of chemokine receptors as tools in skin sensitization prediction. Consequently, the combination of chemical assays with in vitro techniques may provide a useful surrogate for animal testing in skin sensitization. Due to continuously changing environmental conditions, it is necessary to regularly monitor and update the spectrum of sensitizers that elicit contact dermatitis. Therefore, both discussed in vitro test systems will become indispensable tools.
The harmonic Faber operator
(2018)
P. K. Suetin points out at the beginning of his monograph "Faber Polynomials and Faber Series" that Faber polynomials play an important role in modern approximation theory of a complex variable, as they are used in representing analytic functions in simply connected domains, and many theorems on approximation of analytic functions are proved with their help [50]. The Faber polynomials were first discovered by G. Faber in 1903. It was Faber's aim to find a generalisation of the Taylor series of holomorphic functions in the open unit disc D in the following way: As any holomorphic function in D has a Taylor series representation f(z)=\sum_{\nu=0}^{\infty}a_{\nu}z^{\nu} (z\in\D) converging locally uniformly inside D, for a simply connected domain G, Faber wanted to determine a system of polynomials (Q_n) such that each function f holomorphic in G can be expanded into a series
f=\sum_{\nu=0}^{\infty}b_{\nu}Q_{\nu} converging locally uniformly inside G. With this goal in mind, Faber considered simply connected domains bounded by an analytic Jordan curve. He constructed a system of polynomials (F_n) with this property. These polynomials F_n were named after him as Faber polynomials. In the preface of [50], a detailed summary of results concerning Faber polynomials and results obtained with their aid is given. An important application of Faber polynomials is, e.g., the transfer of known assertions concerning the polynomial approximation of functions belonging to the disc algebra to results on the approximation of functions that are continuous on a compact continuum K, which contains at least two points and has a connected complement, and holomorphic in the interior of K. In this field, the Faber operator, denoted by T, turns out to be a powerful tool (for an introduction, see e.g. D. Gaier's monograph). It maps a polynomial of degree at most n given in the monomial basis, \sum_{\nu=0}^{n}a_{\nu}z^{\nu}, to a polynomial of degree at most n given in the basis of Faber polynomials, \sum_{\nu=0}^{n}a_{\nu}F_{\nu}. If the Faber operator is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the disc algebra onto the space of functions continuous on the whole compact continuum and holomorphic in its interior. For all f in the disc algebra and all polynomials P, the obvious estimate for the uniform norms ||T(f)-T(P)|| <= ||T|| ||f-P|| shows that the original task of approximating F=T(f) by polynomials is reduced to the polynomial approximation of the function f. Therefore, the question arises under which conditions the Faber operator is continuous and surjective. A fundamental result in this regard was established by J. M. Anderson and J. Clunie, who showed that if the compact continuum is bounded by a rectifiable Jordan curve with bounded boundary rotation and free from cusps, then the Faber operator with respect to the uniform norms is a topological isomorphism. Now, let f be a harmonic function in D. Similarly as above, we find that f has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}a_{\nu}p_{\nu}
converging locally uniformly inside D, where p_{n}(z)=z^{n} for n\in\N_{0} and p_{-n}(z)=\overline{z}^{n} for n\in\N. One may ask whether there is an analogue for harmonic functions on simply connected domains G. Indeed, for a domain G bounded by an analytic Jordan curve, the conjecture holds true that each function f harmonic in G has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}b_{\nu}F_{\nu}, where F_{-n}(z)=\overline{F_{n}(z)} for n\in\N, converging locally uniformly inside G. Let now K be a compact continuum containing at least two points and having a connected complement. A main component of this thesis will be the examination of the harmonic Faber operator mapping a harmonic polynomial given in the basis of the harmonic monomials, \sum_{\nu=-n}^{n}a_{\nu}p_{\nu}, to a harmonic polynomial given as \sum_{\nu=-n}^{n}a_{\nu}F_{\nu}.
If this operator, which is based on an idea of J. Müller, is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the functions continuous on \partial\D onto the functions continuous on K and harmonic in the interior of K. Harmonic Faber polynomials and the harmonic Faber operator will be the objects accompanying us throughout our whole discussion. After giving an overview of the notation and certain tools we will use in the first chapter, we begin our studies with an introduction to the Faber operator and the harmonic Faber operator. We start modestly and consider domains bounded by an analytic Jordan curve. In Section 2, as a first result, we will show that, for such a domain G, the harmonic Faber operator has a unique continuous extension to an operator mapping the space of the harmonic functions in D onto the space of the harmonic functions in G, and moreover, that the harmonic Faber operator is an isomorphism with respect to the topologies of locally uniform convergence. In the further sections of this chapter, we illuminate the behaviour of the (harmonic) Faber operator on certain function spaces. In the third chapter, we leave the setting of compact continua bounded by an analytic Jordan curve. Instead, we consider closures of domains bounded by Jordan curves with Dini-continuous curvature. With the aid of the concept of compact operators and the Fredholm alternative, we are able to show that the harmonic Faber operator is a topological isomorphism. Since the main result of the third chapter holds true in particular for closures K of domains bounded by analytic Jordan curves, we can make use of it to obtain new results concerning the approximation of functions continuous on K and harmonic in the interior of K by harmonic polynomials. To do so, we develop the techniques applied by L. Frerick and J. Müller in [11] and adjust them to our setting. In this way, we can transfer results about the classic Faber operator to the harmonic Faber operator. In the last chapter, we will use the theory of harmonic Faber polynomials
to solve certain Dirichlet problems in the complex plane. We pursue two different approaches: First, with a philosophy similar to that of [50], we develop a procedure to compute the coefficients of a series \sum_{\nu=-\infty}^{\infty}c_{\nu}F_{\nu} converging uniformly to the solution of a given Dirichlet problem. Later, we will point out how semi-infinite programming with harmonic Faber polynomials as ansatz functions can be used to obtain an approximate solution of a given Dirichlet problem. We cover both approaches first from a theoretical point of view before focusing on the numerical implementation of concrete examples. As an application of the numerical computations, we obtain visualisations of the Dirichlet problems considered, rounding out our discussion of the harmonic Faber polynomials and the harmonic Faber operator.
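For orientation, the Faber polynomials F_n discussed above are classically defined via the exterior conformal map of K; this is the standard definition found, e.g., in Suetin's monograph [50], and is stated here only as background, not as a result of the thesis:

```latex
% Let \psi map \{|w|>1\} conformally onto the complement of K
% with \psi(\infty)=\infty. The Faber polynomials F_n of K are
% generated by the expansion
\[
  \frac{\psi'(w)}{\psi(w)-z}
    \;=\; \sum_{n=0}^{\infty} \frac{F_{n}(z)}{w^{n+1}},
  \qquad z \in K,\; |w| \text{ sufficiently large}.
\]
```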
The thesis studies the question of how universal behavior is inherited by the Hadamard product. The type of universality considered here is universality by overconvergence; a definition will be given in chapter five. The situation can be described as follows: Let f be a universal function, and let g be a given function. Is the Hadamard product of f and g universal again? This question will be studied in chapter six. Starting from the Hadamard product for power series, a definition for a more general context must be provided. For plane open sets both containing the origin this has already been done. But in order to answer the above question, it becomes necessary to have a Hadamard product for functions that are not holomorphic at the origin. The elaboration of such a Hadamard product and its properties is the second central part of this thesis; chapter three will be concerned with it. The idea behind the definition of such a Hadamard product follows the case already known: the Hadamard product will be defined by a parameter integral. Crucial for this definition is the choice of appropriate integration curves; these will be introduced in chapter two. By means of the properties of the Hadamard product it is possible to prove the Hadamard multiplication theorem and the Borel-Okada theorem. A generalization of these theorems will be presented in chapter four.
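For power series f(z)=\sum a_\nu z^\nu and g(z)=\sum b_\nu z^\nu, the Hadamard product mentioned above is the coefficientwise product \sum a_\nu b_\nu z^\nu. A minimal numerical sketch using the classical geometric-series example (standard textbook material, not taken from the thesis):

```python
def hadamard_coeffs(f_coeffs, g_coeffs):
    """Coefficientwise (Hadamard) product of two power series,
    (f * g)_n = a_n * b_n, truncated to the shorter coefficient list."""
    return [a * b for a, b in zip(f_coeffs, g_coeffs)]

N = 10
a, b = 0.5, 0.25
f = [a**n for n in range(N)]   # coefficients of 1/(1 - a z)
g = [b**n for n in range(N)]   # coefficients of 1/(1 - b z)
h = hadamard_coeffs(f, g)

# The Hadamard product of the two geometric series is again geometric,
# with ratio a*b, i.e. the coefficients of 1/(1 - a b z).
expected = [(a * b)**n for n in range(N)]
assert all(abs(x - y) < 1e-12 for x, y in zip(h, expected))
```

This identity is the prototype behind the Hadamard multiplication theorem: properties of f and g on their domains transfer to the coefficientwise product.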
Stress represents a significant problem for Western societies, inducing costs as high as 3-4% of the European gross national products, a burden that is continually increasing (WHO Briefing, EUR/04/5047810/B6). The classical stress response system is the hypothalamic-pituitary-adrenal (HPA) axis, which acts to restore homeostasis after disturbances. Two major components within the HPA axis system are the glucocorticoid receptor (GR) and the mineralocorticoid receptor (MR). Cortisol, released from the adrenal glands at the end of the HPA axis, binds to MRs and, with a 10-fold lower affinity, to GRs. Both impairment of the HPA axis and an imbalance in the MR/GR ratio enhance the risk for infection, inflammation, and stress-related psychiatric disorders. Major depressive disorder (MDD) is characterised by a variety of symptoms; however, one of the most consistent findings is hyperactivity of the HPA axis. This may be the result of lower numbers or reduced activity of GRs and MRs. The GR gene consists of multiple alternative first exons, resulting in different GR mRNA transcripts, whereas for the MR only two first exons are known to date. Both the human GR promoter 1F and the homologous rat Gr promoter 1.7 seem to be susceptible to methylation during stressful early life events, resulting in lower 1F/1.7 transcript levels. It was proposed that this is due to methylation of an NGFI-A binding site in both the rat promoter 1.7 and the human promoter 1F. The research presented in this thesis was undertaken to determine the differential expression and methylation patterns of GR and MR variants in multiple areas of the limbic system in the healthy and the depressed human brain. Furthermore, the transcriptional control of the GR transcript 1F was investigated, as expression changes of this transcript were associated with MDD, childhood abuse, and early life stress.
The role of NGFI-A and several other transcription factors in 1F regulation was studied in vitro, and the effect of Ngfi-a overexpression on the rat Gr promoter 1.7 was studied in vivo. The susceptibility of several GR promoters to epigenetic programming was investigated in MDD. In addition, changes in methylation levels were determined in response to a single acute stressor in rodents. Our results showed that GR and MR first exon transcripts are differentially expressed in the human brain, but that this is not due to epigenetic programming. We showed that NGFI-A has no effect on endogenous 1F/1.7 expression in vitro or in vivo. We provide evidence that the transcription factor E2F1 is a major element in the transcriptional complex necessary to drive the expression of GR 1F transcripts. In rats, highly individual methylation patterns in the paraventricular nucleus of the hypothalamus (PVN) suggest that these are not related to the stressor but can rather be interpreted as pre-existing differences. In contrast, the hippocampus showed a much more uniform epigenetic status, yet it is susceptible to epigenetic modification even after a single acute stress, suggesting a differential "state" versus "trait" regulation of the GR gene in different brain regions. The results of this thesis have given further insight into the complex transcriptional regulation of GR and MR first exons in health and disease. Epigenetic programming of GR promoters seems to be involved in early life stress and in acute stress in adult rats; however, the susceptibility to methylation in response to stress seems to vary between brain regions.
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction to ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding incongruently to a univalent attitude object leads to ambivalence as measured via mouse tracking but not as measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also uses an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence; additionally, the effect of the block size of same-valence statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence as measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.