My study attempts to illustrate the generic development of the family novel in the second half of the twentieth century. At its beginning stands a preliminary classification of the various types of family fiction as they are referred to in secondary literature, followed by a definition of the family novel proper. With its microscopic approach to novels featuring the American family and its (post-)postmodern variations, my study marks a first step into as yet uncharted territory. Assuming that the family novel has emerged as a result of the twentieth century's emphasis on the modern nuclear family, focuses on the family as a gestalt rather than on a single protagonist, and is concerned with issues of social and cultural significance, this study examines how the family, its forms and its conflicts are functionalized for the respective author's cultural critique. From post-war to post-millennium, family novelists have sketched the American family in various precarious conditions, and their texts are critical assessments of contemporary socioeconomic and cultural conditions. My close reading of John Cheever's The Wapshot Chronicle (1957), Don DeLillo's White Noise (1985) and Jonathan Franzen's The Corrections (2001) intends to reveal shared values as well as significant differences on a formal as well as on a thematic level. As my examination of the respective novels shows, authors react to social and cultural change with new functionalizations of the family in fiction. Contrary to the general assumption of literary criticism, family novels do not approach new cultural developments in a conventional or even traditionalist manner. A comparison of White Noise with The Wapshot Chronicle demonstrates that DeLillo's postmodern family novel transcends the rather nostalgic perspective of Cheever's 1950s work.
Similarly, Jonathan Franzen's fin de millennium family novel The Corrections holds a post-postmodern position, which can be aptly described by Franzen's own term 'tragical realism'. The significant changes and developments of the family novel in the past five decades demonstrate the need for a continuous reassessment of the genre, and in this respect, my study is merely a beginning.
Objective: Only 20-25% of the variance for the two to four-fold increased risk of developing breast cancer among women with family histories of the disease can be explained by known gene mutations. Other factors must exist. Here, a familial breast cancer model is proposed in which overestimation of risk, general distress, and cancer-specific distress constitute the type of background stress sufficient to increase unrelated acute stress reactivity in women at familial risk for breast cancer. Furthermore, these stress reactions are thought to be associated with central adiposity, an independent well-established risk factor for breast cancer. Hence, stress through its hormonal correlates and possible associations with central adiposity may play a crucial role in the etiology of breast cancer in women at familial risk for the disease. Methods: Participants were 215 healthy working women with first-degree relatives diagnosed before (high familial risk) or after age 50 (low familial risk), or without breast cancer in first-degree relatives (no familial risk). Participants completed self-report measures of perceived lifetime breast cancer risk, intrusive thoughts and avoidance about breast cancer (Impact of Event Scale), negative affect (Profile of Mood States), and general distress (Brief Symptom Inventory). Anthropometric measurements were taken. Urine samples during work, home, and sleep were collected for assessment of cortisol responses in the naturalistic setting where work was conceptualized as the stressful time of the day. Results: A series of analyses indicated a gradient increase of cortisol levels in response to the work environment from no, low, to high familial risk of breast cancer. When adding breast cancer intrusions to the model with familial risk status predicting work cortisol levels, significant intrusion effects emerged rendering the familial risk group non-significant. 
However, due to a lack of association between intrusions and cortisol in the low and high familial risk groups separately, as well as a significant difference between the low and high familial risk groups on intrusions, but not on work cortisol levels, full mediation of familial risk group effects on work cortisol by intrusions could not be established. A separate analysis indicated increased levels of central but not general adiposity in women at high familial risk of breast cancer compared to the low and no risk groups. There were no significant associations between central adiposity and cortisol excretion. Conclusion: A hyperactive hypothalamic-pituitary-adrenal (HPA) axis with a more pronounced excretion of its end product cortisol, as well as elevated levels of central but not overall adiposity, in women at high familial risk for breast cancer may indicate an increased health risk that extends beyond the increased breast cancer risk of these women.
The startle response in psychophysiological research: modulating effects of contextual parameters
(2013)
Startle reactions are fast, reflexive, and defensive responses which protect the body from injury in the face of imminent danger. The underlying reflex is basic and can be found in many species. Even though it consists of only a few synapses located in the brain stem, the startle reflex offers a valuable research method for human affective, cognitive, and psychological research. This is because of moderating effects of higher mental processes such as attention and emotion on the response magnitude: affective foreground stimulation and directed attention are validated paradigms in startle-related research. This work presents findings from three independent research studies that deal with (1) the application of the established "affective modulation of startle" paradigm to the novel setting of attractiveness and human mating preferences, (2) the question of how different components of the startle response are affected by a physiological stressor, and (3) how startle stimuli affect visual attention towards emotional stimuli. While the first two studies treat the startle response as a dependent variable by measuring its response magnitude, the third study uses startle stimuli as an experimental manipulation and investigates their potential effects on a behavioural measure. The first chapter of this thesis describes the basic mechanisms of the startle response as well as the body of research that lays the foundation of startle research in psychophysiology. It provides the rationale for the presented studies and offers a short summary of the obtained results. Chapters two to four present primary research articles that are published or in press. At the beginning of each chapter the contribution of all authors is explained. The references for all chapters are listed at the end of this thesis.
The overall scope of this thesis is to show how the human startle response is modulated by a variety of factors, such as the attractiveness of a potential mating partner or the exposure to a stressor. In conclusion, the magnitude of the startle response can serve as a measure for such psychological states and processes. Beyond the involuntary, physiological startle reflex, startle stimuli also affect intentional behavioural responses, which we could demonstrate for eye movements in a visual attention paradigm.
The digital progress of recent decades rests to a large extent on the innovative power of young, aspiring companies. While these companies are united by their high degree of innovativeness, they simultaneously face a substantial need for financial resources in order to put their planned innovation and growth targets into practice. Since such companies can often show few or no assets, revenues, or profitability, raising external capital is frequently difficult or even impossible. Out of this circumstance, the business model of risk financing, so-called venture capital, emerged in the middle of the twentieth century. Venture capitalists invest in promising young companies, support them in their growth, and after a fixed period sell their shares, ideally at a multiple of their original value. Numerous young companies apply for investments from these venture capitalists, but only a very small number receive them. To identify the most promising companies, the investors screen the applications against various criteria, so that numerous companies are already eliminated from the pool of potential investment targets in the first step of the application phase. Prior research discusses which criteria move investors to invest. Building on this, this dissertation pursues the goal of gaining a deeper understanding of the factors that influence investors' decision-making. In particular, it examines how personal characteristics of the investors, as well as of the company founders, affect the investment decision. These investigations are complemented by an analysis of the effect of founders' digital presence on the decision-making of venture capitalists.
As a second goal, this dissertation seeks insight into the effects of a successful investment on the founder. In total, this dissertation comprises four studies, which are described in more detail below.
Chapter 2 examines to what extent certain human-capital characteristics of investors affect their decision behavior. Drawing on preliminary interviews and a literature review, seven criteria were identified that venture capital investors use in their decision-making. Subsequently, 229 investors took part in a conjoint experiment, which made it possible to show how important the respective criteria are for the decision. Of particular interest is how the importance of the criteria varies with the investors' human-capital characteristics. The results show that the importance of the criteria differs with the investors' educational background and experience. For example, investors with a higher educational degree and investors with entrepreneurial experience place considerably more weight on the international scalability of the companies. The importance of the criteria also differs with the field of professional training: investors trained in the natural sciences, for instance, place a markedly stronger focus on the added value of the product or service. Moreover, investors with more investment experience rate the experience of the management team as considerably more important than investors with less investment experience. These findings enable founders to target their applications for venture capital financing more precisely, for example by analyzing the professional background of potential investors and adapting the application documents accordingly, such as by placing stronger emphasis on particularly relevant criteria.
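As a side note on method: in a conjoint experiment, the relative importance of each decision criterion is typically derived from the range of its estimated part-worth utilities. The sketch below illustrates this standard post-processing step; the criterion names and utility values are invented for illustration, not the study's actual estimates.

```python
def attribute_importance(partworths):
    """Relative importance of each attribute from its part-worth
    utilities: the attribute's utility range divided by the sum of
    all attributes' ranges (standard conjoint post-processing)."""
    ranges = {a: max(u) - min(u) for a, u in partworths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

# Hypothetical part-worths for three of the seven criteria
pw = {
    "scalability": [0.0, 0.4, 0.9],
    "team_experience": [0.0, 0.3, 0.5],
    "revenue_growth": [0.0, 0.2, 0.6],
}
print(attribute_importance(pw))
```

The resulting shares sum to one, so they can be compared directly across investor subgroups, which is exactly the kind of comparison the chapter reports.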
The study presented in Chapter 3 uses the data of the same conjoint experiment as Chapter 2, but focuses on the difference between investors from the USA and investors from continental Europe. For this purpose, subsamples were created with 128 experiment participants located in the USA and 302 in continental Europe. The analysis of the data shows that US investors, compared to investors in continental Europe, place a significantly stronger focus on the companies' revenue growth, while continental European investors place a markedly stronger focus on the companies' international scalability. To better interpret the results of the analysis, they were subsequently discussed with four American and seven European investors. The European investors confirmed the importance of high international scalability, owing to the sometimes small size of European countries and the resulting pressure to be able to scale internationally quickly in order to reach satisfactory growth rates. The comparatively weaker focus on revenue growth in Europe was explained by a lack of funds for rapid expansion, while the strong focus of US investors on revenue growth was attributed to the higher propensity for an IPO in the USA, where high revenues serve as a value driver. The results of this chapter put founders in a position to align their applications more closely with the most important criteria of potential investors and thus increase the probability of a successful investment decision.
Furthermore, the results of this chapter offer investors who participate in cross-border syndicated investments the opportunity to better understand the preferences of the other investors and to align the investment criteria more closely with potential partners.
Chapter 4 examines whether certain character traits of the so-called Schumpeterian entrepreneur influence the probability of a second venture capital investment. For this purpose, messages posted by founders on Twitter were used, together with information on investment rounds available on the platform Crunchbase. In total, more than two million tweets by 3,313 founders were analyzed with the help of text analysis software. The results of the study suggest that some traits typical of Schumpeterian founders increase the chances of a further investment, while others have no effect or a negative one. Founders who display strong optimism and their entrepreneurial vision on Twitter increase their chances of a second venture capital financing, whereas an excessive striving for achievement reduces them. These findings are of high practical relevance for founders in search of venture capital: they can manage their virtual appearance ("digital identity") in a more targeted way in order to increase the probability of a further investment.
Finally, Chapter 5 examines how the founders' digital identity changes after they have received a successful venture capital investment. For this purpose, both the Twitter data and the Crunchbase data collected for the study in Chapter 4 were used. Employing text analysis and panel data regressions, the tweets of 2,094 founders before and after receiving the investment were examined. The results show that receiving a venture capital investment increases the founders' self-confidence, positive emotions, professionalization, and leadership qualities. At the same time, however, the authenticity of the messages written by the founders decreases. Using interaction effects, it can further be shown that the increase in self-confidence is positively moderated by the investor's reputation, while the amount of the investment negatively moderates authenticity. These insights enable investors to better understand the founders' development process after a successful investment, putting them in a position to better monitor their founders' activities on social media platforms and, if necessary, support their adjustment.
The studies presented in Chapters 2 to 5 thus contribute to a better understanding of decision-making in the venture capital process. The current state of research is extended by insights concerning the influence of the characteristics of both the investors and the founders, and it is also shown how the investment can affect the founder himself. The implications of the results, as well as limitations and opportunities for future research, are described in more detail in Chapter 6. Since the methods and data used in this dissertation have only been employed in, or indeed available to, venture capital research for a few years, the dissertation offers a foundation for further research.
Stress has been considered one of the most relevant factors promoting aggressive behavior. Animal and human pharmacological studies revealed the stress hormones corticosterone in rodents and cortisol in humans to constitute a particularly important neuroendocrine determinant in facilitating aggression and, beyond that, presumably in its continuation and escalation. Moreover, cortisol-induced alterations of social information processing, as well as of cognitive control processes, have been hypothesized as possible influencing factors in the stress-aggression link. So far, the immediate impact of a preceding stressor, and thereby of a stress-induced rise in cortisol, on aggressive behavior, as well as on higher-order cognitive control processes and social information processing in this context, has gone mostly unheeded. The present thesis aimed to extend the existing findings on stress and aggression in this regard. For this purpose, two psychophysiological studies with healthy adults were carried out, both using the socially evaluated cold pressor test as an acute stress induction. In addition to behavioral data and subjective reports, event-related potentials were measured and acute levels of salivary cortisol were collected, on the basis of which stressed participants were divided into cortisol responders and nonresponders. Study 1 examined the impact of an acute stress-induced cortisol increase on inhibitory control and its neural correlates. Forty-one male participants were randomly assigned to the stress procedure or to a non-stressful control condition. Before and afterwards, participants performed a Go/Nogo task with visual letters to measure response inhibition. The effect of an acute stress-induced cortisol increase on covert and overt aggressive behavior, and on the processing of provoking stimuli within the aggressive encounter, was investigated in Study 2.
Moreover, this experiment examined the combined impact of stress and aggression on ensuing affective information processing. Seventy-one male and female participants were either exposed to the stress or to the control condition. Following this, half of each group received high or low levels of provocation during the Taylor Aggression Paradigm. At the end of the experiment, a passive viewing paradigm with affective pictures depicting positive, negative, or aggressive scenes with either humans or objects was realized. The results revealed that men were not affected by a stress-induced rise in cortisol on a behavioral level, showing neither impaired response inhibition nor enhanced aggressive behavior. In contrast, women showed enhanced overt and covert aggressive behavior under a surge of endogenous cortisol, confirming previous results, albeit only in the case of high provocation and only up to the level of the control group. Unlike this rather moderate impact on behavior, cortisol showed a distinct impact on the neural correlates of information processing across inhibitory control, aggression-eliciting stimuli, and emotional pictures for both men and women. Here, the stress-induced increase in cortisol resulted in enhanced N2 amplitudes to Go stimuli, whereas P2 amplitudes to both stimulus types and N2 amplitudes to Nogo stimuli remained unchanged, indicating an overcorrection and heightened caution of response activation in favor of successful inhibitory control. Stress altered the processing of aggression-eliciting stimuli during the aggressive encounter in a complex manner that differed between women and men. Under increased cortisol levels, the frontal or parietal P3 amplitude patterns were either diminished or reversed in the case of high provocation compared to the control group and to cortisol nonresponders, indicating a desensitization towards aggression-eliciting stimuli in men, but a more elaborate processing of these stimuli in women.
Moreover, stress-induced cortisol and provocation jointly altered subsequent affective information processing at early as well as later stages of the information processing stream. Again, increased levels of cortisol led to oppositely directed amplitudes in the case of high provocation relative to the control group and cortisol nonresponders, with enhanced N2 amplitudes in men and reduced P3 and LPP amplitudes in men and women for all affective pictures, suggesting an initially enhanced emotional reactivity in men, but ensuing reduced motivational attention and enhanced emotion regulation in both men and women. Taken together, the present findings confirm the relevance of HPA activity in the elicitation and persistence of human aggressive behavior. Moreover, they reveal the significance of compensatory and emotion-regulatory strategies and mechanisms in response to stress and provocation, underscoring the relevance of social information and cognitive control processes. Still, more research is needed to clarify the conditions that lead to the facilitation of aggression and the compensatory mechanisms by which it is prevented.
In the splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. either X -> G has a continuous linear left inverse or G -> E has a continuous linear right inverse. In the thesis at hand we deal with the splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions, and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or the strong dual of one, by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces, including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular yields a positive answer to Grothendieck's problème des topologies. In addition, we give a complete characterization of the vanishing of the first derived functor of proj for tensorized PLH spectra if one of the PLH spaces E and X meets some nuclearity assumptions.
To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces, as well as to the smooth and square integrable parameter dependence problem. In conclusion, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.
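The splitting notion used above admits a compact statement; the map labels j and q below are supplied for illustration. A short exact sequence of locally convex spaces

```latex
0 \longrightarrow X \xrightarrow{\;j\;} G \xrightarrow{\;q\;} E \longrightarrow 0
```

splits if the embedding $j$ admits a continuous linear left inverse $L\colon G \to X$ with $L \circ j = \mathrm{id}_X$, or equivalently if the quotient map $q$ admits a continuous linear right inverse $R\colon E \to G$ with $q \circ R = \mathrm{id}_E$; in the topologically exact setting either one-sided inverse yields the other, and $G$ is then isomorphic to the topological direct sum $X \oplus E$.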
Chapter 2: Using data from the German Socio-Economic Panel, this study examines the relationship between immigrant residential segregation and immigrants' satisfaction with the neighborhood. The estimates show that immigrants living in segregated areas are less satisfied with the neighborhood. This is consistent with the hypothesis that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Our result holds true even when controlling for other influences such as household income and quality of the dwelling. It also holds true in fixed effects estimates that account for unobserved time-invariant influences. Chapter 3: Using survey data from the German Socio-Economic Panel, this study shows that immigrants living in segregated residential areas are more likely to report discrimination because of their ethnic background. This applies both to segregated areas where most neighbors are immigrants from the same country of origin as the surveyed person and to segregated areas where most neighbors are immigrants from other countries of origin. The results suggest that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Chapter 4: Using data from the German Socio-Economic Panel (SOEP) and administrative data from 1996 to 2009, I investigate the question whether or not right-wing extremism of German residents is affected by the ethnic concentration of foreigners living in the same residential area. My results show a positive but insignificant relationship between ethnic concentration at the county level and the probability of extreme right-wing voting behavior for West Germany. However, due to potential endogeneity issues, I additionally instrument the share of foreigners in a county with the share of foreigners in each federal state (following an approach of Dustmann/Preston 2001).
I find evidence for the interethnic contact theory, which predicts a negative relationship between the foreigners' share and right-wing voting. Moreover, I analyze the moderating role of education and the influence of cultural traits on this relationship. Chapter 5: Using data from the Socio-Economic Panel from 1998 to 2009 and administrative data on regional ethnic diversity, I show that ethnic diversity significantly inhibits people's political interest and participation in political organizations in West Germany. People seem to isolate themselves from political participation if exposed to more ethnic diversity, which is particularly relevant with respect to the ongoing integration process of the European Union and the increasing transfer of legislative power from the national to the European level. The results are robust if an instrumental variable strategy suggested by Dustmann and Preston (2001) is used to take into account that ethnic diversity measured at a local spatial level could be endogenous due to residential sorting. Interestingly, participation in non-political organizations is positively affected by ethnic diversity if selection bias is corrected for.
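In the single-instrument, single-regressor case, the instrumental-variable strategy described above (instrumenting the county-level share of foreigners with the state-level share) reduces to the Wald/IV estimator. The sketch below illustrates that estimator on made-up numbers; it is not a re-implementation of the chapters' actual estimation, which controls for further covariates.

```python
def iv_slope(z, x, y):
    """Wald/IV estimator with a single instrument z for an
    endogenous regressor x: beta = cov(z, y) / cov(z, x)."""
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return cov_zy / cov_zx

# Toy data: the regressor tracks the instrument and y = 2*x,
# so the estimated slope is 2.
z = [0.0, 1.0, 2.0, 3.0]
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
print(iv_slope(z, x, y))  # → 2.0
```

The estimator is consistent as long as the instrument is correlated with the regressor but uncorrelated with the error term, which is precisely the identifying assumption behind the state-level share instrument.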
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. In later work, this analysis could be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example illustrating the practical use of probabilities for scan statistics, which arises in epidemiology, is the following: let n patients arrive at a clinic in d = 365 days, each patient with probability 1/d on each of these d days and all patients independently of each other. Knowing the probability that there exist 3 adjacent days in which, taken together, more than k patients arrive helps to decide, after observing data, whether there is a cluster which we would not expect to have occurred randomly and for which we therefore suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we begin an analysis of the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density, we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables.
To assess how sharp the derived accuracy bounds are, they can be compared in examples with rigorous upper and lower bounds obtained by interval-arithmetic computations.
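The epidemiological example lends itself to a quick plausibility check by simulation. The sketch below brute-forces the scan-statistic probability by Monte Carlo rather than via the Markov-increment algorithm discussed above; the function name and the parameters in the final line are illustrative.

```python
import random

def scan_exceed_prob(n, d, window, k, trials=5_000, seed=1):
    """Monte Carlo estimate of P(some `window` adjacent days
    receive more than k of the n patients in total), under the
    multinomial model: each patient independently and uniformly
    picks one of d days."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        counts = [0] * d
        for _ in range(n):
            counts[rng.randrange(d)] += 1
        # sliding-window sums over `window` adjacent days
        s = sum(counts[:window])
        best = s
        for i in range(window, d):
            s += counts[i] - counts[i - window]
            best = max(best, s)
        if best > k:
            hits += 1
    return hits / trials

# Example from the text: n patients over d = 365 days, 3 adjacent days.
print(scan_exceed_prob(n=100, d=365, window=3, k=5, trials=2_000))
```

Such simulated values are too noisy to verify the accuracy bounds themselves, but they serve as a sanity check on the exact rectangle probabilities produced by the generalized Corrado algorithm.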
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a "designated statistic" that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the thus obtained family of confidence regions, indexed by their confidence level, is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is eventually applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations 1. between the sets of lower confidence bounds, 2. between the sets of pairs of comparable lower confidence bounds, and 3. between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
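For a concrete special case: with a binomial sample and the success count as designated statistic, the Buehler lower confidence bound for the success probability reduces, to my understanding, to the classical Clopper-Pearson lower bound, i.e. the largest p at which observing at least k successes still has probability at most alpha. A minimal sketch computing that bound by bisection (function names are mine, not the thesis's):

```python
from math import comb

def upper_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def buehler_lower_bound(n, k, alpha=0.05, tol=1e-10):
    """Lower (1 - alpha) confidence bound for p after observing k
    successes in n trials, with the success count as designated
    statistic; in this case it coincides with the Clopper-Pearson
    lower bound, found here by bisection on the monotone tail."""
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0  # upper_tail(lo) < alpha <= upper_tail(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if upper_tail(n, k, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

print(buehler_lower_bound(20, 18, alpha=0.05))
```

The monotonicity of the upper tail in p is what makes the bisection valid, and it mirrors the monotonicity requirement the Buehler construction imposes on the confidence region.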
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influences of person-related variables. Furthermore, the transfer of knowledge is another crucial domain in educational research. I investigated these three processes through secondary analyses in this dissertation. Secondary analyses do justice to the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: The first meta-analysis reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second meta-analysis focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third meta-analysis deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer. Knowledge types are interrelated; motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
As the oldest genre in New Zealand literature written in English, poetry has always played a significant role in the country's literary debate and was generally considered an indicator of the country's cultural advancement. Throughout the 20th century, the question of home, of where it is and what it entails, became a crucial issue in discussing a distinct New Zealand sense of identity and in strengthening its independent cultural status. The establishment of a national sense of home was thus of primary concern, and poetry was regarded as the cultural marker of New Zealand's independence as a nation. In this politically motivated cultural debate, the writing of women was only considered on the margin, largely because their writing was considered too personal and too intimately tied to daily life, especially domestic life, to contribute to a larger cultural statement. Such criticism built on gender role stereotypes, such as women's roles as mothers and housewives in the 1950s. The strong alignment of women with the home environment is not coincidental but a construct that was, and still is, predominantly shaped by white patriarchal ideology. However, it is in particular women's (both Pakeha and Maori) thorough investigation of the concept of home from within New Zealand's society that bears the potential for revealing a more profound relationship between actual social reality and the poetic imagination. The close reading of selected poems by Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm in this thesis reveals the ways in which New Zealand women of different backgrounds subvert, transcend and deconstruct such paradigms through their poetic imagination. Bethell, Stanley, Edmond and Sturm position their concepts of home at the crossroads between the public and the private realm.
Their poems explore the correspondence between personal and national concerns and assess daily life against the backdrop of New Zealand's social development. Such complex socio-cultural interdependence has not received sufficient attention in literary criticism, largely because a suitable approach to capturing the complexity of this kind of interconnectedness was lacking. With Spaces of Overlap and Spaces of Mediation this thesis presents two critical models that seek to break the tight critical frames in the assessment of poetic concepts of home. Both notions are based on a contextualised approach to the poetic imagination in relation to social reality and seek to carve out the concept of home in its interconnected patterns. Eventually, this approach helps to comprehend the ways in which women's intimate negotiations of home translate into moments of cultural insight and transcend the boundaries of the individual poets' concerns. The focus on women's (re)negotiations of home counteracts the traditionally male perspective on New Zealand poetry and provides a more comprehensive picture of New Zealand's cultural fabric. In highlighting the works of Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm, this thesis not only emphasises their individual achievements but also makes clear that a traditional line of New Zealand women's poetry exists that has been neglected for far too long in the estimation of New Zealand's literary history.
Estimation, and therefore prediction -- both in traditional statistics and machine learning -- often encounters problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of this additional noise on the estimation procedure depend on the specific probability law of the subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein: First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
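The linear special case of such survey-weighted (pseudo-maximum-likelihood) estimation can be sketched in a few lines. The mixed-model algorithm of the thesis is considerably more involved; the following only illustrates how sampling weights enter the estimating equations, with illustrative names and toy data.

```python
import numpy as np

def survey_wls(X, y, w):
    """Design-weighted least squares: solves (X'WX) b = X'Wy, the linear
    pseudo-maximum-likelihood estimator under sampling weights w."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# toy data: intercept + one regressor, integer sampling weights
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(size=50)
w = rng.integers(1, 5, size=50).astype(float)
beta_w = survey_wls(X, y, w)
```

With integer weights, the estimate coincides with ordinary least squares on a data set in which each observation is replicated according to its weight, which is one intuition for design weighting.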
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in deciding on the reliability of survey data. When the survey design is complex and the number of variables for which an estimated total is required is large, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied here for the first time, both theoretically and empirically.
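As an illustration of the generalized variance function idea, the textbook relvariance model relvar(T) = a + b/T (see e.g. Wolter's treatment) can be fitted across items by ordinary least squares. This generic sketch is not the superpopulation-based synthesis developed in the thesis; names and data are illustrative.

```python
import numpy as np

def fit_gvf(totals, variances):
    """Fit the classical GVF  relvar(T) = a + b / T  across items by OLS,
    where relvar = Var(T_hat) / T^2."""
    relvar = variances / totals ** 2
    X = np.column_stack([np.ones_like(totals), 1.0 / totals])
    a, b = np.linalg.lstsq(X, relvar, rcond=None)[0]
    return a, b

def predict_variance(total, a, b):
    # variance prediction for a new estimated total
    return (a + b / total) * total ** 2

# synthetic totals whose relvariances follow the model exactly
T = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
a0, b0 = 1e-4, 2.0
V = (a0 + b0 / T) * T ** 2
a, b = fit_gvf(T, V)
```

Once fitted on a moderate set of items, the function predicts variances for the many remaining totals without item-specific resampling, which is exactly the computational appeal noted above.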
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
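A minimal self-organizing map training loop may help fix ideas: a grid of code vectors is pulled toward randomly drawn samples, with a Gaussian neighbourhood on the grid that shrinks over time. This is the basic online SOM only, not the Markov-random-field link or the density estimator of the thesis; grid size and schedules are illustrative.

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal online SOM: code vectors on a grid move toward random samples,
    weighted by a shrinking Gaussian neighbourhood around the winner."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.standard_normal((gx * gy, data.shape[1]))       # code vectors
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))         # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                             # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-2                # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # squared grid distances
        w += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - w)
    return w
```

After training, the code vectors quantize the data while neighbouring grid cells represent similar regions of the input space, which is what makes the map useful for visualization and clustering.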
Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines as well as structural military systems, can be represented by means of queueing models. In such models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput or diminishing the overhead, whereas in the absence of a controller the system behavior may become quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes. In this thesis, the essential work consists in designing controlled queueing models and investigating their optimal control properties for applications in modern telecommunication systems, which have to satisfy growing demands for quality of service (QoS). For two types of optimization criteria (the model without penalties and the model with set-up costs), a class of controlled queueing systems is defined. The general queue forming this class is characterized by a Markov Additive Arrival Process and heterogeneous Phase-Type service time distributions. We show that for these queueing systems the structural properties of optimal control policies, e.g. monotonicity properties and threshold structure, are preserved. Moreover, we show that these systems possess specific properties, e.g. the dependence of optimal policies on the arrival and service statistics. In order to use controlled stochastic models in practice, a quick and effective method for finding optimal policies is necessary. We present an iteration algorithm which can be successfully used to find an optimal solution in the case of a large state space.
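The threshold structure of optimal policies can be illustrated on a toy admission-control queue, far simpler than the MAP/PH systems studied here: discounted value iteration on a slotted single-server queue produces a monotone admit/reject rule. All parameters below are illustrative.

```python
import numpy as np

# Toy admission control: states s = 0..N are queue lengths. Per slot, an
# arrival occurs w.p. lam and may be admitted (reward R) or rejected; a
# waiting job departs w.p. mu; each waiting job costs c per slot.
lam, mu, R, c, beta, N = 0.5, 0.6, 5.0, 1.0, 0.95, 10

def after_dep(V, n):
    # expected continuation value after the departure phase at queue length n
    return mu * V[max(n - 1, 0)] + (1 - mu) * V[n]

V = np.zeros(N + 1)
for _ in range(2000):                  # discounted value iteration
    V_new = np.empty_like(V)
    for s in range(N + 1):
        reject = after_dep(V, s)
        accept = R + after_dep(V, s + 1) if s < N else -np.inf
        V_new[s] = -c * s + beta * (lam * max(accept, reject)
                                    + (1 - lam) * after_dep(V, s))
    V = V_new

# Greedy policy (1 = admit, 0 = reject): it comes out as a threshold rule.
policy = [1 if s < N and R + after_dep(V, s + 1) >= after_dep(V, s) else 0
          for s in range(N + 1)]
```

The resulting policy admits arrivals only below a critical queue length, the monotone threshold structure that the thesis establishes for a much richer class of models.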
In this study, candidate loci for periodic catatonia (SCZD10, OMIM #605419) on chromosomes 15q15 and 22q13.33 have been fine-mapped and investigated. Previously, several studies found evidence for a major susceptibility locus on chromosome 15q15 and a further potential locus on 22q13.33, pointing to genetic heterogeneity. Fine mapping was done in our multiplex families through linkage and mutational analysis using genomic markers selected from public databases. Positional candidate genes such as SPRED1 and BRD1, as well as ultra-conserved elements, were investigated by direct sequencing in these families. The results narrow down the susceptibility locus on chromosome 15q14-15q15.1 to a region between markers D15S1042 and D15S968 and exclude SPRED1 and the ultra-conserved elements as susceptibility candidates. Fine mapping in two chromosome 22q13.33-linked families showed that the recombination events place the disease-causing gene in a telomeric ~577 kb interval; investigation of SNP rs138880 revealed an A-allele in the affected person, thereby excluding BRD1 and confirming MLC1 as the candidate gene for periodic catatonia.
Early life adversity (ELA) is associated with a higher risk for diseases in adulthood. Changes in the immune system have been proposed to underlie this association. Although higher levels of inflammation and immunosenescence have been reported, data on cell-specific immune effects are largely absent. In addition, stress systems and health behaviors are altered in ELA, which may contribute to the generation of the "ELA immune phenotype". In this thesis, we have investigated the ELA immune phenotype on a cellular level and whether this is an indirect consequence of changes in behavior or stress reactivity. To address these questions the EpiPath cohort was established, consisting of 115 young adults with or without ELA. ELA participants had experienced separation from their parents in early childhood and were subsequently adopted, which is a standard model for ELA, whereas control participants grew up with their biological parents. At a first visit, blood samples were taken for analysis of epigenetic markers and immune parameters. A selection of the cohort underwent a standardized laboratory stress test (SLST). Endocrine, immune, and cardiovascular parameters were assessed at several time points before and after stress. At a second visit, participants underwent structural clinical interviews and filled out psychological questionnaires. We observed a higher number of activated T cells in ELA, measured by HLA-DR and CD25 expression. Neither cortisol levels nor health-risk behaviors explained the observed group differences. Besides a trend towards higher numbers of CCR4+CXCR3-CCR6+ CD4 T cells in ELA, relative numbers of immune cell subsets in circulation were similar between groups. No difference was observed in telomere length or in methylation levels of age-related CpGs in whole blood. However, we found a higher expression of senescence markers (CD57) on T cells in ELA. In addition, these cells had an increased cytolytic potential. 
A mediation analysis demonstrated that cytomegalovirus infection, an important driving force of immunosenescence, largely accounted for the elevated CD57 expression. The psychological investigations revealed that, after adoption, family conditions appeared to have been similar to those of the controls. However, ELA participants scored higher on a depression index and on chronic stress, and lower on self-esteem. Psychological, endocrine, and cardiovascular parameters responded significantly to the SLST, but were largely similar between the two groups. Only in a smaller subset of groups matched for gender, BMI, and age did the cortisol response seem to be blunted in ELA participants. Although we found small differences in the methylation level of the GR promoter, GR sensitivity and GR mRNA expression levels as well as the expression of the GR target genes FKBP5 and GILZ were similar between groups. Taken together, our data suggest an elevated state of immune activation in ELA, in which particularly T cells are affected. Furthermore, we found higher levels of T cell immunosenescence in ELA. Our data suggest that ELA may increase the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell specific immunosenescence. Importantly, we found no evidence of HPA dysregulation in participants exposed to ELA in the EpiPath cohort. Thus, the observed immune phenotype does not seem to be secondary to alterations in the stress system or to health-risk behaviors, but rather a primary effect of early life programming on immune cells. Longitudinal studies will be necessary to further dissect cause from effect in the development of the ELA immune phenotype.
The influence of the dopamine agonist Ritalin® on performance in a card sorting task involving a monetary reward component was tested in 43 healthy male participants. It was investigated whether Ritalin® would have differential behavioral effects as a function of the participants' parental bonding experiences and the personality variable "Novelty Seeking". When activity and performance accuracy were stimulated by monetary reward, Ritalin® reduced activity in response to reward and added to the reward-induced increase in performance accuracy. However, performance accuracy after the drug challenge was improved only in the low-care participants; in the high-care participants it was, by contrast, impaired. This observation suggests that the successful therapeutic administration of Ritalin® in ADHD may be influenced by early life parental care. Suggesting an association between the personality dimension of "Novelty Seeking" and the dopamine system, high "Novelty Seeking" scores correlated positively with sensitivity to the Ritalin® challenge.
Family firms play a crucial role in the DACH region (Germany, Austria, Switzerland). They are characterized by a long tradition, a strong connection to the region, and a well-established network. However, family firms also face challenges, especially in finding a suitable successor. Wealthy entrepreneurial families are increasingly opting to establish Single Family Offices (SFOs) as a solution to this challenge. An SFO takes on the management and protection of family wealth. Its goal is to secure and grow the wealth over generations. In Germany alone, there are an estimated 350 to 450 SFOs, with 70% of them established after the year 2000. However, research on SFOs is still in its early stages, particularly regarding the role of SFOs as firm owners. This dissertation explores SFOs through four quantitative empirical studies. The first study provides a descriptive overview of 216 SFOs from the DACH region. Findings reveal that SFOs exhibit a preference for investing in established companies and real estate. Notably, only about a third of SFOs engage in investments in start-ups. Moreover, SFOs as a group are heterogeneous. Categorizing them into three groups based on their relationship with the entrepreneurial family and the original family firm reveals significant differences in their asset allocation strategies. Subsequent studies in this dissertation leverage a hand-collected sample of 173 SFO-owned firms from the DACH region, meticulously matched with 684 family-owned firms from the same region. The second study, focusing on financial performance, indicates that SFO-owned firms tend to exhibit comparatively poorer financial performance than family-owned firms. However, when members of the SFO-owning family hold positions on the supervisory or executive board of the firm, financial performance improves notably.
The third study, concerning cash holdings, reveals that SFO-owned firms maintain a higher cash holding ratio compared to family-owned firms. Notably, this effect is magnified when the SFO has divested its initial family firms. Lastly, the fourth study regarding capital structure highlights that SFO-owned firms tend to display a higher long-term debt ratio than family-owned firms. This suggests that SFO-owned firms operate within a trade-off theory framework, like private equity-owned firms. Furthermore, this effect is stronger for SFOs that sold their original family firm. The outcomes of this research are poised to provide entrepreneurial families with a practical guide for effectively managing and leveraging SFOs as a strategic long-term instrument for succession and investment planning.
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. 
A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
This dissertation is dedicated to the analysis of the stability of portfolio risk and the impact of European regulation introducing risk-based classifications for investment funds.
The first paper examines the relationship between portfolio size and the stability of mutual fund risk measures, presenting evidence for economies of scale in risk management. In a unique sample of 338 fund portfolios we find that the volatility of risk numbers decreases for larger funds. This finding holds for dispersion as well as tail risk measures. Further analyses across asset classes provide evidence for the robustness of the effect for balanced and fixed income portfolios. However, a size effect did not emerge for equity funds, suggesting that equity fund managers simply scale up their strategy as they grow. Analyses of the differences in risk stability between tail risk measures and volatilities reveal that smaller funds show higher discrepancies in that respect. In contrast to the majority of prior studies, which are based on ex-post time series risk numbers, this study contributes to the literature by using ex-ante risk numbers based on the actual assets and de facto portfolio data.
The second paper examines the influence of European legislation regarding the risk classification of mutual funds. We conduct analyses on a set of worldwide equity indices and find that a strategy based on the long-term volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), would lead to substantial variations in the exposures required to keep the risk classes, ranging from short phases of very high leverage to long periods of under-investment. In some cases, funds would be forced to migrate to higher risk classes due to limited means of reducing volatilities after crisis events. In other cases they might have to migrate to lower risk classes or increase their leverage to excessive levels. Overall, we find that if the SRRI creates a binding mechanism for fund managers, it will interfere substantially with the core investment strategy and may incur substantial deviations from it. Furthermore, due to the forced migrations, the SRRI degenerates into a passive indicator.
The third paper examines the impact of this volatility-based fund classification on portfolio performance. Using historical data on equity indices we initially find that a strategy based on long-term portfolio volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), yields better Sharpe Ratios (SRs) and Buy-and-Hold Returns (BHRs) for the investment strategies matching the risk classes. Accounting for the Fama-French factors reveals no significant alphas for the vast majority of the strategies. In our simulation study, where volatility was modelled through a GJR(1,1) model, we find no significant difference in mean returns, but significantly lower SRs for the volatility-based strategies. These results were confirmed in robustness checks using alternative models and timeframes. Overall we present evidence suggesting that neither the higher leverage induced by the SRRI nor the potential protection in downside markets pays off on a risk-adjusted basis.
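The volatility-class machinery can be sketched in a few lines: returns are simulated from a GJR(1,1) recursion, and the annualized volatility is mapped to an SRRI class using the commonly cited CESR band limits (0.5, 2, 5, 10, 15 and 25 per cent). All parameter values are illustrative, not those of the study.

```python
import numpy as np

# SRRI volatility bands (annualised volatility in %), the commonly cited CESR limits
BANDS = [0.5, 2.0, 5.0, 10.0, 15.0, 25.0]

def srri_class(ann_vol_pct):
    """Map annualised volatility (in %) to an SRRI risk class 1..7."""
    return 1 + sum(ann_vol_pct >= b for b in BANDS)

def simulate_gjr(n, omega=1e-6, alpha=0.05, gamma=0.1, beta=0.85, seed=0):
    """GJR(1,1): sigma2_t = omega + (alpha + gamma*[r_{t-1}<0]) * r_{t-1}^2
    + beta * sigma2_{t-1}, with Gaussian innovations."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1 - alpha - gamma / 2 - beta)   # unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + (alpha + gamma * (r[t] < 0)) * r[t] ** 2 + beta * sigma2
    return r

r = simulate_gjr(5 * 260)                       # roughly 5 years of daily returns
ann_vol = r.std(ddof=1) * np.sqrt(260) * 100    # annualised volatility in %
cls = srri_class(ann_vol)
```

Re-running the classification on rolling windows of such simulated paths is what exposes the forced class migrations discussed above.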
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as state equation, which exhibits exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, a reduction of the number of state variables can be achieved by employing a reduced order modelling technique. Using the snapshot proper orthogonal decomposition method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation. In fact, often a small number of POD basis functions suffices to obtain a satisfactory level of accuracy in the reduced order solution. However, the small number of degrees of freedom in a POD based reduced order model also constitutes its main weakness for optimal control purposes. Since a single reduced order model is based on the solution of the Navier-Stokes equations for a specified control, it might be an inadequate model when the control (and consequently also the actual corresponding flow behaviour) is altered, implying that the range of validity of a reduced order model is, in general, limited. Thus, one is likely to encounter unreliable reduced order solutions when the control problem is solved on the basis of a single reduced order model. To escape this dilemma, we propose to use a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD based reduced order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced order models during the optimization process, enabling the reduced order models to represent the flow dynamics as altered by the control.
In fact, a rigorous convergence theory for the TRPOD method is obtained, which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves a substantial amount of computational work during the control problem solution, since the original state equation only has to be solved when the model function in the trust-region framework is updated. The optimization process itself is based entirely on reduced order information.
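The snapshot POD step itself is compactly expressed as a thin SVD of the snapshot matrix. The sketch below uses synthetic snapshots and an illustrative energy tolerance; it shows only the basis extraction, not the trust-region update loop of the TRPOD method.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Snapshot POD via thin SVD: columns of `snapshots` are state snapshots;
    keep the leading modes capturing a (1 - tol) fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r], s

# synthetic snapshot matrix dominated by two space-time modes
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
Y = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
     + 0.1 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t)))
basis, s = pod_basis(Y)
```

Projecting the state equation onto the span of these few modes is what yields the low-dimensional reduced order model; when the control changes, the snapshots (and hence the basis) must be regenerated, which is precisely the update the trust-region framework governs.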
Some of the largest firms in the DACH region (Germany, Austria, Switzerland) are (partially) owned by a foundation and/or a family office, such as Aldi, Bosch, or Rolex. Despite their growing importance, prior research neglected to analyze the impact of these intermediaries on the firms they own. This dissertation closes this research gap by contributing to a deeper understanding of two increasingly used family firm succession vehicles, through four empirical quantitative studies. The first study focuses on the heterogeneity in foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, it seems that this negative effect is stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family-offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private equity-owned firms. 
Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation furnish valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
The stress hormone cortisol, as the end-product of the hypothalamic-pituitary-adrenal (HPA) axis, has been found to play a crucial role in the release of aggressive behavior (Kruk et al., 2004; Böhnke et al., 2010). In order to further explore potential mechanisms underlying the relationship between stress and aggression, such as changes in (social) information processing, we conducted two experimental studies that are presented in this thesis. In both studies, acute stress was induced by means of the Socially Evaluated Cold Pressor Test (SECPT) designed by Schwabe et al. (2008). Stressed participants were classified as either cortisol responders or nonresponders depending on their rise in cortisol following the stressor. Moreover, basal HPA axis activity was measured prior to the experimental sessions and EEG was recorded throughout the experiments. The first study dealt with the influence of acute stress on cognitive control processes. 41 healthy male participants were assigned to either the stress condition or the non-stressful control procedure of the SECPT. Before as well as after the stress induction, all participants performed a cued task-switching paradigm in order to measure cognitive control processes. Results revealed a significant influence of acute and basal cortisol levels, respectively, on the motor preparation of the upcoming behavioral response, which was reflected in changes in the magnitude of the terminal Contingent Negative Variation (CNV). In the second study, the effect of acute stress and subsequent social provocation on approach-avoidance motivation was examined. 72 healthy students (36 males, 36 females) took part in the study. They performed an approach-avoidance task, using emotional facial expressions as stimuli, before as well as after the experimental manipulation of acute stress (again via the SECPT) and social provocation realized by means of the Taylor Aggression Paradigm (Taylor, 1967).
In addition to salivary cortisol, testosterone samples were collected at several points in time during the experimental session. Results indicated a positive relationship between acute testosterone levels and the motivation to approach social threat stimuli in highly provoked cortisol responders. Similar results were found when the testosterone-to-cortisol ratio at baseline was taken into account instead of acute testosterone levels. Moreover, brain activity during the approach-avoidance task was significantly influenced by acute stress and social provocation, as reflected in reductions of early (P2) as well as later (P3) ERP components in highly provoked cortisol responders. This may indicate a less accurate, rapid processing of socially relevant stimuli due to an acute increase in cortisol and subsequent social provocation. In conclusion, the two studies presented in this thesis provide evidence for significant changes in information processing due to acute stress, basal cortisol levels and social provocation, suggesting an enhanced preparation for a rapid behavioral response in the sense of a fight-or-flight reaction. These results confirm the model of Kruk et al. (2004), which proposes a mediating role of changed information processing in the stress-aggression link.
Every day we are exposed to a large set of appetitive food cues, mostly of high caloric, high carbohydrate content. Environmental factors like food cue exposure can impact eating behavior by triggering anticipatory endocrine responses and reinforcing the reward value of food. Additionally, it has been shown that eating behavior is largely influenced by neuroendocrine factors. Energy homeostasis is of great importance for survival in all animal species. It is challenged under the state of food deprivation, which is considered to be a metabolic stressor. Interestingly, the systems regulating stress and food intake share neural circuits. Adrenal glucocorticoids, such as cortisol, and the pancreatic hormone insulin have been shown to be crucial for maintaining catabolic and anabolic balance. Cortisol and insulin can cross the blood-brain barrier and interact with receptors distributed throughout the brain, influencing appetite and eating behavior. At the same time, these hormones have an important impact on the stress response. The aim of the current work is to broaden the knowledge on reward-related food cue processing. With that purpose, we studied how food cue processing is influenced by food deprivation in women (in different phases of the menstrual cycle) and men. Furthermore, we investigated the impact of the stress/metabolic hormones insulin and cortisol at neural sites important for energy metabolism and in the processing of visual food cues. Chapter I of this thesis details the underlying mechanisms of the startle response and its application in the investigation of food cue processing. Moreover, it describes the effects of food deprivation and of the stress-metabolic hormones insulin and cortisol on reward-related processing of food cues. It explains the rationale for the studies presented in Chapters II-IV and describes their main findings. A general discussion of the results and recommendations for future research is given.
In the study described in Chapter II, startle methodology was used to study the impact of food deprivation on the processing of reward-related food cues. Women in different phases of the menstrual cycle and men were studied, in order to address potential effects of sex and menstrual cycle. All participants were studied either satiated or food-deprived. Food deprivation provoked an enhanced acoustic startle response (ASR) during foreground presentation of visual food cues. Sex and menstrual cycle did not influence this effect. The startle pattern towards food cues during fasting can be explained by a frustrative nonreward (FNR) effect, driven by the impossibility of consuming the exposed food. Chapter III describes a study which was carried out to explore the central effects of insulin and cortisol, using continuous arterial spin labeling to map cerebral blood flow patterns. Following standardized periods of fasting, male participants received either intranasal insulin, oral cortisol, both, or placebo. Intranasal insulin increased resting regional cerebral blood flow in the putamen and insular cortex, structures that are involved in the regulation of eating behavior. Neither cortisol nor interaction effects were found. These results demonstrate that insulin exerts an action in metabolic centers during resting state which is not affected by glucocorticoids. The study described in Chapter IV uses a similar pharmacological manipulation as the one presented in Chapter III, while assessing the processing of reward-related food cues through the startle paradigm validated in Chapter II. A sample of men was studied during short-term food deprivation. Considering the importance of both cortisol and insulin in glucose metabolism, food pictures were divided by glycemic index. Cortisol administration enhanced the ASR during foreground presentation of "high glycemic" food pictures.
This result suggests that cortisol provokes an increase in the reward value of high glycemic food cues, which is congruent with previous research on stress and food consumption. This thesis lends support to the FNR hypothesis regarding food cues during states of deprivation. Furthermore, it highlights the potential effects of stress-related hormones on metabolism-connected neuronal structures and on the reward-related mechanisms of food cue processing. In a society marked by increased food exposure and availability, alongside increased stress, it is important to better understand the impact of food exposure and its interaction with relevant hormones. This thesis contributes to the knowledge in this field. More research in this direction is needed.
As a target for condemnation, the thematic prevalence of racism in African American novels of satire is not surprising. In order to confront this vice in its shifting manifestations, however, the African American satirist has to employ special techniques. This thesis examines some of these devices as they occur in George Schuyler's Black No More, Charles Wright's The Wig, and Percival Everett's Erasure. Given the reciprocity of target and technique in the satiric context, close attention is paid to how the authors under study locate and interrogate racism in their narratives. In this respect, the significance of anti-essentialist Marxist criticism in Schuyler's Black No More and the author's portrayal of the society of his time as capitalist machinery is examined. While Schuyler is concerned with exposing the general socioeconomic workings of the 1920s from a Marxist perspective, Wright offers the reader insight into how this oppressive machinery psychologically manipulates and corrupts the individual in the historic context of Lyndon B. Johnson's political vision of the Great Society. Everett then elaborates on the epistemological concern which is traceable in Wright's work and addresses the role media representation plays in manufacturing images and rigid categories that shape systematic racism. As such, the present study not only highlights the versatility of satire as a rhetorical secret weapon and thus ventures toward the idiosyncrasies of the African American novel of satire, it also makes an effort to trace the ever-changing face of racial discrimination.
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 provided an investigation of academic boredom's factorial structure alongside correlational and reciprocal relations of different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated that boredom intensity, boredom due to underchallenge, and boredom due to overchallenge are separable, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping regarding boredom reduction.
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting less favorable boredom development compared to girls, even beyond the inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Building Fortress Europe: Economic realism, China, and Europe's investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how, in a traditional bastion of economic liberalism such as Europe, a protectionist tool such as investment screening could be erected so rapidly. Within a few years, Europe went from a position of being highly welcoming towards foreign investment to increasingly implementing controls on it, with the focus on China. How are we to understand this shift in Europe? I posit that Europe's increasingly protectionist shift on inward investment can be fruitfully understood using an economic realist approach, where the introduction of investment screening can be seen as part of a process of 'balancing' China's economic rise and reasserting European competitiveness. China has moved from being the 'workshop of the world' to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A 'balancing' process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between it and China. The introduction of investment screening measures is part of this process.
Academic achievement is a central outcome in educational research, both in and outside higher education; it has direct effects on individuals' professional and financial prospects and a high individual and public return on investment. Theories comprise cognitive as well as non-cognitive influences on achievement. Two examples frequently investigated in empirical research are knowledge (as a cognitive determinant) and stress (as a non-cognitive determinant) of achievement. However, knowledge and stress are not stable, which raises questions as to how temporal dynamics in knowledge on the one hand and stress on the other contribute to achievement. To study these contributions in the present doctoral dissertation, I used meta-analysis, latent profile transition analysis, and latent state-trait analysis. The results support the idea of knowledge acquisition as a cumulative and long-term process that forms the basis for academic achievement, and of conceptual change as an important mechanism for the acquisition of knowledge in higher education. Moreover, the findings suggest that students' stress experiences in higher education are subject to stable, trait-like influences as well as situational and/or interactional, state-like influences, which are differentially related to achievement and health. The results imply that investigating the causal networks between knowledge, stress, and academic achievement is a promising strategy for better understanding academic achievement in higher education. For this purpose, future studies should use longitudinal designs, randomized controlled trials, and meta-analytical techniques. Potential practical applications include taking account of students' prior knowledge in higher education teaching and decreasing stress among higher education students.
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a "digital divide", and consequently it is key to develop methods that take on the most data-intensive processing steps and can be used for the generation and provision of analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with special emphasis on the as yet under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented. The method was applied to the complete Angolan, Zambian, Zimbabwean, Botswanan, and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or by falling back on a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were also included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses, and hence the generation of Level 3 products that realize phenological normalization on the pixel level was pursued.
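The semi-empirical C-correction mentioned above has a standard form in the literature; as a minimal sketch (the band-specific parameter c is normally estimated by regression against the illumination angle, and all values below are purely illustrative):

```python
def c_correction(reflectance, cos_i, cos_sz, c):
    """Semi-empirical C-correction for topographic effects: scale the
    observed reflectance by (cos(solar zenith) + c) / (cos(illumination
    angle) + c). c is a band-specific empirical parameter."""
    return reflectance * (cos_sz + c) / (cos_i + c)

# On flat terrain the illumination angle equals the solar zenith angle,
# so the correction leaves the reflectance unchanged; a shaded slope
# (cos_i < cos_sz) is brightened toward its horizontal-surface value.
flat = c_correction(0.30, cos_i=0.80, cos_sz=0.80, c=0.5)
shaded = c_correction(0.30, cos_i=0.50, cos_sz=0.80, c=0.5)
```

The additive c dampens the overcorrection that a pure cosine correction produces at grazing illumination angles.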
As a prerequisite, coarse-resolution Land Surface Phenology (LSP) was derived in a first step, then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal filter based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e. the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak, end and minimum of season composites, and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land cover class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g. in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive. In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Trier University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
The first chapter, "ECOWAS' capability and potential to overcome constraints to growth and poverty reduction of its member states", discusses the analysis of economic and social barriers to economic growth, one of the main elements of development and poverty reduction strategies in developing countries. Country-specific analysis of growth constraints was introduced after the failure of the one-size-fits-all development strategy of the Washington Consensus, in particular through the "Growth Diagnostics" approach of the Harvard professors Hausmann, Rodrik and Velasco. So far, however, the focus has rested purely on country-specific analyses and strategy development. This work extends the discussion to the regional level by comparing country-specific growth constraints with regional ones, using the Economic Community of West African States (ECOWAS) as an example. This is done by presenting the country-specific growth constraints already identified in studies and strategies for the respective countries, and by evaluating the regional strategies of ECOWAS. In addition, it is determined to what extent measurable results in tackling growth constraints are also achieved at the regional level. It turns out that, despite the economic and social diversity of the region, ECOWAS lists most of the growth constraints identified at the country level and, moreover, even contributes with measurable results to changing the status quo. Extending the Growth Diagnostics approach to the regional level, together with the comparative element of country-specific and regional growth constraints, proves to be a practicable way to review development strategies at the regional level and to develop them further on a subsidiary basis.
The second chapter, "Simplifying evaluation of potential causalities in development projects using Qualitative Comparative Analysis (QCA)", discusses QCA as an evaluation method for development cooperation projects. The focus is on adequately measuring and clearly communicating the impact of development cooperation. This contributes to the intense debate on how the impact of aid in developing countries can be measured and how lessons can be drawn from it for further projects. By applying QCA to a dataset of German development cooperation in Senegal, this method is used in development cooperation practice for the first time. The focus lies on testing specific program theories, i.e., the assumption of particular relationships between deployed resources, external circumstances and project outcomes during project implementation. While such program theories are included in the majority of German development cooperation project proposals, very few of them are ever tested. This work shows QCA to be an efficient method for such testing; a clear confirmation or falsification of these theories is possible with this methodology. In addition, the results of the two simpler forms of QCA, crisp-set and multi-value QCA, can be communicated in an easily comprehensible way. Furthermore, the work shows that QCA also enables the further development of a program theory, although this further development is only efficient to a limited extent and depends strongly on the available data and the data structure. The work thus demonstrates the potential of QCA, particularly for testing program theories, and illustrates its practical application for possible replication.
The third and final chapter of the dissertation, "The regional trade dynamics of Turkey: a panel data gravity model", analyzes Turkish trade in order to show the changes of recent decades and to discuss to what extent Turkey, as an emerging economy, is detaching itself from existing trade structures. This work contributes to the discussion of the shifting power constellations caused by the economic catch-up of emerging countries. In the case of Turkey, this discussion is of additional interest because it also considers the question of whether Turkey is turning away from the Western world, North America and Europe. Using dummy variables for different regions in a gravity model, the Turkish trade data are analyzed, first in aggregate and then by sector, and the changes over different periods of Turkish foreign trade are examined. The results show a regionalization and a diversification of trading partners in Turkey's trade relations; however, this is not accompanied by a turn away from Western trading partners.
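The panel gravity model referred to in the chapter follows the standard log-linear form, log(trade) = b0 + b1 log(GDP_i) + b2 log(GDP_j) + b3 log(distance) + region dummies. A minimal sketch on synthetic, noiseless data (all numbers are invented for illustration; the thesis estimates this on actual Turkish trade panels):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_gdp_i = rng.uniform(4.0, 8.0, n)   # exporter GDP (log), synthetic
log_gdp_j = rng.uniform(4.0, 8.0, n)   # importer GDP (log), synthetic
log_dist = rng.uniform(5.0, 9.0, n)    # bilateral distance (log), synthetic
western = rng.integers(0, 2, n)        # hypothetical region dummy

# Assumed data-generating process: trade rises with both GDPs,
# falls with distance, with a region-specific shift.
log_trade = 1.0 + 0.8 * log_gdp_i + 0.7 * log_gdp_j - 1.1 * log_dist + 0.5 * western

X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist, western])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
# On noiseless data, beta recovers the assumed coefficients exactly.
```

The coefficient on the region dummy is what lets the chapter compare trade intensity with, e.g., Western versus regional partners across periods.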
The present dissertation was developed to emphasize the importance of self-regulatory abilities and to derive novel opportunities to empower self-regulation. From the perspective of PSI (Personality Systems Interactions) theory (Kuhl, 2001), interindividual differences in self-regulation (action vs. state orientation) and their underlying mechanisms are examined in detail. Based on these insights, target-oriented interventions are derived, developed, and scientifically evaluated. The present work comprises a total of four studies which, on the one hand, highlight the advantages of good self-regulation (e.g., enacting difficult intentions under demands; relation to prosocial power motive enactment and well-being). On the other hand, mental contrasting (Oettingen et al., 2001), an established self-regulation method, is examined from a PSI perspective and evaluated as a method to support individuals who struggle with self-regulatory deficits. Further, derived from PSI theory's assumptions, I developed and evaluated a novel method (affective shifting) that aims to support individuals in overcoming self-regulatory deficits. Affective shifting supports the decisive changes in positive affect needed for successful intention enactment (Baumann & Scheffer, 2010). The results of the present dissertation show that self-regulated changes between high and low positive affect are crucial for efficient intention enactment and that methods such as mental contrasting and affective shifting can empower self-regulation to help individuals successfully close the gap between intention and action.
Data used for the purpose of machine learning are often erroneous. In this thesis, p-quasinorms (p < 1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo rule. Numerical experiments comprising 1,100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
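As a rough illustration of why p-quasinorms with p < 1 increase robustness (a hypothetical sketch, not the thesis's exact formulation): an outlier's contribution to the loss grows only like |r|^p, far more slowly than under the squared loss.

```python
import numpy as np

def p_quasinorm_loss(residuals, p=0.5, eps=1e-8):
    """Sum of |r_i|^p for 0 < p < 1; eps smooths the non-differentiable
    point at zero (this smoothing is an assumption of this sketch)."""
    return np.sum((np.abs(residuals) + eps) ** p)

clean = np.array([1.0, 1.0])
outlier = np.array([1.0, 100.0])

# The outlier inflates the p-quasinorm loss far less than the squared loss,
# so a single erroneous sample dominates training much less.
p_ratio = p_quasinorm_loss(outlier) / p_quasinorm_loss(clean)
l2_ratio = np.sum(outlier ** 2) / np.sum(clean ** 2)
```

The price for this robustness is non-convexity and the kink at zero, which is exactly why specialized optimizers such as the proximal point and Frank-Wolfe variants above are needed.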
My dissertation is concerned with contemporary (Anglo-)Canadian immigrant fiction and proposes an analytic grid with which it may be appreciated and compared more adequately. The starting point is the general observation that the works of many Canadian immigrant writers are characterised by a focus on their respective home cultures as well as on their Canadian host culture. Following the ground-breaking work of Northrop Frye, Margaret Atwood and David Staines, the categories of "there" and "here" are suggested in order to reflect this double encoding of Canadian immigrant literature. However, "here" and "there" are more than spatial configurations in that they represent a concern with issues of multiculturalism and postcolonialism, both of which are informed by an emphasis on difference and identity; difference and identity are also what the narratives of M.G. Vassanji, Neil Bissoondath and Rohinton Mistry are preoccupied with. My study sets out to show two things: On the one hand, it attempts to exemplify the complexity and interrelatedness of "there" and "here" in a representative fashion. Hence, in their treatments of difference, M.G. Vassanji, Neil Bissoondath and Rohinton Mistry come up with comparable identity constructions "here" and "there" respectively. On the other hand, special attention is paid to the strategies by which Vassanji, Bissoondath and Mistry construct difference and corroborate their respective understandings of identity.
During pregnancy, every eighth woman is treated with glucocorticoids. Glucocorticoids inhibit cell division but are assumed to accelerate the differentiation of cells. In this review, animal models for the development of the human fetal and neonatal hypothalamic-pituitary-adrenal (HPA) axis are investigated. It can be shown that during pregnancy in humans, as in most of the animal models investigated here, a stress hyporesponsive period (SHRP) is present. In this period, the fetus faces reduced glucocorticoid concentrations, owing to low or absent fetal glucocorticoid synthesis and to reduced exposure to maternal glucocorticoids. During that phase, sensitive maturational processes in the brain are assumed to take place, which could be inhibited by high glucocorticoid concentrations. In the SHRP, the species-specific maximal brain growth spurt and the neurogenesis of the somatosensory cortex take place. The latter is critical for the development of social and communication skills and the secure attachment of mother and child. Glucocorticoid treatment during pregnancy needs to be further investigated, especially during this vulnerable SHRP. The hypothalamus and the pituitary stimulate adrenal glucocorticoid production. Conversely, glucocorticoids can inhibit the synthesis of corticotropin-releasing hormone (CRH) in the hypothalamus and of adrenocorticotropic hormone (ACTH) in the pituitary. Alterations in this negative feedback are assumed, among others, in the development of fibromyalgia, diabetes and factors of the metabolic syndrome. In this work, it is shown that the fetal cortisol surge at the end of gestation is at least partially due to reduced glucocorticoid negative feedback. It is also assumed that androgens are involved in the control of fetal glucocorticoid synthesis. Glucocorticoids seem to prevent masculinization of the female fetus by androgens during sexual gonadal development. In this work, a negative interaction of glucocorticoids and androgens is detectable.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of that are used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield, to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes in the plant that depend on the severity and duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only a mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture.
With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., the CWSI) were most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., the PRI) and SIF, at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings, it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level, based on both TIR key variables, surface temperature and spectral emissivity. However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture will be challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., the CWSI) for the application in irrigation scheduling, (ii) the lack of current TIR satellite missions with suitable spectral and spatial resolution, and (iii) the lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
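The temperature-based CWSI named in finding (i) is commonly computed from canopy temperature against a wet (fully transpiring) and a dry (non-transpiring) baseline; a minimal sketch with illustrative temperatures (the baselines are assumptions here; in practice they are modeled or measured):

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 at the wet (unstressed) baseline,
    1 at the dry (fully stressed) baseline. Temperatures in deg C;
    the baseline values used below are purely illustrative."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

unstressed = cwsi(24.0, t_wet=24.0, t_dry=32.0)   # canopy at wet baseline
stressed = cwsi(32.0, t_wet=24.0, t_dry=32.0)     # canopy at dry baseline
partial = cwsi(28.0, t_wet=24.0, t_dry=32.0)      # intermediate stress
```

The missing-threshold problem listed above amounts to deciding at which CWSI value along this 0-1 scale irrigation should be triggered.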
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, and greenhouse gas emissions, as well as soil erosion and degradation. Some of these externalities can potentially be ameliorated by careful allocation of crops and cropping practices, while at the same time the state of these crops has to be monitored in order to assess food security. Modern satellite-based earth observation can be an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time where fully operational satellites produce freely available imagery, often with less than monthly revisit times at high spatial resolution. At the same time, classification models have been constantly evolving, from distribution-based statistical algorithms, over machine learning, to the now ubiquitous deep learning.
In this environment, we used an explorative approach to advance the state of the art of crop classification. We conducted regional case studies focused on the study region of the Eifelkreis Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenological characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study region by drawing polygons based on high-resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high-resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As an end result, we were able to show that, in spite of the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high-quality maps could be produced for both years, and the regional development of maize cultivation could be quantified.
In the second case study, we used these spatially explicit datasets to link the expansion of biogas-producing units with the extended maize cultivation in the area. In a next step, we overlaid the maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type classification in environmental protection by producing maps of potential soil hazards, which can be used by local stakeholders to reallocate certain crop types to locations with less associated risk.
In our third case study, we used Sentinel-1 data as input imagery and official statistical records as maize reference data, and were able to produce consistent modeling input data for four consecutive years. Using these datasets, we could train and validate different models in spatially and temporally independent random subsets, with the goal of assessing model transferability. We were able to show that state-of-the-art deep learning models such as UNET performed significantly better than conventional models like random forests when the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for modeling robustness and the potential usefulness of deep learning models in building fully operational global crop classification models.
We were able to conclude that the first major barrier for global classification models is the reference data. Since most research in this area is still conducted with local field surveys, and only few countries have access to official agricultural records, more global cooperation is necessary to build harmonized and regionally stratified datasets. The second major barrier is the classification algorithm. While a lot of progress has been made in this area, the current trend of many new types of deep learning models appearing shows great promise, but has not yet consolidated. A lot of research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, such that they can be applied effortlessly by local and global stakeholders.
Floods are hydrological extremes that have enormous environmental, social and economic consequences. The objective of this thesis was to contribute to the implementation of a processing chain that integrates remote sensing information into hydraulic models. Specifically, the aim was to improve water elevation and discharge simulations by assimilating microwave remote sensing-derived flood information into hydraulic models. The first component of the proposed processing chain is a fully automated flood mapping algorithm that enables the automated, objective, and reliable extraction of flood extent from Synthetic Aperture Radar images, providing accurate results in both rural and urban regions. The method operates with minimal data requirements and is efficient in terms of computational time. The map obtained with the developed algorithm is still subject to uncertainties, both introduced by the flood mapping algorithm and inherent in the image itself. In this work, particular attention was given to image uncertainty deriving from speckle. By bootstrapping the original satellite image pixels, several synthetic images were generated and provided as input to the developed flood mapping algorithm. From the analysis performed on the mapping products, speckle uncertainty can be considered a negligible component of the total uncertainty. In the final step of the proposed processing chain, real-event water elevations obtained from satellite observations were assimilated into a hydraulic model with an adapted version of the Particle Filter, modified to work with non-Gaussian distributions of observations. To deal with model structure error and possibly biased observations, a global and a local weight variant of the Particle Filter were tested. Which variant is to be preferred depends on the level of confidence attributed to the observations or to the model. This study also highlighted the complementarity of remote sensing-derived and in-situ data sets.
An accurate binary flood map represents an invaluable product for different end users. However, deriving from this binary map additional hydraulic information, such as water elevations, is a way of enhancing the value of the product itself. The derived data can be assimilated into hydraulic models that will fill the gaps where, for technical reasons, Earth Observation data cannot provide information, also enabling a more accurate and reliable prediction of flooded areas.
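The assimilation step described above can be illustrated with a minimal bootstrap particle filter. The Laplace observation likelihood and the systematic resampling scheme here are illustrative choices to show the mechanism of weighting by a non-Gaussian observation distribution, not the thesis's actual implementation.

```python
import math
import random

def update_particles(particles, observation, scale=0.5, rng=random):
    """One assimilation step: re-weight simulated water elevations by a
    non-Gaussian (Laplace) observation likelihood, then resample."""
    weights = [math.exp(-abs(observation - p) / scale) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Systematic resampling: low-variance draw proportional to the weights.
    n = len(particles)
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    start, j, resampled = rng.random() / n, 0, []
    for i in range(n):
        u = start + i / n
        while j < n - 1 and u > cumulative[j]:
            j += 1
        resampled.append(particles[j])
    return resampled
```

After resampling, the ensemble concentrates on those model states that are consistent with the satellite-derived water elevation, which is the effect exploited in the assimilation chain.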
Even though in most cases time is a good metric for measuring the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load data from main memory. The topic of this thesis is a new metric for measuring the costs of algorithms, called memory distance, which can be seen as an abstraction of this aspect. We will show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
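A closely related, classical way to quantify locality is the LRU stack (reuse) distance. The sketch below is an illustrative simplification of the idea, not the thesis's formal definition of memory distance.

```python
def reuse_distances(trace):
    """For each access in a memory trace, count the distinct addresses
    touched since the previous access to the same address (LRU stack
    distance); None marks a cold first access."""
    stack, distances = [], []  # stack: most recently used address first
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)
            stack.remove(addr)
        else:
            d = None
        distances.append(d)
        stack.insert(0, addr)
    return distances
```

A traversal that revisits data soon (small distances) keeps most accesses in cache, whereas a strided traversal produces large distances and hence cache misses, even when both perform the same number of operations.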
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive, which is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since this factorization would provide a certificate for the matrix being completely positive. This factorization is not only important for membership in the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for a given matrix. The minimal possible number of columns is called the cp-rank of A, and it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank are given in Chapter 2. Moreover, in Chapter 6, we present a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm therefore returns a certificate for the matrix being completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization, which is not necessarily entrywise nonnegative, and extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization.
This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey is given in Chapter 5. Here we show how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we present some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts on subspaces and convex sets are shown. Moreover, we present a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm is given. This result is based on the known convergence of alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we introduce an additional heuristic extension, which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 show that the factorization method is very fast on most instances. In addition, we show how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint. Here again, the method for deriving factorizations of completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1.
Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
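The alternating projections mechanism underlying the factorization algorithm can be illustrated on a toy convex instance: finding a point in the intersection of an affine set and the nonnegative orthant. The actual cp-factorization sets are nonconvex and semialgebraic, so this is only a minimal sketch of the iteration pattern.

```python
def project_orthant(v):
    """Projection onto the nonnegative orthant."""
    return [max(0.0, x) for x in v]

def project_affine(v, n, c):
    """Projection onto the hyperplane {x : <n, x> = c}."""
    dot = sum(a * b for a, b in zip(n, v))
    nn = sum(a * a for a in n)
    t = (dot - c) / nn
    return [x - t * a for x, a in zip(v, n)]

def alternating_projections(v, steps=50):
    """Alternate projections until (hopefully) reaching the intersection
    of the orthant and the hyperplane x_1 + x_2 = 2."""
    for _ in range(steps):
        v = project_affine(project_orthant(v), [1.0, 1.0], 2.0)
    return v
```

For convex sets with nonempty intersection this iteration converges to a common point; the thesis's contribution lies in convergence results for the much harder nonconvex (manifold and semialgebraic) setting.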
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least-squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte Carlo method, which seems to be preferred by most practitioners. Since the Monte Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is proven in this thesis.
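A minimal sketch of such a stochastic predictor-corrector scheme (Euler predictor, trapezoidal correction of the drift) for geometric Brownian motion, used inside a Monte Carlo option pricer. The scheme variant, the model and all parameters are illustrative assumptions, not the thesis's calibrated setup.

```python
import math
import random

def simulate_pc(s0, r, sigma, t, steps, rng):
    """Predictor-corrector path of dS = r*S dt + sigma*S dW."""
    dt = t / steps
    s = s0
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        pred = s + r * s * dt + sigma * s * dw            # Euler predictor
        s = s + 0.5 * (r * s + r * pred) * dt + sigma * s * dw  # corrected drift
    return s

def mc_call_price(s0, k, r, sigma, t, paths, steps=50, seed=0):
    """Monte Carlo price of a European call with the scheme above."""
    rng = random.Random(seed)
    payoff = 0.0
    for _ in range(paths):
        st = simulate_pc(s0, k and s0 and s0, sigma, t, steps, rng) if False else simulate_pc(s0, r, sigma, t, steps, rng)
        payoff += max(st - k, 0.0)
    return math.exp(-r * t) * payoff / paths
```

In a calibration loop, this pricer would be evaluated for many parameter guesses, which is exactly where the adjoint method's parameter-independent gradient cost pays off.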
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that either require extensive knowledge acquisition and modelling effort or an active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, integrating procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning is well suited, as it builds on the premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problem at hand, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
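The retrieve phase can be sketched with a normalized longest-common-subsequence similarity over task sequences. The thesis's measure combines several sequence similarity measures and deviation characteristics, so this stand-in and the example task names are purely illustrative.

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence length."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def sequence_similarity(a, b):
    """Similarity in [0, 1]: shared subsequence relative to the longer trace."""
    if not a and not b:
        return 1.0
    return lcs_length(a, b) / max(len(a), len(b))

def retrieve_most_similar(query, case_base):
    """Retrieve the past workflow trace most similar to the current one."""
    return max(case_base, key=lambda case: sequence_similarity(query, case))
```

With null adaptation, the tasks of the retrieved trace would be proposed as work items directly; with generative adaptation, the retrieved trace would instead drive a modification of the constraint model.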
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential; investigating the transfer to other domains and the applicability in practice is part of future work.
Concluding, the contributions are summarized and research perspectives are pointed out.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers makes it possible to compute market equilibria efficiently.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
This work is concerned with two kinds of objects: regular expressions and finite automata. These formalisms describe regular languages, i.e., sets of strings that share a comparatively simple structure. Such languages - and, in turn, expressions and automata - are used in the description of textual patterns, workflow and dependence modeling, or formal verification. Testing words for membership in any such language can be implemented using a fixed, i.e., finite, amount of memory, which is conveyed by the term finite automaton. In this aspect they differ from more general classes, which require potentially unbounded memory but have the potential to model less regular, i.e., more involved, objects. Besides expressions and automata, there are several further formalisms to describe regular languages. These formalisms are all equivalent, and conversions among them are well known. However, expressions and automata are arguably the notions used most frequently: regular expressions come naturally to humans as a means of expressing patterns, while finite automata translate immediately to efficient data structures. This raises interest in methods to translate between the two notions efficiently. In particular, the direction from expressions to automata, or from human input to machine representation, is of great practical relevance. Probably the most frequent application that involves regular expressions and finite automata is pattern matching in static text and streaming data. Common tools to locate instances of a pattern in a text are the grep application or its (many) derivatives, as well as awk, sed and lex. Notice that these programs accept slightly more general patterns, namely ''POSIX expressions''. Concerning streaming data, regular expressions are nowadays used to specify filter rules in routing hardware. These applications have in common that an input pattern is specified in the form of a regular expression, while the execution applies a finite automaton.
As it turns out, the effort necessary to describe a regular language, i.e., the size of the descriptor, varies with the chosen representation. For example, in the case of regular expressions and finite automata, it is rather easy to see that any regular expression can be converted to a finite automaton whose size is linear in that of the expression. For the converse direction, however, it is known that there are regular languages for which the size of the smallest describing expression is exponential in the size of the smallest describing automaton. This brings us to the subject at the core of the present work: we investigate conversions between expressions and automata and take a closer look at the properties that exert an influence on the relative sizes of these objects. We refer to the aspects involved in these considerations under the titular term of Relative Descriptional Complexity.
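The linear expression-to-automaton bound follows, for instance, from Thompson's construction, where each literal contributes two states and each operator adds at most two more. A small sketch counting states for a postfix regular expression (with '.' as an explicit concatenation operator, an assumption of this sketch) makes the bound concrete.

```python
def thompson_states(postfix):
    """Number of NFA states produced by Thompson's construction for a
    regular expression in postfix notation ('.' = concat, '|', '*')."""
    stack = []
    for ch in postfix:
        if ch == '.':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # fragments joined, no new states
        elif ch == '|':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b + 2)      # new split and join states
        elif ch == '*':
            stack.append(stack.pop() + 2)  # new entry and exit states
        else:
            stack.append(2)              # start/accept pair per literal
    return stack.pop()
```

Since every symbol of the expression contributes at most two states, the resulting automaton has at most 2n states for an expression of length n - the linear direction of the size relationship discussed above.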
High-resolution projections of the future climate are required to assess climate change realistically at a regional scale. This is particularly important for climate change impact studies, since global projections are much too coarse to represent local conditions adequately. A major concern is the change of extreme values in a warming climate, due to their severe impact on the natural environment, socio-economic systems and human health. Regional climate models (RCMs) are able to reproduce much of those local features. Current horizontal resolutions are about 18-25 km, which is still too coarse to directly resolve small-scale processes such as deep convection. For this reason, projections of a possible future climate were simulated in this study with the regional climate model COSMO-CLM at horizontal resolutions of 4.5 km and 1.3 km for the region of Saarland-Lorraine-Luxemburg and Rhineland-Palatinate for the first time. At a horizontal scale of about 1 km, deep convection is treated explicitly, which is expected to improve particularly the simulation of convective summer precipitation, and a better resolved orography is expected to improve near-surface fields such as 2m temperature. These simulations were performed as 10-year time-slice experiments for the present climate (1991-2000), the near future (2041-2050) and the end of the century (2091-2100). The climate change signals of the annual and seasonal means and the change of extremes are analysed with respect to precipitation and 2m temperature, and a possible added value due to the increased resolution is investigated. To assess changes in extremes, extreme indices were applied, and 10- and 20-year return levels were estimated by "peak-over-threshold" models. Since it is generally known that model output of RCMs should not be used directly for climate change impact studies, the precipitation and temperature fields were bias-corrected with several quantile-matching methods.
Among them is a newly developed parametric method which includes an extension for extreme values and is hence expected to improve the correction. In addition, the impact of the bias correction on the climate change signals and on the extreme value statistics was investigated. The results reveal a significant warming of the annual mean by about +1.7 °C until 2041-2050 and +3.7 °C until 2091-2100, but considerably stronger signals of up to +5 °C in summer in the Rhine Valley. Furthermore, the daily variability increases by about +0.8 °C in summer but decreases by about -0.8 °C in winter. Consequently, hot extremes increase moderately until the middle of the century but strongly thereafter, in particular in the Rhine Valley. Cold extremes warm continuously in the complete domain over the next 100 years, but most strongly in mountainous areas. The change signals with regard to annual precipitation are of the order of ±10% but not significant. Significant, however, are a predicted increase of +32% of the seasonal precipitation in autumn until 2041-2050 and a decrease of -28% in summer until 2091-2100. No significant changes were found for days with intensities > 20 mm/day, but the results indicate that extremes with return periods ≤ 2 years increase, as do the frequency and duration of dry periods. The bias corrections amplified positive signals but dampened negative signals and considerably reduced the power of detection. Moreover, absolute values and frequencies of extremes were altered by the correction, but change signals remained approximately constant. The new method outperformed other parametric methods, in particular with regard to extreme value correction and related extreme indices and return levels. Although the bias correction removed systematic errors, it should be treated as an additional layer of uncertainty in climate change studies.
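The basic principle of quantile-matching bias correction can be sketched with a minimal empirical variant: look up a model value's quantile in a model reference sample and return the observed value at the same quantile. The thesis's newly developed method is parametric with an extreme-value extension, so this sketch and its toy data illustrate only the underlying mapping idea.

```python
def empirical_quantile_map(model_ref, obs_ref, value):
    """Map a model value onto the observed distribution: determine its
    empirical quantile in the model reference sample and return the
    observed value at the same quantile."""
    ms = sorted(model_ref)
    os_ = sorted(obs_ref)
    # empirical quantile of `value` within the model reference sample
    rank = sum(1 for m in ms if m <= value)
    q = rank / len(ms)
    idx = min(int(q * len(os_)), len(os_) - 1)
    return os_[idx]
```

A parametric method would instead fit distributions to both samples, which allows correcting values beyond the range of the reference period - the very situation the extreme-value extension addresses.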
Finally, the increased resolution of 1.3 km predominantly improved the representation of temperature fields and extremes in terms of spatial heterogeneity. The benefits for summer precipitation were not as clear due to a severe dry bias in summer, but it could be shown that in principle the onset and intensity of convection improve. This work demonstrates that climate change will have severe impacts in the investigation area and that extremes in particular may change considerably. An increased resolution thereby provides added value to the results. These findings encourage further investigations for other variables, such as near-surface wind, which will become more feasible with growing computing resources. These analyses should, however, be repeated with longer time series, different RCMs and anthropogenic scenarios to determine the robustness and uncertainty of these results more extensively.
The fragmentation of landscapes has an important impact on the conservation of biodiversity. Genetic diversity is an important factor for a population's viability and is influenced by the landscape structure. However, different species with differing ecological demands react rather differently to the same landscape pattern. To address this feature, we studied ten xerothermophilous butterfly species with differing habitat requirements (habitat specialists with low dispersal power in contrast to habitat generalists with low dispersal power and habitat generalists with higher dispersal power). We analysed allozyme loci for about 10 populations (40 individuals each) of each species in a western German study region with adjoining areas in Luxemburg and north-eastern France. The genetic diversity and genetic differentiation between local populations were discussed under conservation genetic aspects. For generalists we detected a more or less panmictic structure, and for species with lower abundance and sedentary behaviour the effect of isolation by distance. On the other hand, the isolation of specialists was mostly reflected by strong genetic differentiation patterns between the investigated populations. Parameters of genetic diversity were mostly significantly higher in generalists compared to specialists. Substructures within populations, as a result of low intra-patch migration, low population densities and high population fluctuations, could be shown as well. Aspects of landscape history (the historical distribution of habitats resulting from the presence of limestone areas), the changes in extensive sheep pasturing and the loss of potential habitats in the last few decades (recent fragmentation) are discussed in light of the genetic data set obtained for the ten butterfly species.
Due to the breath-taking growth of the World Wide Web (WWW), the need for fast and efficient web applications becomes more and more urgent. In this doctoral thesis, the emphasis is on two concrete tasks for improving Internet applications. On the one hand, a major problem of many of today's Internet applications may be described as the performance of the client/server communication: servers often take a long time to respond to a client's request. There are several strategies to overcome this problem of high user-perceived latencies; one of them is to predict future user requests. This way, time-consuming calculations on the server's side can be performed even before the corresponding request is made. Furthermore, in certain situations, the pre-fetching or pre-sending of data might also be appropriate. These ideas are discussed in detail in the second part of this work. On the other hand, a focus is placed on the problem of proposing hyperlinks to improve the quality of rapidly written texts; at first glance, an entirely different problem from predicting client requests. Ultra-modern online authoring systems that provide possibilities to check link consistency and administrate link management should also propose links in order to improve the usefulness of the produced HTML documents. In the third part of this work, we describe a possibility to build a hyperlink-proposal module based on statistical information retrieval from hypertexts. These two problem categories do not seem to have much in common. It is one aim of this work to show that there are certain similar solution strategies for tackling both problems. A closer comparison and an abstraction of both methodologies lead to interesting synergetic effects. For example, advanced strategies to foresee future user requests by modeling time and document aging can also be used to improve the quality of hyperlink proposals.
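Request prediction of this kind can be sketched with a first-order Markov model over page transitions, learned from observed navigation sessions. The page names and the simple maximum-likelihood prediction are illustrative assumptions, not the thesis's prediction model.

```python
from collections import defaultdict, Counter

class RequestPredictor:
    """First-order Markov model: predict the next requested page
    from the current one, based on observed navigation sessions."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, session):
        # Count each observed page-to-page transition in the session.
        for cur, nxt in zip(session, session[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, page):
        # Return the most frequent successor, or None for unseen pages.
        counts = self.transitions.get(page)
        if not counts:
            return None
        return counts.most_common(1)[0][0]
```

A server could use such predictions to pre-compute responses or pre-send data for the most likely next request, which is exactly the latency-hiding strategy discussed above.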
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction into ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding to a univalent attitude object incongruently leads to ambivalence measured via mouse tracking but not ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence. Additionally, the effect of block size of the same valent statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
Evaluative conditioning (EC) refers to changes in liking that are due to the pairing of stimuli, and is one of the effects studied in order to understand the processes of attitude formation. Initially, EC had been conceived of as driven by processes that are unique to the formation of attitudes, and that occur independent of whether or not individuals engage in conscious and effortful propositional processes. However, propositional processes have gained considerable popularity as an explanatory concept for the boundary conditions observed in EC studies, with some authors going as far as to suggest that the evidence implies that EC is driven primarily by propositional processes. In this monograph I present research which questions the validity of this claim, and I discuss theoretical challenges and avenues for future EC research.
Allergic contact dermatitis (ACD) to small molecular weight compounds is a common inflammatory skin reaction. ACD is largely restricted to industrialized countries and has an enormous sociomedical and socioeconomic impact. About 2,800 compounds of the six million chemicals known in our environment are believed to have allergenic and, to a lesser degree, also contact-sensitizing or immunogenic properties causing allergic contact dermatitis. ACD results from T cell responses to harmless, low molecular weight chemicals (haptens) applied to the skin. Haptens are not directly recognized by the cells of the immune system; they need to be presented by subsets of antigen presenting cells. In this regard, epidermal Langerhans cells (LC) and the dendritic cells into which they mature are believed to play a pivotal role in the sensitization process for ACD. LC are able to bind haptens, internalize them, present them to naive T cells, and thereby induce the development of effector T cells; they are so-called professional antigen presenting cells. This process is initiated and maintained by the release of several mediators, which are released by various cells after their contact with the haptens. One of the first proteins secreted into the environment is interleukin (IL)-1β. This cytokine is produced and secreted minutes after an antigen enters the cell. It is commonly believed that the large amounts of this protein and other cytokines, such as granulocyte-macrophage colony-stimulating factor (GM-CSF) and tumor necrosis factor alpha (TNF-α), needed for the initiation and activation of ACD come first from other cells residing in the skin, e.g., keratinocytes, monocytes and macrophages. These cytokines provide the danger signals needed for the activation of the Langerhans cells, which then produce various cytokines themselves via a positive feedback loop.
In addition, other proteins such as chemokines influence the generation of danger signals, the migration and homing of T cells to the local lymph nodes, as well as the recruitment of T cells into the skin. Thus, a small molecular weight compound or hapten needs to be able to induce danger signals in order to become immunogenic. In this study, we investigated whether para-phenylenediamine (PPD), an arylamine and common contact allergen, is able to induce danger signals and thus likely provide the signals needed for the initiation of an immune response [162, 163]. PPD is used as an antioxidant, as an ingredient of hair dyes and an intermediate of dyestuffs, and is found in chemicals used for photographic processing. To date, however, it has not been clearly demonstrated whether PPD itself is a sensitizing agent. Thus, this study aimed at assessing the potential of PPD to provide danger signals by studying IL-1β, TNF-α, and monocyte chemoattractant protein-1 (MCP-1) in human monocytes, peripheral blood mononuclear cells (PBMC) from healthy volunteers, and also in two human monocyte cell lines, namely U937 and THP-1. This study found that PPD dose- and time-dependently decreased the expression and release of three relevant mediators involved in the generation of danger signals. Namely, PPD reduced the mRNA and protein levels of IL-1β, TNF-α, and MCP-1 in primary human monocytes from various donors. These findings were extended and validated by investigations using the cell line U937. The data were highly specific for PPD, and no such results were obtained for its known auto-oxidation product, Bandrowski's base, or for meta-phenylenediamine (MPD) and ortho-phenylenediamine (OPD). Therefore, we can speculate that this effect is likely to depend on the para-substitution. Based on these results we conclude that PPD itself is not able to mount a cascade for the induction of danger signals.
It should be mentioned that it is still possible that PPD induces danger signals for sensitization via other, unknown processes. Therefore, more research focusing on this subject is needed, especially in professional antigen-presenting cells, in order to resolve the still open question of whether PPD itself sensitizes naive T cells or whether PPD is solely an allergen. Independently, we unexpectedly found that PPD, as well as other haptens such as 2,4-dinitrochlorobenzene, nickel sulfate and some terpenoids, clearly increased the expression of CC chemokine receptor 2 (CCR2), the receptor for the chemokine MCP-1. To date, the main importance of the CCR2 receptor stems from results demonstrating that CCR2 is critical for the migration of monocytes after encounter with bacterial lipopolysaccharides; under these circumstances the receptor disappears from the cell surface and is downregulated. An upregulation of CCR2 has not been reported for haptens and deserves further investigation.
Accounting for two-thirds to three-quarters of all companies, family firms are the most common firm type worldwide and employ around 60 percent of all employees, making them of considerable importance for almost all economies. Despite this high practical relevance, academic research took notice of family firms as intriguing research subjects comparatively late. However, the field of family business research has grown eminently over the past two decades and has established itself as a mature research field with a broad thematic scope. In addition to questions relating to corporate governance, family firm succession and the entrepreneurial families themselves, researchers have mainly focused on the impact of family involvement on firms' financial performance and strategy. This dissertation examines the financial performance and capital structure of family firms in various meta-analytical studies. Meta-analysis is a suitable method for summarizing the existing empirical findings of a research field as well as for identifying relevant moderators of a relationship of interest.
First, the dissertation examines the question whether family firms show better financial performance than non-family firms. A replication and extension of the study by O’Boyle et al. (2012) based on 1,095 primary studies reveals a slightly better performance of family firms compared to non-family firms. Investigating the moderating impact of methodological choices in primary studies, the results show that outperformance holds mainly for large and publicly listed firms and with regard to accounting-based performance measures. Concerning country culture, family firms show better performance in individualistic countries and countries with a low power distance.
Furthermore, this dissertation investigates the sensitivity of family firm performance with regard to business cycle fluctuations. Family firms show a pro-cyclical performance pattern, i.e. their relative financial performance compared to non-family firms is better in economically good times. This effect is particularly pronounced in Anglo-American countries and emerging markets.
In the next step, a meta-analytic structural equation model (MASEM) is used to examine the market valuation of public family firms. In this model, profitability and firm strategic choices are used as mediators. On the one hand, family firm status itself does not have an impact on firms' market value. On the other hand, this study finds a positive indirect effect via higher profitability levels and a negative indirect effect via lower R&D intensity. A split consideration of family ownership and management shows that these two effects are mainly driven by family ownership, while family management results in less diversification and internationalization.
Finally, the dissertation examines the capital structure of public family firms. Univariate meta-analyses indicate on average lower leverage ratios in family firms compared to non-family firms. However, there is significant heterogeneity in mean effect sizes across the 45 countries included in the study. The results of a meta-regression reveal that family firms use leverage strategically to secure their controlling position in the firm. While strong creditor protection leads to lower leverage ratios in family firms, strong shareholder protection has the opposite effect.
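The meta-analytic pooling these studies rely on can be illustrated with a minimal sketch of inverse-variance weighting under a fixed-effect model; the study effect sizes and variances below are hypothetical and not taken from the dissertation.

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model).

    effects:   per-study effect sizes (e.g. correlations or d-values)
    variances: per-study sampling variances
    Returns (pooled_effect, pooled_standard_error).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # SE of the pooled estimate
    return pooled, se

# Three hypothetical studies comparing family vs. non-family firm performance
effects = [0.10, 0.05, 0.12]
variances = [0.002, 0.004, 0.001]
pooled, se = fixed_effect_meta(effects, variances)
```

Larger studies (smaller sampling variance) receive more weight, which is why, as in Chapter 2, methodological choices such as sample size and firm type can moderate the pooled result.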
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
In this thesis, we consider the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Due to the exponentially increasing computational complexity in the number of dimensions—the so-called curse of dimensionality—they present a considerable computational challenge and become infeasible even for moderate problem sizes.
Multilinear algebra and tensor numerical methods have a wide range of applications in the fields of data science and scientific computing. Due to the typically large problem sizes in practical settings, efficient methods which exploit low-rank structures are essential. In this thesis, we consider one application from each of these fields.
Tensor completion, i.e. the imputation of unknown values in partially known multiway data, is an important problem which appears in statistics, mathematical imaging science and data science. Under the assumption of redundancy in the underlying data, this is a well-defined problem, and methods of mathematical optimization can be applied to it.
Due to the fact that tensors of fixed rank form a Riemannian submanifold of the ambient high-dimensional tensor space, Riemannian optimization is a natural framework for these problems, which is both mathematically rigorous and computationally efficient.
We present a novel Riemannian trust-region scheme, which compares favourably with the state of the art on selected application cases and outperforms known methods on some test problems.
Optimization problems governed by partial differential equations form an area of scientific computing with applications in a variety of fields, ranging from physics to financial mathematics. Due to the inherent high dimensionality of optimization problems arising from discretized differential equations, these problems present computational challenges, especially in the case of three or more dimensions. An even more challenging class of optimization problems has operators of integral instead of differential type in the constraint. These operators are nonlocal and therefore lead to large, dense discrete systems of equations. We present a novel solution method, based on separation of spatial dimensions and provably low-rank approximation of the nonlocal operator. Our approach allows the solution of multidimensional problems with a complexity which is only slightly larger than linear in the univariate grid size; this improves the state of the art for a particular test problem by at least two orders of magnitude.
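The storage saving that motivates low-rank tensor formats, escaping the n^d cost of a full d-way array (the curse of dimensionality), can be sketched with a rank-1 example; the factor vectors and grid sizes below are illustrative only, not from the thesis.

```python
def rank1_entry(factors, index):
    """Entry of a rank-1 tensor stored as d factor vectors.

    A rank-1 tensor T[i1,...,id] = u1[i1] * ... * ud[id] needs only
    d*n numbers instead of n**d, which is the basic idea exploited
    by low-rank tensor formats.
    """
    value = 1.0
    for vec, i in zip(factors, index):
        value *= vec[i]
    return value

# A 3-way rank-1 tensor T[i,j,k] = u[i]*v[j]*w[k] on a 4-point grid per axis:
# 12 stored numbers represent all 64 entries.
u = [1.0, 2.0, 3.0, 4.0]
v = [0.5, 1.0, 1.5, 2.0]
w = [1.0, 0.0, 2.0, 3.0]
entry = rank1_entry([u, v, w], (1, 2, 3))  # 2.0 * 1.5 * 3.0
```

Higher-rank formats (sums of such products, tensor trains, etc.) generalize this, and Riemannian optimization as used in the thesis works directly on the set of tensors with fixed such rank.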
Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. Nurture context potentiated startle in these participants, suggesting that visual cues of childhood-related content activate protective behavioral responses. The findings provide evidence for the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitively challenging task revealed a blunted endocrine response in adults with a history of adoption, while cardiovascular stress reactivity was similar to that of control participants. This pattern of response separation may best be explained by a selective enhancement of central feedback sensitivity to glucocorticoids resulting from ELA, in spite of preserved cardiovascular/autonomic stress reactivity.
The reduction of information contained in model time series through the use of aggregating statistical performance measures is very high compared to the amount of information that one would like to draw from them for model identification and calibration purposes. It is well known that this loss imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty, as essentially different model realizations with almost identical performance measures (e.g. r² or RMSE) can be generated. In three consecutive studies, the present work proposes an alternative approach to hydrological model evaluation based on the application of Self-Organizing Maps (SOM; Kohonen, 2001). The Self-Organizing Map is a type of artificial neural network and unsupervised learning algorithm that is used for the clustering, visualization and abstraction of multidimensional data. It maps vectorial input data items with similar patterns onto contiguous locations of a discrete low-dimensional grid of neurons. The iterative training of the SOM causes the neurons to form a discrete, data-compressed representation of the high-dimensional input data. Using appropriate visualization techniques, information on distributions, patterns and relationships in complex data sets can be extracted. Despite their potential, SOM applications have received very little attention in hydrological modelling compared to other artificial neural network techniques. Therefore, the aim of the present work is to demonstrate that the application of Self-Organizing Maps has very high potential to address fundamental issues of model evaluation: it is shown that the clustering and classification of model time series by means of SOM can provide useful insights into model behaviour.
In combination with the diagnostic properties of Signature Indices (Gupta et al., 2008; Yilmaz et al., 2008) SOM provides a novel tool for interpreting the model parameters in the hydrological context and identifying parameter sets that simultaneously meet multiple objectives, even if the corresponding model realizations belong to different models. Moreover, the presented studies and reviews also encourage further studies on the application of SOM in hydrological modelling.
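The Kohonen-style training rule described above (find the best-matching unit, then pull it and its grid neighbours toward the sample) can be sketched in a minimal one-dimensional scalar form; the data, unit count and decay schedules below are illustrative assumptions, not the configuration used in these studies.

```python
import math
import random

def train_som(data, n_units=5, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Self-Organizing Map for scalar inputs.

    Each unit holds a scalar weight on a 1-D grid; for every sample the
    best-matching unit (BMU) and its grid neighbours are pulled toward
    the sample, with learning rate and neighbourhood width decaying.
    """
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # decaying neighbourhood
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return weights

# Two clusters of scalar 'model signatures': the trained units spread
# out to cover both, giving a data-compressed representation.
data = [0.1, 0.12, 0.09, 0.9, 0.88, 0.92]
weights = train_som(data)
```

In the hydrological application the inputs are high-dimensional time-series vectors rather than scalars, but the update rule is the same with Euclidean distances.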
There is ample evidence that the personality trait of extraversion is associated with frequent experiences of positive affect whereas introversion is associated with less frequent experiences of positive affect. According to a theory of Watson et al. (1997), these findings demonstrate that positive affect forms the conceptual core of extraversion. In contrast, several other researchers consider sociability - and not positive affect - as the core of extraversion. The aim of the present work is to examine the relation between extraversion and dispositional positive affect on the neurobiological level. In 38 participants resting cerebral blood flow was measured with continuous arterial spin labeling (CASL). Each participant was scanned on two measurement occasions separated by seven weeks. In addition, questionnaire measures of extraversion and dispositional positive affect were collected. To employ CASL for investigating the biological basis of personality traits, the psychometric properties of CASL blood flow measurements were examined in two studies. The first study was conducted to validate the CASL technique. Using a visual stimulation paradigm, the expected pattern of activity was found, i.e. there were specific differences in blood flow in the primary and secondary visual areas. Moreover, the results in the first measurement occasion could be reproduced in the second. Thus, these results suggest that CASL blood flow measurements have a high degree of validity. The aim of the second psychometric study was to examine whether resting blood flow measurements are characterized by a sufficient trait stability to be used as a marker for personality traits. Employing the latent state-trait theory developed by Steyer and colleagues, it was shown that about 70 % of the variance of regional blood flow could be explained by individual differences in a latent trait. 
This suggests that blood flow measurements have sufficient trait stability for investigating the biological basis of personality traits. In the third study, the relation between extraversion and dispositional positive affect was investigated on the neurobiological level. Voxel-based analyses showed that dispositional positive affect was correlated with resting blood flow in the ventral striatum, i.e. a brain structure that is associated with approach behavior and reward processing. This biological basis was also found for extraversion. In addition, when extraversion was statistically controlled, the association between dispositional positive affect and blood flow in the ventral striatum was still present. However, when dispositional positive affect was statistically controlled, the relation between extraversion and the ventral striatum disappeared. Taken together, these results suggest that positive affect forms a core of extraversion on the neurobiological level. The present findings thus add psychophysiological evidence to the theory of Watson et al. (1997), which suggests that positive affect forms the conceptual core of extraversion.
Fibromyalgia is a disorder of unknown etiology characterized by widespread, chronic musculoskeletal pain of at least three months' duration and pressure hyperalgesia at specific tender points on clinical examination. The disorder is accompanied by a multitude of additional symptoms such as fatigue, sleep disturbances, morning stiffness, depression, and anxiety. In terms of biological disturbances, low cortisol concentrations have been repeatedly observed in blood and urine samples of fibromyalgia patients, both under basal and stress-induced conditions. The aim of this dissertation was to investigate the presence of low cortisol concentrations (hypocortisolism) and potential accompanying alterations on sympathetic and immunological levels in female fibromyalgia patients. Besides the expected hypocortisolism, a higher norepinephrine secretion and lower natural killer cell levels were found in the patient group compared to a control group consisting of healthy, age-matched women. In addition, an increased activity of some pro-inflammatory markers was observed, leading to alterations in the balance of pro- and anti-inflammatory activity. The results underline the relevance of simultaneous investigations of interacting bodily systems for a better understanding of the underlying biological mechanisms in stress-related disorders.
In parallel with steadily growing societal challenges, social enterprises have gained considerably in importance over the past decade. Social enterprises pursue the goal of solving societal problems by entrepreneurial means. Since the focus of social enterprises is not primarily on maximizing their own profits, they often have difficulty obtaining suitable corporate financing and realizing their growth potential.
To gain a deeper understanding of the phenomenon of social enterprises, the first part of this dissertation uses two experiment-based studies to examine the decision-making behavior of investors in social enterprises. Chapter 2 considers the decision-making behavior of impact investors. The investment approach pursued by these investors, "impact investing", goes beyond a pure orientation toward returns. Based on an experiment with 179 impact investors who made a total of 4,296 investment decisions, a conjoint study identifies their most important decision criteria when selecting social enterprises. Chapter 3 focuses on social incubators and thus analyzes a further specific group of supporters of social enterprises. Based on the experiment, this chapter illustrates the incubators' motives and decision criteria when selecting social enterprises, as well as the forms of non-financial support they offer. Among other things, the results show that the motives of social incubators for supporting social enterprises are of a societal, financial, or reputational nature.
Based on two quantitative empirical studies, the second part discusses the extent to which the registration of trademarks is suitable for measuring social innovation and how it relates to the financial and social growth of social startups. Chapter 4 discusses the extent to which trademark registrations can serve to measure social innovation. Based on a text analysis of the websites of 925 social enterprises (> 35,000 subpages), four dimensions of social innovation (innovation, impact, financial, and scalability dimensions) are first identified. Building on this, the chapter considers how various trademark characteristics relate to these dimensions of social innovation. The results show that, in particular, the number of registered trademarks serves as an indicator of social innovation (across all dimensions). The geographical scope of the registered trademarks also plays an important role. Building on the results of Chapter 4, Chapter 5 examines the influence of trademark registrations in early company phases on the further development of the hybrid outcomes of social startups. In detail, Chapter 5 argues that both the registration of trademarks as such and their various characteristics relate differently to the social and economic outcomes of social startups. Using a dataset of 485 social enterprises, the analyses in Chapter 5 show that social startups with a registered trademark exhibit comparatively higher employee growth and make a larger contribution to society.
The results of this dissertation extend research in the field of social entrepreneurship and offer numerous implications for practice. While Chapters 2 and 3 enhance our understanding of the characteristics of non-financial and financial support organizations for social enterprises, Chapters 4 and 5 create a greater understanding of the importance of trademark registrations for social enterprises.
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008) we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time QVI, and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
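For orientation, the classical discrete-time Bellman recursion that the chapter's QVI reformulation builds on can be illustrated with plain value iteration on a toy two-state problem; this is a generic textbook sketch, not the thesis's impulse-control algorithm, and all model data are hypothetical.

```python
def value_iteration(states, actions, reward, transition, gamma=0.9, tol=1e-8):
    """Classical discrete-time Bellman value iteration.

    Iterates V(s) <- max_a [ r(s,a) + gamma * E[V(s')] ] until the
    successive value functions differ by less than tol.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            V_new[s] = max(
                reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                for a in actions
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Toy problem: being in state 1 and staying pays 1 per period;
# from state 0 the controller can 'switch' (deterministically) to state 1.
states = [0, 1]
actions = ["stay", "switch"]

def reward(s, a):
    return 1.0 if (s == 1 and a == "stay") else 0.0

def transition(s, a):
    target = s if a == "stay" else 1 - s
    return [(target, 1.0)]

V = value_iteration(states, actions, reward, transition)
```

Impulse control adds an intervention cost and an intervention operator to the maximization, which turns the equation into a quasi-variational inequality; the self-learning algorithm of the chapter estimates where intervening beats continuing.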
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors decrease or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game. By misreporting a secret die roll, cheating without exposure is an option in the game. Contrary to claims on the importance of honesty at work, we do not observe a reduction in the third co-worker's performance, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence on ongoing peer effects. It appears that peers are able to alleviate the potential free-rider problem emerging out of working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that task performance is lower, the higher the perceived attractiveness of co-workers, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate for the continuation of an art exhibition either via cash or via cash plus an additional cashless payment option (CPO). The treatment manipulation is completed by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, there is no negative effect of the CPO on the frequency of financial support and its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
Foundation-owned firms are companies wholly or partly owned by a charitable or private foundation. The number of foundation-owned firms in Germany has risen considerably in recent years. Well-known German companies such as Aldi, Bosch, Bertelsmann, LIDL and Würth are owned by foundations. Some of them, such as Fresenius, ZF Friedrichshafen and Zeiss, are even listed on the stock exchange. The majority of foundation-owned firms arise when company founders or entrepreneurial families transfer their company to a foundation instead of bequeathing or selling it.
The motives for this are manifold and may include family reasons (e.g., childlessness, avoidance of family disputes), company-related reasons (e.g., the possibility of long-term planning thanks to a stable ownership structure) and tax reasons (avoiding or reducing inheritance tax), or may be motivated by the founder personally (the possibility of continuing to shape the company through the foundation even after one's own death). Because foundation-owned firms usually emerge from family firms, research often does not differentiate between family and foundation-owned firms. For this reason, this dissertation first uses the three-circle model of family firms to present the differences between foundation-owned and family firms. The results show that only a very small number of foundation-owned firms closely resemble classic family firms. Most foundation-owned firms differ, in some cases very strongly, from family firms. These results make clear that foundation-owned firms should be considered a separate field of research.
Since there is also strong heterogeneity within the group of foundation-owned firms, performance differences within this group are examined next. For this purpose, data on 142 German foundation-owned firms for the years 2006-2016 were collected and evaluated by means of linear regression. The results show significant differences between the various types: companies held by a charitable foundation exhibit significantly worse performance than companies with a private foundation as shareholder.
The next step examines the group of listed foundation-owned firms. An event study tests how a foundation as owner of a listed company affects shareholder value. The results show that a reduction in a foundation's stake has a positive effect on shareholder value; the capital market accordingly values foundations negatively. Owing to the diverging goals of foundation and company, the link between the two harbors potential conflicts and challenges for the people involved. Using a qualitative, exploratory approach based on interviews, a model is developed that illustrates the potential conflicts in foundation-owned firms using the example of the dual foundation ("Doppelstiftung").
In the final step, recommendations for action are developed in the form of a draft corporate governance code intended to help (potential) founders either avoid possible conflicts or resolve existing problems.
The results of this dissertation are relevant for theory and practice. From a theoretical perspective, the value of these investigations lies in enabling researchers to better distinguish between foundation-owned and family firms in the future. The work also advances the current state of research on foundation-owned firms. In addition, this dissertation offers potential founders in particular an overview of the various design options and of the advantages and disadvantages these constructions entail. The recommendations enable founders to identify potential dangers in advance and to avoid them.
The main purpose of this dissertation is to answer the following question: how will the emergence of the Euro influence the currency composition of the NICs' monetary reserves? Taiwan and Thailand are chosen as our investigation subjects. There are two sorts of motives for central banks' reserve holdings: intervention-related motives and portfolio-related motives. The need for reserve holdings resulting from intervention-related motives is justified by the costs resulting from exchange rate instability. On the other hand, we use the Tobin-Markowitz model to justify the need for monetary reserves held for portfolio-related motives. The operational implication of this distinction is the separation of monetary reserves into two tranches corresponding to different objectives. An analysis of a central bank's transaction balance is a money-quality analysis; such an analysis has to do with transaction costs and non-pecuniary rates of return. The facts indicate that the Euro's emergence will not change the fact that the USD will continue to be the major currency of the transaction balances of the central banks in Taiwan and Thailand. To answer the question about the diversification of monetary reserves held as idle balances in the two NICs, we carry out a portfolio analysis based on the basic ideas of the Tobin-Markowitz model. This analysis shows that neither Taiwan nor Thailand can reduce risk at a given rate of return, or increase the rate of return at a given risk, by diversifying their monetary reserves held as idle balances from the USD to the Euro.
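The mean-variance logic behind the Tobin-Markowitz argument can be made concrete with a two-asset sketch: when the two reserve assets have identical expected returns and perfectly correlated risks, diversification neither reduces risk at a given return nor raises return at a given risk. All figures below are hypothetical, not the dissertation's data.

```python
def portfolio_stats(w_usd, mu, cov):
    """Expected return and variance of a two-currency reserve portfolio.

    w_usd: share held in USD (the remainder in EUR)
    mu:    expected returns of [USD, EUR] assets
    cov:   2x2 covariance matrix of the two assets' returns
    """
    w = [w_usd, 1.0 - w_usd]
    ret = w[0] * mu[0] + w[1] * mu[1]
    var = (w[0] ** 2 * cov[0][0] + w[1] ** 2 * cov[1][1]
           + 2 * w[0] * w[1] * cov[0][1])
    return ret, var

# Hypothetical annual figures: equal expected returns and perfectly
# correlated, equal-variance risks -- the limiting case in which
# diversification brings no risk reduction.
mu = [0.03, 0.03]
cov = [[0.0004, 0.0004],
       [0.0004, 0.0004]]
ret_all_usd, var_all_usd = portfolio_stats(1.0, mu, cov)
ret_mixed, var_mixed = portfolio_stats(0.5, mu, cov)
```

With less-than-perfect correlation the mixed portfolio's variance would fall below the single-asset variance; the dissertation's empirical point is that for these two central banks the estimated return/risk structure leaves no such diversification gain.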
ASEAN and ASEAN Plus Three: Manifestations of Collective Identities in Southeast and East Asia?
(2003)
East Asia is a region undergoing vast structural changes. As the region moved closer together economically and politically following the breakdown of the bipolar world order and the ensuing expansion of intra-regional interdependencies, the states of the region faced the challenge of having to actively recast their mutual relations. At the same time, throughout the 1990s, the West became increasingly interested in trans- and inter-regional dialogue and cooperation with the emerging economies of East Asia. These developments gave rise to a "new regionalism", which eventually also triggered debates on Asian identities and the region's potential to integrate. Against this backdrop, this thesis analyzes to what extent both the Association of Southeast Asian Nations (ASEAN), which has been operative since 1967 and thus embodies the "old regionalism" of Southeast Asia, and the ASEAN Plus Three forum (APT: the ASEAN states plus China, Japan and South Korea), which came into existence in the aftermath of the Asian economic crisis of 1997, can be said to represent intergovernmental manifestations of specific collective identities in Southeast and East Asia, respectively. Based on profiles of the respective discursive, behavioral and motivational patterns as well as the integrative potential of ASEAN and APT, this study establishes to what extent the member states adhere to sustainable collective patterns of interaction, expectations and objectives, and assesses to what extent they can be said to form specific 'ingroups'. Four studies on collective norms, readiness to pool sovereignty, solidarity and attitudes vis-à-vis relevant third states show that ASEAN has evolved a certain degree of collective identity, though the Association's political relevance and coherence are frequently thwarted by changes in its external environment. A study on the cooperative and integrative potential of APT yields no manifest evidence of an ongoing or incipient pan-East Asian identity formation process.
The efficacy and effectiveness of psychotherapeutic interventions have been proven time and again. We therefore know that, in general, evidence-based treatments work for the average patient. However, it has also repeatedly been shown that some patients do not profit from or even deteriorate during treatment. Patient-focused psychotherapy research takes these differences between patients into account by focusing on the individual patient. The aim of this research approach is to analyze individual treatment courses in order to evaluate when and under which circumstances a generally effective treatment works for an individual patient. The goal is to identify evidence-based clinical decision rules for the adaptation of treatment to prevent treatment failure. Patient-focused research has illustrated how different intake indicators and early change patterns predict the individual course of treatment, but these leave a lot of variance unexplained. The thesis at hand analyzed whether Ecological Momentary Assessment (EMA) strategies could be integrated into patient-focused psychotherapy research in order to improve treatment response prediction models. EMA is an electronically supported diary approach, in which multiple real-time assessments are conducted in participants' everyday lives. We applied EMA over a two-week period before treatment onset in a mixed sample of patients seeking outpatient treatment. The four daily measurements in the patients' everyday environment focused on assessing momentary affect and levels of rumination, perceived self-efficacy, social support and positive or negative life events since the previous assessment. The aim of this thesis project was threefold: First, to test the feasibility of EMA in a routine care outpatient setting. Second, to analyze the interrelation of different psychological processes within patients' everyday lives. 
Third and last, to test whether individual indicators of psychological processes during everyday life, which were assessed before treatment onset, could be used to improve prediction models of early treatment response. Results from Study I indicate good feasibility of EMA application during the waiting period for outpatient treatment. High average compliance rates over the entire assessment period and low average burdens perceived by the patients support good applicability. Technical challenges and the results of in-depth missing-data analyses are reported to guide future EMA applications in outpatient settings. Results from Study II shed further light on the rumination-affect link. We replicated results from earlier studies, which identified a negative association between state rumination and affect on a within-person level, and additionally showed a) that this finding holds for the majority but not every individual in a diverse patient sample with mixed Axis-I disorders, b) that rumination is linked to negative but also to positive affect and c) that dispositional rumination significantly affects the state rumination-affect association. The results provide exploratory evidence that rumination might be considered a transdiagnostic mechanism of psychological functioning and well-being. Results from Study III finally suggest that the integration of indicators derived from EMA applications before treatment onset can improve prediction models of early treatment response. Positive-negative affect ratios as well as fluctuations in negative affect measured during patients' daily lives allow the prediction of early treatment response. Our results indicate that the combination of commonly applied intake predictors and EMA indicators of individual patients' daily experiences can improve treatment response prediction models. 
We therefore conclude that EMA can successfully be integrated into patient-focused research approaches in routine care settings to ameliorate or optimize individual care.
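The two EMA-derived predictors named above can be made concrete in a small sketch. The exact formulas are assumptions on my part, not taken from the thesis: the affect ratio is computed as the ratio of mean positive to mean negative affect across all diary prompts, and negative-affect fluctuation as the root mean square of successive differences (RMSSD), a commonly used EMA instability index:

```python
import numpy as np

def affect_ratio(pa, na):
    """Positive-to-negative affect ratio over an EMA assessment period."""
    return np.mean(pa) / np.mean(na)

def na_fluctuation(na):
    """Root mean square of successive differences (RMSSD) of negative
    affect ratings, an instability index for intensive diary data."""
    diffs = np.diff(np.asarray(na, dtype=float))
    return np.sqrt(np.mean(diffs**2))

# Toy diary: four ratings per day would simply be concatenated in order
pa_ratings = [4, 3, 4, 5, 4, 4]
na_ratings = [2, 3, 2, 1, 2, 2]
ratio = affect_ratio(pa_ratings, na_ratings)
instability = na_fluctuation(na_ratings)
```

Both indices could then enter a regression alongside intake predictors of early treatment response.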
The midcingulate cortex has become the focus of scientific interest as it has been associated with a wide range of attentional phenomena. This survey found evidence indicating the relevance of gender and handedness for measures of regional cortical morphology. While gender was also associated with structural variations in the neuroanatomy of the midcingulum bundle, handedness did not emerge as a significant factor in the analyses of white matter characteristics. Hemispheric differences were found at the level of both gray and white matter. Turning to the functional implications of neuroanatomical variations and comparing subjects with a pronounced and a low degree of midcingulate folding, which indicates differential expansions of cytoarchitectural areas, behavioral and electrophysiological differences in the processing of interference became evident. A high degree of leftward midcingulate fissurization was associated with better behavioral performance, presumably caused by a more effective conflict-monitoring system triggering fast and automatic attentional filtering mechanisms. Subjects exhibiting a lower degree of midcingulate fissurization seem to rely instead on more effortful control processes. These results carry implications not only concerning neuronal representations of individual differences in attentional processes, but might also be of relevance for the refinement of models of mental disorders.
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with the algebraic Riccati equation. A monotone convergence behaviour is proven, which enables non-local convergence. This relation is transferred to a general convergence theory for inexact Newton methods, securing the monotonicity of the iterates for convex or concave mappings. Several applications prove the practical benefits of the newly developed theory.
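For orientation, here is a minimal sketch of the classical (exact) Kleinman-Newton iteration for the continuous-time algebraic Riccati equation A'X + XA - XBB'X + Q = 0; each Newton step reduces to a Lyapunov equation. The thesis studies inexact variants and their monotone convergence, which this toy version does not implement:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def kleinman_newton(A, B, Q, X0=None, tol=1e-10, max_iter=50):
    """Exact Kleinman-Newton iteration for A'X + XA - XBB'X + Q = 0.
    X0 must be stabilizing; X0 = 0 works when A itself is stable.
    Each step solves the Lyapunov equation
    (A - BB'X_k)' X_{k+1} + X_{k+1} (A - BB'X_k) = -(Q + X_k BB' X_k)."""
    n = A.shape[0]
    X = np.zeros((n, n)) if X0 is None else X0
    for _ in range(max_iter):
        Ak = A - B @ B.T @ X
        rhs = -(Q + X @ B @ B.T @ X)
        X_new = solve_continuous_lyapunov(Ak.T, rhs)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X

A = np.array([[-1.0, 1.0], [0.0, -2.0]])  # stable, so X0 = 0 is stabilizing
B = np.eye(2)
Q = np.eye(2)
X = kleinman_newton(A, B, Q)
```

The iterates converge monotonically (in the semidefinite order) to the stabilizing solution, which is exactly the behaviour the thesis extends to inexact inner solves.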
The vision of a future information and communication society has prompted leading politicians in the United States, the European Union and Japan to influence or even lead the economic and social transition in the context of an active technology policy. The technological development of society, however, is a product of a complex interplay of technological, economic and socio-political constraints. These constraints limit political decision-making and implementation abilities. Moreover, facts and information change continuously during a paradigmatic technological, economic and social shift, which further limits political decision-making abilities. This study compares political decision-making to promote computer-mediated communications in the Triad since the beginning of the 1980s on four levels: the development of a political vision, the long-term aims and strategies, technology policy (e.g. the promotion of technological development and competition policy) and regulatory policy (e.g. universal access, protection of privacy and intellectual property). While technology policy tends to be uncontroversial, regulatory policy during a paradigmatic shift is difficult and lengthy. Nevertheless, the inclusion of interest groups, which arise during this paradigmatic shift and which are close to the technologies and their societal consequences, helps decision-making processes. In this context, politics in the United States has been more successful than in the European Union and especially Japan. Although this study predates the rise of eCommerce over the Internet, it addresses many of the themes underlying it. Of these themes, many remain politically unsettled on national, supranational and especially international levels. For example, for encryption and secure payments, which are necessary for eCommerce, no international standards exist yet. The issue of taxation has hardly been opened for discussion. 
In sum, this study does not only offer a historical overview of the development of the Internet, but it also discusses issues of continuing present concern.
Phase-amplitude cross-frequency coupling is a mechanism thought to facilitate communication between neuronal ensembles. The mechanism could underlie the implementation of complex cognitive processes, like executive functions, in the brain. This thesis contributes to answering the question whether phase-amplitude cross-frequency coupling - assessed via electroencephalography (EEG) - is a mechanism by which executive functioning is implemented in the brain and whether an assumed performance effect of stress on executive functioning is reflected in phase-amplitude coupling strength. A large body of studies shows that stress can influence executive functioning, in essence having detrimental effects. In two independent studies, each comprising two core executive function tasks (flexibility and behavioural inhibition as well as cognitive inhibition and working memory), beta-gamma phase-amplitude coupling was robustly detected in the left and right prefrontal hemispheres. No systematic pattern of coupling strength modulation by either task demands or acute stress was detected. Beta-gamma coupling might also be present in more basic attention processes. This is the first investigation of the relationship between stress, executive functions and phase-amplitude coupling; many aspects therefore remain unexplored. For example, phase precision could be studied instead of coupling strength as an indicator of phase-amplitude coupling modulations. Furthermore, data was analysed in source space (independent component analysis); comparability to sensor space remains to be determined. These as well as other aspects should be investigated, given the promising finding of very robust and strong beta-gamma coupling for all executive functions. Additionally, this thesis tested the performance of two widely used phase-amplitude coupling measures (mean vector length and modulation index). Both measures are specific and sensitive to coupling strength and coupling width. 
The simulation study also drew attention to several confounding factors, which influence phase-amplitude coupling measures (e. g. data length, multimodality).
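As an illustration of the mean vector length measure mentioned above, the sketch below computes |mean(a(t)·exp(iφ(t)))| on synthetic data, with phase and amplitude extracted via the Hilbert transform. The frequencies (6 Hz phase, 60 Hz amplitude) are arbitrary demonstration choices, not the beta-gamma bands analysed in the thesis, and the modulation index is omitted:

```python
import numpy as np
from scipy.signal import hilbert

def mean_vector_length(phase_sig, amp_sig):
    """Mean vector length: |mean(a(t) * exp(i * phi(t)))|, where phi is the
    instantaneous phase of the slow signal and a the envelope of the fast one."""
    phi = np.angle(hilbert(phase_sig))
    amp = np.abs(hilbert(amp_sig))
    return np.abs(np.mean(amp * np.exp(1j * phi)))

fs = 1000
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)                   # 6 Hz phase-providing signal
coupled = (1 + slow) * np.sin(2 * np.pi * 60 * t)  # 60 Hz amplitude locked to slow phase
uncoupled = np.sin(2 * np.pi * 60 * t)             # constant-amplitude control

mvl_coupled = mean_vector_length(slow, coupled)
mvl_uncoupled = mean_vector_length(slow, uncoupled)
```

With genuine coupling the amplitude-weighted phase vectors point in a preferred direction and the mean vector is long; without coupling they cancel out.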
The development of our society contributed to increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard on ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology or policy development should however comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability or consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular including removal benefits. The method was applied to support solutions for pharmaceuticals elimination from wastewater, regarding: (i) UV technology design, (ii) choice of advanced technology and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors based on net impacts calculation seemed very promising to consider both environmental effects induced by treatment plant operation and environmental benefits obtained from pollutants removal. It was therefore applied to compare UV configuration types. LCA outcomes were consistent with degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on EDIP method) was compared to other assessments, to better consider elimination benefits. First, USEtox consensus was applied for the avoided (eco)toxicity impacts, in combination with the recent method ReCiPe for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on EDIP and ReCiPe methods) by the average removal efficiency (overcoming (eco)toxicity uncertainty issues). 
In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no clear advantage distinguishes the policy scenarios. It cannot, however, be concluded that advanced treatment of pharmaceuticals is unnecessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to the discussion of uncertainty and limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceuticals' (eco)toxicity showed large uncertainties, mainly due to the lack and limited quality of toxicity-test data. The use of information made available under the REACH framework to develop CFs for detergent ingredients tried to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on the LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and further development of the EFI indicator. Also, emphasis is placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology/product development (e.g. based on a circular economy approach) needs the involvement of all types of actors and support from sustainability metrics.
Why they rebel peacefully: On the violence-reducing effects of a positive attitude towards democracy
Under the impression of Europe’s drift into Nazism and Stalinism in the first half of the 20th century, social psychological research has focused strongly on the dangers inherent in people’s attachment to a political system. The dissertation at hand contributes to a more differentiated perspective by examining violence-reducing aspects of political system attachment in four consecutive steps: First, it highlights attachment to a social group as a resource for violence prevention on an intergroup level. The results suggest that group attachment fosters self-control, a well-known protective factor against violence. Second, it demonstrates violence-reducing influences of attachment on a societal level. The findings indicate that attachment to a democracy facilitates peaceful and prevents violent protest tendencies. Third, it introduces the concept of political loyalty, defined as a positive attitude towards democracy, in order to clarify the different approaches to political system attachment. A set of three studies shows the reliability and validity of a newly developed political loyalty questionnaire that distinguishes between affective and cognitive aspects. Finally, the dissertation differentiates the former findings with regard to protest tendencies using the concept of political loyalty. A set of two experiments shows that affective rather than cognitive aspects of political loyalty instigate peaceful protest tendencies and prevent violent ones. Implications of this dissertation for political engagement and peacebuilding as well as avenues for future research are discussed.
Variational inequality problems constitute a common basis to investigate the theory and algorithms for many problems in mathematical physics, in economics as well as in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory and economic equilibrium problems, but also in fluid mechanics, physics of solid bodies and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems, which have better properties (like, e.g., a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods are a focus of current research, in which we take part with this thesis. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its exploration and convergence analysis is one of the main results in this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: The logarithmic-quadratic distance function constitutes an interior point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered. 
This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle-techniques can be applied. With respect to the numerical practicability, inexact solutions of the auxiliary problems are allowed using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates to apply the Newton method for the solution of the auxiliary problems. In the numerical part of the thesis the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable, convex optimization problems we describe the implementation of the Newton method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in literature. Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems we apply the well-known bundle technique. The implementation is described in detail and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results. 
Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior point method. Such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
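The regularization principle described above, replacing one ill-posed problem by a sequence of well-posed auxiliary problems, can be illustrated with the classical proximal-point method on a quadratic program. This is only the plain quadratic distance; the thesis's LQPAP method uses a logarithmic-quadratic distance and handles variational inequalities, which this toy sketch does not:

```python
import numpy as np

def proximal_point_quadratic(Q, b, x0, c=1.0, tol=1e-10, max_iter=1000):
    """Proximal-point method for min 0.5 x'Qx - b'x with Q positive semidefinite.
    Each auxiliary problem min 0.5 x'Qx - b'x + (1/(2c))||x - x_k||^2 is
    strongly convex (hence well-posed) even when the original problem is
    ill-posed, and here admits the closed-form step x_{k+1} = M^{-1}(b + x_k/c)."""
    n = Q.shape[0]
    M = Q + np.eye(n) / c
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, b + x / c)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Ill-posed example: Q is singular, so the minimizers form a whole line;
# the regularized auxiliary problems are nevertheless uniquely solvable
# and the iterates converge to one particular minimizer.
Q = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([1.0, 0.0])
x = proximal_point_quadratic(Q, b, x0=[0.0, 5.0])
```

The free coordinate of the starting point is left untouched, which shows how the regularization selects among the non-unique solutions.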
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K),f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K),f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and the problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) respectively E(K) denote spaces of Whitney jets of order n respectively of infinite order. With E^n(R^d) and E(R^d), we denote the spaces of n-times respectively infinitely often continuously partially differentiable functions on R^d. Whitney already solved the question for finite order completely. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of those seminorms, where L shall be a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can without loss of generality assume that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in [n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5 we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, thereof numerous Hidden Champions, significantly contribute to Germany’s economic performance, innovation, and export strength. However, the advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, for the digital transformation of the business model and the pursuit of growth goals with digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence, contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it can be empirically proven that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
In recent years, the study of dynamical systems has developed into a central research area in mathematics. Actually, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated in other scientific fields, e.g. in physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x ∈ X at time zero, its state at time n ∈ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x ∈ X as the time n increases. On the one hand, it is possible that there exist certain states x ∈ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for n→∞. On the other hand, it might be the case that the system runs unstably for some initial states x ∈ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (g∘f^n) on D, where D is an open subset of F_f with f(D)⊂ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences. 
In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (g∘f^n) on the set D becomes quite similarly chaotic as the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves into the theory of universal families and hypercyclic operators, which itself has developed into an own branch of research. In general, for topological spaces X, Y and a family {T_i: i ∈ I} of continuous functions T_i:X→Y, an element x ∈ X is called universal for the family {T_i: i ∈ I} if the set {T_i(x): i ∈ I} is dense in Y. In case that X is a topological vector space and T is a continuous linear operator on X, a vector x ∈ X is called hypercyclic for T if it is universal for the family {T^n: n ∈ N}. Thus, roughly speaking, universality and hypercyclicity can be described via the following two aspects: There exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)⊂ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)→H(D), C_f(g):=g∘f|_D, which is called the composition operator with symbol f. The transform C_f is a continuous linear operator on the Fréchet space H(D). In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D ⊂ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim consists in finding functions g ∈ H(D) which are hypercyclic for C_f. 
Indeed, for each hypercyclic function g for C_f, the set of compositions {g∘f^n|_D: n ∈ N} is dense in H(D) so that the sequence of compositions (g∘f^n|_D) is in a sense "maximally divergent", meaning that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This kind of behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation or shows some "wandering behaviour" on each component of F_f. To put it in a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $ N $ and a random element $X$, an $N$-optimal codebook is an $ N $-subset in the underlying Banach space $E$ which gives a best approximation to $ X $ in an average sense. We focus on two types of geometric properties: The global growth behaviour (growing in $N$) for a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as central-symmetric distributions on $R^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove for many interesting distributions classical conjectures on the asymptotic behaviour of those properties. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite dimensional Banach spaces and apply this method to construct codebooks for stochastic processes, such as fractional Brownian motions.
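As an empirical, finite-dimensional stand-in for the $N$-optimal codebooks studied here, the classical Lloyd iteration approximates an optimal codebook (under quadratic distortion) for a sample of the distribution. This sketch is not a construction from the thesis, which works with theoretical asymptotics; it only shows the object an $N$-optimal codebook minimizes:

```python
import numpy as np

def lloyd(samples, N, iters=100, seed=0):
    """Lloyd iteration approximating an N-optimal codebook for the empirical
    distribution of one-dimensional samples under quadratic distortion."""
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), N, replace=False)]
    for _ in range(iters):
        # nearest-codeword assignment (Voronoi cells)
        idx = np.abs(samples[:, None] - codebook[None, :]).argmin(axis=1)
        # centroid update over each non-empty cell
        for j in range(N):
            cell = samples[idx == j]
            if len(cell):
                codebook[j] = cell.mean()
    return np.sort(codebook)

def distortion(samples, codebook):
    """Mean squared quantization error of the codebook."""
    return np.mean(np.min((samples[:, None] - codebook[None, :])**2, axis=1))

rng = np.random.default_rng(1)
X = rng.standard_normal(20000)  # stand-in for a one-dimensional Gaussian
cb = lloyd(X, N=4)
```

For a standard Gaussian sample the resulting four codewords land close to the known Lloyd-Max levels, and the distortion drops well below the one-point (variance) baseline.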
As an interface between an individual and its environment, the skin is a major site of direct exposure to exogenous substances. Once absorbed, these substances may interact with different biomolecules within the skin. The aryl hydrocarbon receptor (AhR) signaling pathway is one mechanism whereby the skin responds to exposures, predominantly through the induction or upregulation of metabolizing enzymes. One known physiological role of the AhR in many tissues is its involvement in the control of cell cycle progression. In skin, almost nothing is known about this physiological function. Moreover, the question whether frequently used naturally occurring phenolic derivatives like eugenol and isoeugenol affect the AhR within the skin has rarely been studied so far. Owing to their odour, eugenol and isoeugenol are referred to as fragrances. The ubiquitous distribution of eugenol and isoeugenol results in almost unavoidable contact with these substances in our daily lives. Despite this, their molecular mechanisms of action in skin are poorly understood. There is evidence supporting the hypothesis that these substances may act on the AhR. On the one hand, eugenol has been shown to induce cytochrome P450 1A1 (CYP1A1), a well-known target gene of the AhR. On the other hand, their known anti-proliferative properties might also be mediated by the AhR, based on its physiological function. To test this hypothesis, it was investigated whether eugenol and isoeugenol affect the AhR signaling pathway in skin cells. Results revealed that both eugenol and isoeugenol do affect the AhR signaling pathway in skin cells. Both substances caused the translocation of the AhR into the nucleus, induced the expression of the well-known AhR target genes CYP1A1 and AhR repressor (AhRR) and influenced cell cycle progression.
Both substances caused an AhR-dependent cell cycle arrest in skin cells, modulated protein levels of several cell cycle regulatory proteins, inhibited DNA synthesis and thereby reduced cell numbers. The comparison of wildtype cells with AhR knockdown cells revealed an influence of the AhR on cell cycle progression in skin cells even in the absence of exogenous ligands. AhR knockdown cells exhibited a slower progression through the cell cycle, caused by an accumulation of cells in the G0/G1 phase and a decreased DNA synthesis rate. Modulation of cell cycle regulatory proteins involved in the transition from the G0/G1 to the S phase was altered in AhR knockdown cells as well. To conclude, both eugenol and isoeugenol affect the AhR signaling pathway in skin cells. Their molecular mechanisms of action are similar to those of classical AhR ligands, although their structural characteristics differ strongly from those of such ligands. In the absence of exogenous ligands, the AhR promotes cell cycle progression in many tissues, and within the scope of this thesis this knowledge could be extended to skin-derived cells.
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation to weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows, and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first-order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions and are illustrated by various concrete examples.
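A classical illustration from the general literature on hypercyclic C0-semigroups (not necessarily an example treated in this thesis) is the left-translation semigroup on a weighted L^p-space; up to technical admissibility conditions on the weight ρ, its dynamics are characterized as follows:

```latex
% Left-translation semigroup on X = L^p_\rho(\mathbb{R}_+),
% 1 \le p < \infty, with an admissible weight \rho > 0:
(T_t f)(x) = f(x + t), \qquad t \ge 0 .

% Characterizations in the style of Desch--Schappacher--Webb:
(T_t)_{t\ge 0} \text{ hypercyclic} \iff \liminf_{x\to\infty} \rho(x) = 0,
\qquad
(T_t)_{t\ge 0} \text{ mixing} \iff \lim_{x\to\infty} \rho(x) = 0,
\qquad
(T_t)_{t\ge 0} \text{ chaotic} \iff \rho \in L^1(\mathbb{R}_+).
```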
Evaluation of desalination techniques for treating the brackish water of Olushandja sub-basin
(2014)
The groundwater of the Olushandja sub-basin, part of the Cuvelai basin in central-northern Namibia, is saline, with a TDS content varying between 4,000 ppm and 90,000 ppm. Based on climatic conditions, this region can be classified as semi-arid to arid, with an annual rainfall during summer varying between 200 mm and 500 mm. The mean annual evaporation potential is about 2,800 mm, which is much higher than the annual rainfall. The southern block of this sub-basin has a low population density and is not covered by the supply networks for electricity and water. Therefore, the inhabitants are forced to use untreated groundwater from hand-dug wells for their daily purposes. This groundwater is not safe for human consumption and therefore needs to be desalinated for that purpose. The goal of this thesis has been to select a suitable desalination technology for that region. The technology to be selected is from those which use renewable energy sources, have a production capacity of 10 m³ to 100 m³ per day, are simple and robust against the existing harsh environmental conditions and have already been implemented successfully elsewhere. Based on these criteria, the technologies which emerged from the literature are: multistage flash distillation (MSF), multi-effect distillation (MED), multi-effect humidification (MEH), membrane distillation (MD), reverse osmosis (RO) and electrodialysis reversal (ED). Of these technologies, RO & ED are based on membrane techniques and MSF, MED & MEH use thermal processes, whereas MD uses a hybrid of thermal and membrane techniques for desalinating the water.
For the evaluation of technical performance, environmental sustainability and financial feasibility of the above-mentioned desalination techniques, the following criteria have been used: gained output ratio, recovery rate, pretreatment requirements, sensitivity to feed water quality, post-treatment, operating temperature, operating pressure, scaling and fouling potential, corrosion susceptibility, brine disposal, prime energy requirement, mechanical and electrical power output, heat energy, running costs and water generation costs. The data regarding the performance standards of the successfully implemented desalination techniques have been obtained from the literature on performance benchmarks. The Utility Value Analysis Tool of the Rafter-Group of Multi-Criteria Analysis (MCA) has been used for measuring the performance score of a technology. To perform the utility analysis, an evaluation matrix has to be constructed through the following procedures: selection of the decision options (or assessment groups), identification of the evaluation criteria, measurement of performance and transformation of the units. The criteria under the objective groups are then assigned a level of importance to determine their weights. To perform the sensitivity analysis, the level of importance of a criterion is changed by giving more weight to the assessment group of interest. Within each assessment group of interest, the best-performing desalination technology has been selected according to the outcome of the sensitivity analysis. The important conclusions of this study are the identification of the capabilities of thermal and membrane-based small-scale desalination technologies and their applicability based on site-specific needs. The sensitivity analysis indicates that MED is the most environmentally friendly technology: it uses the least energy and produces the least concentrated brine for disposal.
The ED technology emerged as technically suitable, but it is only applicable when the source water has a salt content below 12,000 ppm. The MSF process has favourable thermal efficiency and is insensitive to feed water quality; its major drawbacks are its energy needs and post-treatment requirements, which lowered its net score. The MD and MSF processes scored lowest in the technical and economic assessment groups and are concluded not to be suitable for the Olushandja sub-basin. The MEH process is cheaper and technically more appropriate than MED in these two assessment groups. Based on the above evaluations, this study concluded that the Olushandja sub-basin needs more data collection on the geological profile, distinctive identification of aquifers and evidence on the interaction between the aquifers. From the best available data, it could not be established with certainty where in the profile the highest level of salinity is found, or how the geological profile is layered. More data on groundwater quality, giving a spatial overview of the trends and patterns of the sub-basin, will be useful in drawing better conclusions on the specific desalination technology suitable for a given village or settlement.
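The weighted-scoring core of such a utility value analysis, including a simple sensitivity reweighting, can be sketched as follows (the technology and assessment-group names follow the abstract, but all scores and weights below are invented placeholders, not the thesis's data):

```python
# Utility value analysis: weighted sum of normalized criterion scores,
# plus a sensitivity check that reweights one assessment group.
technologies = ["MSF", "MED", "MEH", "MD", "RO", "ED"]
groups = ["technical", "environmental", "economic"]

# scores[tech] = score per assessment group on a 0-10 scale (placeholders)
scores = {
    "MSF": [7, 4, 3], "MED": [6, 9, 5], "MEH": [6, 7, 7],
    "MD":  [4, 6, 3], "RO":  [8, 5, 6], "ED":  [8, 6, 6],
}

def utility(weights):
    """Return the technologies ranked by weighted utility value."""
    total = sum(weights)
    norm = [w / total for w in weights]          # normalize weights to 1
    value = {t: sum(w * s for w, s in zip(norm, scores[t]))
             for t in technologies}
    return sorted(value, key=value.get, reverse=True)

base = utility([1, 1, 1])        # equal importance of all groups
env_focus = utility([1, 3, 1])   # sensitivity run: stress the environment
```

With the placeholder scores above, tripling the environmental weight makes MED the top-ranked option, mirroring the kind of conclusion the sensitivity analysis in the study draws.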
Social entrepreneurship is a successful activity to solve social problems and economic
challenges. Social entrepreneurship uses for-profit industry techniques and tools to build
financially sound businesses that provide nonprofit services. Social entrepreneurial activities
also lead to the achievement of sustainable development goals. However, due to the complex,
hybrid nature of the business, social entrepreneurial activities are typically supported by macro-level
determinants. To expand our knowledge of how beneficial macro-level determinants can
be, this work examines empirical evidence about the impact of macro-level determinants on
social entrepreneurship. Another aim of this dissertation is to examine the impact at the micro
level, as the growth ambitions of social and commercial entrepreneurs differ. Chapter 1
presents the introduction, which contains the motivation for the research, the research
question, and the structure of the work.
There is an ongoing debate about the origin and definition of social entrepreneurship.
Therefore, the numerous phenomena of social entrepreneurship are examined theoretically in
the previous literature. To determine the consensus on the topic, Chapter 2 presents
the theoretical foundations and definition of social entrepreneurship. The literature shows that
a variety of determinants at the micro and macro levels are essential for the emergence of social
entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et
al., 2015; Hoogendoorn, 2016). An enterprise based on a social mission can hardly be built without the support of micro- and macro-level determinants. This work examines the
determinants and consequences of social entrepreneurship from different methodological
perspectives. The theoretical foundations of the micro- and macro-level determinants
influencing social entrepreneurial activities are discussed in Chapter 3.
The purpose of reproducibility in research is to confirm previously published results
(Hubbard et al., 1998; Aguinis & Solarino, 2019). However, due to a lack of data, a lack of
transparency in methodology, reluctance to publish, and limited interest among researchers,
replication of existing research studies is rarely promoted (Baker, 2016; Hedges &
Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business
and management literature (Kerr et al., 2016; Camerer et al., 2016). However, studies that
provide replicability of the reported results are considered rare in previous research (Burman
et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), an
empirical study on this topic is carried out in Chapter 4 of this work.
Given this focus, researchers have published a large body of research on the impact of micro-
and macro-level determinants on social inclusion, although it is still unclear whether these
studies accurately reflect reality. It is important to provide conceptual underpinnings to the
field through a reassessment of published results (Bettis et al., 2016). The results of their
research make it abundantly clear that the macro determinants support social entrepreneurship.
In keeping with the more narrative approach, which is a crucial concern and requires attention,
Chapter 5 considers the reproducibility of previous results, particularly on the topic of social
entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of
reproducibility and validate the specific conclusions they drew. The literal and constructive
replication in the dissertation inspired us to explore technical replication research on social
entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the
growth of social ventures. The current debate reviews and references literature that has
specifically focused on the development of social entrepreneurship. An empirical analysis of
factors directly related to the ambitious growth of social entrepreneurship is also carried out.
Numerous social entrepreneurial groups have been studied concerning this association. Chapter
6 compares the growth ambitions of social and traditional (commercial) entrepreneurship as
consequences at the micro level. This study examined many characteristics of social and
commercial entrepreneurs' growth ambitions. Scholars have claimed to some extent that the
growth of social entrepreneurship differs from commercial entrepreneurial activities due to
differences in objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative
research has been used in studies to support the evidence on related topics; for example, Gupta et
al. (2020) emphasized that research needs to focus on specific concepts of social
entrepreneurship for the field to advance. Therefore, this study provides a quantitative,
analysis-based assessment of facts and data. For this purpose, a data set from the Global
Entrepreneurship Monitor (GEM) 2015 was used, which examined 12,695 entrepreneurs from
38 countries. Furthermore, this work conducted a regression analysis to evaluate the influence
of various social and commercial characteristics of entrepreneurship on economic growth in
developing countries. Chapter 7 briefly explains future directions and practical/theoretical
implications.
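The regression step described above can be illustrated on simulated data (everything here, including the coefficient values and variable names, is synthetic and hypothetical; only the modelling step, OLS with a social-entrepreneur dummy and a control variable, mirrors the approach):

```python
import numpy as np

# Synthetic stand-in for a GEM-style cross-section: growth ambition
# regressed on a social-entrepreneur dummy and a control. All numbers
# are simulated, not taken from the GEM 2015 data set.
rng = np.random.default_rng(7)
n = 5_000
social = rng.integers(0, 2, n)          # 1 = social entrepreneur
educ = rng.normal(12, 3, n)             # years of education (control)

# assumed data-generating process: social entrepreneurs have a lower
# growth ambition (true coefficient -0.8)
ambition = 2.0 - 0.8 * social + 0.15 * educ + rng.normal(0, 1, n)

# OLS via least squares on [intercept, social, educ]
X = np.column_stack([np.ones(n), social, educ])
beta, *_ = np.linalg.lstsq(X, ambition, rcond=None)
```

With 5,000 simulated observations the estimated coefficients recover the assumed ones closely, which is the logic behind testing whether social and commercial entrepreneurs' growth ambitions differ.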
This thesis centers on formal tree languages and on their learnability by algorithmic methods in abstractions of several learning settings. After a general introduction, we present a survey of relevant definitions for the formal tree concept as well as special cases (strings) and refinements (multi-dimensional trees) thereof. In Chapter 3 we discuss the theoretical foundations of algorithmic learning in a specific type of setting of particular interest in the area of Grammatical Inference, where the task consists in deriving a correct formal description for an unknown target language from various information sources (queries and/or finite samples) in a polynomial number of steps. We develop a parameterized meta-algorithm that incorporates several prominent learning algorithms from the literature in order to highlight the basic routines which, regardless of the nature of the information sources, have to be run through by all those algorithms alike. In this framework, the intended target descriptions are deterministic finite-state tree automata. We discuss the limited transferability of this approach to another class of descriptions, residual finite-state tree automata, for which we propose several learning algorithms as well. The class learnable by these techniques corresponds to the class of regular tree languages. In Chapter 4 we outline a recent range of attempts in Grammatical Inference to extend the learnable language classes beyond regularity and even beyond context-freeness by techniques based on syntactic observations which can be subsumed under the term 'distributional learning', and we describe learning algorithms in several settings for the tree case taking this approach. We conclude with some general reflections on the notion of learning from structural information.
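For the string case (which the thesis treats as a special case of trees), the core idea of distributional learning, grouping substrings by the contexts in which they occur, can be sketched on a finite sample as follows (a toy finite-sample approximation of the syntactic congruence, not one of the thesis's algorithms):

```python
from collections import defaultdict

def substrings(w):
    """All contiguous substrings of w, including the empty string."""
    return {w[i:j] for i in range(len(w) + 1) for j in range(i, len(w) + 1)}

def distributional_classes(sample):
    """Group the substrings occurring in a finite sample by their observed
    context sets {(l, r) : l + u + r in sample} -- the finite-sample
    approximation of the syntactic congruence on which distributional
    learners are built (trees use tree contexts analogously)."""
    contexts = defaultdict(set)
    for w in sample:
        for u in substrings(w):
            # every occurrence of u in w yields one context (prefix, suffix)
            for i in range(len(w) - len(u) + 1):
                if w[i:i + len(u)] == u:
                    contexts[u].add((w[:i], w[i + len(u):]))
    classes = defaultdict(set)
    for u, ctx in contexts.items():
        classes[frozenset(ctx)].add(u)
    return list(classes.values())

# tiny sample from the non-regular language {a^n b^n : n >= 1}
classes = distributional_classes(["ab", "aabb", "aaabbb"])
```

On this sample, `a` and `b` fall into different classes because they occur in different context sets, which is exactly the kind of syntactic observation distributional learners exploit.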
To this day, the effects of many chlorinated hydrocarbons (e.g. DDT, PCBs) on specific organisms remain a subject of controversial discussion. This is also the case for potential endocrine effects on spermatogenesis and their correlation with possible changes in a population's vitality. To clarify this situation, three questions were at the centre of attention: 1) Do the chemicals cause a specific harmful effect on the male reproductive tract? 2) Can particular chemical mixtures bind to and activate the human estrogen receptor (hER)? 3) Are certain life stages of an organism especially sensitive to the effects of chemicals, so that they could be established as a screening test system? The combined effects of DDT and Aroclor 1254, as single substances and in a 1:1 mixture, were therefore investigated with regard to their estrogenic effectiveness in zebrafish (Brachydanio rerio). The concentrations of the pesticides and their mixture ranged between 0.05 µg/l and 500 µg/l, separated by a factor of 10. It turned out that the test concentration of 500 µg/l was too toxic to zebrafish in all cases. The experiment was continued with four concentrations of DDT, A54 and their 1:1 mixture, again each separated by a factor of 10 and ranging between 0.05 µg/l and 50 µg/l. The bioaccumulation test over 8 days showed that the zebrafish accumulated the chemicals, but no equilibrium was reached, and the concentration of 0.05 µg/l was established as the No Observed Effect Concentration (NOEC). Building on these analyses, the investigation of the life cycle (LC), starting with fertilized eggs, demonstrated a reduction in the rate of hatchability and reproduction and in the length of the emerging fish. These reductions extended the duration of the life cycle stages (LCS), which consequently lasted longer than expected. The effects occurred earlier with longer exposure time and higher levels of the tested chemicals, and were more pronounced when the chemical mixture was used.
To establish whether the parameters assessed were correlated with the male reproductive tract, the quality, quantity and life span of sperm were assessed using the methods of Leong (1988) and Shapiro et al. (1994). The sperm degeneration observed led us to investigate spermatogenesis and the ultrastructure of the testes. This last experiment showed a significant reduction of the late stage of spermatogenesis and of the heterophagic vacuoles, which play an important role in spermatid maturation. It can therefore be concluded that DDT and A54 may act synergistically, causing disorders of the male reproductive tract of male zebrafish and also influencing their growth.
The collapse of the tailings pond of the Aznalcóllar open pit mine (west of Seville, Spain) in April 1998 left more than 4000 ha of arable land and floodplains contaminated with heavy-metal-containing pyrite sludge. After a first remediation campaign, a considerable contamination remained in the soil. The present study evaluates the possibilities of reflectance spectroscopy and airborne hyperspectral remote sensing for the qualitative and quantitative assessment of heavy metal contamination and the acidification risk related to the mining accident. Based on an extensive data set consisting of geochemical analyses and reflectance measurements of more than 300 soil samples, different chemometric methods (multiple linear regression, partial least squares and artificial neural networks) are tested for computing concentrations of soil constituents on the basis of the spectral reflectance. Spectral mixture analysis is applied for the analysis of the spatial distribution of the contamination. The abundance information derived from spectral mixture analysis is turned into quantitative information by incorporating an artificial mixture experiment. The results of this experiment provide a link between sludge abundance and sludge weight, allowing as a consequence the calculation of the amount of residual sludge per pixel, the acidification potential and other parameters important for remediation planning. The application of laboratory, field and imaging spectroscopy for providing quantitative information about the contamination levels in their spatial context is a good complement to conventional methods. The advantage is the reduction of the time- and labour-intensive geochemical analysis, because after the model calibration, further samples can be analysed directly with the chemometric models. Furthermore, the spatial distribution can be mapped with imaging spectroscopy data, helping towards more precise remediation planning.
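The linear spectral mixture step can be illustrated with a minimal synthetic example (the endmember spectra, noise level and the weighted sum-to-one row below are illustrative choices, not the thesis's actual data or implementation):

```python
import numpy as np

# Linear spectral mixture analysis: a measured pixel spectrum is modelled
# as y = E @ a, where E holds the endmember spectra (bands x endmembers)
# and a holds the abundance fractions, constrained to sum to one.
rng = np.random.default_rng(1)
bands, endmembers = 50, 3
E = np.abs(rng.normal(0.5, 0.2, (bands, endmembers)))  # synthetic spectra

a_true = np.array([0.6, 0.3, 0.1])          # e.g. sludge, soil, vegetation
y = E @ a_true + rng.normal(0, 0.001, bands)  # mixed pixel plus noise

# enforce sum-to-one softly via an appended, heavily weighted equation row
w = 100.0
E_aug = np.vstack([E, w * np.ones(endmembers)])
y_aug = np.append(y, w * 1.0)
a_hat, *_ = np.linalg.lstsq(E_aug, y_aug, rcond=None)
```

Applied per pixel of a hyperspectral image, the recovered abundances yield exactly the kind of sludge-abundance maps that the artificial mixture experiment then converts into sludge weight per pixel.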
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation, given by the composition of functions. They raised the following question: for which functions Ψ does the equation F(Ψ)=f always have a solution? Of course this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Further restrictions on these functions, especially on F, considerably complicate the search for an adequate solution. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered and we give three conditions that are both sufficient and necessary for the composition equation to always have a solution. Furthermore, one can unify these three conditions and show that they are equivalent to the fact that Ψ has a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable. For instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of Whitney-regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to possess a solution of F(Ψ)=f.
Stress-related disorders are continuously increasing. It is not yet clear whether stress also promotes breast cancer. This dissertation provides an analysis of the current state of research and focuses on the significance of pre-/postnatal stress factors and chronic stress. The derived hypotheses are empirically examined in breast cancer patients. The clinical study investigates the links between those factors and prognosis and outcome.
Climate fluctuations and the pyroclastic depositions from volcanic activity both influence ecosystem functioning and biogeochemical cycling in terrestrial and marine environments globally. These controlling factors are crucial for the evolution and fate of the pristine but fragile fjord ecosystem in the Magellanic moorlands (~53°S) of southernmost Patagonia, which is considered a critical hotspot for organic carbon burial and marine bioproductivity. At this active continental margin in the core zone of the southern westerly wind belt (SWW), frequent Plinian eruptions and the extremely variable, hyper-humid climate should have efficiently shaped ecosystem functioning and land-to-fjord mass transfer throughout the Late Holocene. However, a better understanding of the complex process network defining the biogeochemical cycling at this land-to-fjord continuum principally requires a detailed knowledge of substrate weathering and pedogenesis in the context of the extreme climate. Yet, research on soils, the ubiquitous presence of tephra and the associated chemical weathering, secondary mineral (trans)formation and organic matter (OM) turnover processes is rare in this remote region. This complicates an accurate reconstruction of the ecosystem's potentially sensitive response to past environmental impacts, including the dynamics of Late Holocene land-to-fjord fluxes as a function of volcanic activity and strong hydroclimate variability.
Against this background, this PhD thesis aims to disentangle the controlling factors that modulate the terrigenous element mobilization and export mechanisms in the hyper-humid Patagonian Andes and assesses their significance for fjord primary productivity over the past 4.5 kyrs BP. For the first time, distinct biogeochemical characteristics of the regional weathering system serve as major criterion in paleoenvironmental reconstruction in the area. This approach includes broad-scale mineralogical and geochemical analyses of basement lithologies, four soil profiles, volcanic ash deposits, the non-karst stalagmite MA1 and two lacustrine sediment cores. In order to pay special attention to the possibly important temporal variations of pedosphere-atmosphere interaction and ecological consequences initiated by volcanic eruptions, the novel data were evaluated together with previously published reconstructions of paleoclimate and paleoenvironmental conditions.
The devastating high tephra loading of a single eruption from Mt. Burney volcano (MB2 at 4.216 kyrs BP) sustainably transformed this vulnerable fjord ecosystem, while acidic peaty Andosols developed from ~2.5 kyrs BP onwards after the recovery from millennium-scale acidification. The special setting is dominated by most variable redox-pH conditions, profound volcanic ash weathering and intense OM turnover processes, which are closely linked and ultimately regulated by SWW-induced water-level fluctuations. Constant nutrient supply through sea spray deposition represents a further important control on peat accumulation and OM turnover dynamics. These extreme environmental conditions constrain the biogeochemical framework for an extended land-to-fjord export of leachates comprising various organic and inorganic colloids (i.e., Al-humus complexes and Fe-(hydr)oxides). Such tephra- and/or Andosol-sourced flux contains high proportions of terrigenous organic carbon (OCterr) and mobilized essential (micro)nutrients, e.g., bio-available Fe, that are beneficial for fjord bioproductivity. It can be assumed that this supply of bio-available Fe produced by specific Fe-(hydr)oxide (trans)formation processes from tephra components may persist for more than 6 kyrs and surpasses the contribution from basement rock weathering and glacial meltwaters. However, the land-to-fjord exports of OCterr and bio-available Fe occur mostly asynchronously and are determined by the frequency and duration of redox cycles in soils or are initiated by SWW-induced extreme weather events.
The verification of (crypto)tephra layers embedded in stalagmite MA1 enabled the accurate dating of three smaller Late Holocene eruptions from the Mt. Burney (MB3 at 2.291 kyrs BP and MB4 at 0.853 kyrs BP) and Aguilera (A1 at 2.978 kyrs BP) volcanoes. Beyond the improvement of the regional tephrochronology, the obtained precise 230Th/U-ages allowed constraints on the ecological consequences caused by these Plinian eruptions. The deposition of these thin tephra layers should have entailed a very beneficial short-term stimulation of fjord bioproductivity with bio-available Fe and other (micro)nutrients, affecting the entire area between 52°S and 53°30'S. For such beneficial effects, the thickness of tephra deposited on this highly vulnerable peatland ecosystem should be below a threshold of 1 cm.
The Late Holocene element mobilization and land-to-fjord transport was mainly controlled by (i) volcanic activity and tephra thickness, (ii) SWW-induced and southern hemispheric climate variability and (iii) the current state of the ecosystem. The influence of cascading climate and environmental impacts on OCterr and Fe-(hydr)oxide fluxes to the fjord can be categorized into four individual, partly overlapping scenarios. These different scenarios take into account the previously specified fundamental biogeochemical mechanisms and define frequently recurring patterns of ecosystem feedbacks governing the land-to-fjord mass transfer in the hyper-humid Patagonian Andes on the centennial scale. This PhD thesis provides first evidence for a primarily tephra-sourced, continuous and long-lasting (micro)nutrient fertilization for phytoplankton growth in South Patagonian fjords, which is ultimately modulated by variations in SWW intensity. It highlights the climate sensitivity of this critical land-to-fjord element transport and particularly emphasizes the important but so far underappreciated significance of volcanic ash inputs for biogeochemical cycles at active continental margins.
Exposure to fine and ultrafine environmental particles is still a problem of concern in many industrialized parts of the world, and the intensified use of nanotechnology may further increase exposure to small particles. For many years, air pollution has been recognized as a critical problem in Western countries, which led to rigorous regulation of air quality and the introduction of strict guidelines. However, the upper thresholds for particulates in ambient air recommended by the World Health Organization are often exceeded several times over in newly industrialized countries. Such high levels of air pollution have the potential to induce adverse effects on human health. The response triggered by air pollutants is not limited to local effects on the respiratory system but is often systemic, resulting in endothelial dysfunction or atherosclerotic disease. The link between air pollution and cardiovascular disease is now accepted by the scientific community, but the underlying mechanisms responsible for the pro-atherogenic potential still need to be unraveled in detail. Based on the results of in vivo and in vitro studies, the production of reactive oxygen species due to particle exposure is the most important mechanism proposed to explain the observed adverse effects. However, the doses applied in many in vivo and in vitro studies are far beyond the range of what humans are exposed to, and there is a need for more realistic exposure studies. Complex in vitro coculture systems may be valuable tools to study particle-induced processes and to extrapolate effects of particles on the lung. One of the objectives of this PhD thesis was the establishment and further improvement of a complex coculture system initially described by Alfaro-Moreno et al. [1].
The system is composed of an alveolar type-II cell line (A549), differentiated macrophage-like cells (THP-1), mast cells (HMC-1) and endothelial cells (EA.hy 926), seeded in a 3D orientation on a microporous membrane to mimic the cell response of the alveolar surface in vitro in conjunction with native aerosol exposure (Vitrocell™ chamber). The tetraculture system was carefully characterized to ensure its performance and the repeatability of results. The spatial distribution of the cells in the tetraculture was analyzed by confocal laser scanning microscopy (CLSM), showing a confluent layer of endothelial and epithelial cells on both sides of the Transwell™. Macrophage-like cells and mast cells can be found on top of the epithelial cells. The latter cells formed colonies under submerged conditions, which disappeared at the air-liquid interface (ALI). The Vitrocell™ aerosol exposure system did not significantly influence viability. Using this system, cells were exposed to an aerosol of 50 nm SiO2-rhodamine nanoparticles (NPs) in PBS. The distribution of the NPs in the tetraculture after exposure was evaluated by CLSM. Fluorescence from internalized particles was detected in CD11b-positive THP-1 cells only. Furthermore, all cell lines were found to be able to respond to xenobiotic model compounds, such as benzo[a]pyrene (B[a]P) or 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), with the upregulation of CYP1 mRNA. With this tetraculture system, the response of the endothelial part of the alveolar barrier was studied in vitro in a realistic exposure scenario representing the conditions of a polluted situation without direct exposure of endothelial cells.
After exposure to diesel exhaust particulate matter (DEPM), the expression of different anti-oxidant and inflammatory target genes, such as NAD(P)H dehydrogenase quinone 1 (NQO1), superoxide dismutase 1 (SOD1) and heme oxygenase 1 (HMOX1), as well as the nuclear translocation of nuclear factor erythroid-derived 2 (Nrf2), was evaluated. In addition, the potential of DEPM to induce the upregulation of CYP1A1 mRNA in the endothelium was analyzed. DEPM exposure did not lead to an upregulation of the anti-oxidant or inflammatory target genes, but it did lead to a clear nuclear translocation of Nrf2. The endothelial cells also responded to the DEPM treatment with the upregulation of CYP1A1 mRNA and nuclear translocation of the aryl hydrocarbon receptor (AhR). Overall, DEPM triggered a response in the endothelial cells after indirect exposure of the tetraculture system to low doses of DEPM, underlining the sensitivity of ALI exposure systems. The use of the tetraculture together with native aerosol exposure equipment may ultimately allow a more realistic assessment of the hazard of new compounds and/or new nano-scaled materials in the future. For the first time, it was possible to study the response of the endothelial cells of the alveolar barrier in vitro in a realistic exposure scenario, avoiding direct exposure of endothelial cells to high amounts of particulates.
The first part of this thesis offers a theoretical foundation for the analysis of Tolkien's texts. Each of the three fields of interest, nostalgia, utopia, and the pastoral tradition, is introduced in a separate chapter. Special attention is given to the interrelations of the three fields. Their history, meaning, and functions are briefly elaborated, and definitions applicable to their occurrences in fantasy texts are reached. In doing so, new categories and terms are proposed that enable a detailed analysis of the nostalgic, pastoral, and utopian properties of Tolkien's works. As nostalgia and utopia are important ingredients of pastoral writing, they are each introduced first and are finally related to a definition of the pastoral. The main part of this thesis applies the definitions and insights reached in the theoretical chapters to Tolkien's The Lord of the Rings and The Hobbit. This part is divided into three main sections. Again, the order of the chapters follows the line of argumentation. The first section contains the analysis of pastoral depictions in the two texts. Given the separation of the pastoral into different categories, which were outlined in the theoretical part, the chapters examine bucolic and georgic pastoral creatures and landscapes before turning to non-pastoral depictions, which are sub-divided into the antipastoral and the unpastoral. A separate chapter looks at the bucolic and georgic pastorals' positions and functions in the primary texts. This analysis is followed by a chapter on men's special position in Tolkien's mythology, as their depiction reveals their potential to be both pastoral and antipastoral. The second section of the analytical part is concerned with the role of nostalgia within pastoral culture. The focus is laid on the meaning and function of the different kinds of nostalgia, defined in the theoretical part, that are detectable in bucolic and georgic pastoral cultures. 
Finally, the analysis turns to the utopian potential of Tolkien's mythology. Again, the focus lies on the pastoral and non-pastoral creatures. Their utopian and dystopian visions are presented and contrasted. In this way, different kinds of utopian vision are detected and set in relation to the overall dystopian fate of Tolkien's fictional universe. Drawing on the results of this thesis and on Terry Gifford's ecocritical work, the final chapter argues that Tolkien's texts can be defined as modern pastorals. The connection between Tolkien's work and pastoral literature made explicit in the analysis is thus cemented in generic terms. The conclusion presents a summary of the central findings of this thesis and introduces questions for further study.
This dissertation deals with consistent estimation in household surveys. Household surveys are often drawn via cluster sampling, with households sampled at the first stage and persons selected at the second stage. The collected data provide information for estimation at both the person and the household level. Consistent estimates are desirable in the sense that the estimated household-level totals should coincide with the estimated totals obtained at the person level. Current practice in statistical offices is to use integrated weighting, in which consistent estimates are guaranteed by assigning equal weights to all persons within a household and to the household itself. However, due to the forced equality of weights, the individual patterns of persons are lost and the heterogeneity within households is not taken into account. In order to avoid the negative consequences of integrated weighting, we propose alternative weighting methods in the first part of this dissertation that ensure both consistent estimates and individual person weights within a household. The underlying idea is to limit the consistency conditions to variables that appear in both the person-level and household-level data sets. These common variables are included in the person- and household-level estimators as additional auxiliary variables. This achieves consistency directly and only for the relevant variables, rather than indirectly by forcing equal weights on all persons within a household. Further decisive advantages of the proposed alternative weighting methods are that the original individual auxiliaries, rather than constructed aggregated ones, are utilized, and that the variable selection process is more flexible, because different auxiliary variables can be incorporated in the person-level estimator than in the household-level estimator.
In the second part of this dissertation, the variances of a person-level GREG estimator and an integrated estimator are compared in order to quantify the effects of the consistency requirements in the integrated weighting approach. One of the challenges is that the estimators to be compared are of different dimensions. The proposed solution is to decompose the variance of the integrated estimator into the variance of a reduced GREG estimator, whose underlying model is of the same dimension as that of the person-level GREG estimator, plus a constructed term that captures the effects disregarded by the reduced model. Subsequently, further fields of application for the derived decomposition are proposed, such as the variable selection process in econometrics or survey statistics.
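To make the GREG estimator concrete, here is a minimal sketch of a design-weighted GREG total estimate on simulated data. The population, sampling design and auxiliary variable are all hypothetical, and the code illustrates only the generic GREG estimator, not the dissertation's integrated or reduced variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: y depends linearly on an auxiliary variable x.
N = 10_000
x_pop = rng.uniform(1.0, 5.0, size=N)
y_pop = 2.0 + 3.0 * x_pop + rng.normal(0.0, 0.5, size=N)

# Simple random sample without replacement; design weights d_i = N / n.
n = 200
idx = rng.choice(N, size=n, replace=False)
x_s, y_s = x_pop[idx], y_pop[idx]
d = np.full(n, N / n)

# Design-weighted least squares fit on the sample (columns: intercept, x).
X_s = np.column_stack([np.ones(n), x_s])
beta = np.linalg.solve(X_s.T @ (d[:, None] * X_s), X_s.T @ (d * y_s))

# GREG total: model prediction at the known population totals of the
# auxiliaries, plus the design-weighted sum of the sample residuals.
tx_pop = np.array([N, x_pop.sum()])
t_ht = np.sum(d * y_s)                                   # Horvitz-Thompson
t_greg = tx_pop @ beta + np.sum(d * (y_s - X_s @ beta))  # GREG
print(t_ht, t_greg, y_pop.sum())
```

Because the GREG estimator borrows strength from the auxiliary totals, it typically tracks the true population total more closely than the plain Horvitz-Thompson estimate when the working model fits well.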
The optimal control of fluid flows described by the Navier-Stokes equations requires massive computational resources, which has led researchers to develop reduced-order models, such as those derived from proper orthogonal decomposition (POD), to reduce the computational complexity of the solution process. The objective of the thesis is the acceleration of such reduced-order models through the combination of POD reduced-order methods with finite element methods at various discretization levels. Special stabilization methods, required for the high-order solution of flow problems with dominant convection on coarse meshes, lead to numerical data that is incompatible with standard POD methods for reduced-order modeling. We successfully adapt the POD method to such problems by introducing the streamline diffusion POD method (SDPOD). Using the novel SDPOD method, we experiment with multilevel recursive optimization at Reynolds numbers of Re=400 and Re=10,000.
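A minimal sketch of how a POD basis is extracted from snapshot data may help here. This is plain POD via the singular value decomposition on hypothetical snapshots, not the stabilized SDPOD variant introduced in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical snapshot matrix: each column is a discretized flow state at one
# time instant, built from two dominant spatial structures plus small noise.
n_dof, n_snap = 500, 40
x = np.linspace(0.0, 1.0, n_dof)
t = np.linspace(0.0, 1.0, n_snap)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * x), np.sin(2 * np.pi * t))
             + 1e-3 * rng.normal(size=(n_dof, n_snap)))

# POD: the left singular vectors of the snapshot matrix are the POD modes,
# ordered by the "energy" (squared singular value) they capture.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
modes = U[:, :r]

# Reduced-order reconstruction: project the snapshots onto the r POD modes.
reconstruction = modes @ (modes.T @ snapshots)
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(r, rel_err)
```

The payoff of the reduced-order approach is that a handful of modes reproduces the high-dimensional snapshots almost exactly, so the optimization can work in the r-dimensional mode space instead of the full discretization.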
Comparing the results of the phylogeographies of the four species included in this thesis, some commonalities were found, even though certain patterns are represented in only one or two species. In all cases, the findings for the studied species strongly support the existence of forests or forest-like ecosystems beyond the classic forest refugia in the Mediterranean areas (Iberian, Apennine and Balkan peninsulas) during glacial times. However, evidence of glacial refugial areas in Southeastern Europe, especially the Balkans, was found in this study as well. The analysed populations of Aposeris foetida, Melampyrum sylvaticum and Erebia euryale showed high genetic diversity values and mostly higher numbers of private fragments in this area, which is a strong indicator of centres of glacial survival during the Würm and, judging from the results for M. sylvaticum, even during the Riss ice age. Three of the analysed species (A. foetida, M. sylvaticum and Colias palaeno) supported a second main glacial refuge area located along the Northern Alps. Again, high genetic diversity values and the uniqueness of the populations living in this region today attest to the importance of this area as a glacial centre of survival. These results confirm several recently published studies on forest species and strongly indicate the persistence of forest-like structures or even forests during the ice ages along the foothills of the Northern Alps. Additionally, the persistence of C. palaeno in this area supports the existence of peatlands north of the Alps, at least during the last glacial. The results for M. sylvaticum and E. euryale further indicate the vicinity of the Tatra Mountains as a core area for glacial survival. However, the genetic patterns found for E. euryale are ambiguous: due to the intermediate position of two genetic lineages (originating in the Eastern Alps and Southeastern Europe), the Tatras could also reflect a postglacial mixture zone of these lineages. 
Moreover, the glacial and postglacial importance of this area for woodland species was underscored, supporting other published phylogeographic studies. Besides the congruities among the results for the study species, some unique patterns, and therefore further potential glacial refugia, were also brought to light in this thesis. For instance, the calcicole species A. foetida most probably had further survival areas on both sides of the Dinaric Alps, supported by high genetic diversity values and a high number of private fragments found in Croatian populations. Furthermore, the surroundings of the German Uplands and the margin of the Southern Alps provided suitable conditions for glacial survival of M. sylvaticum, while the Eastern and Southeastern Alpine region most probably sheltered the Large Ringlet E. euryale during the ice ages. Additionally, this butterfly species survived at least the last glaciation along the foothills of the Massif Central, whose present populations show a unique genetic lineage and genetic diversity values measurably higher than in other populations of this species. Finally, a large and continuous Würm distribution south of the Fennoscandian glaciers in Central Europe is highly likely for C. palaeno, which might indicate extended peatland areas during the Würm glacial. With all the patterns found in this study, the understanding of the glacial persistence of forests or forest-like structures and peatlands during the Würm or even the Riss glacial in Europe could be advanced. The congruencies among the analysed woodland and bog species illustrate the importance and location of extra-Mediterranean refugia for European mountain forests and the glacial presence of Central European peatlands. Thus, previously postulated theories could be supported and further pieces added to the overall puzzle. 
The variety of different survival centres makes clear once more that further phylogeographic studies on mountain forest species with different habitat requirements, and especially on peatland species, need to be carried out to obtain a clearer picture of the glacial history of these habitats.
The contribution of three genes (C15orf53, OXTR and MLC1) to the etiology of chromosome 15-bound schizophrenia (SCZD10), bipolar disorder (BD) and autism spectrum disorder (ASD) was studied. First, the uncharacterized gene C15orf53 was comprehensively analyzed. Previous genome-wide association studies (GWAS) in bipolar disorder samples had identified an association signal in close vicinity to C15orf53 on chromosome 15q14. This gene is located in exactly the genomic region that segregates in our SCZD10 families. An association study with individual BD and SCZD10 samples did not reveal any association of single nucleotide polymorphisms (SNPs) in C15orf53. Mutational analysis of C15orf53 in SCZD10-affected individuals from seven multiplex families did not show any mutations in the 5'-untranslated region, the coding region or the intron-exon boundaries. Gene expression analysis revealed that C15orf53 is expressed in a subpopulation of leukocytes, but not in human post-mortem limbic brain tissue. In summary, C15orf53 is unlikely to be a strong candidate gene for the etiology of BD or SCZD10. The second gene investigated was the human oxytocin receptor gene (OXTR). Five well-described SNPs located in the OXTR gene were used for a transmission-disequilibrium test (TDT) in parent-child trios with ASD-affected children. No association was found, either in the complete sample or in a subgroup of children with an intelligence quotient (IQ) above 70, independent of whether Haploview or UNPHASED was used for the analysis. The third gene, MLC1, was investigated with regard to its implication in the etiology of SCZD10. Mutations in the MLC1 gene lead to megalencephalic leukoencephalopathy with subcortical cysts (MLC), and one variant coding for the amino acid methionine (Met) instead of leucine (Leu) at position 309 was identified as segregating in a family affected with SCZD10. 
For further investigation of MLC1 and its possible implication in the etiology of SCZD10, a constitutive Mlc1 knockout mouse model was to be created. Mouse embryonic stem cells (mES) were electroporated with a knockout vector construct and analyzed with respect to homologous recombination of the knockout construct with the genomic DNA (gDNA) of the mES. Polymerase chain reaction (PCR) on the available stem cell clones did not reveal any homologously recombined ES cells. Additionally, we conducted experiments to knock down MLC1 using microRNAs. The 3'-untranslated region of the MLC1 gene was analyzed with the bioinformatics tool TargetScan to screen for potential microRNA target sites, and a potential binding site for miR-137 was identified. The gene expression levels of genes that had been linked to psychiatric disorders and carried a predicted miR-137 binding site have been shown to be directly responsive to miR-137. Thus, there is new evidence that MLC1 is a candidate gene for the etiology of SCZD10.
In the modeling context, non-linearities and uncertainty go hand in hand: the utility function's curvature determines the degree of risk aversion. This concept is exploited in the first article of this thesis, which incorporates uncertainty into a small-scale DSGE model. More specifically, this is done via a second-order approximation, with the derivation carried out in great detail and the more formal aspects carefully discussed. Moreover, the consequences of this method for calibrating the equilibrium condition are discussed. The second article of the thesis considers the essential model part of the first paper and focuses on the (forward-looking) data needed to meet the model's requirements. A large number of uncertainty measures are utilized to explain a possible approximation bias. The last article keeps to the same topic but uses statistical distributions instead of actual data. In addition, theoretical (model) and calibrated (data) parameters are used to produce more general statements. In this way, several relationships are revealed with regard to a biased interpretation of this class of models. In this dissertation, the respective approaches are explained in full detail, as is how they build on each other.
In summary, the question remains whether the exact interpretation of model equations should play a role in macroeconomics. If the answer is yes, this work shows to what extent practical use can lead to biased results.
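The link between utility curvature and risk aversion that motivates the second-order approximation can be checked numerically. The sketch below is a generic CRRA example with hypothetical parameters, not the thesis's DSGE model: a first-order approximation ignores risk entirely, while the second-order correction 0.5·u″(μ)·σ² captures it.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical CRRA utility; gamma is the coefficient of relative risk
# aversion, i.e. exactly the curvature a second-order approximation picks up.
gamma = 2.0

def u(c):
    return c ** (1.0 - gamma) / (1.0 - gamma)

def u2(c):
    # Second derivative of u; negative, so risk lowers expected utility.
    return -gamma * c ** (-gamma - 1.0)

mu, sigma = 1.0, 0.05
c = mu + sigma * rng.normal(size=1_000_000)     # risky consumption draws

exact = np.mean(u(c))                            # Monte Carlo expected utility
first_order = u(mu)                              # certainty view: no risk term
second_order = u(mu) + 0.5 * u2(mu) * sigma**2   # curvature-corrected

print(first_order, second_order, exact)
```

The second-order value sits far closer to the simulated expectation than the first-order one, which is the sense in which a second-order approximation "incorporates uncertainty" into an otherwise deterministic model.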
Startups are essential agents for the evolution of economies and for the creative destruction of established market conditions in favor of a more effective and efficient economy. Their significance is manifested in their drive for innovation and technological advancement, their creation of new jobs, their contribution to economic growth, and their impact on increased competition and market efficiency. By reason of their newness and smallness, startups often experience limited access to external financial resources. Extant research on entrepreneurial finance examines, among other topics, the capital structure of startups, various funding tools, financing environments in certain regions, and investor selection criteria. My dissertation contributes to this research area by examining the increasingly important funding instrument of venture debt. Prior research on venture debt has only investigated the business model of venture debt, the concept of venture debt, the selection criteria of venture debt providers, and the role of patents in the venture debt provider’s selection process. Based on qualitative and quantitative methods, the dissertation outlines the emergence of venture debt in Europe as well as the impact of venture debt on startups, in order to develop a better understanding of venture debt.
The results of the qualitative studies indicate that venture debt was formed on the basis of a ‘Kirznerian’ entrepreneurial opportunity, and that venture debt affects startups both positively and negatively in their development via different impact mechanisms.
Based on these results, the dissertation analyzes the empirical impact of venture debt on a startup’s ability to acquire additional financial resources as well as the role of the reputation of venture debt providers. The results suggest that venture debt increases the likelihood of acquiring additional financial resources via subsequent funding rounds and trade sales. In addition, a higher venture debt provider reputation increases the likelihood of acquiring additional financial resources via IPOs.
This dissertation is titled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for estimating aggregate-specific indicators at high geographic or contextual resolution from small samples, a setting commonly treated in the literature under the term small area estimation. The core of the thesis is an analysis of the effects of regularized parameter estimation in regression models commonly used for small area estimation. The analysis is primarily theoretical: the statistical properties of these estimation methods are characterized and proven mathematically. In addition, the results are illustrated by numerical simulations and critically contextualized against the background of empirical applications. The dissertation is organized into three parts. Each part addresses an individual methodological problem in the context of small area estimation that can be solved by the use of regularized estimation methods. Each problem is briefly introduced below, along with an explanation of the benefit of regularization.
The first problem is small area estimation in the presence of unobserved measurement errors. Regression models typically describe endogenous variables on the basis of statistically related exogenous variables. Such a description postulates a functional relationship between the variables, characterized by a set of model parameters that must be estimated from observed realizations of the respective variables. If the observations are distorted by measurement errors, however, the estimation process yields biased results, and any subsequent small area estimates are unreliable. Methodological adjustments for this exist in the literature, but they usually require restrictive assumptions about the measurement error distribution. The dissertation proves that, in this context, regularization is equivalent to an estimation that is robust against measurement errors, regardless of the error distribution. This equivalence is then used to derive robust variants of well-known small area models. For each model, an algorithm for robust parameter estimation is constructed. In addition, a new approach is developed that quantifies the uncertainty of small area estimates in the presence of unobserved measurement errors. It is further shown that this form of robust estimation has the desirable property of statistical consistency.
The second problem is small area estimation from data sets containing auxiliary variables at different resolutions. Regression models for small area estimation are usually specified either for person-level observations (unit level) or for aggregate-level observations (area level). Given ever-growing data availability, however, situations in which data are available at both levels are increasingly common. This holds great potential for small area estimation, since new multi-level models with high explanatory power can be constructed. From a methodological perspective, however, connecting the levels is complicated: central steps of statistical inference, such as variable selection and parameter estimation, must be carried out on both levels simultaneously, and the literature offers hardly any generally applicable methods for this. The dissertation shows that using level-specific regularization terms in the modelling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that efficiently uses the information from all levels under adaptive regularization. In addition, parametric procedures are presented for assessing the uncertainty of estimates produced by this method. Building on this, it is proven that, given an adequate regularization term, the developed approach is consistent in both estimation and variable selection.
The third problem is small area estimation of proportions under strong distributional dependencies within the covariates. Such dependencies exist when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity); the literature also subsumes under this term situations in which several covariates are strongly correlated (quasi-multicollinearity). If a regression model is specified on such data, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting small area estimates are imprecise. The effect is particularly strong when the quantity to be modeled is non-linear, such as a proportion, because the underlying likelihood function is then no longer available in closed form and must be approximated. The dissertation shows that using an L2 regularization significantly stabilizes the estimation process in this context. Using two non-linear small area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for measuring the uncertainty of the estimates obtained in this way are described.
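The stabilizing effect of L2 regularization under quasi-multicollinearity can be sketched with plain ridge-regularized least squares on hypothetical data; this toy example is not one of the dissertation's non-linear small area models, only an illustration of the general mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical design with two nearly collinear covariates
# (quasi-multicollinearity): x2 is x1 plus a tiny perturbation.
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(0.0, 0.1, size=n)   # true coefficients: (1, 1)

def fit(X, y, lam):
    """Ridge (L2-regularized) least squares; lam = 0.0 gives plain OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = fit(X, y, 0.0)
beta_ridge = fit(X, y, 1.0)

# The OLS split between the two near-identical columns is essentially
# unidentified and can swing to large opposite values; the L2 penalty pins
# both coefficients near the true value of 1.
print(beta_ols, beta_ridge)
```

The same mechanism, adding a strictly convex penalty to an ill-conditioned normal-equations matrix, is what stabilizes the estimation in more elaborate likelihood-based settings.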
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the small area estimation literature. The methods developed here are robust and versatile, which makes them helpful tools for empirical data analysis.
We consider a linear regression model for which we assume that some of the observed variables are irrelevant for the prediction. Including the wrong variables in the statistical model can lead either to having too little information to properly estimate the statistic of interest, or to having too much information and consequently describing fictitious connections. This thesis considers discrete optimization to conduct variable selection. In light of this, the subset selection regression method is analyzed. The approach has gained a lot of interest in recent years due to its promising predictive performance. A major challenge associated with subset selection regression is its computational difficulty. In this thesis, we propose several improvements to the efficiency of the method. Novel bounds on the coefficients of the subset selection regression are developed, which help to tighten the relaxation of the associated mixed-integer program, which relies on a Big-M formulation. Moreover, a novel mixed-integer linear formulation for the subset selection regression, based on a bilevel optimization reformulation, is proposed. Finally, it is shown that the perspective formulation of the subset selection regression is equivalent to a state-of-the-art binary formulation. We use this insight to develop novel bounds for the subset selection regression problem, which prove highly effective in combination with the proposed linear formulation.
In the second part of this thesis, we examine the statistical conception of the subset selection regression and conclude that it is misaligned with its intention. The subset selection regression uses the training error to decide which variables to select, i.e., it conducts the validation on the training data, which is often not a good estimate of the prediction error; hence, it requires a predetermined cardinality bound. Instead, we propose to select variables with respect to the cross-validation value. The process is formulated as a mixed-integer program, with the sparsity itself becoming subject to the optimization. Usually, cross-validation is used to select the best model out of a few options; with the proposed program, the best model out of all possible models is selected. Since the cross-validation value is a much better estimate of the prediction error, the model can select the best sparsity itself.
The thesis concludes with an extensive simulation study, which provides evidence that discrete optimization can be used to produce highly valuable predictive models, with the cross-validation subset selection regression almost always producing the best results.
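A brute-force sketch of the cross-validation subset selection idea on simulated data may illustrate the principle. The thesis formulates this search as a mixed-integer program; the toy code below, with hypothetical data, simply enumerates all subsets and scores each by its cross-validation error, which is only feasible for a handful of variables.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: only variables 0 and 3 of six candidates are relevant.
n, p = 120, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0.0, 0.5, size=n)

def cv_error(X, y, cols, k=5):
    """Mean squared prediction error of OLS on the given columns via k-fold CV."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[np.ix_(train, cols)], y[train], rcond=None)
        err += np.sum((y[fold] - X[np.ix_(fold, cols)] @ beta) ** 2)
    return err / len(y)

# Brute-force stand-in for the mixed-integer program: score every nonempty
# subset; note that the sparsity is decided by the search, not fixed upfront.
subsets = (list(c) for r in range(1, p + 1)
           for c in itertools.combinations(range(p), r))
best = min(subsets, key=lambda cols: cv_error(X, y, cols))
print(best)
```

Because validation happens on held-out folds rather than on the training data, the selected subset reliably contains the truly relevant variables without a predetermined cardinality bound.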
Why do some people become entrepreneurs while others stay in paid employment? Searching for a distinctive set of entrepreneurial skills that matches the profile of the entrepreneurial task, Lazear introduced a theoretical model featuring skill variety for entrepreneurs. He argues that because entrepreneurs perform many different tasks, they should be multi-skilled in various areas. First, this dissertation provides the reader with an overview of previous relevant research results on skill variety with regard to entrepreneurship. The majority of the studies discussed focus on the effects of skill variety. Most studies come to the conclusion that skill variety mainly affects the decision to become self-employed; skill variety also favors entrepreneurial intentions. Less clear are the results with regard to the influence of skill variety on entrepreneurial success: measured on the basis of income and survival of the company, a negative or U-shaped correlation is shown. The empirical part of this dissertation tackles three research goals. First, it investigates whether a variety of early interests and activities in adolescence predicts subsequent variety in skills and knowledge. Second, the determinants of skill variety and of the variety of early interests and activities are investigated. Third, skill variety is tested as a mediator of the gender gap in entrepreneurial intentions. This dissertation employs structural equation modeling (SEM) using longitudinal data collected over ten years from Finnish secondary school students aged 16 to 26. As an indicator of skill variety, the number of functional areas in which a participant had prior educational or work experience is used. The results of the study suggest that a variety of early interests and activities leads to skill variety, which in turn leads to entrepreneurial intentions. 
Furthermore, the study shows that an early variety is predicted by openness and an entrepreneurial personality profile. Skill variety is also encouraged by an entrepreneurial personality profile. From a gender perspective, there is indeed a gap in entrepreneurial intentions. While a positive correlation was found between the early variety of subjects and being female, there are negative correlations between the other two variables, education- and work-related skill variety, and being female, with the negative effect of work-related skill variety being the strongest. The results of this dissertation are relevant for research, policy, educational institutions and special entrepreneurship education programs. The results are also important for self-employed parents who plan the succession of the family business. Educational programs promoting entrepreneurship can be optimized on the basis of these results by making the transmission of a variety of skills a central goal. Success could also be increased by focusing on teenagers and by preselecting participants on the basis of their personality profiles. Regarding the gender gap, state policies should aim to provide women with more incentives to acquire skill variety. For this purpose, education programs can be tailored specifically to women, and self-employment can be presented as an attractive alternative to dependent employment.
This thesis introduces a calibration problem for financial market models based on a Monte Carlo approximation of the option payoff and a discretization of the underlying stochastic differential equation (SDE). It is desirable to benefit from fast deterministic optimization methods to solve this problem. To achieve this goal, possible non-differentiabilities are smoothed out with an appropriately chosen, twice continuously differentiable polynomial. On the basis of the calibration problem so derived, this work is essentially concerned with two issues. First, the question arises whether a computed solution of the approximating problem, obtained by applying Monte Carlo, discretizing the SDE and preserving differentiability, is an approximation of a solution of the true problem. Unfortunately, this does not hold in general but is linked to certain assumptions. It turns out that uniform convergence of the approximated objective function and its gradient to the true objective and gradient can be shown under typical assumptions, for instance the Lipschitz continuity of the SDE coefficients. This uniform convergence then allows convergence of the solutions to be shown in the sense of a first-order critical point. Furthermore, an order of this convergence in relation to the number of simulations, the step size of the SDE discretization and the parameter controlling the smooth approximation of the non-differentiabilities will be shown. Additionally, the uniqueness of a solution of the stochastic differential equation will be analyzed in detail. Secondly, the Monte Carlo method converges only very slowly. The numerical results in this thesis will show that the Monte Carlo based calibration is indeed feasible as far as the computed solution is concerned, but the required computation time is too long for practical applications. Thus, techniques to speed up the calibration are strongly desired. 
As already mentioned above, the gradient of the objective is a starting point for improving efficiency. Due to its simplicity, finite differencing is a frequently chosen method to calculate the required derivatives. However, finite differencing is well known to be very slow, and it turns out that severe instabilities may also occur during optimization, which can cause the algorithm to break down before convergence has been reached. A sensitivity-equation approach is certainly an improvement in this respect but unfortunately incurs the same computational effort as the finite-difference method. Thus, an adjoint-based gradient calculation is the method of choice, as it combines the exactness of the derivative with a reduced computational effort. Furthermore, several other techniques that enhance the efficiency of the calibration algorithm are introduced throughout this thesis. A multi-layer method is very effective when the chosen initial value is not already close to the solution. Variance reduction techniques help to increase the accuracy of the Monte Carlo estimator and thus allow for fewer simulations. Storing, instead of regenerating, the random numbers required for the Brownian increments in the SDE is efficient, as deterministic optimization methods require the identical random sequence in each function evaluation anyway. Finally, Monte Carlo is very well suited to parallelization, which is carried out here on several central processing units (CPUs).
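The point about storing rather than regenerating the random numbers can be sketched as follows. The toy model (a single volatility parameter in a Black-Scholes-type SDE) and all names are illustrative assumptions; the essential pattern is that the Brownian increments are drawn once, so every evaluation of the objective with the same parameter returns exactly the same value, as deterministic optimizers require.

```python
import numpy as np

def make_mc_objective(market_prices, strikes, s0, r, maturity,
                      n_steps, n_paths, seed=0):
    # Least-squares calibration objective for a single volatility parameter.
    # The Brownian increments are generated once and stored, so repeated
    # evaluations see the identical random sequence.
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    dw = rng.standard_normal((n_steps, n_paths)) * np.sqrt(dt)  # stored once

    def objective(sigma):
        s = np.full(n_paths, float(s0))
        for k in range(n_steps):                  # Euler-Maruyama step
            s = s + r * s * dt + sigma * s * dw[k]
        payoff = np.maximum(s[None, :] - strikes[:, None], 0.0)
        model = np.exp(-r * maturity) * payoff.mean(axis=1)
        return 0.5 * np.sum((model - market_prices) ** 2)

    return objective
```

With fixed increments, the objective is a deterministic function of sigma and can be handed to a gradient-based solver; redrawing the numbers on every call would instead make successive evaluations inconsistent and break line searches.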
Stress and pain are common experiences in human lives. Both the stress and the pain system have adaptive functions and try to protect the organism in case of harm and danger. Nevertheless, stress and pain are two of the most challenging problems for society and the health system. Chronic stress, as often seen in modern societies, has a considerable impact on health and can lead to chronic stress disorders, which include a number of chronic pain syndromes. Pain, however, can also be regarded as a stressor itself, especially considering how much patients suffer from long-lasting pain and how strongly pain affects quality of life; in this way, the effects of stress on pain can be reinforced. Learning processes such as classical conditioning also play an important role in the generation and manifestation of chronic pain symptoms, and classical conditioning can in turn be influenced by stress. These facts illustrate the complex and varied interactions between the pain and the stress systems. Both systems communicate permanently with each other and help to protect the organism and to maintain a homeostatic state. They have various channels of communication, for example mechanisms related to endogenous opioids, immune parameters, glucocorticoids and baroreflexes. However, an overactivation of these systems, for example caused by ongoing stress, can lead to severe health problems. It is therefore of great importance to understand these interactions and their underlying mechanisms. The present work deals with the relationship between stress and pain. A special focus is put on stress-related hypocortisolism and pain processing, stress-induced hypoalgesia via baroreceptor-related mechanisms, and stress-related cortisol effects on aversive conditioning (as a model of pain learning). This work is a contribution to the wide field of research that tries to understand the complex interactions of stress and pain.
To demonstrate this variety, the selected studies highlight different aspects of these interactions. In the first chapter I give a short introduction to the pain and stress systems and their ways of interacting, as well as a short summary of the studies presented in Chapters II to V and their background. The results and their implications for future research are discussed in the last part of the first chapter. Chronic pain syndromes have been associated with chronic stress and alterations of the HPA axis resulting in chronic hypocortisolism; whether these alterations play a causal role in the pathophysiology of chronic pain, however, remains unclear. The study described in Chapter II therefore investigated the effects of pharmacologically induced hypocortisolism on pain perception. Both the stress and the pain system are related to the cardiovascular system: an increase in blood pressure is part of the stress reaction and leads to reduced pain perception. When using pain tests, it is therefore important to keep potential interference from activation of the cardiovascular system in mind, especially when pain-inhibitory processes are investigated. For this reason we compared two commonly and interchangeably used pain tests with regard to the autonomic reactions they trigger. This study is described in Chapter III. Chapters IV and V deal with the role of learning processes in pain and the related influences of stress. Processes of classical conditioning play an important role in symptom generation and manifestation. In both studies, aversive eyeblink conditioning was used as a model for pain learning. In the study described in Chapter IV we compared classical eyeblink conditioning in healthy volunteers with that in patients suffering from fibromyalgia, a chronic pain disorder; differences of the HPA axis, as part of the stress system, were also taken into account.
The study of Chapter V investigated effects of the very first stress reaction, in particular rapid non-genomic cortisol effects, which so far have only been demonstrated at the cellular level and in animal behavior. Healthy volunteers received an intravenous cortisol administration immediately before eyeblink conditioning. Overall, the studies presented in this work give an impression of the broad variety of possible interactions between the pain and the stress system and contribute to our knowledge about these interactions. However, more research is needed to complete the picture.
Since the end of the British Empire, which had provided white Australians with points of view, attitudes and stereotypes of the world, including perceptions of their own role in it, rediscovering an international identity has been an Australian quest. Many turned to European roots; others to the Aboriginal landscape. Blanche d'Alpuget and Christopher J. Koch are two who have ventured into Asia for the culturally and spiritually regenerative materials necessary to redefine Australia in the post-colonial world. They have taken Eastern concepts of "self" and "soul" and forged them with the Australian obsession with fear of, and desire for, contact with the "other" into a looking-glass of hybrid, Austral-Asian myth to reveal the true soul of Australian identity. Along with a brief historical and literary background to the triangular relationship between white Australia, Asia, and the West, this study's goal is to identify some of the Southeast Asian symbols, myths and literary structures which Koch and d'Alpuget integrate into the Western tradition. Central elements include: dichotomies of personality, righteousness, and virtue; the "Otherworld", where one may approach enlightenment, but at the risk of falling into self-delusion; archetypes of the Hindu divine feminine; the Eastern roots of Koch's theme of the "double man"; concepts of the forces of "light" and "dark"; the semiotics of time and meaning; and the central Eastern metaphor of the mirror, by which Australia creates interdependent images of itself and of Asia.
Forest inventories provide significant monitoring information on forest health, biodiversity and
resilience against disturbance, as well as on biomass and timber harvesting potential. For this
purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS)
and terrestrial laser scanning (TLS).
Although tree crown detection and delineation using ALS can be considered a mature discipline,
the identification of individual stems is a rarely addressed task. In particular, little is known
about the informative value of stem attributes, especially their inclination characteristics. In
addition, there is a lack of tools for processing and fusing forest-related data sources. This
thesis addresses these research gaps in four peer-reviewed papers, with a focus on the
suitability of ALS data for the detection and analysis of tree stems.
In addition to providing a novel post-processing strategy for geo-referencing forest inventory
plots, the thesis shows that ALS-based stem detections are very reliable and their positions
accurate. In particular, the stems proved suitable for studying prevailing trunk inclination
angles and orientations, revealing a species-specific down-slope inclination of the tree stems
and a leeward orientation of conifers.
Part-time entrepreneurship has become increasingly popular yet remains a rather new field of research. This dissertation addresses two important research topics: (a) the impact of culture on part-time and full-time entrepreneurship and (b) the motivational aspects of the transition from part-time to full-time entrepreneurship. Specifically, it advances prior research by highlighting the direct and indirect differential impact of macro-level societal culture on part-time and full-time entrepreneurship: gender egalitarianism, uncertainty avoidance and future orientation have a significantly stronger impact on full-time than on part-time entrepreneurship. Furthermore, the moderating impact of societal culture on micro-level relationships is explored for both forms of entrepreneurship; the well-established relationship between education and entrepreneurial activity is moderated by different forms of collectivism for part-time and full-time entrepreneurship. Regarding the motivation of part-time entrepreneurs to transition to full-time entrepreneurship, the entrepreneurial motives of self-realization and independence are significantly positively associated with the transition, whereas the motives of income supplementation and recognition are significantly negatively associated with it. This dissertation advances academic research by indicating conceptual differences between part-time and full-time entrepreneurship in a multi-country setting and by showing that the two forms of entrepreneurship are affected through different cultural mechanisms. Based on these findings, policy makers can identify the direct and indirect impact of societal culture on part-time and full-time entrepreneurship and better target support and transition programs to foster entrepreneurial activity.
One mechanism underlying the acquisition of interpersonal attitudes is the formation of an association between a valenced unconditioned stimulus (US) and an affectively neutral conditioned stimulus (CS). However, a stimulus (e.g., a person) is not necessarily perceived as unambiguously positive or negative: an individual can be characterized negatively by abstract (trait) information yet at the same time display positive (concrete) behavior. The present research deals with the question of whether the valence of abstract or concrete information about a US is encoded and subsequently transferred to an associated CS. The central assumptions are that the valence of the concrete information is more important for the evaluation of the US, whereas the abstract information is more important for the evaluation of the CS. The rationale behind these assumptions is that the US is a psychologically proximal stimulus because it elicits a more direct affective reaction; the CS, however, is psychologically more distal because it is merely associated with the US and is therefore experienced only indirectly. It is postulated that the associative relation between US and CS constitutes a dimension of psychological distance. In four studies, the valence of abstract and concrete information about a number of USs was manipulated. Within an evaluative learning paradigm, these stimuli were associated with affectively neutral CSs. As predicted, ambivalent USs were evaluated according to the valence of the concrete information, whereas the evaluation of CSs was influenced more strongly by the valence of the abstract information. Moreover, in a subsequent lexical decision task, participants were faster to categorize abstract (vs. concrete) stimuli when the stimuli were preceded by a CS prime as compared with a US prime. The results provide initial evidence that perceived psychological distance influences the evaluations of US and CS in an associative evaluative learning paradigm.