Climate fluctuations and pyroclastic deposits from volcanic activity both influence ecosystem functioning and biogeochemical cycling in terrestrial and marine environments globally. These controlling factors are crucial for the evolution and fate of the pristine but fragile fjord ecosystem in the Magellanic moorlands (~53°S) of southernmost Patagonia, which is considered a critical hotspot for organic carbon burial and marine bioproductivity. At this active continental margin in the core zone of the southern westerly wind belt (SWW), frequent Plinian eruptions and the extremely variable, hyper-humid climate should have efficiently shaped ecosystem functioning and land-to-fjord mass transfer throughout the Late Holocene. However, a better understanding of the complex process network defining the biogeochemical cycling at this land-to-fjord continuum principally requires detailed knowledge of substrate weathering and pedogenesis in the context of the extreme climate. Yet, research on soils, the ubiquitous presence of tephra and the associated chemical weathering, secondary mineral (trans)formation and organic matter (OM) turnover processes is rare in this remote region. This complicates an accurate reconstruction of the ecosystem's potentially sensitive response to past environmental impacts, including the dynamics of Late Holocene land-to-fjord fluxes as a function of volcanic activity and strong hydroclimate variability.
Against this background, this PhD thesis aims to disentangle the controlling factors that modulate the terrigenous element mobilization and export mechanisms in the hyper-humid Patagonian Andes and assesses their significance for fjord primary productivity over the past 4.5 kyrs BP. For the first time, distinct biogeochemical characteristics of the regional weathering system serve as a major criterion in paleoenvironmental reconstruction of the area. This approach includes broad-scale mineralogical and geochemical analyses of basement lithologies, four soil profiles, volcanic ash deposits, the non-karst stalagmite MA1 and two lacustrine sediment cores. To pay special attention to the potentially important temporal variations of pedosphere-atmosphere interaction and the ecological consequences initiated by volcanic eruptions, the novel data were evaluated together with previously published reconstructions of paleoclimate and paleoenvironmental conditions.
The devastating high tephra loading of a single eruption from Mt. Burney volcano (MB2 at 4.216 kyrs BP) sustainably transformed this vulnerable fjord ecosystem, while acidic peaty Andosols developed from ~2.5 kyrs BP onwards after the recovery from millennium-scale acidification. The special setting is dominated by highly variable redox-pH conditions, profound volcanic ash weathering and intense OM turnover processes, which are closely linked and ultimately regulated by SWW-induced water-level fluctuations. Constant nutrient supply through sea spray deposition represents a further important control on peat accumulation and OM turnover dynamics. These extreme environmental conditions constrain the biogeochemical framework for an extended land-to-fjord export of leachates comprising various organic and inorganic colloids (i.e., Al-humus complexes and Fe-(hydr)oxides). Such tephra- and/or Andosol-sourced flux contains high proportions of terrigenous organic carbon (OCterr) and mobilized essential (micro)nutrients, e.g., bio-available Fe, that are beneficial for fjord bioproductivity. It can be assumed that this supply of bio-available Fe, produced by specific Fe-(hydr)oxide (trans)formation processes from tephra components, may last more than 6 kyrs and surpasses the contribution from basement rock weathering and glacial meltwaters. However, the land-to-fjord exports of OCterr and bio-available Fe occur mostly asynchronously and are determined by the frequency and duration of redox cycles in soils or are initiated by SWW-induced extreme weather events.
The verification of (crypto)tephra layers embedded in stalagmite MA1 enabled the accurate dating of three smaller Late Holocene eruptions from the Mt. Burney (MB3 at 2.291 kyrs BP and MB4 at 0.853 kyrs BP) and Aguilera (A1 at 2.978 kyrs BP) volcanoes. Beyond improving the regional tephrochronology, the obtained precise 230Th/U-ages allowed constraints on the ecological consequences caused by these Plinian eruptions. The deposition of these thin tephra layers should have entailed a very beneficial short-term stimulation of fjord bioproductivity with bio-available Fe and other (micro)nutrients, affecting the entire area between 52°S and 53°30'S. For such beneficial effects, the thickness of tephra deposited onto this highly vulnerable peatland ecosystem should be below a threshold of 1 cm.
The Late Holocene element mobilization and land-to-fjord transport was mainly controlled by (i) volcanic activity and tephra thickness, (ii) SWW-induced and southern hemispheric climate variability and (iii) the current state of the ecosystem. The influence of cascading climate and environmental impacts on OCterr and Fe-(hydr)oxide fluxes to the fjord can be categorized into four individual, in part overlapping scenarios. These different scenarios take into account the previously specified fundamental biogeochemical mechanisms and define frequently recurring patterns of ecosystem feedbacks governing the land-to-fjord mass transfer in the hyper-humid Patagonian Andes on the centennial scale. This PhD thesis provides the first evidence for a primarily tephra-sourced, continuous and long-lasting (micro)nutrient fertilization for phytoplankton growth in South Patagonian fjords, which is ultimately modulated by variations in SWW intensity. It highlights the climate sensitivity of such critical land-to-fjord element transport and particularly emphasizes the important but so far underappreciated significance of volcanic ash inputs for biogeochemical cycles at active continental margins.
Stress-related disorders are continuously increasing. It is not yet clear whether stress also promotes breast cancer. This dissertation provides an analysis of the current state of research and focuses on the significance of pre-/postnatal stress factors and chronic stress. The derived hypotheses are empirically examined in breast cancer patients. The clinical study investigates the links between these factors and prognosis and outcome.
One of the main tasks in mathematics is to answer the question whether an equation possesses a solution or not. In the 1940s, Thom and Glaeser studied a new type of equation given by the composition of functions. They raised the following question: For which functions Ψ does the equation F(Ψ)=f always have a solution? Of course this question only makes sense if the right-hand side f satisfies some a priori conditions, like being contained in the closure of the space of all compositions with Ψ, and it is easy to answer if F and f are continuous functions. Imposing further restrictions on these functions, especially on F, complicates the search for an adequate solution considerably. For smooth functions one can already find deep results by Bierstone and Milman which answer the question in the case of a real-analytic function Ψ. This work contains further results for a different class of functions, namely those Ψ that are smooth and injective. In the case of a function Ψ of a single real variable, the question can be fully answered and we give three conditions that are both necessary and sufficient in order for the composition equation to always have a solution. Furthermore, one can unify these three conditions to show that they are equivalent to the fact that Ψ has a locally Hölder-continuous inverse. For injective functions Ψ of several real variables we give necessary conditions for the composition equation to be solvable. For instance, Ψ should satisfy some form of local distance estimate for the partial derivatives. Under the additional assumption of the Whitney-regularity of the image of Ψ, we can give sufficient conditions for flat functions f on the critical set of Ψ to possess a solution F(Ψ)=f.
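For orientation, the composition problem can be restated compactly (a schematic LaTeX formulation; the precise function spaces are those fixed in the thesis and are not specified here): given a smooth, injective map $\Psi$ and a right-hand side $f$ contained in the closure of the set of compositions with $\Psi$, decide whether there exists a function $F$ such that
\[
  F \circ \Psi = f .
\]
The thesis asks for conditions on $\Psi$ under which such an $F$ exists for every admissible $f$.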
The collapse of the tailings pond of the Aznalcóllar open pit mine (west of Seville, Spain) in April 1998 left more than 4000 ha of arable land and floodplains contaminated with heavy-metal-containing pyrite sludge. After a first remediation campaign a considerable contamination remained in the soil. The present study evaluates the possibilities of reflectance spectroscopy and airborne hyperspectral remote sensing for the qualitative and quantitative assessment of heavy metal contamination and the acidification risk related to the mining accident. Based on an extensive data set consisting of geochemical analyses and reflectance measurements of more than 300 soil samples, different chemometric methods (multiple linear regression, partial least squares and artificial neural networks) are tested for computing concentrations of soil constituents from the spectral reflectance. Spectral mixture analysis is applied to analyze the spatial distribution of the contamination. The abundance information derived from spectral mixture analysis is turned into quantitative information by incorporating an artificial mixture experiment. The results of this experiment provide a link between sludge abundance and sludge weight, allowing calculation of the amount of residual sludge per pixel, the acidification potential and other parameters important for remediation planning. The application of laboratory, field and imaging spectroscopy for providing quantitative information about the contamination levels in their spatial context is a good complement to conventional methods. The advantage is a reduction of the time- and labour-intensive geochemical analyses, because after the model calibration further samples can be analysed directly with the chemometric models. Furthermore, the spatial distribution can be mapped with imaging spectroscopy data, supporting more precise remediation planning.
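As an illustration of the chemometric calibration step described above, the following is a minimal sketch in Python, assuming hypothetical arrays of laboratory reflectance spectra and measured metal concentrations; it uses scikit-learn's PLSRegression as one standard implementation of partial least squares, not necessarily the setup used in the thesis.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: rows are soil samples, columns are reflectance bands.
    spectra = np.load("reflectance_spectra.npy")            # shape (n_samples, n_bands), assumed file
    concentrations = np.load("metal_concentrations.npy")    # e.g. ppm of one heavy metal per sample

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, concentrations, test_size=0.3, random_state=0)

    # Calibrate a PLS model on the training spectra ...
    pls = PLSRegression(n_components=10)
    pls.fit(X_train, y_train)

    # ... and predict concentrations for new samples directly from their spectra,
    # replacing further wet-chemical analyses once the model is calibrated.
    print("R^2 on held-out samples:", pls.score(X_test, y_test))

Once calibrated in this way, the same model can be applied pixel by pixel to imaging spectroscopy data to map predicted concentrations in their spatial context.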
To this day, the effects of many chlorinated hydrocarbons (e.g. DDT, PCBs) on specific organisms remain a subject of controversial discussion. This is also the case for potential endocrine effects on spermatogenesis and the associated possible changes in a population's vitality. To clarify this situation, three questions were at the centre of attention: 1) Do the chemicals cause a specific harmful effect on the male reproductive tract? 2) Could particular chemical mixtures bind to and activate the human estrogen receptor (hER)? 3) Are certain life stages of an organism especially sensitive to the effects of chemicals and can they therefore be established as a screening test system? The combined effects of DDT and Aroclor 1254 (A54), as single substances and in a 1:1 mixture, with respect to their estrogenic effectiveness on zebrafish (Brachydanio rerio) were therefore investigated. The concentrations of the chemicals and their mixture ranged between 0.05 µg/l and 500 µg/l, each separated by a factor of 10. It turned out that the test concentration of 500 µg/l was too toxic to zebrafish in all cases. The experiment was continued with four concentrations of DDT, A54 as well as their 1:1 mixture, again each separated by a factor of 10 and ranging between 0.05 µg/l and 50 µg/l. The bioaccumulation test over 8 days showed that the zebrafish accumulated the chemicals, but no equilibrium was reached, and the concentration of 0.05 µg/l was established as the No Observed Effect Concentration (NOEC). Building on these analyses, the investigation of the life cycle (LC) starting with fertilized eggs demonstrated a reduction in hatchability, reproduction and length of the emerging fish. These reductions affected the duration of the life cycle stages (LCS), which consequently lasted longer than expected. Exposure time and concentration of the tested chemicals accelerated the occurrence of these effects, which were more pronounced when the chemical mixture was used. To establish whether the parameters assessed were correlated with the male reproductive tract, the quality, quantity and life span of sperm were assessed using the methods of Leong (1988) and Shapiro et al. (1994). The sperm degeneration observed led us to investigate the spermatogenesis and the ultrastructure of the testes. This last experiment showed a significant reduction of the late stages of spermatogenesis and of the heterophagic vacuoles, which play an important role in spermatid maturation. It could therefore be concluded that DDT and A54 may act synergistically, cause disorders of the reproductive tract of male zebrafish and also influence their growth.
This thesis centers on formal tree languages and on their learnability by algorithmic methods in abstractions of several learning settings. After a general introduction, we present a survey of relevant definitions for the formal tree concept as well as special cases (strings) and refinements (multi-dimensional trees) thereof. In Chapter 3 we discuss the theoretical foundations of algorithmic learning in a specific type of setting of particular interest in the area of Grammatical Inference, where the task consists in deriving a correct formal description for an unknown target language from various information sources (queries and/or finite samples) in a polynomial number of steps. We develop a parameterized meta-algorithm that incorporates several prominent learning algorithms from the literature in order to highlight the basic routines which, regardless of the nature of the information sources, have to be run through by all those algorithms alike. In this framework, the intended target descriptions are deterministic finite-state tree automata. We discuss the limited transferability of this approach to another class of descriptions, residual finite-state tree automata, for which we propose several learning algorithms as well. The class learnable by these techniques corresponds to the class of regular tree languages. In Chapter 4 we outline a recent range of attempts in Grammatical Inference to extend the learnable language classes beyond regularity and even beyond context-freeness by techniques based on syntactic observations which can be subsumed under the term 'distributional learning', and we describe learning algorithms in several settings for the tree case taking this approach. We conclude with some general reflections on the notion of learning from structural information.
Social entrepreneurship is a successful activity to solve social problems and economic
challenges. Social entrepreneurship uses for-profit industry techniques and tools to build
financially sound businesses that provide nonprofit services. Social entrepreneurial activities
also lead to the achievement of sustainable development goals. However, due to the complex,
hybrid nature of the business, social entrepreneurial activities are typically supported by macro-level
determinants. To expand our knowledge of how beneficial macro-level determinants can
be, this work examines empirical evidence about the impact of macro-level determinants on
social entrepreneurship. Another aim of this dissertation is to examine the impact at the micro
level, as the growth ambitions of social and commercial entrepreneurs differ. Chapter 1
introduces the work, presenting the motivation for the research, the research question, and the
structure of the dissertation.
There is an ongoing debate about the origin and definition of social entrepreneurship.
Accordingly, the numerous phenomena of social entrepreneurship have been examined theoretically in
the previous literature. To determine the common consensus on the topic, Chapter 2 presents
the theoretical foundations and definition of social entrepreneurship. The literature shows that
a variety of determinants at the micro and macro levels are essential for the emergence of social
entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et
al., 2015; Hoogendoorn, 2016). It is impossible to create a society based on a social mission without the support of micro- and macro-level determinants. This work examines the
determinants and consequences of social entrepreneurship from different methodological
perspectives. The theoretical foundations of the micro- and macro-level determinants
influencing social entrepreneurial activities are discussed in Chapter 3.
The purpose of reproducibility in research is to confirm previously published results
(Hubbard et al., 1998; Aguinis & Solarino, 2019). However, due to the lack of data, lack of
transparency of methodology, reluctance to publish, and lack of interest from researchers, there
is little promotion of replication of existing research studies (Baker, 2016; Hedges &
Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business
and management literature (Kerr et al., 2016; Camerer et al., 2016). However, studies that
provide replicability of the reported results are considered rare in previous research (Burman
et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), an
empirical study on this topic is carried out in Chapter 4 of this work.
Given this focus, researchers have published a large body of research on the impact of micro- and
macro-level determinants on social inclusion, although it is still unclear whether these
studies accurately reflect reality. It is important to provide conceptual underpinnings to the
field through a reassessment of published results (Bettis et al., 2016). The results of their
research make it abundantly clear that the macro determinants support social entrepreneurship.
In keeping with the more narrative approach, which is a crucial concern and requires attention,
Chapter 5 considers the reproducibility of previous results, particularly on the topic of social
entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of
reproducibility and validate the specific conclusions they drew. The literal and constructive
replication in the dissertation inspired us to explore technical replication research on social
entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the
growth of social ventures. The current debate reviews and references literature that has
specifically focused on the development of social entrepreneurship. An empirical analysis of
factors directly related to the ambitious growth of social entrepreneurship is also carried out.
Numerous social entrepreneurial groups have been studied concerning this association. Chapter
6 compares the growth ambitions of social and traditional (commercial) entrepreneurship as
consequences at the micro level. This study examined many characteristics of social and
commercial entrepreneurs' growth ambitions. Scholars have claimed to some extent that the
growth of social entrepreneurship differs from commercial entrepreneurial activities due to
differences in objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative
research has been used in studies to support the evidence on related topics; for instance, Gupta et
al. (2020) emphasized that research needs to focus on specific concepts of social
entrepreneurship for the field to advance. Therefore, this study provides a quantitative,
analysis-based assessment of facts and data. For this purpose, a data set from the Global
Entrepreneurship Monitor (GEM) 2015 was used, which examined 12,695 entrepreneurs from
38 countries. Furthermore, this work conducted a regression analysis to evaluate the influence
of various social and commercial characteristics of entrepreneurship on economic growth in
developing countries. Chapter 7 briefly explains future directions and practical/theoretical
implications.
Evaluation of desalination techniques for treating the brackish water of Olushandja sub-basin
(2014)
The groundwater of the Olushandja sub-basin, part of the Cuvelai basin in central-northern Namibia, is saline with a TDS content varying between 4,000 ppm and 90,000 ppm. Based on climatic conditions, this region can be classified as semi-arid to arid, with an annual rainfall during summer varying between 200 mm and 500 mm. The mean annual evaporation potential is about 2,800 mm, which is much higher than the annual rainfall. The southern block of this sub-basin has a low population density and has not been covered by the supply networks for electricity and water. Therefore, the inhabitants are forced to use untreated groundwater from hand-dug wells for their daily purposes. This groundwater is not safe for human consumption and therefore needs to be desalinated for that purpose. The goal of this thesis has been to select a suitable desalination technology for that region. The technology to be selected is from those which use renewable energy sources, which have a production capacity of 10 m3 to 100 m3 per day, which are simple and robust against the existing harsh environmental conditions and which have already been implemented successfully elsewhere. Based on these criteria, the technologies which emerged from the literature are: multi-stage flash (MSF), multi-effect distillation (MED), multi-effect humidification (MEH), membrane distillation (MD), reverse osmosis (RO) and electrodialysis reversal (ED). Of these technologies, RO & ED are based on membrane techniques and MSF, MED & MEH use thermal processes, whereas MD uses a hybrid process of thermal and membrane techniques for desalinating the water. For the evaluation of technical performance, environmental sustainability and financial feasibility of the above-mentioned desalination techniques, the following criteria have been used: gained output ratio, recovery rate, pretreatment requirements, sensitivity to feed water quality, post-treatment, operating temperature, operating pressure, scaling and fouling potential, corrosion susceptibility, brine disposal, prime energy requirement, mechanical and electrical power output, heat energy, running costs and water generation costs. The data regarding the performance standards of successfully implemented desalination techniques have been obtained from the literature on performance benchmarks. The Utility Value Analysis Tool of the Rafter-Group of Multi-Criteria Analysis (MCA) has been used for measuring the performance score of a technology. To perform the utility analysis, an evaluation matrix has to be constructed through the following procedures: selection of the decision options (or assessment groups), identification of the evaluation criteria, measurement of performance and transformation of the units. Then the criteria under the objective groups are assigned a level of importance for determining their weights. To perform the sensitivity analysis, the level of importance of a criterion is changed by giving more weight or a higher rating to the assessment group of interest (or study). Within the assessment groups of interest, the best performing desalination technology has been selected according to the outcome of the sensitivity analysis. The important conclusions of this study are the identification of the capabilities of thermal and membrane-based small-scale desalination technologies and their applicability based on site-specific needs.
The sensitivity analysis indicates that the MED technology is the most environmentally friendly technology, using minimum energy and producing the least concentrated brine for disposal. The ED technology emerged as technically suitable, but it is only applicable when the source water has a salt content of less than 12,000 ppm. The MSF process has favorable thermal efficiency and is insensitive to feed water quality. Its major drawbacks are its energy needs and post-treatment requirements, which affected its net score. The MD and MSF processes scored the lowest for the technical and economic assessment groups and are concluded not to be suitable for the Olushandja sub-basin. The MEH process is cheaper and technically more appropriate than MED in these two assessment groups. Based on the above-mentioned evaluations, this study concluded that the Olushandja sub-basin needs more data collection on the geological profile, distinct identification of aquifers and evidence on the interaction between the aquifers. From the best available data, it could not be established with certainty where the highest level of salinity is found in the profile, or how the geological profile is layered. More data on groundwater quality for a spatial overview of the trends and patterns in the sub-basin would be useful in drawing better conclusions on the specific desalination technology suitable for a particular village or settlement.
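To make the weighted-sum scoring step of such a utility value analysis concrete, here is a minimal Python sketch with made-up technologies, criteria scores and weights; the actual evaluation matrix and weights of the thesis are not reproduced.

    import numpy as np

    # Hypothetical evaluation matrix: rows = technologies, columns = criteria
    # (e.g. recovery rate, energy demand, brine disposal, running costs),
    # all already transformed to a common 0-10 utility scale.
    technologies = ["MSF", "MED", "MEH", "MD", "RO", "ED"]
    scores = np.array([
        [6, 3, 4, 5],
        [7, 6, 5, 6],
        [6, 7, 5, 7],
        [4, 5, 4, 4],
        [7, 6, 4, 6],
        [8, 5, 6, 5],
    ], dtype=float)

    weights = np.array([0.3, 0.3, 0.2, 0.2])   # criterion weights, summing to 1

    # Utility value of each technology = weighted sum of its criterion scores.
    utility = scores @ weights
    for name, value in sorted(zip(technologies, utility), key=lambda t: -t[1]):
        print(f"{name}: {value:.2f}")

The sensitivity analysis then simply repeats the weighted sum with modified weights (e.g. shifted towards the environmental criteria) and compares the resulting rankings.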
The subject of this thesis is hypercyclic, mixing, and chaotic C0-semigroups on Banach spaces. After introducing the relevant notions and giving some examples, the so-called hypercyclicity criterion and its relation with weak mixing is treated. Some new equivalent formulations of the criterion are given, which are used to derive a very short proof of the well-known fact that a C0-semigroup is weakly mixing if and only if each of its operators is. Moreover, it is proved that under some "regularity conditions" each hypercyclic C0-semigroup is weakly mixing. Furthermore, it is shown that for a hypercyclic C0-semigroup there is always a dense set of hypercyclic vectors having infinitely differentiable trajectories. Chaotic C0-semigroups are also considered. It is proved that they are always weakly mixing and that in certain cases chaoticity is already implied by the existence of a single periodic point. Moreover, it is shown that strongly elliptic differential operators on bounded C^1-domains never generate chaotic C0-semigroups. A thorough investigation of transitivity, weak mixing, and mixing of weighted composition operators follows and complete characterisations of these properties are derived. These results are then used to completely characterise hypercyclicity, weak mixing, and mixing of C0-semigroups generated by first order partial differential operators. Moreover, a characterisation of chaos for these C0-semigroups is attained. All these results are achieved on spaces of p-integrable functions as well as on spaces of continuous functions and illustrated by various concrete examples.
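For readers less familiar with the terminology, the standard textbook definitions (not quoted from the thesis itself) can be stated in LaTeX as follows: a $C_0$-semigroup $(T_t)_{t\ge 0}$ on a Banach space $X$ is called \emph{hypercyclic} if some $x \in X$ has dense orbit $\{T_t x : t \ge 0\}$ in $X$, \emph{mixing} if for all non-empty open sets $U, V \subseteq X$ there is $t_0 \ge 0$ with $T_t(U) \cap V \neq \emptyset$ for every $t \ge t_0$, and \emph{chaotic} if it is hypercyclic and its set of periodic points $\{x \in X : T_t x = x \text{ for some } t > 0\}$ is dense in $X$.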
As an interface between an individual and its environment, the skin is a major site of direct exposure to exogenous substances. Once absorbed, these substances may interact with different biomolecules within the skin. The aryl hydrocarbon receptor (AhR) signaling pathway is one mechanism whereby the skin responds to exposures, predominantly through the induction or upregulation of metabolizing enzymes. One known physiological role of the AhR in many tissues is its involvement in the control of cell cycle progression. In skin, almost nothing is known about this physiological function. Moreover, the question whether frequently used naturally occurring phenolic derivatives like eugenol and isoeugenol act on the AhR within the skin has rarely been studied so far. Eugenol and isoeugenol are, due to their odour, referred to as fragrances. The ubiquitous distribution of eugenol and isoeugenol results in an almost unavoidable contact with these substances in our daily lives. Despite this fact, their molecular mechanisms of action in skin are poorly understood. There is evidence supporting the hypothesis that these substances may act on the AhR. On the one hand, eugenol has been shown to induce cytochrome P450 1A1 (CYP1A1), a well-known target gene of the AhR. On the other hand, their known anti-proliferative properties might also be mediated by the AhR, based on its physiological function. In order to test this hypothesis, it was investigated whether eugenol and isoeugenol act on the AhR signaling pathway in skin cells. Results revealed that both eugenol and isoeugenol do act on the AhR signaling pathway in skin cells. Both substances caused the translocation of the AhR into the nucleus, induced the expression of the well-known AhR target genes CYP1A1 and AhR repressor (AhRR) and affected cell cycle progression. Both substances caused an AhR-dependent cell cycle arrest in skin cells, modulated protein levels of several cell cycle regulatory proteins, inhibited DNA synthesis and thereby reduced cell numbers. The comparison of wildtype cells to AhR knockdown cells revealed an influence of the AhR on cell cycle progression in skin cells in the absence of exogenous ligands. AhR knockdown cells exhibited a slower progression through the cell cycle, caused by an accumulation of cells in the G0/G1 phase of the cell cycle and a decreased DNA synthesis rate. Modulation of cell cycle regulatory proteins involved in the transition from the G0/G1 to the S phase of the cell cycle was altered in AhR knockdown cells as well. To conclude, eugenol as well as isoeugenol were able to act on the AhR signaling pathway in skin cells. Their molecular mechanisms of action are similar to those of classical AhR ligands, although their structural characteristics strongly differ from those of these ligands. In the absence of exogenous ligands the AhR promotes cell cycle progression in many tissues, and this knowledge could be extended to skin-derived cells within the scope of this thesis.
In this thesis, we mainly investigate geometric properties of optimal codebooks for random elements $X$ in a separable Banach space $E$. Here, for a natural number $N$ and a random element $X$, an $N$-optimal codebook is an $N$-subset of the underlying Banach space $E$ which gives a best approximation to $X$ in an average sense. We focus on two types of geometric properties: the global growth behaviour (growing in $N$) of a sequence of $N$-optimal codebooks is described by the maximal (quantization) radius and a so-called quantization ball. For many distributions, such as centrally symmetric distributions on $R^d$ as well as Gaussian distributions on general Banach spaces, we are able to estimate the asymptotics of the quantization radius as well as the quantization ball. Furthermore, we investigate local properties of optimal codebooks, in particular the local quantization error and the weights of the Voronoi cells induced by an optimal codebook. In the finite-dimensional setting, we are able to prove, for many interesting distributions, classical conjectures on the asymptotic behaviour of these properties. Finally, we propose a method to construct sequences of asymptotically optimal codebooks for random elements in infinite-dimensional Banach spaces and apply this method to construct codebooks for stochastic processes, such as fractional Brownian motions.
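As a point of reference, the underlying quantization problem can be written in the standard form from quantization theory (the moment exponent $r$ is left generic, since the abstract does not fix it): the $N$-th quantization error of $X$ is
\[
  e_N(X) \;=\; \inf_{\substack{C \subset E \\ |C| \le N}} \mathbb{E}\Big[\min_{c \in C}\|X - c\|^{r}\Big]^{1/r},
\]
and a codebook attaining this infimum is called $N$-optimal; its Voronoi cells assign each point of $E$ to a nearest codebook element.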
In recent years, the study of dynamical systems has developed into a central research area in mathematics. Indeed, in combination with keywords such as "chaos" or "butterfly effect", parts of this theory have been incorporated into other scientific fields, e.g. in physics, biology, meteorology and economics. In general, a discrete dynamical system is given by a set X and a self-map f of X. The set X can be interpreted as the state space of the system and the function f describes the temporal development of the system. If the system is in state x ∈ X at time zero, its state at time n ∈ N is denoted by f^n(x), where f^n stands for the n-th iterate of the map f. Typically, one is interested in the long-time behaviour of the dynamical system, i.e. in the behaviour of the sequence (f^n(x)) for an arbitrary initial state x ∈ X as the time n increases. On the one hand, it is possible that there exist certain states x ∈ X such that the system behaves stably, which means that f^n(x) approaches a state of equilibrium for n→∞. On the other hand, it might be the case that the system runs unstably for some initial states x ∈ X so that the sequence (f^n(x)) somehow shows chaotic behaviour. In the case of a non-linear entire function f, the complex plane always decomposes into two disjoint parts, the Fatou set F_f of f and the Julia set J_f of f. These two sets are defined in such a way that the sequence of iterates (f^n) behaves quite "wildly" or "chaotically" on J_f whereas, on the other hand, the behaviour of (f^n) on F_f is rather "nice" and well-understood. However, this nice behaviour of the iterates on the Fatou set can "change dramatically" if we compose the iterates from the left with just one other suitable holomorphic function, i.e. if we consider sequences of the form (g∘f^n) on D, where D is an open subset of F_f with f(D)⊂ D and g is holomorphic on D. The general aim of this work is to study the long-time behaviour of such modified sequences. In particular, we will prove the existence of holomorphic functions g on D having the property that the behaviour of the sequence of compositions (g∘f^n) on the set D becomes similarly chaotic to the behaviour of the sequence (f^n) on the Julia set of f. With this approach, we immerse ourselves in the theory of universal families and hypercyclic operators, which has itself developed into a branch of research in its own right. In general, for topological spaces X, Y and a family {T_i: i ∈ I} of continuous functions T_i:X→Y, an element x ∈ X is called universal for the family {T_i: i ∈ I} if the set {T_i(x): i ∈ I} is dense in Y. In the case that X is a topological vector space and T is a continuous linear operator on X, a vector x ∈ X is called hypercyclic for T if it is universal for the family {T^n: n ∈ N}. Thus, roughly speaking, universality and hypercyclicity can be described via the following aspect: there exists a single object which allows us, via simple analytical operations, to approximate every element of a whole class of objects. In the above situation, i.e. for a non-linear entire function f and an open subset D of F_f with f(D)⊂ D, we endow the space H(D) of holomorphic functions on D with the topology of locally uniform convergence and we consider the map C_f:H(D)→H(D), C_f(g):=g∘f|_D, which is called the composition operator with symbol f. The operator C_f is a continuous linear operator on the Fréchet space H(D).
In order to show that the above-mentioned "nice" behaviour of the sequence of iterates (f^n) on the set D ⊂ F_f can "change dramatically" if we compose the iterates from the left with another suitable holomorphic function, our aim consists in finding functions g ∈ H(D) which are hypercyclic for C_f. Indeed, for each hypercyclic function g for C_f, the set of compositions {g∘f^n|_D: n ∈ N} is dense in H(D), so that the sequence of compositions (g∘f^n|_D) is in a sense "maximally divergent", meaning that each function in H(D) can be approximated locally uniformly on D via subsequences of (g∘f^n|_D). This kind of behaviour stands in sharp contrast to the fact that the sequence of iterates (f^n) itself converges, behaves like a rotation or shows some "wandering behaviour" on each component of F_f. To put it in a nutshell, this work combines the theory of non-linear complex dynamics in the complex plane with the theory of dynamics of continuous linear operators on spaces of holomorphic functions. As far as the author knows, this approach has not been investigated before.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, significantly contribute to Germany’s economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, for the digital transformation of the business model and the pursuit of growth goals with digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it is empirically shown that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K),f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K),f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and this problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) respectively E(K) denote spaces of Whitney jets of order n respectively of infinite order. With E^n(R^d) and E(R^d), we denote the spaces of n-times respectively infinitely often continuously partially differentiable functions on R^d. Whitney already solved the question for finite order completely. He showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set with K contained in the interior of L. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) is contained in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in [n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5, we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.
Variational inequality problems constitute a common basis for investigating the theory and algorithms of many problems in mathematical physics, in economics as well as in the natural and technical sciences. They appear in a variety of mathematical applications like convex programming, game theory and economic equilibrium problems, but also in fluid mechanics, the physics of solid bodies and others. Many variational inequalities arising from applications are ill-posed. This means, for example, that the solution is not unique, or that small deviations in the data can cause large deviations in the solution. In such a situation, standard solution methods converge very slowly or even fail. In this case, so-called regularization methods are the methods of choice. They have the advantage that an ill-posed original problem is replaced by a sequence of well-posed auxiliary problems, which have better properties (like, e.g., a unique solution and better conditioning). Moreover, a suitable choice of the regularization term can lead to unconstrained auxiliary problems that are even equivalent to optimization problems. The development and improvement of such methods is a focus of current research, in which we take part with this thesis. We suggest and investigate a logarithmic-quadratic proximal auxiliary problem (LQPAP) method that combines the advantages of the well-known proximal-point algorithm and the so-called auxiliary problem principle. Its exploration and convergence analysis constitute one of the main results of this work. The LQPAP method continues the recent developments of regularization methods. It includes different techniques presented in the literature to improve numerical stability: the logarithmic-quadratic distance function constitutes an interior-point effect which allows the auxiliary problems to be treated as unconstrained ones. Furthermore, outer operator approximations are considered. This simplifies the numerical solution of variational inequalities with multi-valued operators since, for example, bundle techniques can be applied. With respect to numerical practicability, inexact solutions of the auxiliary problems are allowed using a summable-error criterion that is easy to implement. As a further advantage of the logarithmic-quadratic distance we verify that it is self-concordant (in the sense of Nesterov/Nemirovskii). This motivates the application of the Newton method for the solution of the auxiliary problems. In the numerical part of the thesis the LQPAP method is applied to linearly constrained, differentiable and nondifferentiable convex optimization problems, as well as to nonsymmetric variational inequalities with co-coercive operators. It can often be observed that the sequence of iterates reaches the boundary of the feasible set before being close to an optimal solution. Against this background, we present the strategy of under-relaxation, which robustifies the LQPAP method. Furthermore, we compare the results with an appropriate method based on Bregman distances (BrPAP method). For differentiable, convex optimization problems we describe the implementation of the Newton method to solve the auxiliary problems and carry out different numerical experiments. For example, an adaptive choice of the initial regularization parameter and a combination of an Armijo and a self-concordance step size are evaluated. Test examples for nonsymmetric variational inequalities are hardly available in the literature.
Therefore, we present a geometric and an analytic approach to generate test examples with known solution(s). To solve the auxiliary problems in the case of nondifferentiable, convex optimization problems we apply the well-known bundle technique. The implementation is described in detail and the involved functions and sequences of parameters are discussed. As far as possible, our analysis is substantiated by new theoretical results. Furthermore, it is explained in detail how the bundle auxiliary problems are solved with a primal-dual interior point method. Such investigations have so far only been published for Bregman distances. The LQPAP bundle method is again applied to several test examples from the literature. Thus, this thesis builds a bridge between theoretical and numerical investigations of solution methods for variational inequalities.
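As background, the underlying variational inequality and the generic proximal regularization step can be written in a standard form (the specific logarithmic-quadratic distance of the LQPAP method is not reproduced here): find $x^* \in K$ such that
\[
  \langle F(x^*),\, x - x^* \rangle \;\ge\; 0 \qquad \text{for all } x \in K,
\]
and, given the current iterate $x^k$, a proximal-type auxiliary problem replaces this by finding $x^{k+1} \in K$ with
\[
  \langle F(x^{k+1}) + \chi_k \nabla_x D(x^{k+1}, x^k),\, x - x^{k+1} \rangle \;\ge\; 0 \qquad \text{for all } x \in K,
\]
where $D(\cdot, x^k)$ is a distance-like regularization term centered at the previous iterate and $\chi_k > 0$ a regularization parameter; auxiliary-problem variants additionally approximate $F(x^{k+1})$ by information at $x^k$ to make each subproblem easier to solve.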
Why they rebel peacefully: On the violence-reducing effects of a positive attitude towards democracy
Under the impression of Europe’s drift into Nazism and Stalinism in the first half of the 20th century, social psychological research has focused strongly on dangers inherent in people’s attachment to a political system. The dissertation at hand contributes to a more differentiated perspective by examining violence-reducing aspects of political system attachment in four consecutive steps: First, it highlights attachment to a social group as a resource for violence prevention on an intergroup level. The results suggest that group attachment fosters self-control, a well-known protective factor against violence. Second, it demonstrates violence-reducing influences of attachment on a societal level. The findings indicate that attachment to a democracy facilitates peaceful and prevents violent protest tendencies. Third, it introduces the concept of political loyalty, defined as a positive attitude towards democracy, in order to clarify the different approaches to political system attachment. A set of three studies shows the reliability and validity of a newly developed political loyalty questionnaire that distinguishes between affective and cognitive aspects. Finally, the dissertation differentiates the former findings with regard to protest tendencies using the concept of political loyalty. A set of two experiments shows that affective rather than cognitive aspects of political loyalty instigate peaceful protest tendencies and prevent violent ones. Implications of this dissertation for political engagement and peacebuilding as well as avenues for future research are discussed.
The development of our society has contributed to an increased occurrence of emerging substances (pesticides, pharmaceuticals, personal care products, etc.) in wastewater. Because of their potential hazard to ecosystems and humans, Wastewater Treatment Plants (WWTPs) need to adapt to better remove these compounds. Technology or policy development should, however, comply with sustainable development, e.g. based on Life Cycle Assessment (LCA) metrics. Nevertheless, the reliability or consistency of LCA results can sometimes be debatable. The main objective of this work was to explore how LCA can better support the implementation of innovative wastewater treatment options, in particular by including removal benefits. The method was applied to support solutions for pharmaceuticals elimination from wastewater, regarding: (i) UV technology design, (ii) choice of advanced technology and (iii) centralized or decentralized treatment policy. The assessment approach followed by previous authors, based on net impacts calculation, seemed very promising to consider both the environmental effects induced by treatment plant operation and the environmental benefits obtained from pollutants removal. It was therefore applied to compare UV configuration types. LCA outcomes were consistent with degradation kinetics analysis. For the comparison of advanced technologies and policy scenarios, the common practice (net impacts based on the EDIP method) was compared to other assessments, to better consider elimination benefits. First, the USEtox consensus model was applied for the avoided (eco)toxicity impacts, in combination with the recent ReCiPe method for generated impacts. Then, an eco-efficiency indicator (EFI) was developed to weigh the treatment efforts (generated impacts based on the EDIP and ReCiPe methods) by the average removal efficiency (overcoming (eco)toxicity uncertainty issues). In total, the four types of comparative assessment showed the same trends: (i) ozonation and activated carbon perform better than UV irradiation, and (ii) no clear advantage could be distinguished between the policy scenarios. It cannot, however, be concluded that advanced treatment of pharmaceuticals is not necessary, because other criteria should be considered (risk assessment, bacterial resistance, etc.) and large uncertainties were embedded in the calculations. Indeed, a significant part of this work was dedicated to the discussion of uncertainty and limitations of the LCA outcomes. At the inventory level, it was difficult to model technology operation at the development stage. For impact assessment, the newly developed characterization factors for pharmaceuticals (eco)toxicity showed large uncertainties, mainly due to the lack of data and quality of toxicity tests. The use of information made available under the REACH framework to develop CFs for detergent ingredients tried to cope with this issue, but the benefits were limited due to the mismatch of information between REACH and the USEtox method. The highlighted uncertainties were treated with sensitivity analyses to understand their effects on LCA results. This research work finally presents perspectives on the use of transparently generated data (technology inventory and (eco)toxicity factors) and further development of the EFI indicator. Also, an emphasis is placed on increasing the reliability of LCA outcomes, in particular through the implementation of advanced techniques for uncertainty management. To conclude, innovative technology/product development (e.g.
based on a circular economy approach) needs the involvement of all types of actors and support from sustainability metrics.
Phase-amplitude cross-frequency coupling is a mechanism thought to facilitate communication between neuronal ensembles. The mechanism could underlie the implementation of complex cognitive processes, like executive functions, in the brain. This thesis contributes to answering the question whether phase-amplitude cross-frequency coupling - assessed via electroencephalography (EEG) - is a mechanism by which executive functioning is implemented in the brain and whether an assumed performance effect of stress on executive functioning is reflected in phase-amplitude coupling strength. A huge body of studies shows that stress can influence executive functioning, in essence having detrimental effects. In two independent studies, each comprising two core executive function tasks (flexibility and behavioural inhibition as well as cognitive inhibition and working memory), beta-gamma phase-amplitude coupling was robustly detected in the left and right prefrontal hemispheres. No systematic pattern of coupling strength modulation by either task demands or acute stress was detected. Beta-gamma coupling might also be present in more basic attention processes. This is the first investigation of the relationship between stress, executive functions and phase-amplitude coupling. Therefore, many aspects have not been explored yet, for example studying phase precision instead of coupling strength as an indicator of phase-amplitude coupling modulations. Furthermore, data were analysed in source space (independent component analysis); comparability to sensor space has still to be determined. These as well as other aspects should be investigated, given the promising finding of very robust and strong beta-gamma coupling for all executive functions. Additionally, this thesis tested the performance of two widely used phase-amplitude coupling measures (mean vector length and modulation index). Both measures are specific and sensitive to coupling strength and coupling width. The simulation study also drew attention to several confounding factors which influence phase-amplitude coupling measures (e.g. data length, multimodality).
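As a side note on the two coupling measures named above, the mean vector length can be computed from instantaneous phase and amplitude series roughly as follows; this is a minimal Python sketch following the commonly used formulation (amplitude-weighted mean phase vector), with synthetic data in place of EEG recordings.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic low-frequency phase and high-frequency amplitude with built-in coupling:
    phase = np.angle(np.exp(1j * np.cumsum(rng.normal(0.05, 0.01, n))))  # slow, wrapped phase series
    amplitude = 1.0 + 0.5 * np.cos(phase) + 0.1 * rng.normal(size=n)     # amplitude peaks near phase 0

    # Mean vector length: magnitude of the amplitude-weighted mean phase vector.
    # Values near 0 indicate no phase-amplitude coupling; larger values indicate coupling.
    mvl = np.abs(np.mean(amplitude * np.exp(1j * phase)))
    print("mean vector length:", mvl)

In practice the phase would come from a band-pass filtered low-frequency (e.g. beta) signal and the amplitude from the envelope of a high-frequency (e.g. gamma) band, both typically obtained via the Hilbert transform.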
The vision of a future information and communication society has prompted leading politicians in the United States, the European Union and Japan to influence or even lead the economic and social transition in the context of an active technology policy. The technological development of society, however, is a product of a complex interplay of technological, economic and socio-political constraints. These constraints limit political decision-making and implementation abilities. Moreover, facts and information are continuously changing during a paradigmatic technological, economic and social shift, which further limits political decision-making abilities. This study compares political decision-making to promote computer-mediated communications in the Triad since the beginning of the 1980s on four levels: the development of a political vision, the long-term aims and strategies, technology policy (e.g. the promotion of technological development and competition policy) and regulatory policy (e.g. universal access, protection of privacy and intellectual property). While technology policy tends to be uncontroversial, during a paradigmatic shift regulatory policy is difficult and lengthy. Nevertheless, the inclusion of interest groups which rise during this paradigmatic shift and which are close to the technologies and their societal consequences helps decision-making processes. In this context, politics in the United States has been more successful than in the European Union and especially Japan. Although this study predates the rise of eCommerce over the Internet, it addresses many of the themes underlying it. Of these themes, many remain politically unsettled, on national, supranational and especially international levels. For example, for encryption and secure payments, which are necessary for eCommerce, no international standards yet exist. The issue of taxation has hardly been opened for discussion. In sum, this study does not only offer a historical overview of the development of the Internet, but it also discusses issues of continuing present concern.
Extension of inexact Kleinman-Newton methods to a general monotonicity preserving convergence theory
(2011)
The thesis at hand considers inexact Newton methods in combination with the algebraic Riccati equation. A monotone convergence behaviour is proven, which enables non-local convergence. This result is transferred to a general convergence theory for inexact Newton methods, securing the monotonicity of the iterates for convex or concave mappings. Several applications prove the practical benefits of the newly developed theory.
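For context, the continuous-time algebraic Riccati equation and the classical Kleinman-Newton iteration can be written in a standard textbook form (not necessarily the exact formulation used in the thesis):
\[
  A^{\top}X + XA - XBB^{\top}X + C^{\top}C = 0 ,
\]
and, given a stabilizing iterate $X_k$, each Kleinman-Newton step solves the Lyapunov equation
\[
  (A - BB^{\top}X_k)^{\top} X_{k+1} + X_{k+1}(A - BB^{\top}X_k) = -\,C^{\top}C - X_k BB^{\top}X_k
\]
for $X_{k+1}$; in an inexact variant this linear equation is solved only approximately, which is where the monotonicity analysis becomes relevant.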
The midcingulate cortex has become the focus of scientific interest as it has been associated with a wide range of attentional phenomena. This survey found evidence indicating the relevance of gender and handedness for measures of regional cortical morphology. Although gender was associated with structural variations concerning the neuroanatomy of the midcingulum bundle as well, handedness did not emerge as a significant factor in the analyses of white matter characteristics. Hemispheric differences were found at the level of both gray and white matter. Turning to the functional implications of neuroanatomical variations and comparing subjects with a pronounced and a low degree of midcingulate folding, which indicates differential expansions of cytoarchitectural areas, behavioral and electrophysiological differences in the processing of interference became evident. A high degree of leftward midcingulate fissurization was associated with better behavioral performance, presumably caused by a more effective conflict-monitoring system triggering fast and automatic attentional filtering mechanisms. Subjects exhibiting a lower degree of midcingulate fissurization seem rather to rely on more effortful control processes. These results carry implications not only concerning neuronal representations of individual differences in attentional processes, but might also be of relevance for the refinement of models of mental disorders.
The efficacy and effectiveness of psychotherapeutic interventions have been proven time and again. We therefore know that, in general, evidence-based treatments work for the average patient. However, it has also repeatedly been shown that some patients do not profit from or even deteriorate during treatment. Patient-focused psychotherapy research takes these differences between patients into account by focusing on the individual patient. The aim of this research approach is to analyze individual treatment courses in order to evaluate when and under which circumstances a generally effective treatment works for an individual patient. The goal is to identify evidence-based clinical decision rules for the adaptation of treatment to prevent treatment failure. Patient-focused research has illustrated how different intake indicators and early change patterns predict the individual course of treatment, but they leave a lot of variance unexplained. The thesis at hand analyzed whether Ecological Momentary Assessment (EMA) strategies could be integrated into patient-focused psychotherapy research in order to improve treatment response prediction models. EMA is an electronically supported diary approach, in which multiple real-time assessments are conducted in participants' everyday lives. We applied EMA over a two-week period before treatment onset in a mixed sample of patients seeking outpatient treatment. The four daily measurements in the patients' everyday environment focused on assessing momentary affect and levels of rumination, perceived self-efficacy, social support and positive or negative life events since the previous assessment. The aim of this thesis project was threefold: First, to test the feasibility of EMA in a routine care outpatient setting. Second, to analyze the interrelation of different psychological processes within patients' everyday lives. Third and last, to test whether individual indicators of psychological processes during everyday life, which were assessed before treatment onset, could be used to improve prediction models of early treatment response. Results from Study I indicate good feasibility of EMA application during the waiting period for outpatient treatment. High average compliance rates over the entire assessment period and low average burdens perceived by the patients support good applicability. Technical challenges and the results of in-depth missing data analyses are reported to guide future EMA applications in outpatient settings. Results from Study II shed further light on the rumination-affect link. We replicated results from earlier studies, which identified a negative association between state rumination and affect on a within-person level, and additionally showed a) that this finding holds for the majority but not every individual in a diverse patient sample with mixed Axis-I disorders, b) that rumination is linked to negative but also to positive affect and c) that dispositional rumination significantly affects the state rumination-affect association. The results provide exploratory evidence that rumination might be considered a transdiagnostic mechanism of psychological functioning and well-being. Results from Study III finally suggest that the integration of indicators derived from EMA applications before treatment onset can improve prediction models of early treatment response. Positive-negative affect ratios as well as fluctuations in negative affect measured during patients' daily lives allow the prediction of early treatment response.
Our results indicate that the combination of commonly applied intake predictors and EMA indicators of individual patients' daily experiences can improve treatment response prediction models. We therefore conclude that EMA can successfully be integrated into patient-focused research approaches in routine care settings to ameliorate or optimize individual care.
ASEAN and ASEAN Plus Three: Manifestations of Collective Identities in Southeast and East Asia?
(2003)
East Asia is a region undergoing vast structural changes. As the region moved closer together economically and politically following the breakdown of the bipolar world order and the ensuing expansion of intra-regional interdependencies, the states of the region faced the challenge of having to actively recast their mutual relations. At the same time, throughout the 1990s, the West became increasingly interested in trans- and inter-regional dialogue and cooperation with the emerging economies of East Asia. These developments gave rise to a "new regionalism", which eventually also triggered debates on Asian identities and the region's potential to integrate. Against this backdrop, this thesis analyzes to what extent both the Association of Southeast Asian Nations (ASEAN), which has been operative since 1967 and thus embodies the "old regionalism" of Southeast Asia, and the ASEAN Plus Three forum (APT: the ASEAN states plus China, Japan and South Korea), which came into existence in the aftermath of the Asian economic crisis of 1997, can be said to represent intergovernmental manifestations of specific collective identities in Southeast and East Asia, respectively. Based on profiles of the respective discursive, behavioral and motivational patterns as well as the integrative potential of ASEAN and APT, this study establishes to what extent the member states adhere to sustainable collective patterns of interaction, expectations and objectives, and assesses to what extent they can be said to form specific 'ingroups'. Four studies on collective norms, readiness to pool sovereignty, solidarity and attitudes vis-à-vis relevant third states show that ASEAN has evolved a certain degree of collective identity, though the Association's political relevance and coherence are frequently thwarted by changes in its external environment. A study on the cooperative and integrative potential of APT yields no manifest evidence of an ongoing or incipient pan-East Asian identity formation process.
The main purpose of this dissertation is to answer the following question: How will the emergence of the Euro influence the currency composition of the NICs' monetary reserves? Taiwan and Thailand are chosen as the subjects of investigation. There are two sorts of motives for central banks' reserve holdings, i.e., intervention-related motives and portfolio-related motives. The need for reserve holdings resulting from intervention-related motives is justified by the costs resulting from exchange rate instability. On the other hand, we use the Tobin-Markowitz model to justify the need for monetary reserves held for portfolio-related motives. The operational implication of this distinction is the separation of monetary reserves into two tranches corresponding to different objectives. An analysis of a central bank's transaction balance is an analysis of money quality; such an analysis has to do with transaction costs and non-pecuniary rates of return. The facts indicate that the Euro's emergence will not change the fact that the USD will continue to be the major currency of the transaction balances of the central banks in Taiwan and Thailand. In order to answer the question about diversification of monetary reserves held as idle balances in the two NICs, we carry out a portfolio analysis based on the basic ideas of the Tobin-Markowitz model. This analysis shows that neither Taiwan nor Thailand can reduce risk at a given rate of return, or increase the rate of return at a given risk, by diversifying their monetary reserves held as idle balances from the USD to the Euro.
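To make the portfolio-related argument concrete, here is a minimal mean-variance sketch in Python for a two-currency reserve (USD versus Euro assets); all return, volatility and correlation figures are purely illustrative assumptions, not values from the dissertation. With a high assumed correlation and a higher Euro volatility, the minimum-variance mix stays at the USD-only corner, mirroring the kind of no-gain-from-diversification result described above.

import numpy as np

mu = np.array([0.03, 0.03])        # assumed expected returns of USD and EUR reserve assets
sd = np.array([0.05, 0.07])        # assumed volatilities
rho = 0.9                          # assumed correlation between the two markets
cov = np.array([[sd[0] ** 2,          rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])

eur_share = np.linspace(0.0, 1.0, 101)                     # share of reserves held in EUR
port_mu = (1 - eur_share) * mu[0] + eur_share * mu[1]
port_sd = np.array([np.sqrt(np.array([1 - w, w]) @ cov @ np.array([1 - w, w]))
                    for w in eur_share])

# if no EUR share yields a higher return at equal risk or a lower risk at equal
# return than holding USD only, diversification brings no mean-variance gain
i = int(port_sd.argmin())
print(f"minimum-variance EUR share: {eur_share[i]:.2f} "
      f"(portfolio sd {port_sd[i]:.4f} vs. USD-only sd {sd[0]:.4f})")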
Foundation-owned firms are companies that are wholly or partly owned by a charitable or private foundation. The number of foundation-owned firms in Germany has risen markedly in recent years. Well-known German companies such as Aldi, Bosch, Bertelsmann, LIDL and Würth are owned by foundations. Some of them, such as Fresenius, ZF Friedrichshafen and Zeiss, are even listed on the stock exchange. Most foundation-owned firms come into being when company founders or entrepreneurial families transfer their firm to a foundation instead of bequeathing or selling it.
The motives for this are manifold and may be family-related (e.g., childlessness or the avoidance of family disputes), firm-related (e.g., the possibility of long-term planning thanks to a stable ownership structure), tax-related (avoidance or reduction of inheritance tax), or driven by the founder personally (the possibility of continuing to shape the company through the foundation even after one's own death). Because foundation-owned firms usually emerge from family firms, research often does not distinguish between family and foundation-owned firms. For this reason, this dissertation first uses the three-circle model of family firms to set out the differences between foundation-owned and family firms. The results show that only a very small number of foundation-owned firms closely resemble classic family firms; most differ from family firms substantially, in some cases very strongly. These findings make clear that foundation-owned firms should be treated as a separate field of research.
Since the group of foundation-owned firms is itself highly heterogeneous, performance differences within this group are examined next. For this purpose, data on 142 German foundation-owned firms for the years 2006-2016 were collected and analyzed by means of linear regression. The results show significant differences between the various types: firms held by a charitable foundation perform significantly worse than firms whose shareholder is a private foundation.
In the next step, the group of listed foundation-owned firms is examined. An event study tests how a foundation as owner of a listed company affects shareholder value. The results show that a reduction of a foundation's stake has a positive effect on shareholder value; foundations are accordingly valued negatively by the capital market. Because the objectives of foundation and firm diverge, the link between the two harbours potential conflicts and challenges for the people involved. Using a qualitative, exploratory approach based on interviews, a model is developed that illustrates the potential conflicts in foundation-owned firms using the example of the dual foundation (Doppelstiftung).
In the last step, recommendations for action are developed in the form of a draft corporate governance code intended to help (prospective) founders either avoid possible conflicts or resolve existing problems.
The results of this dissertation are relevant for theory and practice. From a theoretical perspective, their value lies in enabling researchers to distinguish more clearly between foundation-owned and family firms in the future, and the work advances the current state of research on foundation-owned firms. In addition, this dissertation offers prospective founders in particular an overview of the various design options and of the advantages and disadvantages these arrangements entail. The recommendations enable founders to recognize potential pitfalls in advance and to avoid them.
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors decrease or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game. By misreporting a secret die roll, cheating without exposure is an option in the game. Contrary to claims about the importance of honesty at work, we do not observe a reduction in the performance of the third co-worker, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence on ongoing peer effects. It appears that peers are able to alleviate the potential free-rider problem emerging out of working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that task performance is lower, the higher the perceived attractiveness of co-workers, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate towards the continuation of an art exhibition either by cash only or by cash plus an additional cashless payment option (CPO). The treatment manipulation is completed by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, there is no negative effect of the CPO on the frequency of financial support or its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
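As a hedged illustration of the general idea behind branching diffusion representations (not the mixed local-nonlocal setting of the chapter), the following sketch estimates the solution of the toy KPP-type PDE u_t + 0.5*u_xx + beta*(u^2 - u) = 0 with terminal condition u(T, .) = g via the classical McKean representation, i.e. as the expectation of the product of g evaluated at the terminal positions of a binary branching Brownian motion; g, beta and T are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
beta, T, x0 = 0.5, 1.0, 0.0        # branching rate, horizon, evaluation point (arbitrary)
g = lambda x: np.cos(x)            # assumed terminal condition

def one_path():
    # each particle carries (current time, current position); branching spawns two offspring
    particles = [(0.0, x0)]
    product = 1.0
    while particles:
        t, x = particles.pop()
        tau = rng.exponential(1.0 / beta)                 # exponential lifetime with rate beta
        if t + tau >= T:                                  # particle survives to T: evaluate g
            product *= g(x + rng.normal(0.0, np.sqrt(T - t)))
        else:                                             # particle branches at time t + tau
            y = x + rng.normal(0.0, np.sqrt(tau))
            particles += [(t + tau, y), (t + tau, y)]
    return product

estimate = np.mean([one_path() for _ in range(10000)])
print(f"u(0, {x0}) approx {estimate:.4f}")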
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention regions of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
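For illustration only, the following sketch solves a toy one-dimensional discrete-time impulse control problem by value iteration on the QVI V(x) = max(continuation value, intervention value); the state grid, costs and shock distribution are invented for the example and it is unrelated to the self-learning algorithm of the chapter.

import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-5.0, 5.0, 101)
beta, K, c = 0.95, 1.0, 0.2                      # discount factor, fixed and proportional cost
noise = rng.normal(0.0, 0.5, size=1000)          # Monte Carlo samples of the random shock

def running_reward(x):
    return -x ** 2                               # staying away from 0 is penalized

def expected_value(V, x):
    # E[V(next state)] for the uncontrolled dynamics x -> x + noise, clipped to the grid
    return np.interp(np.clip(x + noise, grid[0], grid[-1]), grid, V).mean()

V = np.zeros_like(grid)
for _ in range(200):                             # fixed-point (value) iteration on the QVI
    cont = np.array([running_reward(x) + beta * expected_value(V, x) for x in grid])
    # intervention operator: jump to the best target state, paying K plus a proportional cost
    interv = np.array([np.max(V - K - c * np.abs(grid - x)) for x in grid])
    V_new = np.maximum(cont, interv)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

intervention_region = grid[interv > cont]        # estimated set of states where acting is optimal
print(f"{intervention_region.size} of {grid.size} grid points lie in the intervention region")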
In step with steadily growing societal challenges, social enterprises have gained considerably in importance over the past decade. Social enterprises pursue the goal of solving societal problems by entrepreneurial means. Because the focus of social enterprises is not primarily on maximizing their own profits, they often have difficulty obtaining suitable corporate financing and realizing their growth potential.
To gain a deeper understanding of the phenomenon of social enterprises, the first part of this dissertation uses two experiment-based studies to examine the decision-making behavior of investors in social enterprises. Chapter 2 therefore looks at the decision-making behavior of impact investors. The investment approach pursued by these investors, impact investing, goes beyond a pure orientation towards financial returns. Based on an experiment with 179 impact investors who made a total of 4,296 investment decisions, a conjoint study identifies their most important decision criteria when selecting social enterprises. Chapter 3 focuses on social incubators, another specific group of supporters of social enterprises. Drawing on the experiment, this chapter illustrates the incubators' motives and decision criteria when selecting social enterprises as well as the forms of non-financial support they offer. Among other things, the results show that the motives of social incubators for supporting social enterprises are societal, financial or reputational in nature.
Based on two quantitative empirical studies, the second part discusses to what extent the registration of trademarks is suitable for measuring social innovation and is related to the financial and social growth of social startups. Chapter 4 discusses to what extent trademark registrations can serve as a measure of social innovation. Based on a text analysis of the websites of 925 social enterprises (> 35,000 subpages), four dimensions of social innovation (innovation, impact, financial and scalability dimensions) are identified in a first step. Building on this, the chapter examines how various trademark characteristics relate to the dimensions of social innovation. The results show that, in particular, the number of registered trademarks serves as an indicator of social innovation (across all dimensions); the geographical scope of the registered trademarks also plays an important role. Building on the results of Chapter 4, Chapter 5 examines the influence of trademark registrations in early company stages on the further development of the hybrid outcomes of social startups. In detail, Chapter 5 argues that both the registration of trademarks as such and their various characteristics relate differently to the social and economic outcomes of social startups. Using a dataset of 485 social enterprises, the analyses in Chapter 5 show that social startups with a registered trademark exhibit comparatively higher employee growth and make a larger societal contribution.
The results of this dissertation extend research in the field of social entrepreneurship and offer numerous implications for practice. While Chapters 2 and 3 enhance the understanding of the characteristics of non-financial and financial support organizations for social enterprises, Chapters 4 and 5 create a better understanding of the significance of trademark registrations for social enterprises.
Fibromyalgia is a disorder of unknown etiology characterized by widespread, chronic musculoskeletal pain of at least three months' duration and pressure hyperalgesia at specific tender points on clinical examination. The disorder is accompanied by a multitude of additional symptoms such as fatigue, sleep disturbances, morning stiffness, depression, and anxiety. In terms of biological disturbances, low cortisol concentrations have been repeatedly observed in blood and urine samples of fibromyalgia patients, both under basal and stress-induced conditions. The aim of this dissertation was to investigate the presence of low cortisol concentrations (hypocortisolism) and potential accompanying alterations at the sympathetic and immunological levels in female fibromyalgia patients. Besides the expected hypocortisolism, higher norepinephrine secretion and lower natural killer cell levels were found in the patient group compared to a control group consisting of healthy, age-matched women. In addition, increased activity of some pro-inflammatory markers was observed, leading to alterations in the balance of pro- and anti-inflammatory activity. The results underline the relevance of simultaneous investigations of interacting bodily systems for a better understanding of the underlying biological mechanisms in stress-related disorders.
There is ample evidence that the personality trait of extraversion is associated with frequent experiences of positive affect, whereas introversion is associated with less frequent experiences of positive affect. According to a theory of Watson et al. (1997), these findings demonstrate that positive affect forms the conceptual core of extraversion. In contrast, several other researchers consider sociability - and not positive affect - as the core of extraversion. The aim of the present work is to examine the relation between extraversion and dispositional positive affect on the neurobiological level. In 38 participants, resting cerebral blood flow was measured with continuous arterial spin labeling (CASL). Each participant was scanned on two measurement occasions separated by seven weeks. In addition, questionnaire measures of extraversion and dispositional positive affect were collected. To employ CASL for investigating the biological basis of personality traits, the psychometric properties of CASL blood flow measurements were examined in two studies. The first study was conducted to validate the CASL technique. Using a visual stimulation paradigm, the expected pattern of activity was found, i.e. there were specific differences in blood flow in the primary and secondary visual areas. Moreover, the results of the first measurement occasion could be reproduced in the second. Thus, these results suggest that CASL blood flow measurements have a high degree of validity. The aim of the second psychometric study was to examine whether resting blood flow measurements are characterized by sufficient trait stability to be used as a marker for personality traits. Employing the latent state-trait theory developed by Steyer and colleagues, it was shown that about 70% of the variance of regional blood flow could be explained by individual differences in a latent trait. This suggests that blood flow measurements have sufficient trait stability for investigating the biological basis of personality traits. In the third study, the relation between extraversion and dispositional positive affect was investigated on the neurobiological level. Voxel-based analyses showed that dispositional positive affect was correlated with resting blood flow in the ventral striatum, a brain structure that is associated with approach behavior and reward processing. This biological basis was also found for extraversion. In addition, when extraversion was statistically controlled, the association between dispositional positive affect and blood flow in the ventral striatum was still present. However, when dispositional positive affect was statistically controlled, the relation between extraversion and the ventral striatum disappeared. Taken together, these results suggest that positive affect forms a core of extraversion on the neurobiological level. The present findings thus add psychophysiological evidence to the theory of Watson et al. (1997), which suggests that positive affect forms the conceptual core of extraversion.
The reduction of information contained in model time series through the use of aggregating statistical performance measures is very high compared to the amount of information that one would like to draw from it for model identification and calibration purposes. It is well known that this loss imposes important limitations on model identification and diagnostics and thus constitutes an element of the overall model uncertainty, as essentially different model realizations with almost identical performance measures (e.g. R² or RMSE) can be generated. In three consecutive studies, the present work proposes an alternative approach to hydrological model evaluation based on the application of Self-Organizing Maps (SOM; Kohonen, 2001). The Self-Organizing Map is a type of artificial neural network and unsupervised learning algorithm that is used for clustering, visualization and abstraction of multidimensional data. It maps vectorial input data items with similar patterns onto contiguous locations of a discrete low-dimensional grid of neurons. The iterative training of the SOM causes the neurons to form a discrete, data-compressed representation of the high-dimensional input data. Using appropriate visualization techniques, information on distributions, patterns and relationships in complex data sets can be extracted. Despite their potential, SOM applications have received very little attention in hydrological modelling compared to other artificial neural network techniques. Therefore, the aim of the present work is to demonstrate that the application of Self-Organizing Maps has very high potential to address fundamental issues of model evaluation: it is shown that the clustering and classification of model time series by means of SOM can provide useful insights into model behaviour. In combination with the diagnostic properties of Signature Indices (Gupta et al., 2008; Yilmaz et al., 2008), SOM provides a novel tool for interpreting model parameters in the hydrological context and identifying parameter sets that simultaneously meet multiple objectives, even if the corresponding model realizations belong to different models. Moreover, the presented studies and reviews also encourage further studies on the application of SOM in hydrological modelling.
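A minimal sketch of the core SOM training loop described above (Kohonen-style best-matching unit plus Gaussian neighbourhood update), with random stand-in data in place of model time series or signature indices; grid size, learning rate and neighbourhood schedule are arbitrary choices, not those of the thesis.

import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(size=(500, 10))                  # stand-in for 500 items with 10 features
grid_h, grid_w = 8, 8
weights = rng.normal(size=(grid_h, grid_w, data.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 5000, 0.5, 3.0
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    dists = np.linalg.norm(weights - x, axis=-1)             # distance of every neuron to x
    bmu = np.unravel_index(np.argmin(dists), dists.shape)    # best-matching unit on the grid
    lr = lr0 * np.exp(-t / n_iter)                           # decaying learning rate
    sigma = sigma0 * np.exp(-t / n_iter)                     # decaying neighbourhood radius
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]    # Gaussian neighbourhood on the grid
    weights += lr * h * (x - weights)                        # pull BMU and neighbours towards x

# each item can now be assigned to its best-matching neuron, yielding the clustered,
# data-compressed representation of the input data described above
assignments = np.array([np.unravel_index(np.argmin(np.linalg.norm(weights - d, axis=-1)),
                                         (grid_h, grid_w)) for d in data])
print(assignments.shape)   # (500, 2): grid coordinates of the winning neuron per item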
Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. A nurture context potentiated startle in these participants, suggesting that visual cues with childhood-related content activate protective behavioral responses. The findings provide evidence for the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitive challenging task revealed blunted endocrine response in adults with a history of adoption while cardiovascular stress reactivity was similar to control participants. This pattern of response separation may best be explained by selective enhancement of central feedback-sensitivity to glucocorticoids resulting from ELA, in spite of preserved cardiovascular/autonomic stress reactivity.
In this thesis, we consider the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Due to the exponentially increasing computational complexity in the number of dimensions—the so-called curse of dimensionality—they present a considerable computational challenge and become infeasible even for moderate problem sizes.
Multilinear algebra and tensor numerical methods have a wide range of applications in the fields of data science and scientific computing. Due to the typically large problem sizes in practical settings, efficient methods, which exploit low-rank structures, are essential. In this thesis, we consider an application each in both of these fields.
Tensor completion, or imputation of unknown values in partially known multiway data is an important problem, which appears in statistics, mathematical imaging science and data science. Under the assumption of redundancy in the underlying data, this is a well-defined problem and methods of mathematical optimization can be applied to it.
Due to the fact that tensors of fixed rank form a Riemannian submanifold of the ambient high-dimensional tensor space, Riemannian optimization is a natural framework for these problems, which is both mathematically rigorous and computationally efficient.
We present a novel Riemannian trust-region scheme, which compares favourably with the state of the art on selected application cases and outperforms known methods on some test problems.
Optimization problems governed by partial differential equations form an area of scientific computing with applications in a variety of fields, ranging from physics to financial mathematics. Due to the inherent high dimensionality of optimization problems arising from discretized differential equations, these problems present computational challenges, especially in the case of three or more dimensions. An even more challenging class of optimization problems has operators of integral instead of differential type in the constraint. These operators are nonlocal and therefore lead to large, dense discrete systems of equations. We present a novel solution method, based on separation of spatial dimensions and provably low-rank approximation of the nonlocal operator. Our approach allows the solution of multidimensional problems with a complexity that is only slightly larger than linear in the univariate grid size; this improves the state of the art for a particular test problem by at least two orders of magnitude.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
Accounting for two-thirds to three-quarters of all companies, family firms are the most common firm type worldwide and employ around 60 percent of all employees, making them of considerable importance for almost all economies. Despite this high practical relevance, academic research took notice of family firms as intriguing research subjects comparatively late. However, the field of family business research has grown eminently over the past two decades and has established itself as a mature research field with a broad thematic scope. In addition to questions relating to corporate governance, family firm succession and the consideration of entrepreneurial families themselves, researchers have mainly focused on the impact of family involvement on firms' financial performance and strategy. This dissertation examines the financial performance and capital structure of family firms in various meta-analytical studies. Meta-analysis is a suitable method for summarizing existing empirical findings of a research field as well as identifying relevant moderators of a relationship of interest.
First, the dissertation examines the question whether family firms show better financial performance than non-family firms. A replication and extension of the study by O’Boyle et al. (2012) based on 1,095 primary studies reveals a slightly better performance of family firms compared to non-family firms. Investigating the moderating impact of methodological choices in primary studies, the results show that outperformance holds mainly for large and publicly listed firms and with regard to accounting-based performance measures. Concerning country culture, family firms show better performance in individualistic countries and countries with a low power distance.
Furthermore, this dissertation investigates the sensitivity of family firm performance with regard to business cycle fluctuations. Family firms show a pro-cyclical performance pattern, i.e. their relative financial performance compared to non-family firms is better in economically good times. This effect is particularly pronounced in Anglo-American countries and emerging markets.
In the next step, a meta-analytic structural equation model (MASEM) is used to examine the market valuation of public family firms. In this model, profitability and firm strategic choices are used as mediators. On the one hand, family firm status itself does not have an impact on firms' market value. On the other hand, this study finds a positive indirect effect via higher profitability levels and a negative indirect effect via lower R&D intensity. A split consideration of family ownership and management shows that these two effects are mainly driven by family ownership, while family management results in less diversification and internationalization.
Finally, the dissertation examines the capital structure of public family firms. Univariate meta-analyses indicate on average lower leverage ratios in family firms compared to non-family firms. However, there is significant heterogeneity in mean effect sizes across the 45 countries included in the study. The results of a meta-regression reveal that family firms use leverage strategically to secure their controlling position in the firm. While strong creditor protection leads to lower leverage ratios in family firms, strong shareholder protection has the opposite effect.
Allergic contact dermatitis (ACD) to small molecular weight compounds is a common inflammatory skin reaction. ACD is restricted to industrialized countries and has an enormous sociomedical and socioeconomic impact. About 2,800 of the six million chemicals known in our environment are believed to have allergenic and, to a lesser degree, also contact-sensitizing or immunogenic properties causing allergic contact dermatitis. ACD results from T cell responses to harmless, low molecular weight chemicals (haptens) applied to the skin. Haptens are not directly recognized by the cells of the immune system; they need to be presented to them by subsets of antigen presenting cells. In this regard, epidermal Langerhans cells (LC) and the dendritic cells into which they mature are believed to play a pivotal role in the sensitization process for ACD. LC are able to bind haptens, internalize them, present them to naive T cells and thereby induce the development of effector T cells; they are so-called professional antigen presenting cells. This process is initiated and maintained by the release of several mediators, which are secreted by various cells after their contact with the haptens. One of the first proteins secreted into the environment is interleukin (IL)-1β. This cytokine is produced and secreted minutes after an antigen enters the cell. It is commonly believed that the large amounts of this protein and other cytokines such as granulocyte-macrophage colony-stimulating factor (GM-CSF) and tumor necrosis factor alpha (TNF-α) needed for the initiation and activation of ACD come first from other cells residing in the skin, e.g., keratinocytes, monocytes and macrophages. These cytokines provide the danger signals needed for the activation of the Langerhans cells, which then produce various cytokines themselves via a positive feedback loop. In addition, other proteins such as chemokines influence the generation of danger signals, migration, homing of T cells in the local lymph nodes as well as the recruitment of T cells into the skin. Thus, a small molecular compound or hapten needs to be able to induce danger signals in order to become immunogenic. In this study, we investigated whether para-phenylenediamine (PPD), an arylamine and common contact allergen, is able to induce danger signals and thereby provide the signals needed for the initiation of an immune response [162, 163]. PPD is used as an antioxidant and as an ingredient of hair dyes, serves as an intermediate in dyestuff production, and is found in chemicals used for photographic processing. To date, however, it has not been clearly demonstrated whether PPD itself is a sensitizing agent. Thus, this study aimed at assessing the potential of PPD to provide danger signals by studying IL-1β, TNF-α, and monocyte chemoattractant protein-1 (MCP-1) in human monocytes and peripheral blood mononuclear cells (PBMC) from healthy volunteers, as well as in two human monocyte cell lines, U937 and THP-1. This study found that PPD decreased the expression and release of three relevant mediators involved in the generation of danger signals in a dose- and time-dependent manner. Specifically, PPD reduced the mRNA and protein levels of IL-1β, TNF-α, and MCP-1 in primary human monocytes from various donors. These findings were extended and validated by investigations using the cell line U937.
The data were highly specific for PPD; no such results were obtained for its known auto-oxidation product, Bandrowski's base, or for meta-phenylenediamine (MPD) and ortho-phenylenediamine (OPD). We can therefore speculate that this effect is likely to depend on the para-substitution. Based on these results we conclude that PPD itself is not able to mount a cascade for the induction of danger signals. It should be mentioned that it is still possible that PPD induces danger signals for sensitization through other, unknown processes. Therefore, more research is needed on this subject, especially in professional antigen presenting cells, in order to resolve the still open question of whether PPD itself sensitizes naive T cells or whether PPD is solely an allergen. Independently, we unexpectedly found that PPD as well as other haptens such as 2,4-dinitrochlorobenzene, nickel sulfate and some terpenoids clearly increased the expression of CC chemokine receptor 2 (CCR2), the receptor for the chemokine MCP-1. To date, the main importance of the CCR2 receptor derives from results demonstrating that CCR2 is critical for the migration of monocytes after encounter with bacterial lipopolysaccharides; under these circumstances the receptor disappears from the cell surface and is downregulated. An upregulation of CCR2 has not been reported for haptens and deserves further investigation.
Evaluative conditioning (EC) refers to changes in liking that are due to the pairing of stimuli, and is one of the effects studied in order to understand the processes of attitude formation. Initially, EC had been conceived of as driven by processes that are unique to the formation of attitudes, and that occur independent of whether or not individuals engage in conscious and effortful propositional processes. However, propositional processes have gained considerable popularity as an explanatory concept for the boundary conditions observed in EC studies, with some authors going as far as to suggest that the evidence implies that EC is driven primarily by propositional processes. In this monograph I present research which questions the validity of this claim, and I discuss theoretical challenges and avenues for future EC research.
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction to ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding incongruently to a univalent attitude object leads to ambivalence measured via mouse tracking but not to ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence; additionally, the effect of the block size of same-valence statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
Due to the breathtaking growth of the World Wide Web (WWW), the need for fast and efficient web applications is becoming more and more urgent. In this doctoral thesis, the emphasis is on two concrete tasks for improving Internet applications. On the one hand, a major problem of many of today's Internet applications lies in the performance of client-server communication: servers often take a long time to respond to a client's request. There are several strategies to overcome this problem of high user-perceived latency; one of them is to predict future user requests. This way, time-consuming calculations on the server's side can be performed even before the corresponding request is made. Furthermore, in certain situations, the pre-fetching or pre-sending of data might also be appropriate. These ideas are discussed in detail in the second part of this work. On the other hand, a focus is placed on the problem of proposing hyperlinks to improve the quality of rapidly written texts, at first glance an entirely different problem from predicting client requests. Ultra-modern online authoring systems that provide possibilities to check link consistency and administer link management should also propose links in order to improve the usefulness of the produced HTML documents. In the third part of this work, we describe a possibility to build a hyperlink-proposal module based on statistical information retrieval from hypertexts. These two problem categories do not seem to have much in common. It is one aim of this work to show that similar solution strategies can address both problems. A closer comparison and an abstraction of both methodologies lead to interesting synergetic effects. For example, advanced strategies to foresee future user requests by modeling time and document aging can also be used to improve the quality of hyperlink proposals.
The fragmentation of landscapes has an important impact on the conservation of biodiversity. Genetic diversity is an important factor for a population's viability and is influenced by the landscape structure. However, different species with differing ecological demands react rather differently to the same landscape pattern. To address this, we studied ten xerothermophilous butterfly species with differing habitat requirements (habitat specialists with low dispersal power in contrast to habitat generalists with low dispersal power and habitat generalists with higher dispersal power). We analysed allozyme loci for about 10 populations (~40 individuals each) of each species in a western German study region with adjoining areas in Luxembourg and north-eastern France. The genetic diversity and genetic differentiation between local populations were discussed from a conservation genetics perspective. For generalists we detected a more or less panmictic structure, and for species with lower abundance and sedentary behaviour an effect of isolation by distance. The isolation of specialists, on the other hand, was mostly reflected in strong patterns of genetic differentiation between the investigated populations. Parameters of genetic diversity were mostly significantly higher in generalists than in specialists. Substructures within populations, as a consequence of low intra-patch migration, low population densities and high population fluctuations, could be shown as well. Aspects of landscape history (the historical distribution of habitats resulting from the presence of limestone areas), changes in extensive sheep pasturing and the loss of potential habitats in the last few decades (recent fragmentation) are discussed in light of the genetic data set obtained for the ten butterfly species.
High-resolution projections of the future climate are required to assess climate change realistically at a regional scale. This is particularly important for climate change impact studies, since global projections are much too coarse to represent local conditions adequately. A major concern is the change of extreme values in a warming climate due to their severe impact on the natural environment, socio-economic systems and human health. Regional climate models (RCMs) are able to reproduce many of these local features. Current horizontal resolutions are about 18-25 km, which is still too coarse to directly resolve small-scale processes such as deep convection. For this reason, projections of a possible future climate were simulated in this study with the regional climate model COSMO-CLM at horizontal resolutions of 4.5 km and 1.3 km for the region of Saarland-Lorraine-Luxembourg and Rhineland-Palatinate for the first time. At a horizontal scale of about 1 km, deep convection is treated explicitly, which is expected to improve in particular the simulation of convective summer precipitation, while the better resolved orography is expected to improve near-surface fields such as the 2 m temperature. These simulations were performed as 10-year time-slice experiments for the present climate (1991-2000), the near future (2041-2050) and the end of the century (2091-2100). The climate change signals of the annual and seasonal means and the change of extremes are analysed with respect to precipitation and 2 m temperature, and a possible added value due to the increased resolution is investigated. To assess changes in extremes, extreme indices were applied and 10- and 20-year return levels were estimated with peak-over-threshold models. Since it is generally known that model output of RCMs should not directly be used for climate change impact studies, the precipitation and temperature fields were bias-corrected with several quantile-matching methods. Among them is a newly developed parametric method, which includes an extension for extreme values and is hence expected to improve the correction. In addition, the impact of the bias correction on the climate change signals and on the extreme value statistics was investigated. The results reveal a significant warming of the annual mean by about +1.7 °C until 2041-2050 and +3.7 °C until 2091-2100, with considerably stronger signals of up to +5 °C in summer in the Rhine Valley. Furthermore, the daily variability increases by about +0.8 °C in summer but decreases by about -0.8 °C in winter. Consequently, hot extremes increase moderately until the middle of the century but strongly thereafter, in particular in the Rhine Valley. Cold extremes warm continuously across the complete domain over the next 100 years, but most strongly in mountainous areas. The change signals with regard to annual precipitation are of the order of ±10% but not significant. Significant, however, are a predicted increase of +32% in seasonal precipitation in autumn until 2041-2050 and a decrease of -28% in summer until 2091-2100. No significant changes were found for days with intensities > 20 mm/day, but the results indicate that extremes with return periods ≤ 2 years increase, as do the frequency and duration of dry periods. The bias corrections amplified positive signals but dampened negative signals and considerably reduced the power of detection.
Moreover, absolute values and frequencies of extremes were altered by the correction, but the change signals remained approximately constant. The new method outperformed other parametric methods, in particular with regard to extreme value correction and the related extreme indices and return levels. Although the bias correction removed systematic errors, it should be treated as an additional layer of uncertainty in climate change studies. Finally, the increased resolution of 1.3 km predominantly improved the representation of temperature fields and extremes in terms of spatial heterogeneity. The benefits for summer precipitation were not as clear due to a severe dry bias in summer, but it could be shown that in principle the onset and intensity of convection improve. This work demonstrates that climate change will have severe impacts in the investigation area and that extremes in particular may change considerably; the increased resolution thereby provides added value to the results. These findings encourage further investigations, for other variables such as near-surface wind, which will become more feasible with growing computing resources. These analyses should, however, be repeated with longer time series, different RCMs and anthropogenic scenarios to determine the robustness and uncertainty of these results more extensively.
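As a simple illustration of the quantile-matching family of bias corrections mentioned above, the following sketch performs empirical quantile mapping on synthetic stand-in data; the thesis uses more elaborate parametric variants with an extreme-value extension, which this sketch does not reproduce.

import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=3650)            # "observed" daily values (synthetic)
model_hist = rng.gamma(shape=2.0, scale=4.0, size=3650)     # biased model output, reference period
model_future = rng.gamma(shape=2.2, scale=4.5, size=3650)   # biased model output, scenario period

def quantile_map(values, model_ref, obs_ref):
    # locate each value's quantile in the model reference distribution, then
    # read off the observed value at that same quantile (empirical CDF matching)
    q = np.searchsorted(np.sort(model_ref), values) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected_future = quantile_map(model_future, model_hist, obs)
print(model_future.mean(), corrected_future.mean(), obs.mean())   # correction pulls the mean towards obs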
This work is concerned with two kinds of objects: regular expressions and finite automata. These formalisms describe regular languages, i.e., sets of strings that share a comparatively simple structure. Such languages - and, in turn, expressions and automata - are used in the description of textual patterns, workflow and dependence modeling, or formal verification. Testing words for membership in any given such language can be implemented using a fixed - i.e., finite - amount of memory, which is conveyed by the term 'finite automaton'. In this respect they differ from more general classes, which require potentially unbounded memory but have the potential to model less regular, i.e., more involved, objects. Besides expressions and automata, there are several further formalisms to describe regular languages. These formalisms are all equivalent, and conversions among them are well known. However, expressions and automata are arguably the notions which are used most frequently: regular expressions come naturally to humans as a way to express patterns, while finite automata translate immediately to efficient data structures. This raises the interest in methods to translate between the two notions efficiently. In particular, the direction from expressions to automata, or from human input to machine representation, is of great practical relevance. Probably the most frequent application that involves regular expressions and finite automata is pattern matching in static text and streaming data. Common tools to locate instances of a pattern in a text are the grep application and its (many) derivatives, as well as awk, sed and lex. Notice that these programs accept slightly more general patterns, namely 'POSIX expressions'. Concerning streaming data, regular expressions are nowadays used to specify filter rules in routing hardware. These applications have in common that an input pattern is specified in the form of a regular expression, while the execution applies a finite automaton. As it turns out, the effort that is necessary to describe a regular language, i.e., the size of the descriptor, varies with the chosen representation. For example, in the case of regular expressions and finite automata, it is rather easy to see that any regular expression can be converted to a finite automaton whose size is linear in that of the expression. For the converse direction, however, it is known that there are regular languages for which the size of the smallest describing expression is exponential in the size of the smallest describing automaton. This brings us to the subject at the core of the present work: we investigate conversions between expressions and automata and take a closer look at the properties that influence the relative sizes of these objects. We refer to the aspects involved in these considerations under the titular term of Relative Descriptional Complexity.
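The claim that every regular expression converts to an automaton of linear size can be illustrated with a small sketch of Thompson's construction, one standard way to perform this conversion (not necessarily the one studied in the thesis); the input syntax here is postfix with an explicit concatenation operator, an assumption made to keep the parsing trivial.

def thompson(postfix: str):
    """Return (start, accept, transitions) of an epsilon-NFA; transitions are
    (state, symbol_or_None, state) triples, None denoting an epsilon move."""
    trans, counter, stack = [], 0, []

    def new():
        nonlocal counter
        counter += 1
        return counter - 1

    for ch in postfix:
        if ch == '*':                                   # Kleene star
            s1, a1 = stack.pop()
            s, a = new(), new()
            trans += [(s, None, s1), (s, None, a), (a1, None, s1), (a1, None, a)]
            stack.append((s, a))
        elif ch == '|':                                 # union
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            s, a = new(), new()
            trans += [(s, None, s1), (s, None, s2), (a1, None, a), (a2, None, a)]
            stack.append((s, a))
        elif ch == '.':                                 # explicit concatenation
            s2, a2 = stack.pop()
            s1, a1 = stack.pop()
            trans.append((a1, None, s2))
            stack.append((s1, a2))
        else:                                           # literal symbol
            s, a = new(), new()
            trans.append((s, ch, a))
            stack.append((s, a))
    start, accept = stack.pop()
    return start, accept, trans

# '(a|b)*a' in postfix is 'ab|*a.'; the resulting automaton has O(length) states
start, accept, trans = thompson('ab|*a.')
print(len({q for t in trans for q in (t[0], t[2])}), "states,", len(trans), "transitions")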
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers makes it possible to compute market equilibria efficiently.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that require either extensive knowledge acquisition and modelling effort or an active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two well-established methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, which integrates procedural with declarative structures through a transformation function, increases the capabilities for flexibility. Case-based reasoning is well suited for handling deviations because it builds on the principle that similar problems have similar solutions: previously gained experience is transferred to the problem at hand, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of, primarily, sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
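The following minimal sketch illustrates the general idea of such a constraint-based worklist computation for purely sequential precedence constraints; the task names, the deviation-handling strategy and all function names are invented and greatly simplified compared to the engine described in the thesis.

```python
# Hedged sketch (invented names, not the thesis engine): tasks plus precedence
# constraints "a before b"; a task is proposed as a work item if all of its
# predecessors have been completed.
from typing import List, Set, Tuple

def worklist(tasks: Set[str],
             precedences: List[Tuple[str, str]],
             completed: Set[str]) -> Set[str]:
    """Return the tasks that may be executed next without violating a constraint."""
    open_tasks = tasks - completed
    return {t for t in open_tasks
            if all(pre in completed for (pre, succ) in precedences if succ == t)}

def handle_deviation(precedences, executed_task, completed):
    """Simplistic deviation-handling strategy: drop the constraints that the
    out-of-order execution of `executed_task` violated, restoring consistency."""
    return [(a, b) for (a, b) in precedences
            if not (b == executed_task and a not in completed)]

tasks = {"record_defect", "notify_contractor", "inspect_fix", "close_case"}
prec = [("record_defect", "notify_contractor"),
        ("notify_contractor", "inspect_fix"),
        ("inspect_fix", "close_case")]
print(worklist(tasks, prec, completed={"record_defect"}))  # {'notify_contractor'}
```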
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
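As an illustration of how several sequence similarity measures can be combined during retrieval, the sketch below mixes a generic edit-based similarity with a simple common-prefix similarity; the weights and the chosen measures are invented and are not the similarity measure developed in the thesis.

```python
# Hedged sketch (invented weights and measures): combining two sequence
# similarity measures for retrieving similar workflow cases.
from difflib import SequenceMatcher

def edit_similarity(a, b):
    # normalized similarity based on matching blocks of the two task sequences
    return SequenceMatcher(None, a, b).ratio()

def prefix_similarity(a, b):
    # fraction of the shorter sequence that matches as a common prefix
    n = min(len(a), len(b))
    if n == 0:
        return 1.0
    k = 0
    while k < n and a[k] == b[k]:
        k += 1
    return k / n

def combined_similarity(a, b, w_edit=0.7, w_prefix=0.3):
    return w_edit * edit_similarity(a, b) + w_prefix * prefix_similarity(a, b)

case = ["record_defect", "notify_contractor", "inspect_fix", "close_case"]
query = ["record_defect", "inspect_fix", "notify_contractor"]
print(round(combined_similarity(case, query), 3))
```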
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential; investigating the transfer to other domains and the applicability in practice is part of future work.
In conclusion, the contributions are summarized and research perspectives are pointed out.
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least-squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Since the Monte-Carlo method is expensive in terms of computational effort and memory, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
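As a generic illustration of the Monte-Carlo approach mentioned above, the following sketch prices a European call under a Black-Scholes model by simulating the underlying SDE with a plain Euler-Maruyama scheme; all parameter values are illustrative textbook choices and not the calibrated model of the thesis.

```python
# Hedged sketch (generic Black-Scholes toy example, not the thesis model):
# pricing a European call by Monte-Carlo simulation of the underlying SDE.
import numpy as np

def mc_call_price(s0=100., strike=100., r=0.03, sigma=0.2, T=1.0,
                  n_steps=100, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s = s + r * s * dt + sigma * s * dw      # Euler-Maruyama step
    payoff = np.maximum(s - strike, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(round(mc_call_price(), 3))   # close to the Black-Scholes value of roughly 9.4
```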
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive. This is known to be NP-hard in general.

For a given completely positive matrix A, it is nontrivial to find a cp-factorization A=BB^T with nonnegative B, since such a factorization would provide a certificate for the matrix to be completely positive. This factorization is not only important for the membership to the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not a priori known how many columns are necessary to generate a cp-factorization for the given matrix. The minimal possible number of columns is called the cp-rank of A, and so far it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank will be given in Chapter 2. Moreover, in Chapter 6, we will see a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A=BB^T. The algorithm therefore returns a certificate for the matrix to be completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization. This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey will be given in Chapter 5. Here we will see how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we will see some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts for subspaces and convex sets will be shown. Moreover, we will see a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm will be given. This result is based on the known convergence for alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we will see an additional heuristic extension which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 will show that the factorization method is very fast in most instances. In addition, we will see how to derive a certificate for the matrix to be an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint.
Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
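To give a flavour of the factorization idea described above, the following greatly simplified sketch alternates between clipping to the nonnegative orthant and an orthogonal Procrustes step that maps back onto the set of factorizations of A; it is a plain projection heuristic on a toy 2x2 matrix and is not the algorithm (or the heuristic extension) developed in the thesis.

```python
# Hedged sketch of an alternating-projections heuristic for A = B B^T with B >= 0.
# The set {B : B B^T = A} with r columns can be written as {B0 Q : Q orthogonal},
# so one projection is an orthogonal Procrustes step, the other a clipping to >= 0.
import numpy as np

def initial_factorization(A, r):
    # square factorization from the eigendecomposition, padded with zero columns
    w, V = np.linalg.eigh(A)
    B0 = V @ np.diag(np.sqrt(np.clip(w, 0, None)))
    return np.hstack([B0, np.zeros((A.shape[0], r - A.shape[0]))])

def cp_factorization(A, r, iters=2000):
    B0 = initial_factorization(A, r)
    B = B0
    for _ in range(iters):
        N = np.clip(B, 0.0, None)             # project onto the nonnegative matrices
        U, _, Vt = np.linalg.svd(B0.T @ N)    # orthogonal Procrustes step:
        B = B0 @ U @ Vt                       # map back onto {B : B B^T = A}
    return B

A = np.array([[2.0, 1.0], [1.0, 2.0]])        # doubly nonnegative 2x2 matrix, hence CP
B = cp_factorization(A, r=3)
print(np.round(B, 3))                          # entrywise (near-)nonnegative if converged
print(np.round(B @ B.T, 3))                    # equals A by construction
```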
Even though in most cases time is a good metric to measure the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the costs of algorithms, called memory distance, which can be seen as an abstraction of the aspect just mentioned. We will show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we will show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
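As one way to make the cache-locality aspect tangible, the sketch below computes the classical reuse distance of each access in a memory trace, i.e. the number of distinct addresses touched since the previous access to the same address; this is a generic locality proxy and not the memory distance metric defined in the thesis.

```python
# Hedged sketch (generic reuse distance, not the thesis metric): a small reuse
# distance means the data is likely still cached when it is accessed again.
from collections import OrderedDict

def reuse_distances(trace):
    last_seen = OrderedDict()          # addresses ordered by most recent access
    distances = []
    for addr in trace:
        if addr in last_seen:
            keys = list(last_seen.keys())
            # distinct addresses accessed after the previous access to addr
            distances.append(len(keys) - 1 - keys.index(addr))
            last_seen.move_to_end(addr)
        else:
            distances.append(float("inf"))   # cold miss, never seen before
            last_seen[addr] = None
    return distances

local = [0, 1, 0, 1, 2, 3, 2, 3]      # repeats come back quickly -> small distances
strided = [0, 1, 2, 3, 0, 1, 2, 3]    # every repeat sees all other addresses first
print(reuse_distances(local))
print(reuse_distances(strided))
```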
Floods are hydrological extremes that have enormous environmental, social and economic consequences. The objective of this thesis was to contribute to the implementation of a processing chain that integrates remote sensing information into hydraulic models. Specifically, the aim was to improve water elevation and discharge simulations by assimilating microwave remote sensing-derived flood information into hydraulic models. The first component of the proposed processing chain is a fully automated flood mapping algorithm that enables the automated, objective, and reliable extraction of flood extent from Synthetic Aperture Radar images, providing accurate results in both rural and urban regions. The method operates with minimum data requirements and is efficient in terms of computational time. The map obtained with the developed algorithm is still subject to uncertainties, both introduced by the flood mapping algorithm and inherent in the image itself. In this work, particular attention was given to image uncertainty deriving from speckle. By bootstrapping the original satellite image pixels, several synthetic images were generated and provided as input to the developed flood mapping algorithm. The analysis performed on the mapping products showed that speckle uncertainty can be considered a negligible component of the total uncertainty. In the final step of the proposed processing chain, water elevations of a real event, obtained from satellite observations, were assimilated into a hydraulic model with an adapted version of the Particle Filter, modified to work with non-Gaussian distributions of observations. To deal with model structure error and possibly biased observations, a global and a local weight variant of the Particle Filter were tested. Which variant is to be preferred depends on the level of confidence attributed to the observations or to the model. This study also highlighted the complementarity of remote sensing-derived and in-situ data sets. An accurate binary flood map represents an invaluable product for different end users. However, deriving additional hydraulic information from this binary map, such as water elevations, enhances the value of the product itself. The derived data can be assimilated into hydraulic models that will fill the gaps where, for technical reasons, Earth Observation data cannot provide information, also enabling a more accurate and reliable prediction of flooded areas.
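For illustration, the following minimal sketch shows a single particle-filter update step in which an ensemble of candidate water elevations is reweighted with a deliberately non-Gaussian, asymmetric observation likelihood and then resampled; the data, likelihood shape and function names are invented and do not correspond to the global or local weight variants developed in the thesis.

```python
# Hedged sketch of a generic particle-filter update (not the thesis variant):
# particles are candidate model states (one water elevation per particle),
# weighted by the likelihood of an observed elevation and then resampled.
import numpy as np

def pf_update(particles, observation, likelihood, rng):
    weights = likelihood(observation, particles)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resampling
    return particles[idx]

rng = np.random.default_rng(1)
particles = rng.normal(loc=4.0, scale=0.8, size=1000)   # prior ensemble of water levels [m]

def skewed_likelihood(obs, x):
    # purely illustrative asymmetric (non-Gaussian) likelihood shape
    sigma = np.where(x < obs, 0.2, 0.5)
    return np.exp(-0.5 * ((x - obs) / sigma) ** 2)

posterior = pf_update(particles, observation=4.6, likelihood=skewed_likelihood, rng=rng)
print(round(particles.mean(), 2), "->", round(posterior.mean(), 2))
```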
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural practices have been shaping the face of the earth, and today around one third of the ice-free land mass consists of cropland and pastures. While agriculture is necessary for our survival, its intensity has caused many negative externalities, such as enormous freshwater consumption, the loss of forests and biodiversity, greenhouse gas emissions as well as soil erosion and degradation. Some of these externalities can potentially be ameliorated by careful allocation of crops and cropping practices, while at the same time the state of these crops has to be monitored in order to assess food security. Modern satellite-based earth observation can be an adequate tool to quantify the abundance of crop types, i.e., to produce spatially explicit crop type maps. The resources to do so, in terms of input data, reference data and classification algorithms, have been constantly improving over the past 60 years, and we now live in a time where fully operational satellites produce freely available imagery with often less than monthly revisit times at high spatial resolution. At the same time, classification models have been constantly evolving, from distribution-based statistical algorithms via machine learning to the now ubiquitous deep learning.
In this environment, we used an explorative approach to advance the state of the art of crop classification. We conducted regional case studies, focused on the study region of the Eifelkreis Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique role in the regional agricultural system and because of their specific phenological characteristics, we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study region by drawing polygons based on high-resolution aerial imagery, and used these in conjunction with RapidEye imagery to produce high-resolution maize maps with a random forest classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual analysis, especially in terms of autocorrelation. As an end result, we were able to show that, in spite of the severe limitations introduced by the restricted acquisition windows due to cloud coverage, high-quality maps could be produced for two years, and the regional development of maize cultivation could be quantified.
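The following minimal sketch outlines the kind of pixel-wise random-forest classification with subsequent Gaussian smoothing described above; the synthetic band values, thresholds and parameters are invented stand-ins and not the study's data or exact workflow.

```python
# Hedged sketch (invented data, not the study's workflow): pixel-wise random-forest
# classification of a multispectral image into maize / non-maize, followed by a
# Gaussian blur of the class-probability map before thresholding.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
h, w, n_bands = 64, 64, 5
image = rng.random((h, w, n_bands))               # stand-in for RapidEye bands
labels = (image[..., 3] > 0.6).astype(int)        # stand-in for digitized reference polygons

X = image.reshape(-1, n_bands)
y = labels.ravel()
train = rng.random(X.shape[0]) < 0.1              # sparse set of training pixels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])
prob = clf.predict_proba(X)[:, 1].reshape(h, w)

prob_smoothed = gaussian_filter(prob, sigma=1.5)  # spatial smoothing of probabilities
maize_map = prob_smoothed > 0.5
print("maize share:", round(maize_map.mean(), 3))
```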
In the second case study, we used these spatially explicit datasets to link the expansion of biogas-producing units with the extended maize cultivation in the area. In a next step, we overlaid the maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type classification in environmental protection by producing maps of potential soil hazards, which can be used by local stakeholders to reallocate certain crop types to locations with less associated risk.
In our third case study, we used Sentinel-1 data as input imagery and official statistical records as maize reference data, and were able to produce consistent modeling input data for four consecutive years. Using these datasets, we could train and validate different models in spatially and temporally independent random subsets, with the goal of assessing model transferability. We were able to show that state-of-the-art deep learning models such as UNET performed significantly better than conventional models like random forests when the model was validated in a different year or a different regional subset. We highlighted and discussed the implications for modeling robustness, and the potential usefulness of deep learning models in building fully operational global crop classification models.
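To illustrate the idea of validating in spatially and temporally independent subsets, the sketch below uses grouped cross-validation with the acquisition year as group label; the data are random stand-ins and the model choice is arbitrary, so this is only a schematic of such a transferability assessment, not the study's setup.

```python
# Hedged sketch (invented data): out-of-year validation via grouped cross-validation,
# so that no sample from the validation year is seen during training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.random((800, 6))               # stand-in for Sentinel-1 time-series features
y = rng.integers(0, 2, 800)            # stand-in maize / non-maize labels
years = rng.integers(2016, 2020, 800)  # group label: acquisition year

scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                         X, y, cv=GroupKFold(n_splits=4), groups=years)
print("out-of-year accuracies:", np.round(scores, 2))
```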
We were able to conclude that the first major barrier for global classification models is the reference data. Since most research in this area is still conducted with local field surveys, and only few countries have access to official agricultural records, more global cooperation is necessary to build harmonized and regionally stratified datasets. The second major barrier is the classification algorithm. While a lot of progress has been made in this area, the current proliferation of new types of deep learning models shows great promise but has not yet consolidated. A lot of research is still necessary to determine which models perform best and most robustly while remaining transparent and usable by non-experts, so that they can be applied effortlessly by local and global stakeholders.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of this is used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield in order to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes physiological and biochemical changes that depend on the severity and the duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and is thus one of the key technologies used in precision agriculture. With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains regarding their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings, it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity.
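For orientation, one widely used simplified formulation of the CWSI mentioned above relates the measured canopy temperature to wet and dry reference temperatures (a generic textbook form; the exact variant used in the thesis may differ):

```latex
\mathrm{CWSI} = \frac{T_{\mathrm{canopy}} - T_{\mathrm{wet}}}{T_{\mathrm{dry}} - T_{\mathrm{wet}}},
\qquad 0 \le \mathrm{CWSI} \le 1
```

Here T_wet and T_dry denote the temperatures of a fully transpiring and a non-transpiring reference surface, respectively; values near 1 indicate strong water stress.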
However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture will be challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for the application in irrigation scheduling, (ii) lack of current TIR satellite missions with suitable spectral and spatial resolution, (iii) lack of appropriate data processing schemes (including atmosphere correction and temperature emissivity separation) for hyperspectral TIR remote sensing at airborne- and satellite level.