In splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. either X -> G has a continuous linear left inverse or G -> E has a continuous linear right inverse. In the thesis at hand we deal with splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or the strong dual of one, by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces, including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular gives a positive answer to Grothendieck's problème des topologies in this setting. In addition, we give a complete characterization of the vanishing of the first derivative of the functor proj for tensorized PLH spectra if one of the PLH spaces E and X meets some nuclearity assumptions. To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces as well as to the smooth and square integrable parameter dependence problem. In conclusion, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.
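For reference, the splitting property in the first sentence can be written out in display form, restating the abstract's definition:

\[
0 \longrightarrow X \xrightarrow{\; f \;} G \xrightarrow{\; g \;} E \longrightarrow 0
\ \text{splits}
\iff
\exists\, \ell \in L(G,X) : \ell \circ f = \operatorname{id}_X
\iff
\exists\, r \in L(E,G) : g \circ r = \operatorname{id}_E,
\]

where L(·,·) denotes the space of continuous linear operators.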
Chapter 2: Using data from the German Socio-Economic Panel, this study examines the relationship between immigrant residential segregation and immigrants' satisfaction with the neighborhood. The estimates show that immigrants living in segregated areas are less satisfied with the neighborhood. This is consistent with the hypothesis that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Our result holds true even when controlling for other influences such as household income and quality of the dwelling. It also holds true in fixed effects estimates that account for unobserved time-invariant influences. Chapter 3: Using survey data from the German Socio-Economic Panel, this study shows that immigrants living in segregated residential areas are more likely to report discrimination because of their ethnic background. This applies both to segregated areas where most neighbors are immigrants from the same country of origin as the surveyed person and to segregated areas where most neighbors are immigrants from other countries of origin. The results suggest that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Chapter 4: Using data from the German Socio-Economic Panel (SOEP) and administrative data from 1996 to 2009, I investigate whether right-wing extremism of German residents is affected by the ethnic concentration of foreigners living in the same residential area. My results show a positive but insignificant relationship between ethnic concentration at the county level and the probability of extreme right-wing voting behavior for West Germany. However, due to potential endogeneity issues, I additionally instrument the share of foreigners in a county with the share of foreigners in each federal state (following an approach of Dustmann and Preston 2001). I find evidence for the interethnic contact theory, which predicts a negative relationship between the foreigners' share and right-wing voting. Moreover, I analyze the moderating role of education and the influence of cultural traits on this relationship. Chapter 5: Using data from the Socio-Economic Panel from 1998 to 2009 and administrative data on regional ethnic diversity, I show that ethnic diversity significantly inhibits people's political interest and participation in political organizations in West Germany. People seem to withdraw from political participation if exposed to more ethnic diversity, which is particularly relevant with respect to the ongoing integration process of the European Union and the increasing transfer of legislative power from the national to the European level. The results are robust if an instrumental variable strategy suggested by Dustmann and Preston (2001) is used to take into account that ethnic diversity measured at a local spatial level could be endogenous due to residential sorting. Interestingly, participation in non-political organizations is positively affected by ethnic diversity once selection bias is corrected for.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. In subsequent work, this analysis could be used to establish a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example illustrating the practical use of probabilities for scan statistics arises in epidemiology: let n patients arrive at a clinic in d = 365 days, each patient arriving on any given day with probability 1/d and all patients independently of each other. Knowing the probability that there exist 3 adjacent days on which more than k patients arrive in total helps to decide, after observing data, whether there is a cluster that we would not suspect to have occurred randomly and for which we suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we begin with an analysis of the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables. To assess how sharp the derived accuracy bounds are, they can be compared in examples with rigorous upper and lower bounds obtained by interval-arithmetic computations.
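As a rough illustration of the epidemiological example above (not part of the thesis itself), the scan-statistic probability can be approximated by Monte Carlo simulation; all parameter values here are arbitrary:

import numpy as np

def scan_exceedance_prob(n=500, d=365, window=3, k=12, reps=20_000, seed=1):
    """Monte Carlo estimate of P(some 'window' adjacent days receive
    more than k patients in total) when n patients arrive uniformly
    and independently on d days (multinomial model)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        counts = rng.multinomial(n, np.full(d, 1.0 / d))
        # sums over all runs of 'window' adjacent days
        window_sums = np.convolve(counts, np.ones(window, dtype=int), mode="valid")
        if window_sums.max() > k:
            hits += 1
    return hits / reps

print(scan_exceedance_prob())

The algorithms discussed in the thesis compute such probabilities exactly; the simulation merely shows which quantity is being targeted.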
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a "designated statistic" that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the thus obtained family of confidence regions indexed by their confidence level is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is then applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations 1. between the sets of lower confidence bounds, 2. between the sets of pairs of comparable lower confidence bounds, and 3. between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influences of person-related variables; the transfer of knowledge is another crucial domain in educational research. In this dissertation I investigated these three processes through secondary analyses, which accommodate the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: The first meta-analysis reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second meta-analysis focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third meta-analysis deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer: knowledge types are interrelated, motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
Laboratory landslide experiments enable the observation of specific properties of these natural hazards. However, such observations are limited by the traditional techniques: the frequently used high-speed video analysis and wired sensors (e.g. for displacement). These techniques have the drawback that either only the surface and 2D profiles can be observed, or the wires confine the motion behaviour. In contrast, an unconfined observation of the total spatiotemporal dynamics of landslides is needed for an adequate understanding of these natural hazards.
The present study introduces an autonomous and wireless probe to characterize motion features of single clasts within laboratory-scale landslides. The Smartstone probe is based on an inertial measurement unit (IMU) and records acceleration and rotation at a sampling rate of 100 Hz. The recording ranges are ±16 g (accelerometer) and ±2000° s−1 (gyroscope). The plastic tube housing is 55 mm long with a diameter of 10 mm. The probe is controlled, and data are read out, via active radio frequency identification (active RFID) technology. This technique allows the probe to work under low-power conditions, enabling the use of small button cell batteries and minimizing its size.
Using the Smartstone probe, the motion of single clasts (gravel size, median particle diameter d50 of 42 mm) within approx. 520 kg of a uniformly graded pebble material was observed in a laboratory experiment. Single pebbles were equipped with probes and placed embedded in or superficially on the material. In a first analysis step, the data of one pebble are interpreted qualitatively, allowing for the determination of different transport modes, such as translation, rotation and saltation. In a second step, the motion is quantified by means of derived movement characteristics: the analysed pebble moves mainly in the vertical direction during the first motion phase, with a maximum vertical velocity of approx. 1.7 m s−1. A strong acceleration peak of approx. 36 m s−2 is interpreted as a pronounced hit and leads to a complex rotational-motion pattern. In a third step, the displacement is derived and amounts to approx. 1.0 m in the vertical direction. The deviation compared to laser distance measurements was approx. −10 %. Furthermore, a full 3D spatiotemporal trajectory of the pebble is reconstructed and visualized, supporting the interpretations. Finally, it is demonstrated that multiple pebbles can be analysed simultaneously within one experiment. Compared to other observation methods, Smartstone probes allow for the quantification of internal movement characteristics and, consequently, a motion sampling in landslide experiments.
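As a schematic illustration of how displacement can be derived from IMU records (the actual processing in the study is more involved, e.g. regarding orientation tracking and gravity removal, which are assumed to be already done here), a simple double integration of a 1-D acceleration signal looks as follows:

import numpy as np

def integrate_acceleration(acc, fs=100.0):
    """Integrate a 1-D acceleration signal (m/s^2, sampled at fs Hz)
    twice, via the trapezoidal rule, to velocity (m/s) and
    displacement (m). Assumes gravity and sensor bias are removed."""
    dt = 1.0 / fs
    vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2.0 * dt)))
    disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2.0 * dt)))
    return vel, disp

# toy check: 1 s of free fall recorded at 100 Hz
vel, disp = integrate_acceleration(np.full(100, -9.81))
print(vel[-1], disp[-1])   # approx. -9.7 m/s and -4.8 m

In practice such dead reckoning drifts quickly, which is one reason the study validates derived displacements against laser distance measurements.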
The claim that a thinker concerned with the development of a totalizing metaphysical system can be a literary philosopher may seem hard to justify. For Arthur Schopenhauer, the entire world is the representation or appearance of the will to life, the metaphysical essence of all being. And yet, because this will must always appear and always take form, it is only formally that we can grasp it, only in concrete instances. For this reason, the poet “shows us how the will behaves under the influence of motives and reflection. He presents us this for the most part in the most perfect of its appearances” (WWRII, 310). In this paper, I will argue that Schopenhauer founds a philosophical approach which comes to rest on literary foundations and which alights at key moments on the strength of his literary as well as his philosophical forebears. I will do this by looking at how Schopenhauer treats the concept of fate. It is my contention that the fatalism inherent in Schopenhauer’s ethics is a direct result of a fundamentally literary approach to the concept. This enables us to conceive of fate from a literary and not solely from a metaphysical standpoint. I will begin by outlining the place of the literary in Schopenhauer’s philosophy, including a brief account of those writers whose work he incorporates into his analysis, and then I will demonstrate its relation to his fatalism.
The cold pressor test (CPT) elicits strong cardiovascular reactions via activation of the sympathetic nervous system (SNS), yielding subsequent increases in heart rate (HR) and blood pressure (BP). However, little is known about how exposure to the CPT affects cardiac ventricular repolarization. Twenty-eight healthy males underwent both a bilateral foot CPT and a warm water (WW) control condition on two separate days, one week apart. During the pre-stress baseline and stress induction, cardiovascular signals (ECG lead II, Finometer BP) were monitored continuously. Salivary cortisol and subjective stress ratings were assessed intermittently. Corrected QT (QTc) interval length and T-wave amplitude (TWA) were assessed for each heartbeat and subsequently aggregated individually over the baseline and stress phases, respectively. The CPT increased QTc interval length and elevated the TWA. Stress-induced changes in cardiac repolarization were only partly and weakly correlated with cardiovascular and cortisol stress reactivity. Besides its already well-established effects on cardiovascular, endocrine, and subjective responses, the CPT thus also affects cardiac repolarization, prolonging the QTc interval and elevating the TWA. CPT effects on cardiac repolarization share little variance with the other indices of stress reactivity, suggesting a potentially incremental value of this parameter for understanding psychobiological adaptation to acute CPT stress.
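The abstract does not state which heart-rate correction was applied to the QT interval; for orientation, a widely used choice is Bazett's formula,

\[
\mathrm{QTc} = \frac{\mathrm{QT}}{\sqrt{\mathrm{RR}}},
\]

with QT and RR interval lengths in seconds; other corrections (e.g. Fridericia's QT/RR^(1/3)) are in common use as well.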
Present-day air quality is known through dense monitoring and extensive pollution control mechanisms. In contrast, knowledge of historical pollution, particularly before the industrial revolution, is accessible only through occasional reports of singular local events and through natural archives such as ice or sediment cores that record global-scale pollution. However, the regular local to regional pollution that most affects human life is hardly known. Historical sciences have argued both for and against significant air pollution in and around historic cities and manufacturing sites. For the Roman era, it has been hypothesized that air quality played a role in several patterns of action of the period. However, to the author's knowledge, there are no quantitative studies of Roman emissions. Using the results of modern experimental archaeology, this study attempts to quantify the emissions from Roman pottery kilns and their impact on surrounding human settlements. It is shown that although the pollution did not reach today's limits, it must have approached levels known to cause adverse health effects. A series of additional test simulations have been conducted to determine how these first results might be improved in the future.
"Culture", in addition to its ethnic signification, can also express various groups' and communities' political and economic situation in society. As well as signifying the accommodation of ethnic diversity, the integration of dissimilar cultures in South Africa has to do with both the former oppressors and the formerly oppressed coming to terms with the oppression of the past, and with the equitable distribution of material means. Constitutional and other legal means have been designed to facilitate a process of integration dealing with the abovementioned issues. Some of these measures will be looked at. The speaker will argue that the integration of different cultures in South Africa cannot and will not be achieved if the law is invoked, in a strong arm fashion, trying to concoct a melting pot. The law can do no more than aiding the facilitation of a process of consolidation as precondition to nation building. Deep-seated, cultural differences among various sections of the population cannot and should not be denied or simply thought away.
As the oldest genre in New Zealand literature written in English, poetry has always played a significant role in the country's literary debate and was generally considered to be an indicator of the country's cultural advancement. Throughout the 20th century, the question of home, of where it is and what it entails, became a crucial issue in discussing a distinct New Zealand sense of identity and in strengthening its independent cultural status. The establishment of a national sense of home was thus of primary concern, and poetry was regarded as the cultural marker of New Zealand's independence as a nation. In this politically motivated cultural debate, the writing of women was only considered on the margin, largely because their writing was held to be too personal and too intimately tied to daily life, especially domestic life, to contribute to a larger cultural statement. Such criticism built on gender role stereotypes, such as women's roles as mothers and housewives in the 1950s. The strong alignment of women with the home environment is not coincidental but a construct that was, and still is, predominantly shaped by white patriarchal ideology. However, it is precisely women's thorough investigation, both Pakeha and Maori, into the concept of home from within New Zealand's society that bears the potential for revealing a more profound relationship between actual social reality and the poetic imagination. The close reading of selected poems by Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm in this thesis reveals the ways in which New Zealand women of different backgrounds subvert, transcend and deconstruct such paradigms through their poetic imagination. Bethell, Stanley, Edmond and Sturm position their concepts of home at the crossroads between the public and the private realm. Their poems explore the correspondence between personal and national concerns and assess daily life against the backdrop of New Zealand's social development. Such complex socio-cultural interdependence has not received sufficient attention in literary criticism, largely because a suitable approach to capturing the complexity of this kind of interconnectedness was lacking. With Spaces of Overlap and Spaces of Mediation, this thesis presents two critical models that seek to break the tight critical frames in the assessment of poetic concepts of home. Both notions are based on a contextualised approach to the poetic imagination in relation to social reality and seek to carve out the concept of home in its interconnected patterns. Eventually, this approach helps to comprehend the ways in which women's intimate negotiations of home translate into moments of cultural insight and transcend the boundaries of the individual poets' concerns. The focus on women's (re)negotiations of home counteracts the traditionally male perspective on New Zealand poetry and provides a more comprehensive picture of New Zealand's cultural fabric. In highlighting the works of Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm, this thesis not only emphasises their individual achievements but makes clear that a traditional line of New Zealand women's poetry exists that has been neglected for far too long in the estimation of New Zealand's literary history.
Estimation, and therefore prediction -- in traditional statistics as well as in machine learning -- often encounters problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of the additional noise on the estimation procedure depend on the specific probability law for subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be re-thought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein. First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
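As a minimal illustration of survey-weighted estimation (much simpler than the survey-weighted mixed-model algorithm developed in the thesis), a linear model can be fitted by weighted least squares with the design weights as case weights:

import numpy as np

def survey_weighted_ols(X, y, w):
    """Solve the weighted normal equations (X'WX) beta = X'Wy,
    where w holds the survey (design) weights of the sampled units."""
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# toy data with hypothetical, unequal design weights w_i = 1/pi_i
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200)
w = rng.uniform(1.0, 10.0, size=200)
print(survey_weighted_ols(X, y, w))   # close to [1.0, 2.0]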
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework when deciding on the reliability of survey data. When the survey design is complex and an estimated total is required for a large number of variables, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied for the first time both theoretically and empirically.
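For context, in standard survey-sampling notation (not specific to this thesis) the design-based estimator of a finite population total and its theoretical variance read

\[
\hat{Y}_{\mathrm{HT}} = \sum_{i \in s} \frac{y_i}{\pi_i},
\qquad
\operatorname{Var}\bigl(\hat{Y}_{\mathrm{HT}}\bigr)
= \sum_{i \in U} \sum_{j \in U} \left(\pi_{ij} - \pi_i \pi_j\right) \frac{y_i}{\pi_i} \frac{y_j}{\pi_j},
\]

with sample s, population U, and first- and second-order inclusion probabilities π_i and π_ij; it is exactly such formulae, cumbersome under complex designs, that generalized variance functions are meant to circumvent.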
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
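A bare-bones sketch of the self-organizing map's classic online training rule (illustrative only; the thesis develops the algorithm, its link to Markov random fields, and the derived density estimator in much greater depth):

import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Online SOM: each sample pulls its best-matching unit (BMU)
    and the BMU's grid neighbours towards itself; learning rate and
    neighbourhood width shrink linearly over training."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = 1.0 - t / n_steps
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            grid_dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-grid_dist2 / (2 * (sigma0 * frac + 1e-3) ** 2))
            weights += lr0 * frac * nbh[:, None] * (x - weights)
            t += 1
    return weights.reshape(h, w, -1)

# usage: som = train_som(np.random.default_rng(1).normal(size=(500, 3)))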
This essay identifies a shared response to news media in poetry written over the past three decades by writers working in Chinese, Russian, and English. These poets often directly incorporate texts and images from news media into their work. Some scholars have argued that this tendency towards the collaging of texts derived from news and social media reflects a shift in poetic subjectivity. However, when seen from a comparative perspective, these and other cut-ups of news and social media are better understood as, on the one hand, an extension of a much longer tradition of literary and artistic responses to the news and, on the other, a renewal of that tradition in response to the intensification of the intertwined pressures of new media and globalization since the end of the Cold War and the rise of the Internet. The article identifies this shared response to media and globalization among a variety of examples in Chinese, Russian, and English, including Kirill Medvedev’s «Текст, посвященный трагическим событиям 11 сентября в Нью-Йорке» (“Text Devoted to the Tragic Events of September 11 in New York”); Stanislav Lvovsky’s «Чужими словами» (“In Other Words”); Dmitri Prigov’s «По материалам прессы» (“Based on Material from the Press”) and “ru.sofob (50 x 50)”; Lin Yaode’s 林燿德 “Er erba” 《二二八》(“February 28”), Hsia Yü 夏宇 and her collaborators’ group project “Huadiao huadiao huadiao” 《劃掉劃掉劃掉》 (“Cross It Out, Cross It Out, Cross It Out”), Yan Jun’s 顏峻 2003 multi-media video performance “Fan dui yiqie you zuzhi de qipian” 《反对一切有组织的欺骗》 (“Against All Organized Deception”); online video poetry produced in response to the 2008 Sichuan earthquake; and Brian Kim Stefans’s mashup of “New York Times” articles with texts from the Situationist International. On the one hand, these texts operate between various media and art forms: between poetry and contemporary art, music, journalism, and social media, between the print newspaper and digital file, between the webpage and live performance, and between image and text. But on the other hand, and inextricably, they also operate within global information networks. They are better understood as addressing not the transformation of the poetic subject but the undoing of the boundaries of poetry and of the concept of a nationally defined literature.
Many real-life phenomena, such as computer systems, communication networks, manufacturing systems, supermarket checkout lines as well as structural military systems, can be represented by means of queueing models. In such models, a controller may considerably improve the system's performance by reducing queue lengths, increasing the throughput, or diminishing the overhead, whereas in the absence of a controller the system behavior may become quite erratic, exhibiting periods of high load and long queues followed by periods during which the servers remain idle. The theoretical foundations of controlled queueing systems are laid in the theory of Markov, semi-Markov and semi-regenerative decision processes. In this thesis, the essential work consists in designing controlled queueing models and investigating their optimal control properties for application in modern telecommunication systems, which must satisfy the growing demands for quality of service (QoS). For two types of optimization criteria (the model without penalties and the model with set-up costs), a class of controlled queueing systems is defined. The general queue that forms this class is characterized by a Markov additive arrival process and heterogeneous phase-type service time distributions. We show that for these queueing systems the structural properties of optimal control policies, e.g. monotonicity properties and threshold structure, are preserved. Moreover, we show that these systems possess specific properties, e.g. the dependence of optimal policies on the arrival and service statistics. In order to use controlled stochastic models in practice, a quick and effective method to find optimal policies is necessary. We present an iteration algorithm which can be successfully used to find an optimal solution in the case of a large state space.
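To make the threshold structure concrete, here is a small value-iteration sketch for a far simpler model than those treated in the thesis: a discounted, uniformized M/M/1 admission-control MDP with illustrative parameters. The optimal policy that emerges is of threshold type (accept arrivals only below some queue length):

import numpy as np

def admission_policy(lam=0.8, mu=1.0, alpha=0.1, hold=1.0, reject=5.0,
                     N=50, iters=10_000, tol=1e-10):
    """Value iteration for queue states x = 0..N; on each arrival the
    controller accepts (queue grows) or rejects at a fixed cost, and
    a holding cost accrues per waiting customer."""
    x = np.arange(N + 1)
    up = np.minimum(x + 1, N)       # state after accepting an arrival
    down = np.maximum(x - 1, 0)     # state after a service completion
    V = np.zeros(N + 1)
    for _ in range(iters):
        Vnew = (hold * x + lam * np.minimum(V[up], reject + V)
                + mu * V[down]) / (alpha + lam + mu)
        if np.max(np.abs(Vnew - V)) < tol:
            V = Vnew
            break
        V = Vnew
    return V[up] <= reject + V      # True = accept in state x

print(admission_policy().astype(int))   # 1s up to a threshold, then 0s
                                        # (state N is a truncation artifact)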
The study analyzes the long-term trends (1998–2019) of concentrations of the air pollutants ozone (O3) and nitrogen oxides (NOx) as well as meteorological conditions at forest sites in German midrange mountains to evaluate changes in O3 uptake conditions for trees over time at a plot scale. O3 concentrations did not show significant trends over the course of 22 years, unlike NO2 and NO, whose concentrations decreased significantly since the end of the 1990s. Temporal analyses of meteorological parameters found increasing global radiation at all sites and decreasing precipitation, vapor pressure deficit (VPD), and wind speed at most sites (temperature did not show any trend). A principal component analysis revealed strong correlations between O3 concentrations and global radiation, VPD, and temperature. Examination of the atmospheric water balance, a key parameter for O3 uptake, identified some unusually hot and dry years (2003, 2011, 2018, and 2019). With the help of a soil water model, periods of plant water stress were detected. These periods were often in synchrony with periods of elevated daytime O3 concentrations and usually occurred in mid and late summer, but occasionally also in spring and early summer. This suggests that drought protects forests against O3 uptake and that, in humid years with moderate O3 concentrations, the O3 flux was higher than in dry years with higher O3 concentrations.
Influence of Ozone and Drought on Tree Growth under Field Conditions in a 22 Year Time Series (2022)
Studying the effect of surface ozone (O3) and water stress on tree growth is important for planning sustainable forest management and for forest ecology. In the present study, a 22-year time series (1998–2019) of basal area increment (BAI) and fructification severity of European beech (Fagus sylvatica L.) and Norway spruce (Picea abies (L.) H.Karst.) at five forest sites in Western Germany (Rhineland-Palatinate) was investigated to evaluate how growth correlates with drought and with stomatal O3 fluxes (PODY) computed with an hourly threshold of uptake (Y) representing the detoxification capacity of trees (POD1, with Y = 1 nmol O3 m−2 s−1). Between 1998 and 2019, POD1 declined by on average 0.31 mmol m−2 year−1. The BAI showed no significant trend at any site, except in Leisel, where a slight decline was observed over time (−0.37 cm2 per year, p < 0.05). A random forest analysis showed that the soil water content and the daytime O3 mean concentration were the best predictors of BAI at all sites. The highest mean score of fructification was observed during the dry years, while low or no fructification was observed in the most humid years. Combined effects of drought and O3 pollution mostly influence the tree growth decline for European beech and Norway spruce.
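For orientation, the phytotoxic ozone dose above a flux threshold Y is conventionally defined (e.g. in the CLRTAP modelling framework; the abstract does not spell out the exact variant used) as the growing-season sum of hourly stomatal flux exceedances,

\[
\mathrm{POD}_Y = \sum_h \max\bigl(F_{\mathrm{st},h} - Y,\, 0\bigr) \cdot \frac{3600}{10^6}
\quad [\mathrm{mmol\ m^{-2}}],
\]

where F_st,h is the hourly mean stomatal O3 flux in nmol m−2 s−1, so Y = 1 nmol m−2 s−1 yields the POD1 used above.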
Formulations of macrocyclic lactone anthelmintics such as moxidectin are regularly administered to sheep to combat parasites. A disadvantage of these pharmaceuticals is their side effects on non-target organisms when they enter the environment. Little is known about anthelmintic effects on plant reproduction and whether the effects depend on environmental factors. For ecological and methodological reasons, we aimed to test whether temperature affects the efficacy of a common moxidectin-based formulation on seed germination. We carried out a germination experiment including three typical species of temperate European grasslands (Centaurea jacea, Galium mollugo, Plantago lanceolata). We applied three temperature regimes (15/5, 20/10, 30/20°C) and a four-level dilution series (1:100–1:800) of formulated moxidectin (i.e., Cydectin oral drench). These solutions represent seed–anthelmintic contact in the digestive tract of sheep shortly after deworming. In addition, a control was carried out with purified water only. We regularly counted emerging seedlings and calculated final germination percentage, mean germination time and synchrony of germination. Formulated moxidectin significantly reduced the percentage, speed and synchrony of germination. A 1:100 dilution of the formulation reduced germination percentage by a quarter and increased mean germination time by six days compared to the control. Temperature moderated the effects of the anthelmintic drug on germination in all response variables and all species, but with different patterns and magnitudes (significant anthelmintic × temperature × species interactions). In all response variables, the two more extreme temperature regimes (15/5, 30/20°C) led to the strongest effects of formulated moxidectin. With respect to germination percentage, G. mollugo was more sensitive to formulated moxidectin at the warmest temperature regime, whereas P. lanceolata showed the highest sensitivity at the coldest regime. This study shows that it is important to consider the temperature dependence of the effects of pharmaceuticals on seed germination when conducting standardised germination experiments.
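The germination metrics mentioned are standard in seed biology; mean germination time, for instance, is conventionally computed as (the abstract does not state the exact formulas used)

\[
\mathrm{MGT} = \frac{\sum_i n_i t_i}{\sum_i n_i},
\]

where n_i is the number of seeds newly germinated at counting time t_i (in days).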
In this study, candidate loci for periodic catatonia (SCZD10, OMIM #605419) on chromosomes 15q15 and 22q13.33 have been fine mapped and investigated. Previously, several studies found evidence for a major susceptibility locus on chromosome 15q15 and a further potential locus on 22q13.33, pointing to genetic heterogeneity. Fine mapping was done in our multiplex families through linkage and mutational analysis using genomic markers selected from public databases. Positional candidate genes such as SPRED1 and BRD1, as well as ultra-conserved elements, were investigated by direct sequencing in these families. The results narrow down the susceptibility locus on chromosome 15q14-15q15.1 to a region between markers D15S1042 and D15S968 and exclude SPRED1 and the ultra-conserved elements as susceptibility candidates. Fine mapping in two chromosome 22q13.33-linked families showed that the recombination events place the disease-causing gene in a telomeric ~577 kb interval; investigation of SNP rs138880 revealed an A-allele in the affected person, thereby excluding BRD1 and confirming MLC1 as the candidate gene for periodic catatonia.
Early life adversity (ELA) is associated with a higher risk for diseases in adulthood. Changes in the immune system have been proposed to underlie this association. Although higher levels of inflammation and immunosenescence have been reported, data on cell-specific immune effects are largely absent. In addition, stress systems and health behaviors are altered in ELA, which may contribute to the generation of the "ELA immune phenotype". In this thesis, we have investigated the ELA immune phenotype on a cellular level and asked whether it is an indirect consequence of changes in behavior or stress reactivity. To address these questions, the EpiPath cohort was established, consisting of 115 young adults with or without ELA. ELA participants had experienced separation from their parents in early childhood and were subsequently adopted, which is a standard model for ELA, whereas control participants grew up with their biological parents. At a first visit, blood samples were taken for analysis of epigenetic markers and immune parameters. A selection of the cohort underwent a standardized laboratory stress test (SLST). Endocrine, immune, and cardiovascular parameters were assessed at several time points before and after stress. At a second visit, participants underwent structured clinical interviews and filled out psychological questionnaires. We observed a higher number of activated T cells in ELA, measured by HLA-DR and CD25 expression. Neither cortisol levels nor health-risk behaviors explained the observed group differences. Besides a trend towards higher numbers of CCR4+CXCR3-CCR6+ CD4 T cells in ELA, relative numbers of immune cell subsets in circulation were similar between groups. No difference was observed in telomere length or in methylation levels of age-related CpGs in whole blood. However, we found a higher expression of senescence markers (CD57) on T cells in ELA. In addition, these cells had an increased cytolytic potential. A mediation analysis demonstrated that cytomegalovirus infection, an important driving force of immunosenescence, largely accounted for the elevated CD57 expression. The psychological investigations revealed that after adoption, family conditions appeared to have been similar to those of the controls. However, ELA participants scored higher on a depression index and on chronic stress, and lower on self-esteem. Psychological, endocrine, and cardiovascular parameters responded significantly to the SLST but were largely similar between the two groups. Only in a smaller subset of groups matched for gender, BMI, and age did the cortisol response seem to be blunted in ELA participants. Although we found small differences in the methylation level of the GR promoter, GR sensitivity and GR mRNA expression levels, as well as expression of the GR target genes FKBP5 and GILZ, were similar between groups. Taken together, our data suggest an elevated state of immune activation in ELA, in which particularly T cells are affected. Furthermore, we found higher levels of T cell immunosenescence in ELA. Our data suggest that ELA may increase the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence. Importantly, we found no evidence of HPA axis dysregulation in participants exposed to ELA in the EpiPath cohort. Thus, the observed immune phenotype does not seem to be secondary to alterations in the stress system or to health-risk behaviors, but rather a primary effect of early life programming on immune cells. Longitudinal studies will be necessary to further dissect cause from effect in the development of the ELA immune phenotype.
Income is one of the key indicators to measure regional differences, individual opportunities, and inequalities in society. In Germany, the regional distribution of income is a central concern, especially regarding persistent East-West, North-South, or urban-rural inequalities.
Effective local policies and institutions require reliable data and indicators on regional inequality. However, its measurement faces severe data limitations: inconsistencies in the existing microdata sources yield an inconclusive picture of regional inequality. Survey data provide a wide range of individual and household information but lack top incomes, whereas tax data contain the most reliable income records but offer only a limited range of the socio-demographic variables essential for income analysis. In addition, information on the long-term evolution of the income distribution at the small-scale level is scarce.
In this context, this thesis evaluates regional income inequality in Germany from various perspectives and embeds three self-contained studies in Chapters 3, 4, and 5, which present different data integration approaches. The first chapter motivates this thesis, while the second chapter provides a brief overview of the theoretical and empirical concepts as well as the datasets, highlighting the need to combine data from different sources.
Chapter 3 tackles the issue of poor coverage of top incomes in surveys, also referred to as the 'missing rich' problem, which leads to severe underestimation of income inequality. At the regional level this shortcoming is even more pronounced due to small regional sample sizes. Based on reconciled tax and survey data, this chapter therefore proposes a new multiple imputation top income correction approach that, unlike previous research, focuses on the regional rather than the national level. The findings indicate that inequality between and within the regions is much larger than previously understood, with the magnitude of the adjustment depending on the federal states' level of inequality in the tail.
To increase the potential of the tax data for income analysis and to overcome the lack of socio-demographic characteristics, Chapter 4 enriches the tax data with information on education and working time from survey data. For that purpose, a simulation study evaluates missing data methods and performant prediction models, finding that Multinomial Regression and Random Forest are the most suitable methods for the specific data fusion scenario. The results indicate that data fusion approaches broaden the scope for regional inequality analysis based on cross-sectionally enhanced tax data.
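One common flavor of such a top income correction, shown here purely as an illustration and not necessarily the exact procedure of Chapter 3, replaces the top of the survey income distribution with draws from a Pareto tail whose shape parameter would be calibrated on tax data:

import numpy as np

def pareto_top_correction(income, alpha, y0, seed=0):
    """Replace survey incomes at or above the threshold y0 with draws
    from a Pareto(alpha) tail, e.g. with alpha estimated from tax
    records; a sketch of a 'missing rich' adjustment."""
    rng = np.random.default_rng(seed)
    income = np.asarray(income, dtype=float).copy()
    top = income >= y0
    # inverse-CDF sampling: y0 * U**(-1/alpha) is Pareto(alpha) above y0
    income[top] = y0 * rng.uniform(size=top.sum()) ** (-1.0 / alpha)
    return income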
Shifting from a cross-sectional to a longitudinal perspective on regional income inequality, Chapter 5 contributes to the currently relatively small body of literature dealing with the potential development of regional income disparities over time. Regionalized dynamic microsimulations provide a powerful tool for the study of long-term income developments. Therefore, this chapter extends the microsimulation model MikroSim with an income module that accounts for the individual, household, and regional context. On this basis, the potential dynamics in gender and migrant income gaps across the districts in Germany are simulated under scenarios of increased full-time employment rates and higher levels of tertiary education. The results show that the scenarios have regionally differing effects on inequality dynamics, highlighting the considerable potential of dynamic microsimulations for regional evidence-based policies. For the German case, the MikroSim model is well suited to analyze future regional developments and can be flexibly adapted for further specific research questions.
Background: Increasing exposure to engineered inorganic nanoparticles is actually taking place in both terrestrial and aquatic ecosystems worldwide. Although harmful effects of AgNP on the soil bacterial community are already known, information about the impact of the factors functionalization, concentration, exposure time, and soil texture on the expression of AgNP effects is still rare. Hence, in this study, three soils of different grain size were exposed for up to 90 days to bare and functionalized AgNP in concentrations ranging from 0.01 to 1.00 mg/kg soil dry weight. Effects on the soil microbial community were quantified by various biological parameters, including 16S rRNA gene, photometric, and fluorescence analyses.
Results: Multivariate data analysis revealed significant effects of AgNP exposure for all factors and factor combinations investigated. Analysis of the individual factors (silver species, concentration, exposure time, soil texture) in the unifactorial ANOVA explained the largest part of the variance relative to the error variance. In-depth analysis of factor combinations explained the variance even better. For the biological parameters assessed in this study, the combination of soil texture and silver species and the combination of soil texture and exposure time were the two most relevant factor combinations. The factor AgNP concentration contributed less to the effect expression than silver species, exposure time and the physico-chemical composition of the soil.
Conclusions: The factors functionalization, concentration, exposure time, and soil texture significantly impacted the effect expression of AgNP on the soil microbial community. Long-term exposure scenarios in particular are strongly needed for a reliable environmental impact assessment of AgNP exposure in various soil types.
The influence of the dopamine agonist Ritalin® on performance in a card sorting task involving a monetary reward component was tested in 43 healthy male participants. It was investigated whether Ritalin® would have differential behavioral effects as a function of the participants' parental bonding experiences and the personality variable "Novelty Seeking". When activity and performance accuracy were stimulated by monetary reward, Ritalin® reduced activity in response to reward and added to the reward-induced increase in performance accuracy. However, performance accuracy after the drug challenge was improved only in the low-care participants; in the high-care participants, it was, on the contrary, impaired. This observation suggests that the successful therapeutic administration of Ritalin® in ADHD may be influenced by early life parental care. Suggesting an association between the personality dimension of "Novelty Seeking" and the dopamine system, high "Novelty Seeking" scores correlated positively with sensitivity to the Ritalin® challenge.
Background: As digital mental health delivery becomes increasingly prominent, a solid evidence base regarding its efficacy is needed.
Objective: This study aims to synthesize evidence on the comparative efficacy of systemic psychotherapy interventions provided via digital versus face-to-face delivery modalities.
Methods: We followed PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for searching PubMed, Embase, Cochrane CENTRAL, CINAHL, PsycINFO, and PSYNDEX and conducting a systematic review and meta-analysis. We included randomized controlled trials comparing mental, behavioral, and somatic outcomes of systemic psychotherapy interventions using self- and therapist-guided digital versus face-to-face delivery modalities. The risk of bias was assessed with the revised Cochrane Risk of Bias tool for randomized trials. Where appropriate, we calculated standardized mean differences and risk ratios. We calculated separate mean differences for nonaggregated analysis.
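For reference, the two effect measures named in the Methods are conventionally defined as

\[
\mathrm{SMD} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},
\qquad
\mathrm{RR} = \frac{e_1 / n_1}{e_2 / n_2},
\]

with s_pooled the pooled standard deviation of the two groups and e_i/n_i the events and sample size in group i.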
Results: We screened 3633 references and included 12 articles reporting on 4 trials (N=754). Participants were youths with poor diabetic control, traumatic brain injuries, or increased risk behavior likelihood, and parents of youths with anorexia nervosa. A total of 56 outcomes were identified. Two trials delivered the digital intervention via videoconferencing, one via an interactive graphic interface, and one via a web-based program. In total, 23% (14/60) of risk of bias judgments were high risk, 42% (25/60) were some concerns, and 35% (21/60) were low risk. Due to heterogeneity in the data, meta-analysis was deemed inappropriate for 96% (54/56) of outcomes, which were interpreted qualitatively instead. Nonaggregated analyses of mean differences and CIs between delivery modalities yielded mixed results, with superiority of the digital delivery modality for 18% (10/56) of outcomes, superiority of the face-to-face delivery modality for 5% (3/56) of outcomes, equivalence between delivery modalities for 2% (1/56) of outcomes, and neither superiority of one modality nor equivalence between modalities for 75% (42/56) of outcomes. Consequently, for most outcome measures, no indication of superiority or equivalence regarding the relative efficacy of either delivery modality can be made at this stage. We further meta-analytically compared digital versus face-to-face delivery modalities for attrition (risk ratio 1.03, 95% CI 0.52-2.03; P=.93) and number of sessions attended (standardized mean difference –0.11; 95% CI –1.13 to –0.91; P=.83), finding no significant differences between modalities, while CIs falling outside the range of the minimal important difference indicate that equivalence cannot be determined at this stage.
Conclusions: Evidence on digital and face-to-face modalities for systemic psychotherapy interventions is largely heterogeneous, limiting conclusions regarding the differential efficacy of digital and face-to-face delivery. Nonaggregated and meta-analytic analyses did not indicate the superiority of either delivery condition. More research is needed to conclude if digital and face-to-face delivery modalities are generally equivalent or if—and in which contexts—one modality is superior to another.
In recent years, Islamic banking has been one of the fastest growing markets in the financial world. Even to German banks, Islamic finance is not as 'foreign' as one might think. Indeed, several banks are already operating so-called "Islamic windows" in various Arab countries. However, German banks are still reluctant to offer 'Islamic' products in Germany, despite the fact that approximately 3.5 million Muslims currently live there. Potential reasons for this reluctance include a widespread misunderstanding of Islamic banking in Germany and prevailing cultural prejudice towards Islam generally. The author seeks to address these concerns and to take an objective approach towards understanding the potential for Islamic banking in Germany. Legally, Islamic law cannot be the governing law of any contract in Germany. Therefore, the aim must be to draft contracts that are both enforceable under German law and consistent with the principles of Shari'a, the Islamic law. In this paper, the author gives a detailed legal analysis of the most common Islamic banking products and how they could be given effect under German law, while attempting to address widespread concerns about arbitration or parallel Shari'a courts. This publication is one of the first legal analyses of Islamic banking products in Germany. As such, its goal is not to be the final word, but rather to begin the conversation about potential problems and conflicts of Islamic banking in Germany that require further investigation.
Family firms play a crucial role in the DACH region (Germany, Austria, Switzerland). They are characterized by a long tradition, a strong connection to their region, and a well-established network. However, family firms also face challenges, especially in finding a suitable successor. Wealthy entrepreneurial families are increasingly opting to establish Single Family Offices (SFOs) as a solution to this challenge. An SFO takes on the management and protection of family wealth. Its goal is to secure and grow the wealth over generations. In Germany alone, there are an estimated 350 to 450 SFOs, with 70% of them established after the year 2000. However, research on SFOs is still in its early stages, particularly regarding the role of SFOs as firm owners. This dissertation explores SFOs through four quantitative empirical studies. The first study provides a descriptive overview of 216 SFOs from the DACH region. The findings reveal that SFOs exhibit a preference for investing in established companies and real estate; notably, only about a third of SFOs engage in investments in start-ups. Moreover, SFOs as a group are heterogeneous: categorizing them into three groups based on their relationship with the entrepreneurial family and the original family firm reveals significant differences in their asset allocation strategies. The subsequent studies in this dissertation leverage a hand-collected sample of 173 SFO-owned firms from the DACH region, carefully matched with 684 family-owned firms from the same region. The second study, focusing on financial performance, indicates that SFO-owned firms tend to exhibit comparatively poorer financial performance than family-owned firms. However, when members of the SFO-owning family hold positions on the supervisory or executive board of the firm, there is a notable improvement. The third study, concerning cash holdings, reveals that SFO-owned firms maintain a higher cash holding ratio than family-owned firms; notably, this effect is magnified when the SFO has divested its original family firm. Lastly, the fourth study, regarding capital structure, highlights that SFO-owned firms tend to display a higher long-term debt ratio than family-owned firms. This suggests that SFO-owned firms operate within a trade-off theory framework, similar to private equity-owned firms. Furthermore, this effect is stronger for SFOs that sold their original family firm. The outcomes of this research are poised to provide entrepreneurial families with a practical guide for effectively managing and leveraging SFOs as a strategic long-term instrument for succession and investment planning.
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
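Schematically, variance estimation with replicate weights works as follows; this is a generic sketch (the estimator, the replicate-weight construction and all numbers are stand-ins, not the HFCS specifics):

import numpy as np

def replicate_variance(estimator, y, w, rep_weights):
    """Re-evaluate the estimator under each replicate weight set and
    average the squared deviations from the full-sample estimate."""
    theta = estimator(y, w)
    reps = np.array([estimator(y, wr) for wr in rep_weights])
    return ((reps - theta) ** 2).mean()

wmean = lambda y, w: np.average(y, weights=w)   # example estimator
rng = np.random.default_rng(0)
y = rng.lognormal(mean=10.0, sigma=1.2, size=500)   # skewed, wealth-like
w = rng.uniform(500.0, 5000.0, size=500)
rep_w = w * rng.poisson(1.0, size=(200, 500))       # crude bootstrap replicates
print(wmean(y, w), replicate_variance(wmean, y, w, rep_w) ** 0.5)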
This dissertation is dedicated to the analysis of the stability of portfolio risk and the impact of European regulation introducing risk-based classifications for investment funds.
The first paper examines the relationship between portfolio size and the stability of mutual fund risk measures, presenting evidence for economies of scale in risk management. In a unique sample of 338 fund portfolios we find that the volatility of risk numbers decreases for larger funds. This finding holds for dispersion as well as tail risk measures. Further analyses across asset classes provide evidence for the robustness of the effect for balanced and fixed income portfolios. However, a size effect did not emerge for equity funds, suggesting that equity fund managers simply scale their strategy up as they grow. Analyses of the differences in risk stability between tail risk measures and volatilities reveal that smaller funds show higher discrepancies in that respect. In contrast to the majority of prior studies, which are based on ex-post time series risk numbers, this study contributes to the literature by using ex-ante risk numbers based on the actual asset holdings and de facto portfolio data.
The second paper examines the influence of European legislation regarding the risk classification of mutual funds. We conduct analyses on a set of worldwide equity indices and find that a strategy based on long-term volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), would lead to substantial variations in exposure, ranging from short phases of very high leverage to long periods of underinvestment that would be required to keep the risk classes. In some cases, funds would be forced to migrate to higher risk classes due to limited means of reducing volatility after crisis events. In other cases they might have to migrate to lower risk classes or increase their leverage to extreme levels. Overall, we find that if the SRRI creates a binding mechanism for fund managers, it will interfere substantially with the core investment strategy and may cause substantial deviations from it. Furthermore, due to the forced migrations, the SRRI degenerates into a passive indicator.
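For context, the SRRI maps a fund's annualized return volatility to one of seven risk classes; a sketch using the volatility bands published in the CESR guidelines (CESR/10-673) could look like this:

def srri_class(ann_vol):
    """Map annualized volatility (e.g. 0.12 for 12 %) to the SRRI
    risk class 1-7 via the CESR/10-673 volatility bands."""
    bands = [0.005, 0.02, 0.05, 0.10, 0.15, 0.25]   # upper bounds, classes 1-6
    for cls, upper in enumerate(bands, start=1):
        if ann_vol < upper:
            return cls
    return 7

print(srri_class(0.12))   # a fund with 12 % volatility falls into class 5

Keeping the class then means keeping realized volatility inside such a band, which is the binding mechanism examined in the paper.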
The third paper examines the impact of this volatility-based fund classification on portfolio performance. Using historical data on equity indices, we first find that a strategy based on long-term portfolio volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), yields better Sharpe ratios (SRs) and buy-and-hold returns (BHRs) for the investment strategies matching the risk classes. Accounting for the Fama-French factors reveals no significant alphas for the vast majority of the strategies. In our simulation study, where volatility is modelled with a GJR(1,1) model, we find no significant difference in mean returns but significantly lower SRs for the volatility-based strategies. These results are confirmed in robustness checks using alternative models and timeframes. Overall, we present evidence suggesting that neither the higher leverage induced by the SRRI nor the potential protection in downside markets pays off on a risk-adjusted basis.
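A minimal simulation sketch of a GJR(1,1) volatility process of the kind used in the study's simulation design; the parameter values are illustrative placeholders, not the paper's estimates.

```python
import numpy as np

def simulate_gjr(n, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.88, seed=0):
    """Simulate returns eps[t] = sigma[t] * z[t] with
    sigma2[t] = omega + (alpha + gamma*(eps[t-1] < 0)) * eps[t-1]**2
                + beta * sigma2[t-1]
    (illustrative parameters, not those of the paper)."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    # start at the unconditional variance (symmetric innovations assumed)
    sigma2 = np.full(n, omega / (1 - alpha - gamma / 2 - beta))
    for t in range(1, n):
        sigma2[t] = (omega + (alpha + gamma * (eps[t-1] < 0)) * eps[t-1]**2
                     + beta * sigma2[t-1])
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps, np.sqrt(sigma2)
```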
The discretization of optimal control problems governed by partial differential equations typically leads to large-scale optimization problems. We consider flow control involving the time-dependent Navier-Stokes equations as the state equation, which exhibits exactly this property. In order to avoid the difficulties of dealing with large-scale (discretized) state equations during the optimization process, the number of state variables can be reduced by employing a reduced order modelling technique. Using the snapshot proper orthogonal decomposition (POD) method, one obtains a low-dimensional model for the computation of an approximate solution to the state equation. In fact, a small number of POD basis functions often suffices to obtain a satisfactory level of accuracy in the reduced order solution. However, the small number of degrees of freedom in a POD based reduced order model also constitutes its main weakness for optimal control purposes. Since a single reduced order model is based on the solution of the Navier-Stokes equations for one specified control, it may become inadequate when the control (and consequently the actual corresponding flow behaviour) is altered, implying that the range of validity of a reduced order model is, in general, limited. Solving the control problem on the basis of one single reduced order model is therefore likely to produce unreliable reduced order solutions. To escape this dilemma, we propose a trust-region proper orthogonal decomposition (TRPOD) approach. By embedding the POD based reduced order modelling technique into a trust-region framework with general model functions, we obtain a mechanism for updating the reduced order models during the optimization process, enabling them to represent the flow dynamics as altered by the control. In fact, a rigorous convergence theory for the TRPOD method is obtained, which justifies this procedure also from a theoretical point of view. Benefiting from the trust-region philosophy, the TRPOD method saves considerable computational work during the control problem solution, since the original state equation only has to be solved when the model function in the trust-region framework is updated; the optimization process itself is based entirely on reduced order information.
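The snapshot POD step itself can be illustrated in a few lines: collect state snapshots as columns, take a singular value decomposition, and truncate by captured energy. The 99.9% energy threshold below is a common convention, not necessarily the one used in the thesis.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Snapshot POD: `snapshots` holds one state vector per column.
    Returns the leading left singular vectors that capture the
    requested fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    captured = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(captured, energy)) + 1  # number of modes kept
    return U[:, :r]
```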
Some of the largest firms in the DACH region (Germany, Austria, Switzerland) are (partially) owned by a foundation and/or a family office, such as Aldi, Bosch, or Rolex. Despite the growing importance of these intermediaries, prior research has largely neglected to analyze their impact on the firms they own. This dissertation closes this research gap by contributing to a deeper understanding of two increasingly used family firm succession vehicles through four empirical quantitative studies. The first study focuses on the heterogeneity in foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, this negative effect seems to be stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates acquisition behavior within a group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private-equity-owned firms. Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation provide valuable research contributions and offer practical insights for families navigating such intermediaries and succession vehicles in the long term.
The stress hormone cortisol as the end-product of the hypothalamic-pituitary-adrenal (HPA) axis has been found to play a crucial role in the release of aggressive behavior (Kruk et al., 2004; Böhnke et al., 2010). In order to further explore potential mechanisms underlying the relationship between stress and aggression, such as changes in (social) information processing, we conducted two experimental studies that are presented in this thesis. In both studies, acute stress was induced by means of the Socially Evaluated Cold Pressor Test (SECPT) designed by Schwabe et al. (2008). Stressed participants were classified as either cortisol responders or nonresponders depending on their rise in cortisol following the stressor. Moreover, basal HPA axis activity was measured prior to the experimental sessions, and EEG was recorded throughout the experiments. The first study dealt with the influence of acute stress on cognitive control processes. 41 healthy male participants were assigned to either the stress condition or the non-stressful control procedure of the SECPT. Before as well as after the stress induction, all participants performed a cued task-switching paradigm in order to measure cognitive control processes. Results revealed a significant influence of acute and basal cortisol levels, respectively, on the motor preparation of the upcoming behavioral response, which was reflected in changes in the magnitude of the terminal Contingent Negative Variation (CNV). In the second study, the effect of acute stress and subsequent social provocation on approach-avoidance motivation was examined. 72 healthy students (36 males, 36 females) took part in the study. They performed an approach-avoidance task, using emotional facial expressions as stimuli, before as well as after the experimental manipulation of acute stress (again via the SECPT) and social provocation realized by means of the Taylor Aggression Paradigm (Taylor, 1967). In addition to salivary cortisol, testosterone samples were collected at several points in time during the experimental session. Results indicated a positive relationship between acute testosterone levels and the motivation to approach social threat stimuli in highly provoked cortisol responders. Similar results were found when the baseline testosterone-to-cortisol ratio was taken into account instead of acute testosterone levels. Moreover, brain activity during the approach-avoidance task was significantly influenced by acute stress and social provocation, as reflected in reductions of early (P2) as well as later (P3) ERP components in highly provoked cortisol responders. This may indicate a less accurate, rapid processing of socially relevant stimuli due to an acute increase in cortisol and subsequent social provocation. In conclusion, the two studies presented in this thesis provide evidence for significant changes in information processing due to acute stress, basal cortisol levels and social provocation, suggesting an enhanced preparation for a rapid behavioral response in the sense of a fight-or-flight reaction. These results confirm the model of Kruk et al. (2004), which proposes a mediating role of changed information processing in the stress-aggression link.
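Purely as an illustration of the responder/nonresponder split described above, the sketch below classifies participants by their baseline-to-peak cortisol rise; the 1.5 nmol/l cut-off is one convention from the literature and may well differ from the criterion used in these studies.

```python
import numpy as np

def classify_responders(cortisol, baseline_idx=0, rise_nmol_l=1.5):
    """Split participants into cortisol responders/nonresponders by the
    baseline-to-peak rise. The 1.5 nmol/l threshold is one common
    convention, not necessarily the one used in these studies.
    `cortisol` has shape (participants, samples over time)."""
    cortisol = np.asarray(cortisol, dtype=float)
    rise = (cortisol[:, baseline_idx + 1:].max(axis=1)
            - cortisol[:, baseline_idx])
    return rise >= rise_nmol_l  # True = responder
```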
Globalization significantly transforms labor markets. Advances in production technologies, transportation, and political integration reshape how and where goods and services are produced. Local economic conditions and diverse policy responses create varying speeds of change, affecting regions' attractiveness for living and working -- and promoting mobility.
Competition for talent necessitates a deep understanding of why individuals choose specific destinations, how to ensure their effective labor market integration, and what workplace factors affect workers' well-being.
This thesis focuses on two crucial aspects of labor market change: migration and workplace technological change. It contributes to our understanding of the determinants of labor mobility, the factors facilitating migrant integration, and the role of workplace automation in worker well-being.
Chapter 2 investigates the relationship between minimum wages (MWs) and regional worker mobility in the EU. EU citizens are free to work anywhere in the common market, which allows them to take advantage of the significant variation in MWs across the EU. However, although MWs are set at the national level, their local relevance varies substantially, depending on factors such as the share of affected workers or the extent to which they shift local compensation levels. These variations may attract workers from elsewhere, whether from within the country or from abroad.
Analyzing regional variations in the Kaitz index, a measure of local MW impact, reveals that higher MWs can significantly increase inflows of low-skilled EU workers, particularly in central Europe.
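For reference, a minimal sketch of the two local MW-impact measures mentioned above, the Kaitz index and the share of affected workers; the wage data and the exact wage concept (hourly, monthly, gross) are assumptions of the example.

```python
import numpy as np

def kaitz_index(wages, minimum_wage):
    """Kaitz index: the minimum wage relative to the regional median wage;
    higher values mean a locally more binding minimum wage."""
    return minimum_wage / np.median(wages)

def share_affected(wages, minimum_wage):
    """Share of workers earning at or below the minimum wage."""
    return float(np.mean(np.asarray(wages) <= minimum_wage))
```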
Chapter 3 examines the inequality in returns to skills experienced by immigrants, focusing on the role of linguistic proximity between migrants' origin and destination countries. Harmonized individual-level data from nine linguistically diverse migrant-hosting economies allows for an analysis of the wage gaps faced by immigrants from various origins, implicitly indicating how well they and their skills are integrated into the local labor markets. The analysis reveals that greater linguistic distance is associated with a higher wage penalty for highly skilled immigrants and a lower position in the wage distribution for those without tertiary education.
Chapter 4 investigates an institutional factor potentially relevant for the integration of immigrants: the labor market impact of Confucius Institutes (CIs), Chinese government-sponsored institutions that promote Chinese language and culture abroad. CIs have been found to foster trade and cultural exchange, indicating their potential relevance in shaping natives' attitudes towards and trust in China and Chinese individuals. Examining the relationship between local CI presence and the wages of Chinese immigrants in local labor markets of the United States, the analysis reveals that CIs are associated with significantly reduced wages for Chinese immigrants residing nearby. An event study demonstrates that the mere announcement of a new CI negatively impacts local wages for Chinese immigrants, independent of the CI's actual opening.
Chapter 5 explores how working in automatable jobs affects life satisfaction in Germany. Following earlier literature, we classify occupations by their potential for automation and define the top third of occupations on this metric as 'automatable jobs'. We find that workers in highly automatable jobs report lower life satisfaction. Moreover, we detect a non-linearity: workers in moderately automatable jobs (the second third of the distribution) show a positive association with life satisfaction. Overall, the negative relationship with automation is most pronounced among younger and blue-collar workers, irrespective of the non-linearity.
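A minimal sketch of the tercile coding described above, assuming a hypothetical occupation-level automation score; the column names are illustrative, not those of the underlying data.

```python
import pandas as pd

def code_automatability(df, col="automation_potential"):
    """Tercile-code an occupation-level automation score (column name is
    hypothetical): the top third is coded as 'automatable', the middle
    third kept separate to allow for the non-linearity reported above."""
    out = df.copy()
    out["auto_tercile"] = pd.qcut(out[col], 3, labels=["low", "mid", "high"])
    out["automatable"] = (out["auto_tercile"] == "high").astype(int)
    return out
```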
Climate change is expected to cause mountain species to shift their ranges to higher elevations. Due to the decreasing amounts of habitats with increasing elevation, such shifts are likely to increase their extinction risk. Heterogeneous mountain topography, however, may reduce this risk by providing microclimatic conditions that can buffer macroclimatic warming or provide nearby refugia. As aspect strongly influences the local microclimate, we here assess whether shifts from warm south-exposed aspects to cool north-exposed aspects in response to climate change can compensate for an upward shift into cooler elevations.
Every day we are exposed to a large set of appetitive food cues, mostly of high caloric, high carbohydrate content. Environmental factors like food cue exposure can impact eating behavior by triggering anticipatory endocrine responses and reinforcing the reward value of food. Additionally, it has been shown that eating behavior is largely influenced by neuroendocrine factors. Energy homeostasis is of great importance for survival in all animal species. It is challenged in the state of food deprivation, which is considered a metabolic stressor. Interestingly, the systems regulating stress and food intake share neural circuits. Adrenal glucocorticoids, such as cortisol, and the pancreatic hormone insulin have been shown to be crucial in maintaining catabolic and anabolic balance. Cortisol and insulin can cross the blood-brain barrier and interact with receptors distributed throughout the brain, influencing appetite and eating behavior. At the same time, these hormones have an important impact on the stress response. The aim of the current work is to broaden the knowledge on reward-related food cue processing. With that purpose, we studied how food cue processing is influenced by food deprivation in women (in different phases of the menstrual cycle) and men. Furthermore, we investigated the impact of the stress/metabolic hormones insulin and cortisol at neural sites important for energy metabolism and in the processing of visual food cues. Chapter I of this thesis details the underlying mechanisms of the startle response and its application in the investigation of food cue processing. Moreover, it describes the effects of food deprivation and of the stress-metabolic hormones insulin and cortisol on reward-related processing of food cues. It explains the rationale for the studies presented in Chapters II-IV and describes their main findings. A general discussion of the results and recommendations for future research is given. In the study described in Chapter II, startle methodology was used to study the impact of food deprivation on the processing of reward-related food cues. Women in different phases of the menstrual cycle and men were studied in order to address potential effects of sex and menstrual cycle. All participants were studied either satiated or food deprived. Food deprivation provoked an enhanced acoustic startle response (ASR) during foreground presentation of visual food cues. Sex and menstrual cycle did not influence this effect. The startle pattern towards food cues during fasting can be explained by a frustrative nonreward (FNR) effect, driven by the impossibility of consuming the presented food. Chapter III describes a study carried out to explore the central effects of insulin and cortisol, using continuous arterial spin labeling to map cerebral blood flow patterns. Following standardized periods of fasting, male participants received either intranasal insulin, oral cortisol, both, or placebo. Intranasal insulin increased resting regional cerebral blood flow in the putamen and insular cortex, structures that are involved in the regulation of eating behavior. Neither cortisol nor interaction effects were found. These results demonstrate that insulin exerts an action in metabolic centers during the resting state which is not affected by glucocorticoids.
The study described in Chapter IV uses a pharmacological manipulation similar to the one presented in Chapter III, while assessing the processing of reward-related food cues through the startle paradigm validated in Chapter II. A sample of men was studied during short-term food deprivation. Considering the importance of both cortisol and insulin in glucose metabolism, food pictures were divided by glycemic index. Cortisol administration enhanced the ASR during foreground presentation of "high glycemic" food pictures. This result suggests that cortisol provokes an increase in the reward value of high glycemic food cues, which is congruent with previous research on stress and food consumption. This thesis supports the FNR hypothesis for food cues during states of deprivation. Furthermore, it highlights the potential effects of stress-related hormones in metabolism-connected neuronal structures and in the reward-related mechanisms of food cue processing. In a society marked by increased food exposure and availability, along with increased stress, it is important to better understand the impact of food exposure and its interaction with relevant hormones. This thesis contributes to the knowledge in this field. More research in this direction is needed.
As a target for condemnation, the thematic prevalence of racism in African American novels of satire is not surprising. In order to confront this vice in its shifting manifestations, however, the African American satirist has to employ special techniques. This thesis examines some of these devices as they occur in George Schuyler's Black No More, Charles Wright's The Wig, and Percival Everett's Erasure. Given the reciprocity of target and technique in the satiric context, close attention is paid to how the authors under study locate and interrogate racism in their narratives. In this respect, the significance of anti-essentialist Marxist criticism in Schuyler's Black No More and the author's portrayal of the society of his time as capitalist machinery is examined. While Schuyler is concerned with exposing the general socioeconomic workings of the 1920s from a Marxist perspective, Wright offers the reader insight into how this oppressive machinery psychologically manipulates and corrupts the individual in the historic context of Lyndon B. Johnson's political vision of the Great Society. Everett then elaborates on the epistemological concern that is traceable in Wright's work and addresses the role media representation plays in manufacturing images and rigid categories that shape systematic racism. As such, the present study not only highlights the versatility of satire as a rhetorical secret weapon and thus ventures toward the idiosyncrasies of the African American novel of satire; it also makes an effort to trace the ever-changing face of racial discrimination.
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 provided an investigation of academic boredom's factorial structure alongside correlational and reciprocal relations between different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated that boredom intensity, boredom due to underchallenge, and boredom due to overchallenge are separable, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping regarding boredom reduction.
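As a bare-bones illustration of propensity score matching of the kind used in Study 2 (not the study's actual implementation), the sketch below estimates a propensity model and pairs each treated unit with its nearest control.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbour_match(X, treated):
    """1:1 nearest-neighbour matching (with replacement) on the estimated
    propensity score. X: covariate matrix; treated: 0/1 treatment array."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    # for each treated unit, pick the control with the closest score
    matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    return t_idx, matches
```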
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting a less favorable boredom development than girls, even beyond the inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Entrepreneurial ventures are associated with economic growth, job creation, and innovation. Most entrepreneurial ventures need external funding to succeed. However, they often find it difficult to access traditional forms of financing, such as bank loans. To overcome this hurdle and to provide entrepreneurial ventures with badly-needed external capital, many types of entrepreneurial finance have emerged over the past decades and continue to emerge today. Inspired by these dynamics, this postdoctoral thesis contains five empirical studies that address novel questions regarding established (e.g., venture capital, business angels) and new types of entrepreneurial finance (i.e., initial coin offerings).
The microbial enzyme alkaline phosphatase contributes to the removal of organic phosphorus compounds from wastewaters. To cope with regulatory threshold values for the maximum permitted phosphorus concentrations in treated wastewaters, a high activity of this enzyme in the biological treatment stage, e.g., the activated sludge process, is required. To investigate the reaction dynamics of this enzyme, to analyze substrate selectivities, and to identify potential inhibitors, the determination of enzyme kinetics is necessary. A method based on the application of the synthetic fluorogenic substrate 4-methylumbelliferyl phosphate is well established for soils, but not for activated sludges. Here, we adapt this procedure to the latter. The adapted method offers the additional benefit of determining inhibition kinetics. In contrast to conventional photometric assays, no particle removal, e.g., of sludge pellets, is required, enabling the analysis of the whole sludge suspension as well as of specific sludge fractions. The high sensitivity of fluorescence detection allows the selection of a wide substrate concentration range for sound modeling of kinetic functions; a minimal fitting sketch follows the highlights below.
- Fluorescence array technique for fast and sensitive analysis of high sample numbers
- No need for particle separation – analysis of the whole (diluted) sludge suspension
- Simultaneous determination of standard and inhibition kinetics
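A minimal sketch of the kinetic modeling step, fitting the Michaelis-Menten function to initial rates derived from the fluorescence readings; the substrate concentrations and rates below are illustrative placeholders, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * S / (km + S)

# Illustrative inputs: S = 4-MU-phosphate concentrations, v = initial rates
# derived from the fluorescence time series (example values, not data).
S = np.array([1, 2, 5, 10, 25, 50, 100, 250.0])          # umol/l
v = np.array([0.8, 1.5, 3.1, 5.0, 7.6, 9.0, 9.9, 10.6])  # fluorescence units/min

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=(v.max(), np.median(S)))
```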
Building Fortress Europe: Economic realism, China, and Europe's investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how, in a traditional bastion of economic liberalism such as Europe, a protectionist tool such as investment screening could be erected so rapidly. Within a few years, Europe went from being highly welcoming towards foreign investment to increasingly implementing controls on it, with a focus on China. How are we to understand this shift? I posit that Europe's increasingly protectionist stance on inward investment can be fruitfully understood through an economic realist approach, in which the introduction of investment screening is part of a process of 'balancing' China's economic rise and reasserting European competitiveness. China has moved from being the 'workshop of the world' to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A 'balancing' process has been set in motion in which Europe seeks to halt and even reverse the narrowing competitiveness gap between itself and China. The introduction of investment screening measures is part of this process.
Academic achievement is a central outcome in educational research, both in and outside higher education; it has direct effects on individuals' professional and financial prospects and yields a high individual and public return on investment. Theories comprise cognitive as well as non-cognitive influences on achievement. Two examples frequently investigated in empirical research are knowledge (as a cognitive determinant) and stress (as a non-cognitive determinant) of achievement. However, knowledge and stress are not stable, which raises questions as to how temporal dynamics in knowledge on the one hand and stress on the other contribute to achievement. To study these contributions in the present doctoral dissertation, I used meta-analysis, latent profile transition analysis, and latent state-trait analysis. The results support the idea of knowledge acquisition as a cumulative, long-term process that forms the basis for academic achievement, and of conceptual change as an important mechanism for the acquisition of knowledge in higher education. Moreover, the findings suggest that students' stress experiences in higher education are subject to stable, trait-like influences as well as situational and/or interactional, state-like influences, which are differentially related to achievement and health. The results imply that investigating the causal networks between knowledge, stress, and academic achievement is a promising strategy for better understanding academic achievement in higher education. For this purpose, future studies should use longitudinal designs, randomized controlled trials, and meta-analytical techniques. Potential practical applications include taking account of students' prior knowledge in higher education teaching and decreasing stress among higher education students.
Within this thesis, the hedging behaviour of airlines from 2005 to 2019 is analysed using an unbalanced panel dataset of 78 airlines from 39 countries. The focus of the analysis is on financial and operational hedging as well as the influence of both on the level and development of CO2 emissions. For the analysis, probit models with random effects and OLS models with fixed effects were used.
The results regarding the relationship between leverage and financial hedging indicate a negative relationship between leverage and financial fuel hedging, and a non-linear convex relationship for highly leveraged airlines, which is contrary to the theory of financial distress.
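A minimal sketch of the kind of specification behind this finding, a probit of the hedging decision on leverage and its square; the column names are hypothetical, and the pooled model below omits the random effects used in the thesis.

```python
import numpy as np
import statsmodels.api as sm

def hedging_probit(df):
    """Pooled probit of a binary fuel-hedging indicator on leverage and
    leverage squared (hypothetical column names); the quadratic term
    allows for the non-linear convex relationship reported above."""
    X = sm.add_constant(np.column_stack([df["leverage"],
                                         df["leverage"] ** 2]))
    return sm.Probit(df["hedges_fuel"], X).fit()
```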
In addition, the study provides evidence that airlines using other types of derivatives, such as interest rate derivatives, engage in more fuel hedging.
In terms of operational hedging, the analysis suggests that operating a diversified fleet is a complement to, rather than a substitute for, financial hedging. With regard to alliance membership, the results do not show that alliance membership is a substitute for financial hedging, as members of alliances are more likely to engage in hedging transactions and to a greater extent.
The analysis shows that the relative CO2 emissions fall in the period under review, but this does not apply to the absolute amount. No general statement can be made about the influence of financial and operational hedging on CO2 emissions, as the results are mixed.
Amphibian diversity in the Amazonian floating meadows: a Hanski core-satellite species system
(2021)
The Amazon catchment is the largest river basin on earth, and up to 30% of its waters flow across floodplains. In its open waters, floating plants known as floating meadows abound. They can act as vectors of dispersal for their associated fauna and, therefore, can be important for the spatial structure of communities. Here, we focus on amphibian diversity in the Amazonian floating meadows over large spatial scales. We recorded 50 amphibian species over 57 sites, covering around 7000 km along river courses. Using multi-site generalised dissimilarity modelling of zeta diversity, we tested Hanski's core-satellite hypothesis and identified two functional groups of species operating under different ecological processes in the floating meadows. 'Core' species are associated with floating meadows, while 'satellite' species are associated with adjacent environments, being only occasional or accidental occupants of the floating vegetation. At large scales, amphibian diversity in floating meadows is mostly determined by stochastic (i.e. random/neutral) processes, whereas at regional scales, climate and deterministic (i.e. niche-based) processes are central drivers. Compared with the turnover of 'core' species, the turnover of 'satellite' species increases much faster with distance and is also controlled by a wider range of climatic features. Distance is not a limiting factor for 'core' species, suggesting that they have a stronger dispersal ability even over large distances. This is probably related to passive long-distance dispersal of individuals along rivers via vegetation rafts. In this sense, Amazonian rivers can facilitate dispersal, and this effect should be stronger for species associated with riverine habitats such as floating meadows.
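For readers unfamiliar with zeta diversity: the statistic of order k is the mean number of species shared by k sites. A plain numpy sketch follows (with Monte Carlo sampling when the number of site combinations is large); it illustrates the quantity being modelled, not the paper's modelling pipeline.

```python
import numpy as np
from math import comb
from itertools import combinations

def zeta_order_k(pa, k, max_combos=10_000, seed=0):
    """Zeta diversity of order k: the mean number of species shared by
    k sites. `pa` is a sites x species presence/absence (0/1) matrix."""
    rng = np.random.default_rng(seed)
    n = pa.shape[0]
    if comb(n, k) <= max_combos:
        combos = combinations(range(n), k)
    else:  # Monte Carlo approximation for very many combinations
        combos = (rng.choice(n, size=k, replace=False)
                  for _ in range(max_combos))
    shared = [pa[list(c)].prod(axis=0).sum() for c in combos]
    return float(np.mean(shared))
```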
Earth observation (EO) is a prerequisite for sustainable land use management, and the open-data Landsat mission is at the forefront of this development. However, increasing data volumes have led to a 'digital divide', and consequently, it is key to develop methods that take over the most data-intensive processing steps and generate analysis-ready, standardized, higher-level (Level 2 and Level 3) baseline products for enhanced uptake in environmental monitoring systems. Accordingly, the overarching research task of this dissertation was to develop such a framework, with a special emphasis on the as yet under-researched drylands of Southern Africa. A fully automatic and memory-resident radiometric preprocessing streamline (Level 2) was implemented. The method was applied to the complete Angolan, Zambian, Zimbabwean, Botswanan, and Namibian Landsat record, amounting to 58,731 images with a total data volume of nearly 15 TB. Cloud/shadow detection capabilities were improved for drylands. An integrated correction of atmospheric, topographic and bidirectional effects was implemented, based on radiative transfer theory with corrections for multiple scattering and adjacency effects, and including a multilayered toolset for estimating aerosol optical depth over persistent dark targets or by falling back on a spatio-temporal climatology. Topographic and bidirectional effects were reduced with a semi-empirical C-correction and a global set of correction parameters, respectively. Gridding and reprojection were already included to facilitate easy and efficient further processing. The selection of phenologically similar observations is a key monitoring requirement for multi-temporal analyses; hence, the generation of Level 3 products that realize phenological normalization on the pixel level was pursued. As a prerequisite, coarse-resolution Land Surface Phenology (LSP) was derived in a first step, then spatially refined by fusing it with a small number of Level 2 images. For this purpose, a novel data fusion technique was developed, wherein a focal-filter-based approach employs multi-scale and source prediction proxies. Phenologically normalized composites (Level 3) were generated by coupling the target day (i.e. the main compositing criterion) to the input LSP. The approach was demonstrated by generating peak, end and minimum of season composites, and by comparing these with static composites (fixed target day). It was shown that the phenological normalization accounts for terrain- and land-cover-class-induced LSP differences, and the use of Level 2 inputs enables a wide range of monitoring options, among them the detection of within-state processes like forest degradation. In summary, the developed preprocessing framework is capable of generating several analysis-ready baseline EO satellite products. These datasets can be used for regional case studies, but may also be directly integrated into more operational monitoring systems, e.g. in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD) incentive.
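The semi-empirical C-correction mentioned above can be sketched compactly: regress reflectance on the cosine of the local illumination angle, derive c from the fit, and rescale. This follows the textbook form of the correction (Teillet et al.) and is not the dissertation's full implementation.

```python
import numpy as np

def c_correction(rho, cos_i, cos_sz):
    """Semi-empirical C-correction for topographic normalization.
    rho: observed reflectance; cos_i: cosine of the local illumination
    angle; cos_sz: cosine of the solar zenith angle."""
    b, a = np.polyfit(cos_i, rho, 1)   # linear fit: rho ~ a + b * cos_i
    c = a / b
    return rho * (cos_sz + c) / (cos_i + c)
```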
The first chapter, "ECOWAS' capability and potential to overcome constraints to growth and poverty reduction of its member states", discusses the analysis of economic and social barriers to growth, one of the main elements of development and poverty reduction strategies in developing countries. This form of country-specific analysis of growth constraints was introduced, after the failure of the Washington Consensus as a development strategy generalized across all countries, in particular through the "Growth Diagnostics" approach of the Harvard professors Hausmann, Rodrik and Velasco. So far, however, the focus has rested purely on country-specific analyses and strategy development. This work extends the discussion to the regional level by comparing, using the Economic Community of West African States (ECOWAS) as an example, the country-specific growth constraints with the regional ones. This is done by surveying the country-specific growth constraints already identified in studies and strategies for the respective countries and by evaluating the regional strategies of ECOWAS. It is further determined to what extent measurable results in tackling growth constraints are also achieved at the regional level. It turns out that, despite the economic and social diversity of the region, ECOWAS lists most of the growth constraints identified in its member states and, beyond that, even contributes measurable results towards changing the status quo. Extending the growth diagnostics approach to the regional level, together with the comparative element of country-specific and regional growth constraints, proves to be a practicable way of examining development strategies at the regional level and developing them further along subsidiary lines. The second chapter, "Simplifying evaluation of potential causalities in development projects using Qualitative Comparative Analysis (QCA)", discusses qualitative comparative analysis (QCA) as an evaluation method for development cooperation projects. The focus lies on adequately measuring and clearly communicating the impact of development cooperation. This contributes to the intensive debate on how the impact of aid in developing countries can be measured and how lessons for further projects can be drawn from it. The exemplary application of QCA to a dataset from German development cooperation in Senegal constitutes the first practical application of this method in development cooperation. The focus lies on testing specific program theories, i.e. assumptions about relationships between deployed resources, external circumstances and project outcomes in the implementation of projects. While such program theories are contained in the majority of German development cooperation project outlines, very few of them are ever tested. This work presents QCA as an efficient method for such testing: an unambiguous confirmation or falsification of these theories is possible with this methodology, and for the two simpler forms of QCA, crisp-set and multi-value QCA, the results can be communicated in an easily comprehensible manner.
Furthermore, the work shows that QCA also enables the further development of a program theory, although this refinement is only efficient to a limited extent and depends strongly on the available data and the data structure. The work thus demonstrates the potential of QCA, particularly for testing program theories, and exemplifies its practical application for possible replication. The third and final chapter of the dissertation, "The regional trade dynamics of Turkey: a panel data gravity model", analyses Turkish trade in order to trace the changes of recent decades and to discuss the extent to which Turkey, as an emerging economy, is detaching itself from existing trade structures. This work contributes to the debate on the power constellations shifting as emerging economies catch up economically. In the case of Turkey, this debate is of additional interest because it bears on the question of whether Turkey is turning away from the Western world, North America and Europe. Using dummy variables for different regions in a gravity model, Turkish trade data are analysed first in aggregate and then by sector, and the changes are examined across different periods of Turkish foreign trade. The results show a regionalisation and a diversification of trading partners in Turkish trade relations. However, this does not go hand in hand with a turn away from Western trading partners.
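A minimal sketch of the gravity specification described in the third chapter, a log-linear model with region and period dummies; all column names are hypothetical stand-ins for the actual trade data.

```python
import numpy as np  # used inside the formula via patsy
import statsmodels.formula.api as smf

def gravity_fit(df):
    """Log-linear gravity model of Turkish bilateral trade with region
    and period dummies (hypothetical column names); the coefficients on
    the region dummies trace the regionalisation of trade over time."""
    return smf.ols(
        "np.log(trade) ~ np.log(gdp_turkey * gdp_partner)"
        " + np.log(distance) + C(region) + C(period)",
        data=df,
    ).fit()
```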
This thesis contains three parts that are all connected by their contribution to research about the effects of trading apps on investment behavior. The primary motivation for this study is to investigate the previously undetermined consequences and effects of trading apps, which are a new phenomenon in the broker market, on the investment and risk behavior of Neobroker users.
Chapter 2 addresses the characteristics of typical and former Neobroker users and the impact of trading apps on the investment and risk behavior of their users. The results show that Neobroker users are significantly more risk tolerant than the general German population and are influenced by trading apps in their investment and risk behavior. Low trading fees and the low minimum investment amount are the main reasons for the use of trading apps. Investors who stop using trading apps mostly stop investing altogether. Another worrying result is that most Neobroker users have misconceptions about how trading apps earn money and that, in general, the financial literacy of all groups considered in this chapter is surprisingly low.
The third chapter investigates the effects of trading apps on investment behavior over time and compares the investment and risk behavior of Neobroker users and general investors. Using representative data on German Neobroker users, who were surveyed repeatedly over an 8-month interval, makes it possible to determine causal effects of the use of trading apps over time. Overall, the financial literacy of Neobroker users increases the longer they use a trading app. A worrying result is that the risk tolerance of Neobroker users rises significantly over time. Male Neobroker users earn a higher annual return (non-risk-adjusted) than female Neobroker users. In comparison to general investors, Neobroker users are significantly younger, more risk tolerant, more likely to buy derivatives, and earn a higher annual return (non-risk-adjusted).
The fourth chapter analyses the impact of personality traits on the investment and risk behavior of Neobroker users. The results show that the Big Five personality traits have an impact on the investment behavior of Neobroker users. Two personality traits, openness and conscientiousness, stand out the most, as these two have explanatory power over various aspects of the behavior of Neobroker users: whether they buy different financial products than planned, how much time they spend informing themselves about financial markets, the variety of financial products owned, and their reasons for using a Neobroker. Surprisingly, the risk tolerance of Neobroker users and their reasons for investing are not connected to any personality dimension. Whether a participant uses a trading app or a traditional broker to invest is influenced by different personality traits in each case.
In the face of uncontrollable complexity, the concept of a rational design of the organization is being replaced by the notion of an open future that is inherently unpredictable and unplannable. In rapidly changing environments, organizations and leaders are confronted with a constant stream of irritations and unexpected developments that require ongoing attention. This prompts the question of whether the conceptualization of digital transformation as a paradigm shift also implies the need for new forms of leadership. The article analyzes the discourse on digital leadership and assesses the extent to which this concept relativizes leadership in the context of the evolution of leadership theory, which is characterized by a persistent process of modification and relativization of preceding concepts. Leadership concepts are not only responsive to general needs but also vary according to specific contexts, such as non-profit leadership or leadership in social welfare organizations and meta-organizations. Results of a discourse analysis, which underscore the significance of adopting a complexity theory perspective on digital leadership, are therefore contrasted with the initial findings of an empirical study on digitization in such meta-organizations. This allows for a discussion of the general findings on the revitalization of leadership, serving as a paradigmatic example of the previously developed context. The article concludes with implications for further theory development, with the aim of making a specific contribution to organization-sensitive digitization research. The findings of the empirical study indicate the significance of informal structures and a heightened emphasis on subjectivity within meta-organizations, as opposed to the formal structures of organizations. The concept of digital leadership does not signify the obsolescence of traditional leadership; rather, it can be conceptualized as an advanced form of unheroic leadership in the context of external and internal complexity.
GIS – what can and what can’t it say about social relations in adaptation to urban flood risk?
(2019)
Urban flooding cannot be avoided entirely and in all areas, particularly in coastal cities. Therefore, adaptation to the growing risk is necessary. Geographical Information Systems (GIS) based knowledge on risk informs a location-based approach to adaptation to climate risk. It allows managing city-wide coordination of adaptation measures, reducing adverse impacts of local strategies on neighbouring areas to a minimum. Quantitative assessments dominate GIS applications in flood risk management, for instance to demonstrate the distribution of people and assets in a flood-prone area. Qualitative, participatory approaches to GIS are on the rise but have not yet been applied in the context of flooding. The overarching research question of this working paper is: what can GIS, and what can it not, say about social relations in adaptation to urban flood risk? The use of GIS in risk mapping has exposed environmental injustices. Applications of GIS further allow modelling future flood risk as a function of demographic and land use changes, and combining it with decision support systems (DSS). While such GIS applications provide invaluable information for urban planners steering adaptation, they fall short of revealing the social relations that shape individual and household adaptation decisions. The relevance of networked social relations in adaptation to flood risk has been demonstrated in case studies, and extensively in the literature on organizational learning and adaptation to change. The purpose of this literature review is to identify the types of social relations that shape adaptive capacities towards urban flood risk and that cannot be identified in a conventional GIS application.
Both water scarcity and flood risk are increasingly turning into safety concerns for many urban dwellers and, consequently, become increasingly politicised. This development involves a reconfiguration of the academic landscape around urban risk, vulnerability and adaptation to climate change research. This paper is a literature assessment of concepts of disaster risk, vulnerability and adaptation and their applicability to the context of studying water in an African city. An overview of water-related risk in African cities is presented, and concepts and their respective disciplinary backgrounds are reviewed. Recent debates that have emerged from the application of risk, vulnerability and adaptation concepts in research and policy practice are presented. Finally, the applicability of these concepts as well as the relevance and implications of recent debates for studying water in African cities are discussed. 'Riskscape' is proposed as a conceptual frame for a close and integrated analysis of water-related risk in an African city.
The present dissertation was developed to emphasize the importance of self-regulatory abilities and to derive novel opportunities to empower self-regulation. From the perspective of PSI (Personality Systems Interactions) theory (Kuhl, 2001), interindividual differences in self-regulation (action vs. state orientation) and their underlying mechanisms are examined in detail. Based on these insights, target-oriented interventions are derived, developed, and scientifically evaluated. The present work comprises four studies which, on the one hand, highlight the advantages of good self-regulation (e.g., enacting difficult intentions under demands; relations with prosocial power motive enactment and well-being). On the other hand, mental contrasting (Oettingen et al., 2001), an established self-regulation method, is examined from a PSI perspective and evaluated as a method to support individuals who struggle with self-regulatory deficits. Further, derived from the assumptions of PSI theory, I developed and evaluated a novel method (affective shifting) that aims to support individuals in overcoming self-regulatory deficits. Affective shifting thereby supports the decisive changes in positive affect needed for successful intention enactment (Baumann & Scheffer, 2010). The results of the present dissertation show that self-regulated changes between high and low positive affect are crucial for efficient intention enactment and that methods such as mental contrasting and affective shifting can empower self-regulation to help individuals successfully close the gap between intention and action.
The benefits of prosocial power motivation in leadership: Action orientation fosters a win-win
(2023)
Power motivation is considered a key component of successful leadership. Based on its dualistic nature, the need for power (nPower) can be expressed in a dominant or a prosocial manner. Whereas dominant motivation is associated with antisocial behaviors, prosocial motivation is characterized by more benevolent actions (e.g., helping, guiding). Prosocial enactment of the power motive has been linked to a wide range of beneficial outcomes, yet little research has investigated what determines a prosocial enactment of the power motive. According to Personality Systems Interactions (PSI) theory, action orientation (i.e., the ability to self-regulate affect) promotes prosocial enactment of the implicit power motive, and initial findings within student samples verify this assumption. In the present study, we verified the role of action orientation as an antecedent of prosocial power enactment in a leadership sample (N = 383). Additionally, we found that leaders personally benefit from a prosocial enactment strategy: action orientation, through prosocial power motivation, leads to reduced power-related anxiety and, in turn, to greater leader well-being. The integration of motivation and self-regulation research reveals why leaders enact their power motive in a certain way and helps to understand how to establish a win-win situation for both followers and leaders.
A lack of ability to inhibit prepotent responses, or more generally a lack of impulse control, is associated with several disorders, such as attention-deficit/hyperactivity disorder and schizophrenia, as well as with general damage to the prefrontal cortex. The stop-signal task (SST) is a reliable and established measure of response inhibition. However, using the SST as an objective assessment in diagnostic or research-focused settings places significant stress on participants, as the task itself requires concentration and cognitive effort and is not particularly engaging. This can lead to decreased motivation to follow task instructions and poor data quality, which can affect assessment efficacy and might increase drop-out rates. Gamification, the application of game-based elements in nongame settings, has been shown to improve engaged attention to a cognitive task, thus increasing participant motivation and data quality.
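For context, the SST's key outcome, the stop-signal reaction time (SSRT), is commonly estimated with the integration method; a simplified sketch (mean-SSD convention, no correction for go omissions) is given below.

```python
import numpy as np

def ssrt_integration(go_rts, ssds, responded_on_stop):
    """Stop-signal reaction time via the integration method: take the
    go-RT quantile at the observed p(respond | stop signal) and subtract
    the mean stop-signal delay (SSD). Simplified: omission handling and
    other recommended corrections are left out."""
    p_respond = float(np.mean(responded_on_stop))
    nth_go_rt = np.quantile(go_rts, p_respond)
    return nth_go_rt - float(np.mean(ssds))
```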
The viviparous eelpout Zoarces viviparus is a common fish across the North Atlantic and has successfully colonized habitats across environmental gradients. Due to its wide distribution and predictable phenotypic responses to pollution, Z. viviparus is used as an ideal marine bioindicator organism and has been routinely sampled over decades by several countries to monitor marine environmental health. Additionally, this species is a promising model to study adaptive processes related to environmental change, specifically global warming. Here, we report the chromosome-level genome assembly of Z. viviparus, which has a size of 663 Mb and consists of 607 scaffolds (N50 = 26 Mb). The 24 largest represent the 24 chromosomes of the haploid Z. viviparus genome, which harbors 98% of the complete Benchmarking Universal Single-Copy Orthologues defined for ray-finned fish, indicating that the assembly is highly contiguous and complete. Comparative analyses between the Z. viviparus assembly and the chromosome-level genomes of two other eelpout species revealed a high synteny, but also an accumulation of repetitive elements in the Z. viviparus genome. Our reference genome will be an important resource enabling future in-depth genomic analyses of the effects of environmental change on this important bioindicator species.
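For reference, the N50 statistic quoted above can be computed as follows; a minimal sketch over a list of scaffold lengths.

```python
def n50(lengths):
    """N50: the length L such that scaffolds of length >= L together
    contain at least half of the total assembly size."""
    total, acc = sum(lengths), 0
    for L in sorted(lengths, reverse=True):
        acc += L
        if acc >= total / 2:
            return L
```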
Perennial crops eliminate soil disturbance and reduce the amount of synthetic chemicals that are applied to the soil, improving soil biodiversity and food web structure. Additionally, perennial cropping is characterised by all year-round surface coverage which benefits soil biota in terms of habitat and food sources. Perennial intermediate wheatgrass (Thinopyrum intermedium, IWG) was domesticated and commercialised by The Land Institute in Kansas as Kernza® and serves as an example for these nature-based solutions. It develops an extensive root system that has a higher nutrient retention, possibly reducing nutrient runoff. It thereby follows a more resource-conservative strategy with improved belowground-oriented resource allocation in its root system. This may reduce the need for excessive fertiliser as the crop has a higher nitrogen efficiency, among other things.
IWG promoted the earthworm community and its diversity, more specifically the occurrence of epigeic species (litter inhabitants), since those species benefit from the increased soil coverage and the elimination of soil disturbance. As IWG creates a dense and extensive root system, as shown by the increased occurrence of root-feeding nematodes, endogeic species (horizontal burrowers) are supported through the provision of a reliable food source. Nematode analysis characterised IWG as a largely undisturbed system with a highly structured food web, expressed, for example, through the promotion of structure indicators, which are sensitive to soil disturbance and are therefore supported under no-till management. The root microbiome is continuously shaped by the host as the crop regrows from the roots each vegetation period. This creates a symbiotic relationship and a beneficial feedback loop for the crop. As a result, the root-endophytic microbiome under IWG had a higher network complexity, connectivity and stability compared to annual wheat. The regrowth from the roots requires increased nutrient and energy storage in IWG, which was indicated by increased starch values. Correspondingly, the longer residence time of the roots in the soil resulted in higher lignin values. Furthermore, the decomposition pathway was dominated by fungivorous nematodes, which may correspond to stimulated nutrient cycling and a heterogeneous resource environment, as seen in low-input systems.
Overall, perennial wheat cultivation improved soil biodiversity after an establishment period of only 3-6 years. As those benefits were present in all three countries, the varying soil and climate conditions do not seem to interfere with the positive effect of perennial wheat on the soil ecosystem, demonstrating a wide transferability and adaptability of the crop to other study sites. Enhanced complexity and connectivity of the food web in comparison to annual wheat may indicate resistance against abiotic stress, suggesting IWG cultivation as a viable option for a sustainable and resilient agriculture. The improvement in nutrient cycling and the resource-efficient cultivation strategy of IWG could enable cultivation on marginal land where annual crop cultivation is not possible because the soils are susceptible to erosion and nutrient runoff. This opens up new possibilities for agricultural cultivation on previously unused land, thus contributing to food security in the future.
Introduction: Conventional agricultural land use may negatively impact biodiversity and the environment due to the increased disturbance of the soil ecosystem, for example by tillage. Cultivation of the perennial grain intermediate wheatgrass (Thinopyrum intermedium, IWG, Kernza®) is a nature-based solution for sustainable agriculture, improving nutrient retention mainly through its extensive root system. Nematodes serve as sensitive bioindicators: early changes in the soil food web are reflected in their community structure.
Materials and Methods: IWG and annual wheat sites in southern France, Belgium and southern Sweden were investigated in April 2022 at two depths (5–15 cm; 25–35 cm) to evaluate differences in nematode community structure between the cropping systems.
Results: Sites with IWG cultivation showed an accumulation of structure indicators (c-p 3–5 nematodes) compared to sites with annual wheat cultivation. A generalised linear mixed model revealed significantly more root feeders under IWG, especially in the subsoil, as a result of the perennial cultivation. The maturity index, plant-parasitic index, channel index and structure index were greater for IWG sites. The enrichment index was greater for annual wheat sites due to the dominance of bacterivores and enrichment indicators (c-p 1 nematodes). The nematode community structure (weighted faunal profile analysis) indicates IWG sites as a generally undisturbed system with efficient nutrient cycling and a balanced distribution of feeding types, as well as higher metabolic footprint values for root feeders (including plant-parasitic nematodes) and fungivores. Annual wheat sites, on the other hand, showed indicators of a disturbed system with an increased occurrence of opportunistic species and a more bacterially driven decomposition pathway. The topsoil had an increased occurrence of structure indicators in both cropping systems.
Conclusion: IWG creates favourable conditions for a diverse food web, including improved nutrient cycling and a heterogeneous resource environment, regardless of climatic conditions, establishing it as a stable and resilient agricultural management system.
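As a rough illustration of the kind of mixed-model comparison reported above, the following sketch fits a linear mixed model on log-transformed nematode counts with statsmodels; the data frame, the column names and the log-linear stand-in for the generalised linear mixed model are all illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-sample counts of root-feeding nematodes for two cropping
# systems (IWG vs. annual wheat) at two sampling depths, grouped by site.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "root_feeders": rng.poisson(lam=[30, 12] * 20),
    "system": ["IWG", "annual"] * 20,
    "depth": ["5-15"] * 20 + ["25-35"] * 20,
    "site": np.repeat(["FR", "BE", "SE1", "SE2"], 10),
})

# Linear mixed model on log counts as a simple stand-in for a Poisson GLMM,
# with site as the random (grouping) factor.
df["log_rf"] = np.log1p(df["root_feeders"])
result = smf.mixedlm("log_rf ~ system * depth", df, groups="site").fit()
print(result.summary())
```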
We examined the long-term relationship of psychosocial risk and health behaviors with clinical events in patients awaiting heart transplantation (HTx). Psychosocial characteristics (e.g., depression), health behaviors (e.g., dietary habits, smoking), medical factors (e.g., creatinine), and demographics (e.g., age, sex) were collected at the time of listing in 318 patients (82% male, mean age = 53 years) enrolled in the Waiting for a New Heart Study. Clinical events were death/delisting due to deterioration, high-urgency status transplantation (HU-HTx), elective transplantation, and delisting due to clinical improvement. Within 7 years of follow-up, 92 patients died or were delisted due to deterioration, 121 received HU-HTx, 43 received elective transplantation, and 39 were delisted due to improvement. Adjusting for demographic and medical characteristics, the results indicated that frequent consumption of healthy foods (i.e., foods high in unsaturated fats) and being physically active increased the likelihood of delisting due to improvement, while smoking and depressive symptoms were related to death/delisting due to clinical deterioration while awaiting HTx. In conclusion, psychosocial and behavioral characteristics are clearly associated with clinical outcomes in this population. Interventions that target psychosocial risk, smoking, dietary habits, and physical activity may be beneficial for patients with advanced heart failure waiting for a cardiac transplant.
The present dissertation deals with variable stress patterns in English complex adjectives such as celebratory, identifiable or imaginative. This variation is usually described in terms of retaining the stress of the embedded base (idéntify -> idéntifiable) or deviating from the stress of the embedded base (idéntify -> identifíable). While several accounts have explored this variation, none of them has been able to identify a plausible reason for why it occurs. Additionally, the role of individual speaker differences has been disregarded in the discussion. This dissertation therefore explores the empirically observable extent of the variation and investigates its possible causes, with a special focus on individual differences between speakers. It uses data from a complex online experiment that included five different tasks to assess speakers' stress production, perception, morphological processing, vocabulary size and other factors. It furthermore tests the predictions of previous accounts on the large set of authentic utterances collected in this online experiment. The data show that individual differences in vocabulary size are a significant predictor of a speaker's tendency to retain the stress of the embedded base.
Data used for machine learning are often erroneous. In this thesis, p-quasinorms (p<1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
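To make the idea concrete, here is a minimal sketch of a p-quasinorm loss (smoothed near zero) minimized by gradient descent with Armijo backtracking on a linear model with label outliers; the smoothing constant, the data, and plain gradient descent are illustrative assumptions, not the proximal point or Frank-Wolfe algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
y[::20] += 10.0                    # gross outliers in the labels

p, eps = 0.5, 1e-6                 # p < 1: quasinorm; eps smooths the kink at 0

def loss(w):
    r = X @ w - y
    return np.sum((r * r + eps) ** (p / 2))

def grad(w):
    r = X @ w - y
    return X.T @ (p * r * (r * r + eps) ** (p / 2 - 1))

w = np.zeros(5)
for _ in range(500):
    g, f0 = grad(w), loss(w)
    t = 1.0
    # Armijo rule: halve the step until a sufficient decrease is achieved
    while loss(w - t * g) > f0 - 1e-4 * t * (g @ g):
        t *= 0.5
    w -= t * g

print("distance to true weights:", np.linalg.norm(w - w_true))
```

Compared to a squared-error fit, the quasinorm loss strongly downweights the outlying labels, which is exactly the robustness effect exploited in the thesis.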
Many developed countries, including Germany, face a steady rise in the share of
individuals obtaining higher education. While rising education itself bears a series
of advantages as extensively studied in previous literature, it is also conceptually
linked to a higher likelihood of working in an occupation that does not match
one’s formal qualifications. Previous studies have predominantly evaluated
how demographic or job‐related aspects correlate with the likelihood of being
educationally (mis)matched. However, they have largely ignored institutional
facets of the educational system or industrial organization. Moreover, little is
known about how private wealth affects educational mismatch or whether job
satisfaction is homogenously affected among individuals once such a mismatch
occurs. The five projects collected in this thesis aim to answer these open
questions in the literature for Germany, using data from the Socio‐Economic Panel
and employing different time intervals between 1984 and 2022.
Beginning with the educational system in early childhood, Chapter 2 evaluates
the impact of school‐starting age on the likelihood of over‐ and undereducation.
It exploits the exogenous variation in school‐entry rules across federal states
and years in Germany with regression discontinuity designs. The results report
a negative impact of school‐starting age on the likelihood of undereducation,
but no systematic relationship with overeducation.
Subsequently, Chapter 3 explores the variation in education costs by leveraging
the quasi‐experimental setting induced by the time‐limited introduction of tuition
fees in several German federal states between 2006 and 2014. The increase
in education costs among treated graduates results in a significantly higher
likelihood of overeducation, which endures even several years post‐graduation.
Chapter 4 focuses on the industrial relations system and examines the
correlation between trade union membership and the likelihood and extent of
educational (mis)match. The results reveal that trade union members report
significantly less overeducation at both the intensive and extensive margin
and also a higher likelihood of being matched compared to non‐members. Furthermore, the heterogeneity analysis provides evidence that this correlation
is driven by improved bargaining power instead of informational advantages.
Chapter 5 focuses on private wealth as a determinant of educational mismatch
by investigating the impact of a wealth shock through inheritances, lottery
winnings or gifts on the likelihood of over‐ and undereducation. Due to
the diminishing marginal returns of wages with increasing windfall gains, the
likelihood of undereducation is expected to decrease, while that of overeducation
is expected to increase. Empirically, these suppositions are supported for
overeducation, as its likelihood increases significantly after the windfall gain.
Further analyses reveal that this effect is driven by individuals switching
occupations while increasing their leisure time, and it materializes only for
medium to large windfall gains.
Contrary to the previous chapters, Chapter 6 focuses on educational mismatch,
more precisely on overeducation, as the independent variable. In particular, it
investigates the correlation between overeducation and job satisfaction. The
results align with the previously established negative correlation for private-sector
employees exclusively. In contrast, interaction and subsample analyses reveal a
positive correlation for public sector employees. This link is driven by individuals
with a high degree of altruistic motivation and family orientation.
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain–computer interfacing might greatly contribute to traffic safety. To design economic brain–computer interfaces that do not compromise driver comfort, it is necessary to identify the brain areas that are most sensitive to changes in mental workload. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings and higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex, an area highly involved in spatial working memory processing. Thus, a main component of drivers' mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as about other road users have to be constantly maintained, processed and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain–computer interfaces.
In addition to flood disasters on major rivers, damage caused by the flooding of smaller and medium-sized tributaries is also of considerable significance. To ensure that flood protection measures are effective, engineering flood prevention measures on the rivers must be supported by integrated catchment management. This includes decentralised water retention measures implemented in the sectors of forestry and agriculture and in residential areas. In this context, new instruments have to be developed and introduced, such as GIS-based systems and systems for evaluating the economic consequences and eco-efficiency of land-use-related flood damage precaution measures. These are extremely significant for improving information management, the provision of advice to the general public and the acceptance of flood precaution measures. The conference intends to promote scientific exchange between specialists working on all areas of integrated catchment management. This includes the methodology for identifying catchment types prone to flooding hazards, the control and validation of land-use concepts for decentralised water retention, as well as their combination and upscaling up to mesoscale catchments. As catchment management is not only the concern of natural scientists, strategies for enhancing catchment management and the development of decision-support tools will also be important topics of the conference. *** Addenda: 1. The articles from page 136 to 161 belong to Session 5. 2. Article on page 107: Ancient irrigation strategies: land use and hazard mitigation in Ma'rib, Yemen (new list of authors: Ueli Brunner (a), Michael Schütz (b), Dana Pietsch (c), Peter Kühn (c), Thomas Scholten (c), Iris Gerlach (d)).
My dissertation is concerned with contemporary (Anglo-)Canadian immigrant fiction and proposes an analytic grid with which it may be appreciated and compared more adequately. The starting point is the general observation that the works of many Canadian immigrant writers are characterised by a focus on their respective home cultures as well as on their Canadian host culture. Following the ground-breaking work of Northrop Frye, Margaret Atwood and David Staines, the categories of "there" and "here" are suggested in order to reflect this double encoding of Canadian immigrant literature. However, "here" and "there" are more than spatial configurations in that they represent a concern with issues of multiculturalism and postcolonialism. Both are informed by an emphasis on difference and identity, and difference and identity are also what the narratives of M.G. Vassanji, Neil Bissoondath and Rohinton Mistry are preoccupied with. My study sets out to show two things: On the one hand, it attempts to exemplify the complexity and interrelatedness of "there" and "here" in a representative fashion. Hence, in their treatments of difference, M.G. Vassanji, Neil Bissoondath and Rohinton Mistry arrive at comparable identity constructions "here" and "there" respectively. On the other hand, special attention is paid to the strategies by which Vassanji, Bissoondath and Mistry construct difference and corroborate their respective understandings of identity.
Because EU water quality policy can result in infrastructure creation or adaptation at the local level across member states, compliance cases are worth examining critically from a sustainable spatial planning perspective. In this study, the 2000 EU Water Framework Directive's (WFD) reach to local implementation efforts in average towns and cities is shown through the case study of nonconforming household wastewater infrastructure in the German state of Rhineland-Palatinate. Seeing wastewater as a socio-technical infrastructure, we ask how the WFD implementation can be understood in the context of local infrastructure development, sustainability, and spatial planning concepts. In particular, this study examines what compliance meant for the centralization or decentralization of local wastewater infrastructure systems—and the sustainability implications for cities from those choices.
During pregnancy, every eighth woman is treated with glucocorticoids. Glucocorticoids inhibit cell division but are assumed to accelerate the differentiation of cells. In this review, animal models for the development of the human fetal and neonatal hypothalamic-pituitary-adrenal (HPA) axis are investigated. It can be shown that during pregnancy in humans, as in most of the animal models investigated here, a stress hyporesponsive period (SHRP) is present. In this period, the fetus faces reduced glucocorticoid concentrations, owing to low or absent fetal glucocorticoid synthesis and to reduced exposure to maternal glucocorticoids. During that phase, sensitive maturational processes in the brain are assumed to take place, which could be inhibited by high glucocorticoid concentrations. In the SHRP, the species-specific maximal brain growth spurt and the neurogenesis of the somatosensory cortex take place. The latter is critical for the development of social and communication skills and the secure attachment of mother and child. Glucocorticoid treatment during pregnancy therefore needs to be further investigated, especially during this vulnerable SHRP. The hypothalamus and the pituitary stimulate adrenal glucocorticoid production. On the other hand, glucocorticoids can inhibit the synthesis of corticotropin-releasing hormone (CRH) in the hypothalamus and of adrenocorticotropic hormone (ACTH) in the pituitary. Alterations in this negative feedback are assumed to be involved, among others, in the development of fibromyalgia, diabetes and factors of the metabolic syndrome. In this work it is shown that the fetal cortisol surge at the end of gestation is at least partially due to reduced glucocorticoid negative feedback. It is also assumed that androgens are involved in the control of fetal glucocorticoid synthesis. Glucocorticoids seem to prevent masculinization of the female fetus by androgens during sexual gonadal development. In this work, a negative interaction of glucocorticoids and androgens is detectable.
Water-deficit stress, usually shortened to water or drought stress, is one of the most critical abiotic stressors limiting plant growth, crop yield and quality in food production. Today, agriculture consumes about 80-90% of the global freshwater used by humans, and about two thirds of this share is used for crop irrigation. An increasing world population and a predicted rise of 1.0-2.5 °C in the annual mean global temperature as a result of climate change will further increase the demand for water in agriculture. Therefore, one of the most challenging tasks of our generation is to reduce the amount of water used per unit yield to satisfy the second UN Sustainable Development Goal and to ensure global food security. Precision agriculture offers new farming methods with the goal of improving the efficiency of crop production through a sustainable use of resources. Plant responses to water stress are complex and co-occur with other environmental stresses under natural conditions. In general, water stress causes plant physiological and biochemical changes that depend on the severity and the duration of the actual plant water deficit. Stomatal closure is one of the first responses to plant water stress, causing a decrease in plant transpiration and thus an increase in plant temperature. Prolonged or severe water stress leads to irreversible damage to the photosynthetic machinery and is associated with decreasing chlorophyll content and leaf structural changes (e.g., leaf rolling). Since a crop can already be irreversibly damaged by only mild water deficit, a pre-visual detection of water stress symptoms is essential to avoid yield loss. Remote sensing offers a non-destructive and spatio-temporal method for measuring numerous physiological, biochemical and structural crop characteristics at different scales and thus is one of the key technologies used in precision agriculture. With respect to the detection of plant responses to water stress, the current state-of-the-art hyperspectral remote sensing imaging techniques are based on measurements of thermal infrared emission (TIR; 8-14 µm), visible, near- and shortwave infrared reflectance (VNIR/SWIR; 0.4-2.5 µm), and sun-induced fluorescence (SIF; 0.69 and 0.76 µm). It is, however, still unclear how sensitive these techniques are with respect to water stress detection. Therefore, the overall aim of this dissertation was to provide a comparative assessment of remotely sensed measures from the TIR, SIF, and VNIR/SWIR domains for their ability to detect plant responses to water stress at ground and airborne level. The main findings of this thesis are: (i) temperature-based indices (e.g., CWSI) were most sensitive for the detection of plant water stress in comparison to reflectance-based VNIR/SWIR indices (e.g., PRI) and SIF at both ground and airborne level; (ii) for the first time, spectral emissivity as measured by the new hyperspectral TIR instrument could be used to detect plant water stress at ground level. Based on these findings, it can be stated that hyperspectral TIR remote sensing offers great potential for the detection of plant responses to water stress at ground and airborne level based on both TIR key variables, surface temperature and spectral emissivity.
However, the large-scale application of water stress detection based on hyperspectral TIR measures in precision agriculture will be challenged by several problems: (i) missing thresholds of temperature-based indices (e.g., CWSI) for the application in irrigation scheduling; (ii) the lack of current TIR satellite missions with suitable spectral and spatial resolution; and (iii) the lack of appropriate data processing schemes (including atmospheric correction and temperature-emissivity separation) for hyperspectral TIR remote sensing at airborne and satellite level.
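Since the CWSI recurs as the most sensitive measure in these findings, a minimal sketch of its standard normalization may help; the wet and dry reference temperatures are assumed inputs that in practice come from baselines or reference surfaces.

```python
import numpy as np

def cwsi(t_canopy, t_wet, t_dry):
    # Crop Water Stress Index: ~0 for unstressed, ~1 for fully stressed
    return np.clip((t_canopy - t_wet) / (t_dry - t_wet), 0.0, 1.0)

# Example: a 28 degC canopy between 24 degC (wet) and 36 degC (dry) references
print(cwsi(28.0, 24.0, 36.0))  # -> 0.33
```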
Abstract: Thermal infrared (TIR) multi-/hyperspectral and sun-induced fluorescence (SIF) approaches together with classic solar-reflective (visible, near-, and shortwave infrared reflectance (VNIR/SWIR)) hyperspectral remote sensing form the latest state-of-the-art techniques for the detection of crop water stress. Each of these three domains requires dedicated sensor technology currently in place for ground and airborne applications; satellite concepts are either under development (e.g., HySPIRI/SBG (Surface Biology and Geology), Sentinel-8, HiTeSEM in the TIR) or subject to missions recently launched or scheduled within the next few years (i.e., EnMAP and PRISMA (PRecursore IperSpettrale della Missione Applicativa, launched in March 2019) in the VNIR/SWIR, Fluorescence Explorer (FLEX) in the SIF). Identification of plant water stress or drought is of utmost importance to guarantee global water and food supply. Therefore, knowledge of crop water status over large farmland areas bears large potential for optimizing agricultural water use. As plant responses to water stress are numerous and complex, their physiological consequences affect the electromagnetic signal in different spectral domains. This review paper summarizes the importance of water stress-related applications and the plant responses to water stress, followed by a concise review of water-stress detection through remote sensing, focusing on the TIR without neglecting the comparison to other spectral domains (i.e., VNIR/SWIR and SIF) and multi-sensor approaches. Current and planned sensors at ground, airborne, and satellite level for the TIR, as well as a selection of commonly used indices and approaches for water-stress detection using the main multi-/hyperspectral remote sensing imaging techniques, are reviewed. Several important challenges are discussed that occur when using spectral emissivity, temperature-based indices, and physically based approaches for water-stress detection in the TIR spectral domain. Furthermore, challenges with data processing and the perspectives for future satellite missions in the TIR are critically examined. In conclusion, information from multi-/hyperspectral TIR together with that from VNIR/SWIR and SIF sensors within a multi-sensor approach can provide profound insights into the actual plant (water) status and the rationale of physiological and biochemical changes. Synergistic sensor use will open new avenues for scientists to study plant functioning and the response to environmental stress in a wide range of ecosystems.
Job crafting is the behavior that employees engage in to create personally better fitting work environments, for example, by increasing challenging job demands. To better understand the driving forces behind employees' engagement in job crafting, we investigated implicit and explicit power motives. While implicit motives tend to operate at the unconscious level, explicit motives operate at the conscious level. We focused on power motives, as power is an agentic motive characterized by the need to influence one's environment. Although power is relevant to job crafting in its entirety, in this study we link it to increasing challenging job demands due to its relevance to job control, which falls under the umbrella of power. Using a cross-sectional design, we collected survey data from a sample of Lebanese nurses (N = 360) working in 18 different hospitals across the country. In both implicit and explicit power motive measures, we focused on integrative power, which enables people to stay calm and integrate opposition. The results showed that explicit power predicted job crafting (H1) and that implicit power amplified this effect (H2). Furthermore, job crafting mediated the relationship between congruently high power motives and positive work-related outcomes (H3) that were interrelated (H4). Our findings unravel the driving forces behind one of the most important dimensions of job crafting and extend the benefits of motive congruence to work-related outcomes.
Agricultural monitoring is necessary. Since the beginning of the Holocene, human agricultural
practices have been shaping the face of the earth, and today around one third of the ice-free land
mass consists of cropland and pastures. While agriculture is necessary for our survival, its
intensity has caused many negative externalities, such as enormous freshwater consumption, the
loss of forests and biodiversity, greenhouse gas emissions as well as soil erosion and degradation.
Some of these externalities can potentially be ameliorated by careful allocation of crops and
cropping practices, while at the same time the state of these crops has to be monitored in order
to assess food security. Modern day satellite-based earth observation can be an adequate tool to
quantify abundance of crop types, i.e., produce spatially explicit crop type maps. The resources to
do so, in terms of input data, reference data and classification algorithms have been constantly
improving over the past 60 years, and we live now in a time where fully operational satellites
produce freely available imagery with often less than monthly revisit times at high spatial
resolution. At the same time, classification models have constantly evolved from
distribution-based statistical algorithms via machine learning to the now-ubiquitous deep
learning.
In this environment, we used an explorative approach to advance the state of the art of crop
classification. We conducted regional case studies, focused on the study region of the Eifelkreis
Bitburg-Prüm, aiming to develop validated crop classification toolchains. Because of their unique
role in the regional agricultural system and because of their specific phenologic characteristics,
we focused solely on maize fields.
In the first case study, we generated reference data for the years 2009 and 2016 in the study
region by drawing polygons based on high resolution aerial imagery, and used these in
conjunction with RapidEye imagery to produce high resolution maize maps with a random forest
classifier and a Gaussian blur filter. We were able to highlight the importance of careful residual
analysis, especially in terms of autocorrelation. As an end result, we were able to prove that, in
spite of the severe limitations introduced by the restricted acquisition windows due to cloud
coverage, high quality maps could be produced for two years, and the regional development of
maize cultivation could be quantified.
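A compressed sketch of this kind of pipeline, pixelwise random-forest probabilities smoothed with a Gaussian blur before thresholding, is given below; the feature stack, the labels, and all parameter choices are placeholders rather than the study's exact configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

# Placeholder inputs: a (bands, height, width) image stack and a label raster
# where 1 = maize (digitized polygons), 0 = other, -1 = unlabeled.
bands, h, w = 5, 256, 256
stack = np.random.rand(bands, h, w)
labels = np.full((h, w), -1)
labels[:20], labels[-20:] = 1, 0

X = stack.reshape(bands, -1).T
y = labels.ravel()
train = y >= 0

rf = RandomForestClassifier(n_estimators=200, oob_score=True, n_jobs=-1)
rf.fit(X[train], y[train])

# Per-pixel maize probability, blurred to suppress salt-and-pepper noise
prob = rf.predict_proba(X)[:, 1].reshape(h, w)
maize_map = gaussian_filter(prob, sigma=1.5) > 0.5
print("OOB score:", rf.oob_score_, "| predicted maize share:", maize_map.mean())
```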
In the second case study, we used these spatially explicit datasets to link the expansion of biogas
producing units with the extended maize cultivation in the area. In a next step, we overlaid the
maize maps with soil and slope rasters in order to assess spatially explicit risks of soil compaction
and erosion. Thus, we were able to highlight the potential role of remote sensing-based crop type
classification in environmental protection, by producing maps of potential soil hazards, which can
be used by local stakeholders to reallocate certain crop types to locations with less associated
risk.
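The overlay step itself reduces to simple raster algebra once the maps share a grid; a sketch with invented thresholds (in practice the rasters would be read and co-registered with GIS tooling):

```python
import numpy as np

# Placeholder rasters on a common grid
maize = np.random.rand(256, 256) > 0.7        # binary maize map
slope_deg = np.random.rand(256, 256) * 20     # slope in degrees
erodible = np.random.rand(256, 256) > 0.5     # erodibility flag from a soil map

# Illustrative rule: flag maize on erodible soil steeper than 9 degrees
erosion_risk = maize & erodible & (slope_deg > 9)
print("share of pixels at elevated risk:", erosion_risk.mean())
```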
In our third case study, we used Sentinel-1 data as input imagery, and official statistical records
as maize reference data, and were able to produce consistent modeling input data for four
consecutive years. Using these datasets, we could train and validate different models in spatially
and temporally independent random subsets, with the goal of assessing model transferability. We
were able to show that state-of-the-art deep learning models such as UNET significantly
outperformed conventional models like random forests if the model was validated in a
different year or a different regional subset. We highlighted and discussed the implications for
modeling robustness, and the potential usefulness of deep learning models in building fully
operational global crop classification models.
We were able to conclude that the first major barrier for global classification models is the
reference data. Since most research in this area is still conducted with local field surveys, and only
few countries have access to official agricultural records, more global cooperation is necessary to
build harmonized and regionally stratified datasets. The second major barrier is the classification
algorithm. While a lot of progress has been made in this area, the current stream of newly
emerging deep learning models shows great promise but has not yet consolidated. Considerable
research is still necessary to determine which models perform best and most robustly while
remaining transparent and usable by non-experts, so that they can be applied effortlessly
by local and global stakeholders.
In order to discuss potential sustainability issues of expanding silage maize cultivation in Rhineland-Palatinate, spatially explicit monitoring is necessary. Publicly available statistical records are often not a sufficient basis for extensive research, especially on soil health, where risk factors like erosion and compaction depend on variables that are specific to every site and hard to generalize for larger administrative aggregates. The focus of this study is to apply established classification algorithms to estimate maize abundance for each independent pixel, while at the same time accounting for the pixels' spatial relationships. Therefore, two ways of incorporating the spatial autocorrelation of neighboring pixels are combined with three different classification models. The performance of each of these modeling approaches is analyzed and discussed. Finally, one prediction approach is applied to the imagery, and the overall predicted acreage is compared to publicly available data. We were able to show that Support Vector Machine (SVM) classification and Random Forests (RF) distinguished maize pixels reliably, with kappa values well above 0.9 in most cases. The Generalized Linear Model (GLM) performed substantially worse. Furthermore, Regression Kriging (RK) as an approach to integrate spatial autocorrelation into the prediction model is not suitable in use cases with millions of sparsely clustered training pixels. Gaussian blur is able to improve predictions slightly in these cases, but it is possible that this is only because it smooths out impurities of the reference data. The overall prediction with RF classification combined with Gaussian blur performed well, with out-of-bag error rates of 0.5% in 2009 and 1.3% in 2016. Despite the low error rates, there is a discrepancy between the predicted acreage and the official records, of 20% in 2009 and 27% in 2016.
With the ongoing trend towards deep learning in the remote sensing community, classical pixel-based algorithms are often outperformed by convolution-based image segmentation algorithms. This performance has mostly been validated spatially, by splitting training and validation pixels within a given year. Though generalizing models temporally is potentially more difficult, it has been a recent trend to transfer models from one year to another and therefore to validate temporally. This study argues that it is always important to check both in order to generate models that are useful beyond the scope of the training data. It shows that convolutional neural networks have the potential to generalize better than pixel-based models, since they do not rely on phenological development alone but can also consider object geometry and texture. The UNET classifier was able to achieve the highest F1 scores, averaging 0.61 in temporal validation samples and 0.77 in spatial validation samples. The theoretical risk of overfitting on geometry, i.e., simply memorizing the shapes of maize fields, was shown to be insignificant in practical applications. In conclusion, kernel-based convolutions can make a large contribution to making agricultural classification models more transferable, both to other regions and to other years.
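The difference between the two validation schemes boils down to how the train/test split is grouped; a sketch with synthetic placeholder data (column semantics assumed):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 8))                      # per-pixel features
y = rng.integers(0, 2, size=n)                   # 1 = maize, 0 = other
year = rng.choice([2017, 2018, 2019, 2020], n)
region = rng.choice(["north", "south"], n)

def split_f1(train, test):
    clf = RandomForestClassifier(n_estimators=100).fit(X[train], y[train])
    return f1_score(y[test], clf.predict(X[test]))

# Temporal validation: train on three years, test on the held-out year
print("temporal F1:", split_f1(year != 2020, year == 2020))
# Spatial validation: train on one region, test on the other
print("spatial F1:", split_f1(region == "north", region == "south"))
```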
Floods are hydrological extremes that have enormous environmental, social and economic consequences. The objective of this thesis was to contribute to the implementation of a processing chain that integrates remote sensing information into hydraulic models. Specifically, the aim was to improve water elevation and discharge simulations by assimilating microwave remote sensing-derived flood information into hydraulic models. The first component of the proposed processing chain is a fully automated flood mapping algorithm that enables the automated, objective, and reliable extraction of flood extent from Synthetic Aperture Radar images, providing accurate results in both rural and urban regions. The method operates with minimum data requirements and is efficient in terms of computational time. The map obtained with the developed algorithm is still subject to uncertainties, both introduced by the flood mapping algorithm and inherent in the image itself. In this work, particular attention was given to image uncertainty deriving from speckle. By bootstrapping the original satellite image pixels, several synthetic images were generated and provided as input to the developed flood mapping algorithm. The analysis performed on the mapping products shows that speckle uncertainty can be considered a negligible component of the total uncertainty. In the final step of the proposed processing chain, real-event water elevations obtained from satellite observations were assimilated into a hydraulic model with an adapted version of the Particle Filter, modified to work with non-Gaussian distributions of observations. To deal with model structure error and possibly biased observations, a global and a local weight variant of the Particle Filter were tested. Which variant is to be preferred depends on the level of confidence attributed to the observations or to the model. This study also highlighted the complementarity of remote sensing-derived and in-situ data sets. An accurate binary flood map is an invaluable product for different end users. However, deriving additional hydraulic information from this binary map, such as water elevations, enhances the value of the product itself. The derived data can be assimilated into hydraulic models that will fill the gaps where, for technical reasons, Earth Observation data cannot provide information, also enabling a more accurate and reliable prediction of flooded areas.
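A bootstrap particle filter of the kind adapted here can be sketched in a few lines; the Gaussian likelihood below is a deliberate simplification of the non-Gaussian observation distributions handled in the thesis, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles = 1000

# Ensemble of simulated water elevations (m) from a hydraulic model
particles = rng.normal(loc=5.0, scale=0.8, size=n_particles)

# Remote sensing-derived water elevation and its assumed uncertainty
obs, obs_std = 5.6, 0.3

# Weight each particle by the likelihood of the observation
w = np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
w /= w.sum()

# Resample particles in proportion to their weights
posterior = particles[rng.choice(n_particles, size=n_particles, p=w)]
print("prior mean %.2f m -> posterior mean %.2f m"
      % (particles.mean(), posterior.mean()))
```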
Entrepreneurship is recognized as an important discipline to achieve sustainable development and to address sustainability goals without losing sight of economic aspects. However, entrepreneurship rates are rather low in many industrialized countries with high income levels. Research clearly shows that there is a gap in the entrepreneurial process between intentions and subsequent actions. This means that not everyone with entrepreneurial ambitions also follows through and implements actions. This gap also exists for aspects of sustainability. As a result, there is a need to better understand the traditional and sustainability-focused entrepreneurial process in order to increase corresponding actions. This dissertation offers such a comprehensive perspective and sheds light on individual and contextual predictors for traditional and sustainability-focused behavior of entrepreneurs and self-employed across four studies.
The first three studies focus on individual predictors. By providing a systematic literature review with 107 articles, Chapter 2 highlights the ambivalent role of religion for the entrepreneurial process. Relying on the theory of planned behavior (TPB) as theoretical basis, religion can have positive effects on entrepreneurial attitudes and behavioral control, but also negative consequences for other aspects of behavioral control and subjective norms due to religious restrictions.
The quantitative empirical study in Chapter 3 similarly relies on the TPB and sheds light on individual perceptual factors influencing the sustainability-related intention-action gap in entrepreneurship. Using data from the 2021 Global Entrepreneurship Monitor (GEM) Adult Population Survey (APS) including 22,008 early-stage entrepreneurs from 44 countries worldwide, the results support our theoretical reasoning that sustainability-focused intentions are positively related to social entrepreneurial actions. In addition, it is demonstrated that positive perceptual moderators such as self-efficacy and knowing other entrepreneurs as role models strengthen this relationship while a negative perception such as fear of failure restricts social actions in early-stage entrepreneurship.
The next quantitative empirical study in Chapter 4 examines the behavioral consequences of well-being in a sample of 6,955 German self-employed individuals during COVID-19. This chapter builds on two complementary behavioral perspectives to predict how reductions in financial and non-financial well-being relate to investments in venture development. In this regard, reductions in financial well-being are positively related to time investments, supporting the performance feedback perspective in terms of higher search efforts under negative performance. In contrast, reductions in non-financial well-being are negatively related to time and monetary investments, yielding support for the broaden-and-build perspective, which indicates that negative psychological experiences narrow the thought-action repertoire and hinder resource deployment. The insights across these first three studies about individual predictors indicate that many different subjective beliefs, perceptions and emotional states can influence the entrepreneurial process, making entrepreneurship and self-employment highly individualized disciplines.
The last quantitative empirical study provides an explorative view on a large number of contextual predictors for social and ecological considerations in entrepreneurial actions. Combining GEM data from 2021 on country level with further information from the World Bank and the OECD, a machine learning approach is employed on a sample of 84 countries worldwide. The results suggest that governmental and regulatory as well as cultural factors are relevant to predict social and ecological considerations. Moreover, market-related aspects are shown to be relevant predictors, especially socio-economic factors for social considerations and economic factors for ecological considerations. Overall, the four studies in this dissertation highlight the complexity of the entrepreneurial process being determined by many different individual and contextual factors. Due to the multitude of potential predictors, this dissertation can only give an initial overview of a selection of factors with many more aspects and interdependencies still to be examined by future research.
Even though in most cases time is a good metric to measure the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider. When most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric for the cost of algorithms called memory distance, which can be seen as an abstraction of the aspect just mentioned. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Further, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
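The effect the metric abstracts can be demonstrated directly: the two sums below perform the same number of additions, but the strided variant touches a new cache line for nearly every element and is therefore markedly slower on typical hardware.

```python
import time
import numpy as np

x = np.random.rand(2**24)      # ~128 MiB, far larger than typical CPU caches
n = 2**20                      # both sums add up exactly n elements

def timed(view):
    t0 = time.perf_counter()
    view.sum()
    return time.perf_counter() - t0

contiguous = x[:n]             # sequential, cache-friendly access
strided = x[::16]              # 16 * 8 bytes = 128 bytes between elements

print("contiguous: %.4f s   strided: %.4f s"
      % (timed(contiguous), timed(strided)))
```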
For grape canopy pixels captured by an unmanned aerial vehicle (UAV) with a tilt-mounted RedEdge-M multispectral sensor in a sloped vineyard, an in situ Walthall model can be established with purely image-based methods. It was derived from RedEdge-M directional reflectance and a vineyard 3D surface model generated from the same imagery. The model was used to correct the angular effects in the reflectance images to form normalized difference vegetation index (NDVI) orthomosaics of different view angles. The results showed that the effect could be corrected to a certain extent, but not completely. Three drawbacks might restrict a successful angular model construction and correction: (1) the observable micro-shadow variation on the canopy enabled by the high resolution; (2) the complexity of vine canopies, which causes an inconsistency between reflectance and canopy geometry, including effects such as micro shadows and near-infrared (NIR) additive effects; and (3) the limited resolution of a 3D model for representing the accurate real-world optical geometry. The conclusion is that grape canopies might be too inhomogeneous for the tested method to perform the angular correction with high quality.
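For reference, the NDVI used for the orthomosaics and a least-squares fit of one commonly cited empirical form of the Walthall model are sketched below; the sample observations are invented placeholders.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    return (nir - red) / (nir + red + eps)

# One widely used empirical Walthall form for a fixed sun position:
# r(theta_v, phi) = a*theta_v**2 + b*theta_v*cos(phi) + c,
# with view zenith theta_v and relative azimuth phi (radians).
def fit_walthall(theta_v, phi, refl):
    A = np.column_stack([theta_v**2, theta_v * np.cos(phi),
                         np.ones_like(theta_v)])
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs  # (a, b, c)

theta_v = np.radians([5, 15, 25, 35, 15, 25])
phi = np.radians([0, 45, 90, 135, 180, 270])
refl = np.array([0.31, 0.29, 0.27, 0.25, 0.28, 0.26])
print(fit_walthall(theta_v, phi, refl))
```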
The concept of art is a lens through which one can explore the thought of Nicholas of Cusa. He uses this notion throughout his work in order to address the productive dynamism of the divine mind as well as the human mind. With a focus on the human arts as likenesses of the divine art, this paper studies the relationship between the art of the word and the illiterate manual arts. Firstly, we examine the ars coniecturalis as a human art form that Cusanus presents for the first time in extenso in "De coniecturis". Secondly, we address the power of the art of the word through the production of its most precious form, the spoken word. Thirdly, and finally, we inquire into the power of the manual arts through the example of the idiota's making of wooden spoons in the "De mente" in order to show the relationship between this art and the art of the word.
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. These matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main problems with completely positive matrices is checking whether a given matrix is completely positive, which is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since this factorization would provide a certificate that the matrix is completely positive. This factorization is not only important for membership in the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not known a priori how many columns are necessary to generate a cp-factorization for a given matrix. The minimal possible number of columns is called the cp-rank of A, and it is still an open question how to derive the cp-rank for a given matrix. Some facts on completely positive matrices and the cp-rank are given in Chapter 2. Moreover, in Chapter 6, we present a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm therefore returns a certificate that the matrix is completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization. This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. On the topic of alternating projections, a survey is given in Chapter 5. Here we show how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets. Furthermore, we present some known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which some known facts for subspaces and convex sets are shown. Moreover, we present a new convergence result on cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm is given. This result is based on the known convergence of alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we present an additional heuristic extension which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 show that the factorization method is very fast in most instances. In addition, we show how to derive a certificate that the matrix is an element of the interior of the completely positive cone. As a further application, this method can be extended to find a symmetric nonnegative matrix factorization, where we consider an additional low-rank constraint.
Here again, the method to derive factorizations for completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1. Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
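The core alternating idea can be sketched for a matrix that is known to be completely positive: start from a square factorization A = CC^T and alternate between the orthogonal-transformation set {CQ : Q orthogonal} (a Procrustes step) and the nonnegative orthant (entrywise clipping). This compact version omits the column extension beyond the cp-rank and the second-order cone projections discussed above, so it is a heuristic illustration rather than the thesis' algorithm.

```python
import numpy as np

def cp_factorize(A, iters=1000, tol=1e-10):
    """Heuristic alternating scheme seeking B >= 0 with B @ B.T ~= A."""
    # Initial, not necessarily nonnegative, factorization A = C C^T
    vals, vecs = np.linalg.eigh(A)
    C = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None)))
    B = np.clip(C, 0, None)
    for _ in range(iters):
        # Project B onto {C Q : Q orthogonal} via orthogonal Procrustes
        U, _, Vt = np.linalg.svd(C.T @ B)
        G = C @ U @ Vt               # satisfies G @ G.T == A exactly
        B = np.clip(G, 0, None)      # project back onto the nonnegative orthant
        if np.linalg.norm(G - B) < tol:
            break
    return B

# Example: A = B0 B0^T with entrywise nonnegative B0 is completely positive
B0 = np.random.rand(5, 5)
A = B0 @ B0.T
B = cp_factorize(A)
print("factorization residual:", np.linalg.norm(B @ B.T - A))
```

When the iteration stalls with a nonzero residual, the factorization would have to be extended by additional columns, which is precisely the role of the column extension described above.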
This thesis is divided into three main parts: the description of the calibration problem, the numerical solution of this problem, and the connection to optimal stochastic control problems. Fitting model prices to given market prices leads to an abstract least-squares formulation of the calibration problem. The corresponding option price can be computed by solving a stochastic differential equation via the Monte-Carlo method, which seems to be preferred by most practitioners. Because the Monte-Carlo method is computationally expensive and memory-intensive, more sophisticated stochastic predictor-corrector schemes are established in this thesis. The numerical advantage of these predictor-corrector schemes is presented and discussed. The adjoint method is applied to the calibration, and its theoretical advantage is discussed in detail. It is shown that the computational effort of gradient calculation via the adjoint method is independent of the number of calibration parameters. Numerical results confirm the theoretical results and summarize the computational advantage of the adjoint method. Furthermore, the connection to optimal stochastic control problems is established in this thesis.
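As a point of reference for the simulation step, a generic Euler–Maruyama Monte-Carlo sketch for a European call under geometric Brownian motion is given below; the model and all parameter values are illustrative, and the thesis' predictor-corrector schemes refine exactly this type of path discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
s0, strike, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0
n_steps, n_paths = 250, 100_000
dt = T / n_steps

# Euler-Maruyama discretization of dS = r*S dt + sigma*S dW
s = np.full(n_paths, s0)
for _ in range(n_steps):
    dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
    s += r * s * dt + sigma * s * dw

payoff = np.maximum(s - strike, 0.0)
print("MC price: %.4f" % (np.exp(-r * T) * payoff.mean()))
```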
Currently, new business models created in the sharing economy differ considerably, and they differ in the formation of trust as well. Whether and how trust can be created is shown by a comparison of two examples which diverge in their founding philosophy. The chosen example of a community-based economy, Community Supported Agriculture (CSA), no longer trusts the capitalist system and therefore distances itself from it and creates its own environment, including a new business model. It is implemented within rather small groups where trust is created by personal relations and face-to-face communication. By contrast, the example of a platform economy, the accommodation-provider company Airbnb, shows trust in the system and pushes technological innovations through the use of platform applications. It promotes trust and confidence in the progress of technology. For the conceptual analysis, the distinction between personal trust and system trust defined by Niklas Luhmann is adopted. The analysis describes two different modes of trust formation and how they push distrust or improve trust. Grounded in these analyses, assumptions on the process of trust formation within varying models of the sharing economy are formulated, and a hypothesis about possible developments is introduced for further research.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that either require extensive knowledge acquisition and modelling effort or an active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has been little researched so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in the case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. This mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, integrating procedural with declarative structures through a transformation function, increases the capability for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning fits perfectly through its premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problems currently regarded, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of, primarily, sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models to declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
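How such an engine can derive a worklist may be illustrated with the python-constraint package: tasks receive position variables, precedence becomes ordering constraints, and the worklist is the set of tasks that at least one consistent completion schedules next. The task names and constraints below are invented for illustration.

```python
from constraint import Problem, AllDifferentConstraint

tasks = ["record_defect", "assign_contractor", "fix_defect", "inspect"]
precedence = [("record_defect", "assign_contractor"),
              ("assign_contractor", "fix_defect"),
              ("record_defect", "inspect")]
executed = ["record_defect"]          # tasks already performed in this instance

problem = Problem()
for t in tasks:
    problem.addVariable(t, list(range(len(tasks))))   # position of each task
problem.addConstraint(AllDifferentConstraint(), tasks)
for a, b in precedence:
    problem.addConstraint(lambda pa, pb: pa < pb, (a, b))
for i, t in enumerate(executed):      # pin already executed tasks in place
    problem.addConstraint(lambda pos, i=i: pos == i, (t,))

nxt = len(executed)
worklist = {t for sol in problem.getSolutions()
            for t, pos in sol.items() if pos == nxt}
print(worklist)   # {'assign_contractor', 'inspect'}
```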
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential for investigating the transfer to other domains and the applicability in practice, which is part of future work.
Concluding, the contributions are summarized and research perspectives are pointed out.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers enables market equilibria to be computed efficiently.
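The structural idea behind such a method can be conveyed with a toy consensus ADMM in which each agent (e.g. a producer or storage operator) minimizes a private quadratic cost while the algorithm drives all local copies to a common value; this sketch mirrors only the decomposition structure, not the tailored method summarized above.

```python
import numpy as np

# Each agent i has a private cost f_i(x) = a_i/2 * (x - b_i)^2
a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, 1.0, -2.0])
rho = 1.0

x, u, z = np.zeros(3), np.zeros(3), 0.0
for _ in range(200):
    x = (a * b + rho * (z - u)) / (a + rho)   # parallel local minimizations
    z = np.mean(x + u)                        # coordination (consensus) step
    u += x - z                                # dual / price update

print("consensus value: %.4f (analytic optimum: %.4f)"
      % (z, (a * b).sum() / a.sum()))
```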
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
This article aims to reconstruct the reception of pre-Socratic philosophy, especially that of Parmenides, in Russian modernism and avant-garde literature. In doing so, it places this reception into two contexts: the contemporary discussion of pre-Socratic ideas in Russian, European and American philosophy, on the one hand, and the proclamation of a third, a Russian and/or Slavic Renaissance, on the other. This Renaissance has been conceived as the intense discussion and reconsideration of ideas, notions, and expressions of ancient Greek thinking. It also aimed to avoid the reduction of Greek philosophy to Plato, as had been practiced by the Russian Orthodox Church and largely pushed through in Russian culture. One of the main points of this reconsideration concerned the question of the relation between the word, the process of thinking, and human life, while another, connected with it, involved the (re-)establishment of a close bond between the poetic word, its meaning, and its sense. The integration of this productive discussion with pre-Socratic Greek philosophy enriches and improves our knowledge of Russian modernism and avant-garde literature.
Reconstructing invisible deviating events: A conformance checking approach for recurring events
(2022)
Conformance checking enables organizations to determine whether their executed processes are compliant with the intended process. However, if the processes contain recurring activities, state-of-the-art approaches have difficulty calculating conformance. The occurrence of complex temporal rules can further increase the complexity of the problem. Identifying this limitation, this paper presents a novel approach to dealing with recurring activities in conformance checking. The core idea of the approach is to reconstruct the missing events in the event log using defined rules while incorporating specified temporal event characteristics. This then enables the use of native conformance checking algorithms. The paper illustrates the algorithmic approach and defines the required temporal event characteristics. Furthermore, the approach is applied and evaluated in a case study on an event log for melanoma surveillance.
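The paper defines its own rules and temporal event characteristics; as a rough illustration of the core idea only, the following sketch (the function names, the six-month rule, and the log format are all hypothetical) reconstructs missing occurrences of a recurring activity so that a standard conformance checker could then process the completed log:

```python
from datetime import date, timedelta

# Hypothetical rule: activity "follow-up" must recur roughly every 180 days
# between the first and last observed occurrence of a case.
INTERVAL = timedelta(days=180)
TOLERANCE = timedelta(days=30)

def reconstruct_missing(events, activity="follow-up"):
    """Insert placeholder events where an expected recurrence is absent."""
    observed = sorted(ts for act, ts in events if act == activity)
    if not observed:
        return events
    completed = list(events)
    expected = observed[0]
    while expected <= observed[-1]:
        # Is some observed occurrence within tolerance of the expected date?
        if not any(abs(ts - expected) <= TOLERANCE for ts in observed):
            completed.append((activity + " [reconstructed]", expected))
        expected += INTERVAL
    return sorted(completed, key=lambda e: e[1])

log = [("follow-up", date(2020, 1, 10)),
       ("follow-up", date(2020, 7, 5)),
       # the occurrence expected around January 2021 is missing
       ("follow-up", date(2021, 6, 28))]
for act, ts in reconstruct_missing(log):
    print(ts, act)
```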
Background: The growing production and use of engineered silver nanoparticles (AgNP) in industry and private households make increasing concentrations of AgNP in the environment unavoidable. Although the harmful effects of AgNP on pivotal bacteria-driven soil functions are already known, information about their impact on the soil bacterial community structure is scarce. Hence, the aim of this study was to reveal the long-term effects of AgNP on major soil bacterial phyla in a loamy soil. The study was conducted as a laboratory incubation experiment over a period of 1 year using a loamy soil and AgNP concentrations ranging from 0.01 to 1 mg AgNP/kg soil. Effects were quantified using taxon-specific 16S rRNA qPCR.
Results: Short-term exposure to AgNP at the environmentally relevant concentration of 0.01 mg AgNP/kg caused significant positive effects on Acidobacteria (44.0%), Actinobacteria (21.1%) and Bacteroidetes (14.6%), whereas the beta-Proteobacteria population was reduced by 14.2% relative to the control (p ≤ 0.05). After 1 year of exposure, 0.01 mg AgNP/kg diminished Acidobacteria (p = 0.007), Bacteroidetes (p = 0.005) and beta-Proteobacteria (p < 0.001) by 14.5, 10.1 and 13.9%, respectively. Actino- and alpha-Proteobacteria were statistically unaffected by AgNP treatments after the 1-year exposure. Furthermore, a statistically significant regression and correlation analysis between silver toxicity and exposure time confirmed loamy soils as a sink for silver nanoparticles and their concomitant silver ions.
Conclusions: Even very low concentrations of AgNP may impair autotrophic ammonia oxidation (nitrification), organic carbon transformation and chitin degradation in soils by exerting harmful effects on the responsible bacterial phyla.
Addition of Phosphogypsum to Fire-Resistant Plaster Panels: A Physic–Mechanical Investigation
(2023)
Gypsum (GPS) has great potential for structural fire protection and is increasingly used in construction due to its high water retention and purity. However, many researchers aim to improve its physical and mechanical properties by adding other organic or inorganic materials such as fibers, recycled GPS, and waste residues. This study used a novel method to add non-natural GPS from factory waste, phosphogypsum (PG), as a secondary material for GPS. This paper proposes to mix these two materials in order to properly study the effect of PG on the physico-mechanical properties and fire performance of two Tunisian GPSs (GPS1 and GPS2). PG initially replaced GPS at 10, 20, 30, 40, and 50% weight percentage (mixing plan A). The PGs were then washed several times with distilled water. Two more mixing plans were run, one when the pH of the PG equalled 2.4 (mixing plan B) and one when it equalled 5 (mixing plan C). Finally, a comparative study was conducted on the compressive strength, flexural strength, density, water retention, and mass loss levels after 90 days of drying, before/after incineration of samples at 15, 30, 45, and 60 min. The results show that the mixture of GPS1 and 30% PG (mixing plan B) achieved the highest compressive strength (41.31%) and flexural strength (35.03%) compared to the reference sample. The addition of 10% PG to GPS1 (mixing plan A) improved the fire resistance (33.33%) and the mass loss (17.10%) of samples exposed to flame for 60 min compared to GPS2. Therefore, PG can be considered an excellent insulating material, which can increase the physico-mechanical properties and fire resistance time of plaster under certain conditions.
Properties Evaluation of Composite Materials Based on Gypsum Plaster and Posidonia Oceanica Fibers
(2023)
Estimating the amount of material without significant losses at the end of hybrid casting is a problem addressed in this study. To minimize manufacturing costs and improve the accuracy of results, a correction factor (CF) was used in the formula to estimate the volume percent of the material, in order to reduce material losses during the sample manufacturing stage and to allow greater confidence in the agreement between the approved mixing plan and the results obtained. In this context, three material mixing schemes of different sizes and shapes (gypsum plaster, sand (0/2), gravel (2/4), and Posidonia oceanica fibers (PO)) were created to verify the efficiency of the CF and to study the physico-mechanical effects on the samples more precisely. The results show that the use of a CF can reduce mixing loss to almost 0%. The optimal compressive strength of the sample (S1B) with the lowest mixing loss was 7.50 MPa. Under optimal conditions, the addition of PO improves mix volume percent correction (negligible), flexural strength (5.45%), density (18%), and porosity (3.70%) compared with S1B. On the other hand, the addition of PO thermo-chemically treated with NaOH increases the compressive strength (3.97%) compared with untreated PO, due to the removal of impurities from the fiber surface, as shown by scanning electron microscopy. We then determined the optimal mixture ratio (PO divided by a mixture of plaster, sand, and gravel), which equals 0.0321, because Tunisian gypsum contains small amounts of bassanite and calcite, as shown by the X-ray diffraction results.
This work is concerned with two kinds of objects: regular expressions and finite automata. These formalisms describe regular languages, i.e., sets of strings that share a comparatively simple structure. Such languages - and, in turn, expressions and automata - are used in the description of textual patterns, workflow and dependence modeling, or formal verification. Testing words for membership in any such language can be implemented using a fixed - i.e., finite - amount of memory, which is what the term "finite automaton" conveys. In this respect they differ from more general classes, which require potentially unbounded memory but can model less regular, i.e., more involved, objects. Besides expressions and automata, there are several further formalisms for describing regular languages. These formalisms are all equivalent, and conversions among them are well known. However, expressions and automata are arguably the notions used most frequently: regular expressions come naturally to humans as a way to express patterns, while finite automata translate immediately into efficient data structures. This raises interest in methods to translate between the two notions efficiently. In particular, the direction from expressions to automata, i.e., from human input to machine representation, is of great practical relevance. Probably the most frequent application involving regular expressions and finite automata is pattern matching in static text and streaming data. Common tools to locate instances of a pattern in a text are the grep application or its (many) derivatives, as well as awk, sed and lex. Notice that these programs accept slightly more general patterns, namely "POSIX expressions". Concerning streaming data, regular expressions are nowadays used to specify filter rules in routing hardware. These applications have in common that an input pattern is specified in the form of a regular expression, while the execution applies a finite automaton. As it turns out, the effort necessary to describe a regular language, i.e., the size of the descriptor, varies with the chosen representation. For example, in the case of regular expressions and finite automata, it is rather easy to see that any regular expression can be converted to a finite automaton whose size is linear in that of the expression. For the converse direction, however, it is known that there are regular languages for which the size of the smallest describing expression is exponential in the size of the smallest describing automaton. This brings us to the subject at the core of the present work: we investigate conversions between expressions and automata and take a closer look at the properties that influence the relative sizes of these objects. We refer to the aspects involved in these considerations under the titular term of relative descriptional complexity.
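The linear-size conversion from expressions to automata mentioned above is classically achieved by Thompson's construction, which introduces a constant number of states and transitions per expression operator. The following minimal sketch illustrates this (the AST encoding and the function names are illustrative choices, not taken from the thesis):

```python
# Thompson's construction: build an epsilon-NFA from a regex AST.
# Each operator adds at most two states and four transitions, so the
# NFA is linear in the size of the expression.
# AST nodes: ("lit", c), ("cat", l, r), ("alt", l, r), ("star", e)

def thompson(node, nfa, new_state):
    """Return (start, accept) of a fragment; transitions go into `nfa`."""
    kind = node[0]
    if kind == "lit":
        s, t = new_state(), new_state()
        nfa.append((s, node[1], t))
        return s, t
    if kind == "cat":
        s1, t1 = thompson(node[1], nfa, new_state)
        s2, t2 = thompson(node[2], nfa, new_state)
        nfa.append((t1, "", s2))           # epsilon edge glues the fragments
        return s1, t2
    if kind == "alt":
        s, t = new_state(), new_state()
        s1, t1 = thompson(node[1], nfa, new_state)
        s2, t2 = thompson(node[2], nfa, new_state)
        nfa += [(s, "", s1), (s, "", s2), (t1, "", t), (t2, "", t)]
        return s, t
    if kind == "star":
        s, t = new_state(), new_state()
        s1, t1 = thompson(node[1], nfa, new_state)
        nfa += [(s, "", s1), (s, "", t), (t1, "", s1), (t1, "", t)]
        return s, t
    raise ValueError(kind)

counter = iter(range(10**6))
edges = []
# (a|b)*c -- a tiny example expression
start, accept = thompson(
    ("cat", ("star", ("alt", ("lit", "a"), ("lit", "b"))), ("lit", "c")),
    edges, lambda: next(counter))
print(len(edges), "transitions, start", start, "-> accept", accept)
```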
This dissertation addresses the measurement and evaluation of the energy and resource efficiency of software systems. Studies show that the environmental impact of Information and Communications Technologies (ICT) is steadily increasing and is already estimated to be responsible for 3% of total greenhouse gas (GHG) emissions. Although it is the hardware that consumes natural resources and energy through its production, use, and disposal, software controls the hardware and therefore has a considerable influence on the capacities used. Accordingly, it should also be attributed a share of the environmental impact. To address this software-induced impact, the focus is on the continued development of a measurement and assessment model for energy- and resource-efficient software. Furthermore, measurement and assessment methods from international research and practitioner communities were compared in order to develop a generic reference model for software resource and energy measurements. The next step was to derive a methodology and to define and operationalize criteria for evaluating and improving the environmental impact of software products. In addition, a key objective is to transfer the developed methodology and models to software systems that cause high consumption or offer optimization potential through economies of scale. These include, e.g., Cyber-Physical Systems (CPS) and mobile apps, as well as applications with high demands on computing power or data volumes, such as distributed systems and especially Artificial Intelligence (AI) systems.
In particular, factors influencing the consumption of software along its life cycle are considered. These factors include the location (cloud, edge, embedded) where the computing and storage services are provided, the role of the stakeholders, application scenarios, the configuration of the systems, the used data, its representation and transmission, and the design of the software architecture. Based on existing literature and previous experiments, distinct use cases were selected that address these factors. Comparative use cases include the implementation of a scenario in different programming languages, using varying algorithms, libraries, data structures, protocols, model topologies, hardware and software setups, etc. From this selection, experimental scenarios were devised to compare the methods under analysis. During their execution, the energy and resource consumption was measured and the results were assessed. Subtracting baseline measurements of the hardware setup without the software running from the scenario measurements makes the software-induced consumption measurable and thus transparent. Comparing the scenario measurements with each other allows the identification of the more energy-efficient setup for the use case and, in turn, the improvement/optimization of the system as a whole. The calculated metrics were then also structured as indicators in a criteria catalog. These indicators represent empirically determinable variables that provide information about a matter that cannot be measured directly, such as the environmental impact of the software. Together with verification criteria that must be complied with and confirmed by the producers of the software, this creates a model with which the comparability of software systems can be established.
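The baseline-subtraction idea described above can be stated in a few lines. The following sketch (the function name, the sampling setup, and the numbers are assumptions for illustration) computes software-induced energy as the integral of scenario power minus average idle baseline power:

```python
import numpy as np

def software_induced_energy(baseline_w, scenario_w, dt_s=1.0):
    """Energy attributable to the software under test, in joules.

    baseline_w: power samples (watts) of the idle hardware setup
    scenario_w: power samples (watts) while the software scenario runs
    dt_s:       sampling interval in seconds
    """
    idle_power = np.mean(baseline_w)                  # average idle draw
    extra_power = np.asarray(scenario_w) - idle_power
    return float(np.sum(extra_power) * dt_s)          # integrate over time

# Illustrative numbers: ~30 W idle, ~45 W while the scenario runs for 60 s.
baseline = np.full(300, 30.0)
scenario = 45.0 + np.random.randn(60) * 0.5
print(round(software_induced_energy(baseline, scenario), 1), "J")
```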
The knowledge gained from the experiments and assessments can then be used to forecast and optimize the energy and resource efficiency of software products. This enables developers, but also students, scientists and all other stakeholders involved in the life cycle of software, to continuously monitor and optimize the impact of their software on energy and resource consumption. The developed models, methods, and criteria were evaluated and validated by the scientific community at conferences and workshops. The central outcomes of this thesis, including a measurement reference model and the criteria catalog, were disseminated in academic journals. Furthermore, the transfer to society has been driven forward, e.g., through the publication of two book chapters, the development and presentation of exemplary best practices at developer conferences, collaboration with industry, and the establishment of the eco-label "Blue Angel" for resource- and energy-efficient software products. In the long term, the objective is to effect a change in societal attitudes and ultimately to achieve significant resource savings through economies of scale by applying the methods in the development of software in general and AI systems in particular.
High-resolution projections of the future climate are required to assess climate change realistically at a regional scale. This is particularly important for climate change impact studies, since global projections are much too coarse to represent local conditions adequately. A major concern is the change of extreme values in a warming climate, due to their severe impact on the natural environment, socio-economic systems and human health. Regional climate models (RCMs) are able to reproduce many of those local features. Current horizontal resolutions are about 18-25 km, which is still too coarse to directly resolve small-scale processes such as deep convection. For this reason, projections of a possible future climate were simulated in this study with the regional climate model COSMO-CLM at horizontal resolutions of 4.5 km and 1.3 km for the region of Saarland-Lorraine-Luxemburg and Rhineland-Palatinate for the first time. At a horizontal scale of about 1 km, deep convection is treated explicitly, which is expected to improve particularly the simulation of convective summer precipitation, and a better resolved orography is expected to improve near-surface fields such as 2m temperature. These simulations were performed as 10-year time-slice experiments for the present climate (1991-2000), the near future (2041-2050) and the end of the century (2091-2100). The climate change signals of the annual and seasonal means and the change of extremes are analysed with respect to precipitation and 2m temperature, and a possible added value due to the increased resolution is investigated. To assess changes in extremes, extreme indices were applied and 10- and 20-year return levels were estimated by "peak-over-threshold" models. Since it is generally known that model output of RCMs should not be used directly for climate change impact studies, the precipitation and temperature fields were bias-corrected with several quantile-matching methods. Among them is a newly developed parametric method which includes an extension for extreme values and is hence expected to improve the correction. In addition, the impact of the bias correction on the climate change signals and on the extreme value statistics was investigated. The results reveal a significant warming of the annual mean by about +1.7 °C until 2041-2050 and +3.7 °C until 2091-2100, but considerably stronger signals of up to +5 °C in summer in the Rhine Valley. Furthermore, the daily variability increases by about +0.8 °C in summer but decreases by about -0.8 °C in winter. Consequently, hot extremes increase moderately until the middle of the century but strongly thereafter, in particular in the Rhine Valley. Cold extremes warm continuously in the complete domain over the next 100 years, but most strongly in mountainous areas. The change signals with regard to annual precipitation are of the order of ±10% but not significant. Significant, however, are a predicted increase of +32% of the seasonal precipitation in autumn until 2041-2050 and a decrease of -28% in summer until 2091-2100. No significant changes were found for days with intensities > 20 mm/day, but the results indicate that extremes with return periods ≤ 2 years increase, as do the frequency and duration of dry periods. The bias corrections amplified positive signals but dampened negative signals and considerably reduced the power of detection.
Moreover, absolute values and frequencies of extremes were altered by the correction, but change signals remained approximately constant. The new method outperformed other parametric methods, in particular with regard to extreme value correction and related extreme indices and return levels. Although the bias correction removed systematic errors, it should be treated as an additional layer of uncertainty in climate change studies. Finally, the increased resolution of 1.3 km predominantly improved the representation of temperature fields and extremes in terms of spatial heterogeneity. The benefits for summer precipitation were not as clear due to a severe dry bias in summer, but it could be shown that, in principle, the onset and intensity of convection improves. This work demonstrates that climate change will have severe impacts in this investigation area and that extremes in particular may change considerably. An increased resolution thereby provides an added value to the results. These findings encourage further investigations, for example for other variables such as near-surface wind, which will become more feasible with growing computing resources. These analyses should, however, be repeated with longer time series, different RCMs and anthropogenic scenarios to determine the robustness and uncertainty of these results more extensively.
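The parametric method developed in the thesis is its own contribution; purely as a generic illustration of quantile matching, the following sketch (the empirical approach, variable names, and synthetic data are assumptions) maps each model value onto the observed distribution via its quantile in the model's reference climate:

```python
import numpy as np

def quantile_map(model_future, model_ref, obs_ref):
    """Empirical quantile matching: map model values to observed quantiles.

    model_ref / obs_ref: model and observed series for a common
    reference period; model_future: the series to be corrected.
    """
    # Empirical CDF value of each future sample under the model climate.
    probs = np.searchsorted(np.sort(model_ref), model_future) / len(model_ref)
    probs = np.clip(probs, 0.0, 1.0)
    # Read off the observed value at the same quantile.
    return np.quantile(obs_ref, probs)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, 1000)     # "observed" reference series
model = obs * 0.7 + 0.5             # systematically biased model climate
future = model + 1.0                # model projection with a shift
corrected = quantile_map(future, model, obs)
print(round(np.mean(future), 2), "->", round(np.mean(corrected), 2))
```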
The main goal of this publication is the development and application of an empirical method which allows the transport of radionuclides in soils and sediments to be forecast. The calculations are based on data published in the literature. Ten case studies, comprising 30 time series, deal with the transport of Cs-134, Cs-137, Sr-85, Sr-90, and Ru-106. Transport in undisturbed soils and in experimental systems such as lysimeters and laboratory columns is dealt with. The soils involved cover a wide range, e.g. podzols, cambisols (FAO), and peaty soils. Different speciations are covered, namely ions, aerosols, and fuel particles. The time series analysis centres around the Weibull distribution. All theoretical models failed to forecast the transport of radionuclides. It can be shown that the parameters D and v, the dispersion coefficient and the advection velocity, appearing in solutions of the advection-dispersion equation (ADE), have no real physical meaning; they are just fitting parameters. The calculation of primary photon fluence rates caused by Cs-137 in the soil stresses the unreliability of forecasts based on theoretical models.
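For reference, the one-dimensional advection-dispersion equation whose parameters D and v are discussed above is commonly written in the following standard textbook form (not necessarily the publication's exact notation), with C the concentration, t time, and z depth:

```latex
\[
  \frac{\partial C}{\partial t}
    = D\,\frac{\partial^{2} C}{\partial z^{2}}
    - v\,\frac{\partial C}{\partial z}
\]
% D: dispersion coefficient, v: advection velocity
```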
This dissertation examines the relevance of regimes for stock markets. In three research articles, we cover the identification and predictability of regimes and their relationships to macroeconomic and financial variables in the United States.
The initial two chapters contribute to the debate on the predictability of stock markets. While various approaches can demonstrate in-sample predictability, their predictive power diminishes substantially in out-of-sample studies. Parameter instability and model uncertainty are the primary challenges. However, certain methods have demonstrated efficacy in addressing these issues. In Chapters 1 and 2, we present frameworks that combine these methods meaningfully. Chapter 3 focuses on the role of regimes in explaining macro-financial relationships and examines the state-dependent effects of macroeconomic expectations on cross-sectional stock returns. Although it is common to capture the variation in stock returns using factor models, their macroeconomic risk sources are unclear. According to macro-financial asset pricing, expectations about state variables may be viable candidates to explain these sources. We examine their usefulness in explaining factor premia and assess their suitability for pricing stock portfolios.
In summary, this dissertation improves our understanding of stock market regimes in three ways. First, we show that it is worthwhile to exploit the regime dependence of stock markets. Markov-switching models and their extensions are valuable tools for filtering stock market dynamics and for identifying and predicting regimes in real time. Moreover, accounting for regime-dependent relationships helps to examine the dynamic impact of macroeconomic shocks on stock returns. Second, we emphasize the usefulness of macro-financial variables for the stock market. Regime identification and forecasting benefit from their inclusion. This is particularly true in periods of high uncertainty, when information processing in financial markets is less efficient. Finally, we recommend addressing parameter instability, estimation risk, and model uncertainty in empirical models. Because it is difficult to find a single approach that meets all of these challenges simultaneously, it is advisable to combine appropriate methods in a meaningful way. The framework should be as complex as necessary but as parsimonious as possible to mitigate additional estimation risk. This is especially recommended when working with financial market data, which typically have a low signal-to-noise ratio.
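As a generic illustration of the regime-filtering tools mentioned above (not the dissertation's specific frameworks), a two-regime Markov-switching model with switching variance can be fitted to returns with statsmodels; the synthetic data and parameter choices here are assumptions:

```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

# Synthetic daily returns: a calm regime and a volatile regime.
rng = np.random.default_rng(1)
calm = rng.normal(0.0005, 0.005, 500)
turbulent = rng.normal(-0.001, 0.02, 250)
returns = np.concatenate([calm, turbulent, calm])

# Two regimes with switching mean (constant term) and switching variance.
model = MarkovRegression(returns, k_regimes=2, trend="c",
                         switching_variance=True)
result = model.fit()

# Smoothed probability of the second regime at each point in time
# (shape: nobs x k_regimes for array input).
prob_regime1 = result.smoothed_marginal_probabilities[:, 1]
print(result.summary())
```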
The fragmentation of landscapes has an important impact on the conservation of biodiversity. Genetic diversity is an important factor for a population's viability and is influenced by the landscape structure. However, different species with differing ecological demands react rather differently to the same landscape pattern. To address this feature, we studied ten xerothermophilous butterfly species with differing habitat requirements (habitat specialists with low dispersal power in contrast to habitat generalists with low dispersal power and habitat generalists with higher dispersal power). We analysed allozyme loci for about 10 populations (approx. 40 individuals each) of each species in a western German study region with adjoining areas in Luxemburg and north-eastern France. The genetic diversity and genetic differentiation between local populations are discussed under conservation genetic aspects. For generalists we detected a more or less panmictic structure, and for species with lower abundance and sedentary behaviour the effect of isolation by distance. On the other hand, the isolation of specialists was mostly reflected by strong genetic differentiation patterns between the investigated populations. Parameters of genetic diversity were mostly significantly higher in generalists compared to specialists. Substructures within populations, resulting from low intra-patch migration, low population densities and high population fluctuations, could be shown as well. Aspects of landscape history (the historical distribution of habitats resulting from the presence of limestone areas), the changes in extensive sheep pasturing and the loss of potential habitats in the last few decades (recent fragmentation) are discussed against the genetic data set obtained for the ten butterfly species.
Due to the breathtaking growth of the World Wide Web (WWW), the need for fast and efficient web applications becomes more and more urgent. In this doctoral thesis, the emphasis will be on two concrete tasks for improving Internet applications. On the one hand, a major problem of many of today's Internet applications may be described as the performance of the client/server communication: servers often take a long time to respond to a client's request. There are several strategies to overcome this problem of high user-perceived latencies; one of them is to predict future user requests. This way, time-consuming calculations on the server's side can be performed even before the corresponding request is made. Furthermore, in certain situations, the pre-fetching or pre-sending of data might also be appropriate. Those ideas will be discussed in detail in the second part of this work. On the other hand, a focus will be placed on the problem of proposing hyperlinks to improve the quality of rapidly written texts, at first glance an entirely different problem from predicting client requests. Ultra-modern online authoring systems that provide possibilities to check link consistency and administer link management should also propose links in order to improve the usefulness of the produced HTML documents. In the third part of this elaboration, we will describe a possibility to build a hyperlink-proposal module based on statistical information retrieval from hypertexts. These two problem categories do not seem to have much in common. It is one aim of this work to show that there are certain similar solution strategies for tackling both problems. A closer comparison and an abstraction of both methodologies will lead to interesting synergetic effects. For example, advanced strategies to foresee future user requests by modeling time and document aging can also be used to improve the quality of hyperlink proposals.
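As a generic illustration of request prediction (the thesis develops more advanced strategies involving time and document aging; the first-order model and all names below are assumptions), a server can learn transition counts between requested documents and pre-compute or pre-fetch the most likely next request:

```python
from collections import defaultdict, Counter

class RequestPredictor:
    """First-order Markov model over per-session request sequences."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, session):
        # Count how often each document follows another within a session.
        for current, nxt in zip(session, session[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Most frequently observed successor, or None if unseen.
        followers = self.transitions[current]
        return followers.most_common(1)[0][0] if followers else None

predictor = RequestPredictor()
predictor.observe(["/index", "/news", "/article?id=1"])
predictor.observe(["/index", "/news", "/archive"])
predictor.observe(["/index", "/about"])
# After "/index", "/news" was requested twice -> worth preparing in advance.
print(predictor.predict("/index"))
```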
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction to ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding to a univalent attitude object incongruently leads to ambivalence measured via mouse tracking but not ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence. Additionally, the effect of the block size of same-valent statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
Using validated stimulus material is crucial for ensuring research comparability and replicability. However, many databases rely solely on bidimensional valence ratings, ranging from negative to positive. While this material might be appropriate for certain studies, it does not reflect the complexity of attitudes and therefore might hamper the unambiguous interpretation of some study results. In fact, most databases cannot differentiate between neutral (i.e., neither positive nor negative) and ambivalent (i.e., simultaneously positive and negative) attitudes. Consequently, even presumably univalent (only positive or negative) stimuli cannot be clearly distinguished from ambivalent ones when selected via bipolar rating scales. In the present research, we introduce the Trier Univalence Neutrality Ambivalence (TUNA) database, a database containing 304,262 validation ratings from heterogeneous samples of 3,232 participants and at least 20 (M = 27.3, SD = 4.84) ratings per self-report scale per picture for a variety of attitude objects on split semantic differential scales. As these scales measure positive and negative evaluations independently, the TUNA database allows one to distinguish univalence, neutrality, and ambivalence (i.e., potential ambivalence). TUNA also goes beyond previous databases by validating the stimulus materials on affective outcomes such as experiences of conflict (i.e., felt ambivalence), arousal, anger, disgust, and empathy. The TUNA database consists of 796 pictures and is compatible with other popular databases. It sets a focus on food pictures in various forms (e.g., raw vs. cooked, non-processed vs. highly processed), but includes pictures of other objects that are typically used in research to study univalent (e.g., flowers) and ambivalent (e.g., money, cars) attitudes for comparison. Furthermore, to facilitate stimulus selection, the TUNA database has an accompanying desktop app that allows easy stimulus selection via a multitude of filter options.
Many people are aware of the negative consequences of plastic use on the environment. Nevertheless, they use plastic due to its functionality. In the present paper, we hypothesized that this leads to the experience of ambivalence, i.e., the simultaneous existence of positive and negative evaluations of plastic. In two studies, we found that participants showed greater ambivalence toward plastic-packed food than unpacked food. Moreover, they rated plastic-packed food less favorably than unpacked food in response evaluations. In Study 2, we tested whether one-sided (only positive vs. only negative) information interventions could effectively influence ambivalence. Results showed that ambivalence is resistant to (social) influence. Directions for future research are discussed.
Evaluative conditioning (EC) refers to changes in liking that are due to the pairing of stimuli, and is one of the effects studied in order to understand the processes of attitude formation. Initially, EC had been conceived of as driven by processes that are unique to the formation of attitudes, and that occur independent of whether or not individuals engage in conscious and effortful propositional processes. However, propositional processes have gained considerable popularity as an explanatory concept for the boundary conditions observed in EC studies, with some authors going as far as to suggest that the evidence implies that EC is driven primarily by propositional processes. In this monograph I present research which questions the validity of this claim, and I discuss theoretical challenges and avenues for future EC research.
Cinema programming, the composition of films to make a specific "show," remains a neglected way to research the relation between audiences and film form. As a mode of exhibition, advertised, promoted, and circulating in the public sphere even before an audience is gathered, the program can be seen as an active social relation between cinema managers and their audiences. Changes in the composition of film programs, in my case in the years before the First World War in Mannheim, Germany, are thus not taken as part of a teleological evolution of film form, but instead reveal emerging practices of cinema-going, a changing relation among showmen, distributors, audiences, and the city they are all part of. The category of "the audience" becomes a complement to narrative, economic and technical influences. Selecting the city of Mannheim further allows me to draw upon the pioneering German sociological study of cinema audiences conducted there by Emilie Altenloh in 1911 and 1912. Thus, I am able to compare her survey data to the film programs that were actually advertised and offered to the public at the time, and also to include knowledge of the social history of the city, to approximate a description of the historical audiences she studied. Here I follow the findings of Miriam Hansen and Heide Schlüpmann, who both stress the importance of the female audience in Imperial Germany. I account for a reciprocal relation between female spectators and the film industry's local programming practice to describe the transitional period from the short film programme of the "cinema of attractions" to the dominance of the long feature film, i.e. from 1906 to 1918. Looking closely at the advertised programmes of Mannheim, I show that almost all of the first multiple-reel feature films deal with women's topics, i.e. with the fate and fortune of women, concluding that the presence of women in the audience helped establish the long feature as central to the institutionalized cinema program. The film program and the specific feature films represented female identity on the screen, responding to the perceived wishes and needs of the women who gathered as audiences. This "program analysis" approach, because it provides a synopsis of the social relation between audience, industry, and film form, is a valuable tool for comparing the social place of film across many films, and potentially across regions, countries, and cultures.
In my paper I will talk about the mutual influences between female spectators and the programming practices of Imperial Germany's cinema. I will focus on the period of the transition from the short film programme of the "cinema of attractions" to the dominance of the long feature film, i.e. from 1906 to 1918. I will ask how the presence of women in the cinema (the place where they first entered the public sphere) influenced the practice of programming. I will thus deal with the relatively new topic of the programme (and its structural changes) as a mode of exhibition, and I will try to connect this to the role the female audience played in shaping this format: how does the female audience affect the changes of the programme patterns and the modification of genres and their meaning within the structure of the programme, and does it finally bring about a change in the mode of reception? And, on the other hand, how does the cinematographic programme represent and influence female identity and women's wishes and needs? One must ask for the reasons why the early cinema, which was characterised by diversity concerning class, gender and cultural issues and which built a kind of alternative public sphere, was displaced by an institutionalised, state-monitored and nationalised German cinema. Taking into account that this change of film forms was not a teleological evolution, "gender" might be a more useful and insightful category than "class" to explain the changes of the programme and the essence of early cinema. First I am going to present the main ideas of my project; then I'll talk about the composition of the audience and the relation between audience and program; after that I'll make some remarks on the reform movement, what the reformers thought about women in the cinema and about the programming practices; and as a last part, we'll have a look at what actually happened in the cinema, at the program of the year 1911/1912 and how this program catered to the interests of the female audience. I'll conclude with a short outlook on the changes that occurred during WW I.
The allergic contact dermatitis (ACD) to small molecular weight compounds is a common inflammatory skin reaction. ACD is largely restricted to industrialized countries and has an enormous sociomedical and socioeconomic impact. About 2,800 compounds of the six million chemicals known in our environment are believed to have allergenic, and to a lesser degree also contact-sensitizing or immunogenic, properties causing allergic contact dermatitis. ACD results from T cell responses to harmless, low molecular weight chemicals (haptens) applied to the skin. Haptens are not directly recognized by the cells of the immune system; they need to be presented to them by subsets of antigen-presenting cells. In this regard, epidermal Langerhans cells (LC) and the dendritic cells into which they mature are believed to play a pivotal role in the sensitization process for ACD. LC are able to bind haptens, internalize them, present them to naive T cells and thereby induce the development of effector T cells. They are so-called professional antigen-presenting cells. This process is initiated and maintained by several mediators, which are released by various cells after their contact with the haptens. One of the first proteins secreted into the environment is interleukin (IL)-1β. This cytokine is produced and secreted minutes after an antigen enters the cell. It is commonly believed that the large amounts of this protein and of other cytokines such as granulocyte-macrophage colony-stimulating factor (GM-CSF) and tumor necrosis factor alpha (TNF-α) needed for the initiation and activation of ACD come first from other cells residing in the skin, e.g., keratinocytes, monocytes and macrophages. These cytokines provide the danger signals needed for the activation of the Langerhans cells, which then themselves produce various cytokines via a positive feedback loop. In addition, other proteins such as chemokines influence the generation of danger signals, migration, homing of T cells in the local lymph nodes as well as the recruitment of T cells into the skin. Thus, a small molecular compound or hapten needs to be able to induce danger signals in order to become immunogenic. In this study, we investigated whether para-phenylenediamine (PPD), an arylamine and common contact allergen, is able to induce danger signals and thereby provide the signals needed for the initiation of an immune response [162, 163]. PPD is used as an antioxidant and as an ingredient of hair dyes, serves as an intermediate in dyestuff production, and is found in chemicals used for photographic processing. To date, however, it has not been clearly demonstrated whether PPD itself is a sensitizing agent. Thus, this study assessed the potential of PPD to provide danger signals by studying IL-1β, TNF-α, and monocyte chemoattractant protein-1 (MCP-1) in human monocytes and peripheral blood mononuclear cells (PBMC) from healthy volunteers, as well as in two human monocytic cell lines, U937 and THP-1. This study found that PPD decreased, dose- and time-dependently, the expression and release of three relevant mediators involved in the generation of danger signals: PPD reduced the mRNA and protein levels of IL-1β, TNF-α, and MCP-1 in primary human monocytes from various donors. These findings were extended and validated by investigations using the cell line U937.
The data were highly specific for PPD; no such results were obtained for its known auto-oxidation product, Bandrowski's base, or for meta-phenylenediamine (MPD) and ortho-phenylenediamine (OPD). Therefore, we can speculate that this effect is likely to depend on the para-substitution. Based on these results, we conclude that PPD itself is not able to mount a cascade for the induction of danger signals. It should be mentioned that it is still possible that PPD induces danger signals for sensitization by other, unknown processes. Therefore, more research is needed on this subject, especially in professional antigen-presenting cells, in order to resolve the still open question of whether PPD itself sensitizes naive T cells or whether PPD is solely an allergen. Independently, we unexpectedly found that PPD, as well as other haptens such as 2,4-dinitrochlorobenzene, nickel sulfate and some terpenoids, clearly increased the expression of CC chemokine receptor 2 (CCR2), the receptor for the chemokine MCP-1. To date, the main importance of the CCR2 receptor derives from results demonstrating that CCR2 is critical for the migration of monocytes after encountering bacterial lipopolysaccharides. Under these circumstances, the receptor disappears from the cell surface and is down-regulated. An up-regulation of CCR2 has not been reported for haptens and deserves further investigation.
Biotic communities have experienced significant changes in recent decades. Climate change, the overexploitation of natural resources and the immigration of invasive species are major drivers of this change and present unknown challenges for communities worldwide. To assess the impact of these drivers, standardised long-term studies are required, which are currently lacking for many species and ecosystems. Analysing environmental samples and the DNA of associated organisms using metabarcoding and high-throughput sequencing provides a cost-efficient and rapid way to generate the high-resolution biodiversity data that is so urgently needed.
In this thesis, I demonstrate the great potential of using samples from the German Environmental Specimen Bank (ESB), a long-term monitoring archive that has been collecting and cryogenically storing highly standardised environmental samples since 1985. Modern analytical methods enable retrospective long-term biodiversity monitoring using these samples. In the first chapter, I illustrate metabarcoding as a central method, discussing its strengths and drawbacks, how to avoid the latter, and new application approaches. This chapter provides the methodological basis for the following studies.
In the subsequent chapters, I present time series analyses of the communities associated with these environmental samples. While Chapter two focuses on terrestrial arthropod communities, Chapter three analyses aquatic and terrestrial communities across the tree of life. A null model was developed for this survey to allow robust conclusions. The studies covered the last three decades and revealed substantial compositional changes across all ecosystems. These changes deviated significantly from the model, indicating that they are occurring faster than expected. Moreover, a trend toward homogenization in many terrestrial communities was uncovered. Climate change and the immigration of invasive species, in combination with the loss of site-specific species, are suspected to be the main drivers of this. In a follow-up study, changes in arthropod communities in German and South Korean terrestrial ecosystems were compared using ESB leaf samples from these two countries. Since both ESBs are harmonised in sample collection and processing, comparative analyses were possible. This research covered the last decade and revealed substantial declines in species richness in Korea. Abiotic and biotic factors are discussed as potential drivers of these results.
Finally, the possibility of assessing tree health by analysing changes in functional fungal groups using German ESB samples was investigated. The results indicate that increasing infestation of specific functional groups is a proxy for declining tree health, with further analyses planned. In this dissertation, I present the great potential of samples from long-term monitoring archives to conduct retrospective biodiversity trend analyses across the tree of life. As technologies evolve, these samples will help to understand past and predict future ecosystem changes.