In addition to the well-recognised effects of both genes and the adult environment, it is now broadly accepted that adverse conditions during pregnancy contribute to the development of mental and somatic disorders in the offspring, such as cardiovascular, endocrinological and metabolic disorders, schizophrenia, anxious and depressive behaviour, and attention deficit hyperactivity disorder (ADHD). Early life events may have a long-lasting impact on tissue structure and function, and these effects appear to underlie the developmental origins of vulnerability to chronic diseases. The assumption that prenatal adversity, such as maternal emotional states during pregnancy, may have adverse effects on the developing infant is not new. Corresponding references can be found in an ancient Indian text (ca. 1050 BC), in biblical texts and in documents originating in the Middle Ages. Hippocrates himself noted possible effects of maternal emotional states on the developing fetus. Research examining the effects of maternal psychosocial stress during pregnancy began to appear in the literature in the mid-1950s, and extensive research in this field has been conducted since the early 1990s. Thus, the relationship between early life events and long-term health outcomes was already postulated over 20 years ago. David Barker and colleagues demonstrated that children of lower birth weight - a crude marker of an adverse intrauterine environment - were at increased risk of high blood pressure, cardiovascular disorders, and type-2 diabetes later in life. These provocative findings prompted a large amount of subsequent research, initially focussing on the role of undernutrition in determining fetal outcomes. The phenomenon of prenatal influences that in part determine the risk of suffering from chronic disease later in life has been named the "fetal origins of health and disease" paradigm.
The concept of "prenatal programming" has since been extended to many other domains, such as the effects of prenatal maternal stress, prenatal tobacco exposure, alcohol intake, medication, toxins, as well as maternal infection and disease. During prenatal programming, environmental agents are transmitted across the placenta and act on specific fetal tissues during sensitive periods of development. Developmental trajectories are thereby changed, and the organisation and function of tissue structures and organ systems are altered. The biological purpose of this "early life programming" may lie in evolutionary advantages: the offspring adapts its development to the expected extrauterine environment, which is forecast by the cues available during fetal life. If the fetus receives signals of a challenging environment, e.g. due to maternal stress hormones or maternal undernutrition, its survival may be promoted by these developmental adaptation processes. However, if the expected environment does not match the actual environment, maladaptation and later disease risk may result. For example, a "response ready" trait such as hyperactivity/inattention may have been advantageous in an adverse ancient environment, but it is a disadvantage when the postnatal environment demands the opposite skills, such as attention and concentration - e.g. in the classroom, at school, to achieve academic success. Borderline personality disorder (BPD) is a prevalent psychiatric disorder characterized by impulsivity, affective instability, dysfunctional interpersonal relationships and identity disturbance. Although many studies report different risk factors, the exact etiologic mechanisms are not yet understood. In addition to the well-recognised effects of genetic components and adverse childhood experiences, BPD may potentially be co-determined by further environmental influences acting very early in life: during the pre- and perinatal period.
Several observations suggest possible prenatal programming processes in BPD. For example, patients with BPD are characterized by elevated stress sensitivity and reactivity and by dysfunctions of the neuroendocrine stress system, such as the hypothalamic-pituitary-adrenal (HPA) axis. Furthermore, patients with BPD show a broad range of somatic comorbidities - especially those disorders for which prenatal programming processes have been described. During infancy and childhood, BPD patients already show behavioural and emotional abnormalities as well as pronounced temperamental traits, such as impulsivity, emotional dysregulation and inattention, that may potentially be co-determined by prenatal programming processes. Such temperamental traits - similar to those seen in patients with ADHD - have been described as being associated with low birthweight, which indicates a suboptimal intrauterine environment. Moreover, the functional and structural alterations in the central nervous system (CNS) in patients with BPD might also be mediated in part by prenatal agents, such as prenatal tobacco exposure. Prenatal adversity may thus constitute a further component in the multifactorial genesis of BPD. The association between BPD and prenatal risk factors has not yet been studied in detail. We are not aware of any other study that has assessed pre- and perinatal risk factors, such as maternal psychosocial stress, smoking, alcohol intake, obstetric complications and lack of breastfeeding, in patients with BPD.
In this thesis we focus on the development and investigation of methods for the computation of confluent hypergeometric functions. We point out the relations between these functions and parabolic boundary value problems and demonstrate applications to models of heat transfer and fluid dynamics. For the computation of confluent hypergeometric functions on compact (real or complex) intervals we consider a series expansion based on the Hadamard product of power series. It turns out that the partial sums of this expansion are easily computable and provide a better rate of convergence than the partial sums of the Taylor series. Regarding computational accuracy, the problem of cancellation errors is reduced considerably. Another important tool for the computation of confluent hypergeometric functions is recurrence formulae. Although easy to implement, such recurrence relations are numerically unstable, e.g. due to rounding errors. In order to circumvent these problems, a method for computing recurrence relations in the backward direction is applied. Furthermore, asymptotic expansions for arguments of large modulus are considered. From the numerical point of view, the determination of the number of terms used for the approximation is a crucial point. As an application we consider initial-boundary value problems with partial differential equations of parabolic type, where we use the method of eigenfunction expansion to determine an explicit form of the solution. In this case the arising eigenfunctions depend directly on the geometry of the considered domain; for certain domains with special geometry the eigenfunctions are of confluent hypergeometric type. Both a conductive heat transfer model and an application in fluid dynamics are considered. Finally, the application of several heat transfer models to certain sterilization processes in the food industry is discussed.
Objective: Attunement is a novel measure of nonverbal synchrony reflecting the duration of the present moment shared by two interaction partners. This study examined its association with early change in outpatient psychotherapy.
Methods: Automated video analysis based on motion energy analysis (MEA) and cross-correlation of the movement time-series of patient and therapist was conducted to calculate movement synchrony for N = 161 outpatients. Movement-based attunement was defined as the range of connected time lags with significant synchrony. Latent change classes in the HSCL-11 were identified with growth mixture modeling (GMM) and predicted by pre-treatment covariates and attunement using multilevel multinomial regression.
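As a rough, hypothetical sketch of the synchrony computation (the study's actual MEA pipeline, windowing, and significance testing are more involved), movement synchrony can be quantified as the Pearson correlation of the two movement time series at a range of time lags; attunement would then correspond to the range of connected lags whose correlation is significant:

```python
# Hypothetical sketch of lagged movement synchrony: Pearson correlation of
# the patient and therapist movement time series at each time lag in
# [-max_lag, +max_lag]. The real pipeline adds windowing and surrogate tests.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def lagged_synchrony(patient, therapist, max_lag):
    """Correlation of the two series with the therapist series shifted by each lag."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = patient[: len(patient) - lag], therapist[lag:]
        else:
            x, y = patient[-lag:], therapist[: len(therapist) + lag]
        out[lag] = pearson(x, y)
    return out
```

With a therapist series that simply trails the patient series by two frames, the correlation peaks at lag +2.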
Results: GMM identified four latent classes: high impairment, no change (Class 1); high impairment, early response (Class 2); moderate impairment (Class 3); and low impairment (Class 4). Class 2 showed the strongest attunement, the largest early response, and the best outcome. Stronger attunement was associated with a higher likelihood of membership in Class 2 (b = 0.313, p = .007), Class 3 (b = 0.251, p = .033), and Class 4 (b = 0.275, p = .043) compared to Class 1. For highly impaired patients, the probability of no early change (Class 1) decreased and the probability of early response (Class 2) increased as a function of attunement.
Conclusions: Among patients with high impairment, stronger patient-therapist attunement was associated with early response, which predicted a better treatment outcome. Video-based assessment of attunement might provide new information for therapists not available from self-report questionnaires and support therapists in their clinical decision-making.
Memory consists of multiple anatomically and functionally distinct systems. Animal studies suggest that stress modulates multiple memory systems in a manner that favors caudate nucleus-based stimulus-response learning at the expense of hippocampus-based spatial learning. The present work aimed (i) to translate these findings to humans, (ii) to determine the involvement of the stress hormone cortisol in this effect, and (iii) to assess whether the use of stimulus-response and spatial strategies is a long-lasting person characteristic. To address these issues we developed a new paradigm that differentiates the use of spatial and stimulus-response learning in humans. Our findings indicate that (i) psychosocial stress (Trier Social Stress Test) modulates the use of spatial and stimulus-response learning in humans, (ii) cortisol plays a key role in this modulatory effect of stress, and (iii) the use of spatial and stimulus-response learning is affected by situational rather than long-lasting person factors.
In this thesis, we study the convergence behavior of an efficient optimization method used for the identification of parameters of underdetermined systems. The research is motivated by optimization problems arising from the estimation of parameters in neural networks as well as in option pricing models. In the first application, we are concerned with neural networks used to forecast stock market indices. Since neural networks are able to describe extremely complex nonlinear structures, they are used to improve the modelling of the nonlinear dependencies occurring in financial markets. Applying neural networks to the forecasting of economic indicators, we are confronted with a nonlinear least squares problem of large dimension. Furthermore, in this application the number of parameters of the neural network to be determined is usually much larger than the number of patterns available for the determination of the unknowns. Hence, the residual function of our least squares problem is underdetermined. In option pricing, an important but usually unknown parameter is the volatility of the underlying asset of the option. Assuming that the underlying asset follows a one-factor continuous diffusion model with nonconstant drift and volatility terms, the value of a European call option satisfies a parabolic initial value problem with the volatility function appearing in one of the coefficients of the parabolic differential equation. Using this system equation, the estimation of the volatility function is described by a nonlinear least squares problem. Since the adaptation of the volatility function is based on only a small number of observed market data, these problems are naturally ill-posed. For the solution of these large-scale underdetermined nonlinear least squares problems we use a fully iterative inexact Gauss-Newton algorithm.
We show how the structure of a neural network as well as that of the European call price model can be exploited using iterative methods. Moreover, we present theoretical statements on the convergence of the inexact Gauss-Newton algorithm applied to the less examined case of underdetermined nonlinear least squares problems. Finally, we present numerical results for the application of neural networks to the forecasting of stock market indices as well as for the construction of the volatility function in European option pricing models. In the case of the latter application, we discretize the parabolic differential equation using a finite difference scheme and elucidate convergence problems of the discrete scheme when the initial condition is not everywhere differentiable.
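The minimum-norm Gauss-Newton step for an underdetermined residual can be sketched on a toy problem (an illustration only, not the thesis's inexact iterative solver for networks or option pricing): for a residual r: R^n -> R^m with m < n, each step solves J p = -r in the minimum-norm sense, which for a single residual reduces to p = -r J^T / ||J||^2.

```python
# Toy underdetermined Gauss-Newton iteration: one residual, two parameters.
# Residual r(x) = x1^2 + x2^2 - 1 drives the point x onto the unit circle;
# the minimum-norm step for a 1 x n Jacobian J is p = -r * J^T / ||J||^2.
# Illustrative sketch only; the thesis's solver is iterative and inexact.

def gauss_newton_underdetermined(x, steps=20):
    for _ in range(steps):
        x1, x2 = x
        r = x1 * x1 + x2 * x2 - 1.0        # residual (m = 1)
        J = (2.0 * x1, 2.0 * x2)           # Jacobian (1 x 2)
        nrm2 = J[0] ** 2 + J[1] ** 2
        if nrm2 == 0.0:
            break                          # Jacobian vanishes at the origin
        # minimum-norm update: x <- x - r * J^T / ||J||^2
        x = (x1 - r * J[0] / nrm2, x2 - r * J[1] / nrm2)
    return x
```

Starting from (2, 1), the iterates converge rapidly to a point on the circle; because the system is underdetermined, which solution is reached depends on the starting value.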
Globalization and Divergence: Dynamics of Dissensus in Non-Dominant Cinema Cultures of South India
(2002)
Based on her field studies between 1999 and 2003 in the South Indian state of Kerala, the author critically reflects on Habermas's concept of the (bourgeois) public sphere, and also on later critiques of Habermas (e.g. Eley). Schulze adds the new dimensions of human emotionality and humane ethics to the discussion of today's public sphere(s) and civil societies, which are part of globalising modernisations. It is poor and marginalized women's strongly felt compassion and love, practised in their daily lives, that Schulze focusses on: these Marginalized ethics of the 'Good life' sharply contrast with the dominant societies' value systems, which consequently do not provide the Marginalized with a 'model'. However, Kerala, which is widely referred to as a development model - particularly with respect to the situation and education of its women - is analysed by the author as a historically and culturally specific kind of 'modernity', which follows a rather violent and aggressive path of development in consonance with the generally ruling anti-human/nature philosophy of 'globalization'. Schulze's tools in her field work are 'participatory action research' and her 'empathic camera' (camcorder). She mixed with local women who had organized themselves in women's groups with the urge to truly represent themSelves and their own ethics and goals in life - without the usual intervention of men and of the nationalist politics ruling Kerala's public sphere(s). As Schulze and the local women's groups became acquainted with each other, the scholar and the Marginalized felt the desire to support each other in their respective struggles for empowerment and for being respected as human beings. The author finally understood the fallacy and cynicism that lie in applying, as a scholar, the term 'women in Kerala' as if there were not the particular day-to-day violence which women of dalit ('untouchable') or adivasi (indigenous) background experience.
Women's lives are moulded by networks of violence inherent to Kerala's castes, classes, and ethnicities, parallel to the basic oppression which women face because they are women. A group of dalit women in Kerala became particularly close companions in Schulze's quest to unravel seemingly contradictory facts: on the one hand, Kerala's official claim to provide women and other persons generally discriminated against in the larger Indian context with a supportive social and educational environment, and on the other hand the comparatively high number of suicides among Keralite women (and men) and the absence of women from what appears as Kerala's public sphere and 'civil society'. In several analytical steps which always centre on the experiences and feelings of the many poor and marginalized women - their life-worlds, their daily-life philosophies, their views, voices, ethics and dreams - Schulze unfolds these Marginalized visions and tries to interpret them on their own terms. In this manner not only is the mainstream society's propaganda about the 'Kerala development model' demystified, but the reader also gains insight into a totally different set of ethics held by these women. They transgress notions of competition, of the 'necessary' monetarisation of all spheres of human life and of nature, and of caste, religious, or gender conflicts. By means of 13 short video films the women, together with Schulze, showed and reflected upon their philosophy of an empowered 'Good life'.
The present thesis addresses the validity of Binge Eating Disorder (BED) as well as underlying mechanisms of BED from three different angles. Three studies provide data discriminating obesity with BED from obesity without BED (NBED). Study 1 demonstrates differences between obese individuals with and without BED regarding eating in the natural environment, psychiatric comorbidity, negative affect, and self-reported tendencies in eating behavior. Evidence for possible psychological mechanisms explaining the increased intake of BED individuals in the natural environment was provided by analyzing associations of negative affect, emotional eating, restrained eating and caloric intake in obese BED individuals compared to NBED controls. Study 2 demonstrated stress-induced changes in the eating behavior of obese individuals with BED: the impact of a psychosocial stressor, the Trier Social Stress Test (TSST; Kirschbaum, Pirke, & Hellhammer, 1993), on behavioral patterns of eating in the laboratory was investigated. Special attention was given to stress-induced changes in variables that reflect mechanisms of appetite regulation in obese BED individuals compared to controls. To further explore the mechanisms by which stress might trigger binge eating, Study 3 investigated differences in stress-induced cortisol secretion after a socially evaluated cold pressor test (SECPT; Schwabe, Haddad, & Schachinger, 2008) in obese BED as compared to obese NBED individuals.
The present work considers the normal approximation of the binomial distribution and yields estimates of the supremum distance between the distribution functions of the binomial and the corresponding standardized normal distribution. The type of these estimates corresponds to the classical Berry-Esseen theorem in the special case that all random variables are identically Bernoulli distributed; in this case we state the optimal constant for the Berry-Esseen theorem. In the proof of these estimates, several inequalities regarding the density as well as the distribution function of the binomial distribution are presented. Furthermore, in the estimates mentioned above, the distribution function is replaced by the probability of arbitrary, not only unbounded, intervals, and for this new situation we also present an upper bound.
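The quantity being bounded can be computed directly for small cases (a numerical illustration under the standard definitions; the optimal constant derived in the thesis is not reproduced here):

```python
import math

# Supremum distance between the Binomial(n, p) distribution function and
# the standardized normal distribution function, evaluated at the integer
# points k = 0, ..., n. Berry-Esseen-type results bound this by C / sqrt(n).

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_distance(n, p):
    mu, sigma = n * p, math.sqrt(n * p * (1.0 - p))
    cdf, sup = 0.0, 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) * p**k * (1.0 - p) ** (n - k)
        sup = max(sup, abs(cdf - phi((k - mu) / sigma)))
    return sup
```

The computed distance shrinks roughly like 1/sqrt(n), consistent with the Berry-Esseen rate.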
Interoception - the perception of bodily processes - plays a crucial role in the subjective experience of emotion, consciousness and symptom genesis. As an alternative to interoceptive paradigms that depend on the participants' active cooperation, five studies are presented to show that startle methodology may be employed to study visceral afferent processing. Study 1 (38 volunteers) showed that startle responses to acoustic stimuli of 105 dB(A) intensity were smaller when elicited during the cardiac systole (R-wave +230 ms) as compared to the diastole (R +530 ms). In Study 2, 31 diabetic patients were divided into two groups with normal or diminished (< 6 ms/mmHg) baroreflex sensitivity (BRS) of heart rate control. Patients with normal BRS showed a startle inhibition during the cardiac systole, as was found for healthy volunteers; diabetic patients with diminished BRS did not show this pattern. Because diminished BRS is an indicator of impaired baro-afferent signal transmission, we concluded that cardiac modulation of startle is associated with intact arterial baro-afferent feedback. Thus, pre-attentive startle methodology is feasible for studying visceral afferent processing. Visceral and baro-afferent information has been found to be mainly processed in the right hemisphere. To explore whether cardiac modulation of the startle eye blink is lateralized as well, in Study 3, 37 healthy volunteers received 160 unilateral acoustic startle stimuli presented to both ears, one at a time (R +0, 100, 230, 530 ms). Startle response magnitude was only diminished at R +230 ms and for left-ear presentation. This lateralization effect in the cardiac modulation of the startle eye blink may reflect the previously described advantage of right-hemispheric brain structures in relaying viscero- and baro-afferent signal transmission. This lateralization effect implies that higher cognitive processes may also play a role in the cardiac modulation of startle.
To address this question, in Study 4, 25 volunteers first responded with 'as fast as possible' button pushes (reaction time, RT) and then rated the perceived intensity of 60 acoustic startle stimuli (85, 95, or 105 dB; R +230, 530 ms). RT was divided into evaluation and motor response time. Increasing stimulus intensity enhanced startle eye blink, intensity ratings, and RT components. Eye blinks and intensity judgments were lower when startle was elicited at a latency of R +230 ms, but RT components were differentially affected. It is concluded that the cardiac cycle affects the attentive processing of acoustic startle stimuli. Besides the arterial baroreceptors, the cardiopulmonary baroreceptors represent another important system of cardiovascular perception that may have similar effects on startle responsiveness. To clarify this issue, in Study 5, Lower Body Negative Pressure at gradients of 0, -10, -20, and -30 mmHg was applied to unload cardiopulmonary baroreceptors in 12 healthy males while acoustic startle stimuli were presented (R +230, 530 ms). Unloading of cardiopulmonary baroreceptors increased startle eye blink responsiveness. Furthermore, the effect of relative loading/unloading of arterial baroreceptors on startle eye blink responsiveness was replicated. These results demonstrate that the loading status of cardiopulmonary baroreceptors also has an impact on brainstem-based CNS processes. Thus, the cardiac modulation of acoustic startle reflects baro-afferent signal transmission from multiple neural sources; it represents a pre-attentive method that is independent of active cooperation, but its modulatory effects also reach higher cognitive, attentive processes.
This thesis comprises four research papers on the economics of education and industrial relations, which contribute to the field of empirical economic research. All of the papers focus on analysing how much time individuals spend on specific activities. The allocation of available time resources is a decision that individuals make throughout their lifetime. In this thesis, we consider individuals at different stages of their lives - students at school, university students, and dependent employees at the workplace.
Part I includes two research studies on students' behaviour in secondary and tertiary education.
Chapter 2 explores whether students who are relatively younger or older within the school year exhibit differential time allocation. Building on previous findings showing that relatively younger students perform worse in school, the study shows that relatively younger students are aware of their poor performance in school and feel more strain as a result. Nevertheless, there are no clear differences to be found in terms of time spent on homework, while relatively younger students spend more time watching television and less time on sports activities. Thus, the results suggest that the lower learning outcomes are not associated with different time allocations between school-related activities and non-school-related activities.
Chapter 3 analyses how individual ability and labour market prospects affect study behaviour. The theoretical modelling predicts that both determinants increase study effort. The empirical investigation is based on cross-sectional data from the National Educational Panel Study (NEPS) and includes thousands of students in Germany. The analyses show that more gifted students exhibit lower subjective effort levels and invest less time in self-study. In contrast, very good labour market prospects lead to more effort exerted by the student, both qualitatively and quantitatively. The potential endogeneity problem is taken into account by using regional unemployment data as an instrumental variable.
Part II includes two labour economic studies on determinants of overtime. Both studies belong to the field of industrial relations, as they focus on union membership on the one hand and the interplay of works councils and collective bargaining coverage on the other.
Chapter 4 shows that union members work less overtime than non-members do. The econometric approach takes the problem of unobserved heterogeneity into account but provides no evidence that this issue affects the results. Different channels that could lead to this relationship are analysed by examining relevant subgroups separately. For example, this effect of union membership can also be observed in establishments with works councils and for workers who are very likely to be covered by collective bargaining agreements. The study concludes that the observed effect arises because union membership can protect workers from increased working time demands by employers.
Chapter 5 builds on previous studies showing a negative effect of works councils on overtime. In addition to co-determination by works councils at the firm level, collective bargaining coverage is an important factor in the German industrial relations system. Corresponding data was not available in the SOEP for quite some time. Therefore, the study uses recent SOEP data, which also contains information on collective bargaining coverage. A cross-sectional analysis is conducted to examine the effects of works councils in establishments with and without collective bargaining coverage. Similar to studies analysing other outcome variables, the results show that the effect of works councils exists only for employees covered by a collective bargaining agreement.
Chemical communication in the reproductive behaviour of Neotropical poison frogs (Dendrobatidae)
(2013)
Chemical communication is the evolutionarily oldest communication system in the animal kingdom and triggers intra- and interspecific interactions. It is initiated by the emitter releasing either a signal or a cue that causes a reaction in the receiving individual. Compared to other animals, there are relatively few studies of chemical communication in anurans. In this thesis the impact of chemical communication on the behaviour of the poison frog Ranitomeya variabilis (Dendrobatidae) and its parental care was investigated. This species uses phytotelmata (small water bodies in plants) for both clutch and tadpole depositions. Since the tadpoles are cannibalistic, adult frogs not only avoid conspecifics when depositing their eggs but also transport their tadpoles individually into separate phytotelmata. The recognition of already occupied phytotelmata was shown to be due to chemical substances released by the conspecific tadpoles. To gain a deeper understanding of the ability of adult R. variabilis to recognize and avoid tadpoles in general, in-situ pool-choice experiments were conducted, offering the frogs chemical substances of tadpoles of different species (Chapter I). It turned out that the frogs were able to recognize all species and avoided their chemical substances for clutch depositions. However, for tadpole depositions only dendrobatid tadpoles occurring in phytotelmata were avoided, while river-dwelling species were not. Additionally, the chemical substances of a treefrog tadpole (Hylidae) were recognized by R. variabilis; yet they were not avoided but preferred for tadpole depositions, so these tadpoles might be recognized as potential prey for the predatory poison frog larvae. One of the poison frog species avoided for both tadpole and clutch depositions was the phytotelmata-breeding Hyloxalus azureiventris. The chemical substances released by its tadpoles were analysed together with those of the R. variabilis tadpoles (Chapter II). After a suitable solid-phase extraction sorbent (DSC-18) was found, the active chemical compounds from the water of both tadpole species were extracted and fractionated. To determine which fractions triggered the avoidance behaviour of the frogs, in-situ bioassays were conducted. It was found that the biologically active compounds differed between the two species. Since the avoidance of conspecific tadpoles is not advantageous to the releaser tadpoles (which lose a potential food resource), the chemicals released by them might be defined as chemical cues. However, as the avoidance of the heterospecific tadpoles turned out not to be triggered by a mere byproduct of the close evolutionary relationship between the two species, the chemical compounds released by H. azureiventris tadpoles might be defined as chemical signals (being advantageous to the releasing tadpoles) or, more specifically, as synomones: interspecifically acting chemicals that are advantageous to both emitter and receiver (since R. variabilis also avoids a competitive situation for its offspring). Another interspecific communication system investigated in this thesis was the avoidance of predator kairomones (Chapter III). Using chemical substances from damselfly larvae, it could be shown that R. variabilis was unable to recognize and avoid kairomones of these tadpole predators. However, when physically present, damselfly larvae were avoided by the frogs. For the recognition of conspecific tadpoles, in contrast, chemical substances were necessary, since purely visual artificial tadpole models were not avoided. Whether R. variabilis is also capable of chemically communicating with adult conspecifics was investigated by presenting chemical cues/signals of same-sex or opposite-sex conspecifics to the frogs (Chapter IV). It was expected that males would be attracted to chemical substances of females and repelled by those of conspecific males.
Instead, however, all individuals showed avoidance behaviour towards the conspecific chemicals. This was suggested to be an artefact of the confinement stress of the releaser animals, which emitted disturbance cues that triggered avoidance behaviour in their conspecifics. The knowledge gained about chemical communication in parental care was then used to investigate possible provisioning behaviour in R. variabilis. In-situ pool-choice experiments with chemical cues of conspecific tadpoles were carried out throughout the change from the rainy to the dry season (Chapter V). With a changepoint analysis, the exact seasonal change was determined and differences between the frogs' choices were analysed. It turned out that during the dry season R. variabilis does not avoid but prefers conspecific cues for tadpole depositions, which might be interpreted as a way of providing their tadpoles with food (i.e. younger tadpoles) to accelerate their development in the face of desiccation risk. That tadpoles are also occasionally fed with fertilized eggs could be shown in a comparative study, in which phytotelmata containing a tadpole deposited by the frogs themselves received more clutch depositions than freshly erected artificial phytotelmata containing unfamiliar tadpoles (i.e. their chemical cues; Chapter VI). Home range calculations with ArcGIS showed that R. variabilis males have unexpectedly strong site fidelity, suggesting that they recognize their offspring by phytotelma location. However, to test whether R. variabilis is furthermore able to perform chemical offspring recognition, frogs were confronted in in-situ pool-choice experiments with chemical cues of single tadpoles found in their home ranges (Chapter VII). Genetic kinship analyses were conducted between the tadpoles emitting the chemical cues and those deposited together with or next to them.
The results, however, indicated that frogs did not choose to deposit their offspring with or without another tadpole due to relatedness, i.e. kin recognition by chemical cues could not be confirmed in R. variabilis.
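The changepoint analysis used in Chapter V to pinpoint the rainy-to-dry-season transition can be illustrated as a simple mean-shift detector. The series and values below are hypothetical, not data from the study; this is only a sketch of the general technique.

```python
# Minimal single-changepoint detection: find the split index that minimizes
# the summed squared error of a two-mean (piecewise constant) fit.
# All values are made-up illustrative preference scores.

def find_changepoint(series):
    """Return the split index k minimizing SSE of a two-mean fit."""
    n = len(series)
    best_idx, best_cost = None, float("inf")
    for k in range(1, n):
        left, right = series[:k], series[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        cost = (sum((x - m1) ** 2 for x in left)
                + sum((x - m2) ** 2 for x in right))
        if cost < best_cost:
            best_idx, best_cost = k, cost
    return best_idx

# Hypothetical daily choice scores: avoidance (low) switching to preference (high)
scores = [0.1, 0.2, 0.15, 0.1, 0.2, 0.8, 0.9, 0.85, 0.9, 0.8]
print(find_changepoint(scores))  # splits the series where the mean shifts
```

Dedicated changepoint libraries (e.g. `ruptures`) generalize this idea to multiple changepoints and other cost functions.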
Social innovation has become a widely discussed topic in politics, research funding programs, and business development. Recent European and US economic and science policies have set aside significant funds to generate and foster social innovation. In view of current challenges such as digitization, Work 4.0, inclusion or migrant integration, the question of how organizations can be empowered to develop new and innovative approaches and service models for social challenges is becoming increasingly urgent. This applies especially to organizations in the fields of education and social services. In education, implementing new ideas and concepts is usually discussed as educational reform, which mostly addresses changes in policy agendas with consequences for national and international education systems. The concept of social innovation, however, has a different starting point: the sources of new ideas and services are newly identified, emergent needs in society, or existing needs that have been re-conceptualized. Such need-based perspectives might bring new impulses to the field of education. Therefore, this paper identifies important existing strands of social innovation research that need to be considered in the emerging academic discourse on social innovation in education. Looking at social innovation through an education research lens reveals the close relation between learning, creativity, and innovation. Individuals, teams, and even organizations learn and engage in creative problem solving to create new and innovative products and services. From an organizational education perspective, the questions arise of how social innovation emerges and, even more importantly, how the process of developing social innovation can be supported. After a brief introduction to the concept of social innovation, the paper therefore discusses the sites where social innovation emerges, social innovators, approaches to fostering social innovation, as well as promoting and hindering factors for social innovation.
In this psycho-neuro-endocrine study, the molecular basis of different variants of steroid receptors as well as of highly conserved non-steroidal receptors was investigated. These nuclear receptors (NRs) are important key regulators in a wide variety of physiological and pathophysiological challenges, ranging from inflammation and stress to complex behaviour and disease. NRs control gene transcription in a ligand-dependent manner and are embedded in the huge interaction network of the neuroendocrine and immune systems. Two receptors, the glucocorticoid receptor (GR) and the chicken ovalbumin upstream promoter-transcription factor II (Coup-TFII), both expressed in the immune and nervous systems, were investigated regarding possible splice variants and their implication in the control of gene transcription. Both NRs are known to interact with and modulate each other's target gene regulation. This study showed that both NRs have different splice variants that are expressed in a tissue-specific manner. The different 5' alternative transcript variants of the human GR were identified in silico in other species, providing evidence for a highly conserved and tightly controlled function. Investigations of the N-terminal transactivation domain of the GR revealed a deletion suggesting an altered glucocorticoid-dependent transactivation profile. The newly identified alternative transcript variant of Coup-TFII leads to a DNA-binding-deficient Coup-TFII isoform that is highly expressed in the brain. This Coup-TFII isoform alters Coup-TFII target gene expression and is suggested to interact with GR via its ligand-binding domain, resulting in impaired GR target gene regulation in the nervous system.
In this thesis it was demonstrated that NR variants are important for understanding the enormous regulatory potential of this receptor family and have to be taken into account in the development of therapeutic strategies for complex diseases such as stress-related and neurodegenerative disorders.
Primary focal hyperhidrosis (PFH, OMIM %144110) is a genetically influenced condition characterised by excessive sweating. Prevalence varies between 1.0% and 6.1% in the general population, depending on ethnicity. The aetiology of PFH remains unclear, but an autosomal dominant mode of inheritance, incomplete penetrance and variable phenotypes have been reported. In our study, nine pedigrees (50 affected, 53 non-affected individuals) were included. Clinical characterisation was performed at the German Hyperhidrosis Centre, Munich, using physiological and psychological questionnaires. Genome-wide parametric linkage analysis with GeneHunter was performed based on the Illumina genome-wide SNP arrays. Haplotypes were constructed using easyLINKAGE and visualised via HaploPainter. Whole-exome sequencing (WES) with 100x coverage in 31 selected members (24 affected, 7 non-affected) from our pedigrees was achieved by next-generation sequencing. We identified four genome-wide significant loci for PFH: 1q41-1q42.3, 2p14-2p13.3, 2q21.2-2q23.3 and 15q26.3. Three pedigrees map to a shared locus at 2q21.2-2q23.3, with a genome-wide significant LOD score of 3.45. The chromosomal region identified here overlaps with a locus at chromosome 2q22.1-2q31.1 reported previously. Three families support 1q41-1q42.3 (LOD = 3.69), two families share a region identical by descent at 2p14-2p13.3 (LOD = 3.15) and another two families at 15q26.3 (LOD = 3.01). Thus, our results point to considerable genetic heterogeneity. WES did not reveal any causative variants, suggesting that variants or mutations located outside the coding regions might be involved in the molecular pathogenesis of PFH. We suggest a strategy based on whole-genome or targeted next-generation sequencing to identify causative genes or variants for PFH.
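The LOD scores reported above come from multipoint parametric linkage analysis with GeneHunter. As a toy illustration of what a LOD score measures, the sketch below evaluates the classical two-point, phase-known case; the meiosis counts and recombination fraction are made up and are not from this study.

```python
import math

# Two-point LOD score for fully informative, phase-known meioses:
# LOD(theta) = log10( (1-theta)^NR * theta^R / 0.5^(NR+R) ),
# comparing linkage at recombination fraction theta against free
# recombination (theta = 0.5). Counts below are hypothetical.

def lod(nonrecomb, recomb, theta):
    n = nonrecomb + recomb
    return math.log10(((1 - theta) ** nonrecomb) * (theta ** recomb)
                      / (0.5 ** n))

# 10 informative meioses, no recombinants, evaluated near theta = 0
print(round(lod(10, 0, 0.01), 2))  # ≈ 2.97, close to the 3.0 significance convention
```

A LOD of 3 corresponds to 1000:1 odds in favour of linkage, which is why the genome-wide significant scores above (3.45, 3.69, 3.15, 3.01) all clear that conventional threshold.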
Background: Hyperhidrosis (excessive sweating, OMIM %114110) is a complex disorder with multifactorial causes. Emotional strains and social stress increase symptoms and lead to a vicious circle. Previously, we showed significantly higher depression scores, and normal cortisol awakening responses in patients with primary focal hyperhidrosis (PFH). Stress reactivity in response to a (virtual) Trier Social Stress Test (TSST-VR) has not been studied so far. Therefore, we measured sweat secretion, salivary cortisol and alpha amylase (sAA) concentrations, and subjective stress ratings in affected and non-affected subjects in response to a TSST-VR.
Method: In this pilot study, we conducted TSST-VRs and performed general linear models with repeated measurements for salivary cortisol and sAA levels, heart rate, axillary sweat and subjective stress ratings for two groups (diagnosed PFH (n = 11), healthy controls (n = 16)).
Results: PFH patients showed significantly heightened sweat secretion over time compared to controls (p = 0.006), with the highest quantities during the TSST-VR. In both groups, sweating (p < 0.001), maximum cortisol levels (p = 0.002), feelings of stress (p < 0.001), and heart rate (p < 0.001), but not sAA (p = 0.068), increased significantly in response to the TSST-VR. However, no differences were detected in subjective ratings, cortisol concentrations and heart rate between PFH patients and controls (all p > 0.131).
Conclusion: Patients with diagnosed PFH showed stress-induced higher sweat secretion compared to healthy controls but did not differ in the stress reactivity with regard to endocrine or subjective markers. This pilot study is in need of replication to elucidate the role of the sympathetic nervous system as a potential pathway involved in the stress-induced emotional sweating of PFH patients.
We are living in a connected world, surrounded by interwoven technical systems. Since they pervade more and more aspects of our everyday lives, a thorough understanding of the structure and dynamics of these systems is becoming increasingly important. However, rather than being blueprinted and constructed at the drawing board, many technical infrastructures - such as the Internet's global router network, the World Wide Web, large-scale peer-to-peer systems or the power grid - evolve in a distributed fashion, beyond the control of a central instance and influenced by various surrounding conditions and interdependencies. Due to this increase in complexity, making statements about the structure and behavior of tomorrow's networked systems is becoming increasingly complicated. A number of failures have shown that complex structures can emerge unintentionally which resemble those observed in biological, physical and social systems. In this dissertation, we investigate how such complex phenomena can be controlled and actively used. For this, we review methodologies stemming from the field of random and complex networks, which are being used for the study of natural, social and technical systems, thus delivering insights into their structure and dynamics. A particularly interesting finding is that the efficiency, dependability and adaptivity of natural systems can be traced back to rather simple local interactions between a large number of elements. We review a number of interesting findings about the formation of complex structures and collective dynamics and investigate how they are applicable to the design and operation of large-scale networked computing systems. A particular focus of this dissertation is on applications of principles and methods stemming from the study of complex networks to distributed computing systems that are based on overlay networks.
Here we argue how the fact that the (virtual) connectivity in such systems is alterable and widely independent of physical limitations facilitates a design based on analogies between complex network structures and phenomena studied in statistical physics. Based on results about the properties of scale-free networks, we present a simple membership protocol by which scale-free overlay networks with an adjustable degree distribution exponent can be created in a distributed fashion. With this protocol we further exemplify how phase transition phenomena - as they occur frequently in the domain of statistical physics - can actively be used to quickly adapt macroscopic statistical network parameters which are known to massively influence the stability and performance of networked systems. In the case considered in this dissertation, adapting the degree distribution exponent of a random, scale-free overlay allows - within critical regions - a change of relevant structural and dynamical properties. As such, the proposed scheme permits sound statements about the relation between the local behavior of individual nodes and large-scale properties of the resulting complex network structures. For systems in which the degree distribution exponent cannot easily be derived, for example from local protocol parameters, we further present a distributed, probabilistic mechanism which can be used to monitor a network's degree distribution exponent and thus to reason about important structural qualities. Finally, the dissertation shifts its focus towards the study of complex, non-linear dynamics in networked systems. We consider a message-based protocol which - based on the Kuramoto model for coupled oscillators - achieves a stable, global synchronization of periodic heartbeat events. The protocol's performance and stability are evaluated in different network topologies.
We further argue that - based on existing findings about the interrelation between spectral network properties and the dynamics of coupled oscillators - the proposed protocol can be used to monitor structural properties of networked computing systems. An important aspect of this dissertation is its interdisciplinary approach towards a sensible and constructive handling of complex structures and collective dynamics in networked systems. The associated investigation of distributed systems from the perspective of non-linear dynamics and statistical physics highlights interesting parallels to both biological and physical systems. This foreshadows systems whose structures and dynamics can be analyzed and understood in the conceptual frameworks of statistical physics and complex systems.
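The Kuramoto dynamics underlying the heartbeat-synchronization protocol can be sketched in a few lines. The following is a minimal centralized simulation on a fully connected graph, not the message-based protocol of the dissertation; the initial phases, frequencies, and coupling strength are illustrative.

```python
import cmath
import math

def kuramoto_step(phases, omegas, K, dt):
    """One Euler step of d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    return [p + dt * (w + K / n * sum(math.sin(q - p) for q in phases))
            for p, w in zip(phases, omegas)]

def order_parameter(phases):
    """r = 1 means full phase synchrony, r near 0 means incoherence."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

phases = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]  # spread initial phases (incoherent)
omegas = [1.0] * 6                        # identical natural frequencies
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, K=2.0, dt=0.05)
print(order_parameter(phases))            # climbs toward 1 as oscillators lock
```

With identical natural frequencies and sufficient coupling, the order parameter approaches 1, i.e. the periodic events lock into a common phase; with heterogeneous frequencies, synchronization sets in only above a critical coupling strength.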
Traditionally, random sample surveys are planned so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rest largely on asymptotic properties. For smaller sample sizes, as encountered in small areas (domains or subpopulations), these estimation methods are less suitable, which is why special model-based small area estimation methods have been developed for this application. The latter may be biased, but often have a smaller mean squared error of estimation than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, although to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model - or, more precisely, the quality of the modelling - is of crucial importance for the quality of small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, massive biases and/or inefficient estimates may result.
This thesis addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and the highest possible breakdown point. Put simply, robust methods are characterized by being only marginally affected by outliers and other anomalies in the data. The investigation of robustness concentrates on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for estimating means is based on a Gaussian mixed linear model (MLM) with a block-diagonal covariance matrix. In contrast to, for example, a multiple linear regression model, MLMs possess no noteworthy invariance properties, so that contamination of the dependent variable inevitably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which exhibit only a small bias under contaminated data and are considerably more efficient than the maximum likelihood method. A modification of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 method (this also applies to the proposal of Sinha and Rao) prove to be notoriously unreliable. In this thesis, the convergence problems of the existing procedures are discussed first, and then a numerical method with considerably better numerical properties is proposed. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical example on the estimation of above-ground biomass in Norwegian municipalities.
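The robust building block behind RML-type methods can be illustrated with a Huber M-estimator of location: residuals beyond a tuning constant c are downweighted instead of entering the fit at full strength. The sketch below is a generic illustration with made-up data and the conventional c = 1.345, not the thesis's numerical procedure for mixed models.

```python
# Huber M-estimation of a location parameter via iteratively reweighted
# least squares; the scale is fixed at the (normalized) MAD.

def huber_psi(r, c=1.345):
    """Huber psi function: identity for |r| <= c, clipped at +/-c beyond."""
    return max(-c, min(c, r))

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    mu = sorted(y)[len(y) // 2]  # start from the (upper) median
    # robust scale estimate: normalized median absolute deviation
    s = 1.4826 * sorted(abs(v - mu) for v in y)[len(y) // 2]
    for _ in range(max_iter):
        # weight w_i = psi(r_i)/r_i downweights large residuals
        w = [huber_psi((v - mu) / s, c) / ((v - mu) / s) if v != mu else 1.0
             for v in y]
        new_mu = sum(wi * v for wi, v in zip(w, y)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 50.0]  # one gross outlier
print(huber_location(data))
```

Whereas the sample mean of these data is pulled up to about 16.7 by the single outlier, the Huber estimate stays near 10, which is exactly the bounded-influence behaviour the robust mixed-model estimators above are designed to retain.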
The Fay-Herriot model can be regarded as a special case of an MLM with a block-diagonal covariance matrix, although the variances of the random effects for the small areas need not be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. In this thesis, however, an alternative proposal is developed, starting from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation of the problem and formulate an analogous robust Bayesian procedure. If, in the robust Bayesian formulation, one chooses the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior distribution for the location values of the small areas, the resulting Bayes estimator [i.e., the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because (as a Bayes estimator) it depends on unknown parameters. The unknown parameters can, however, be estimated from the marginal distribution of the dependent variable following the empirical Bayes approach. Here it must be noted (and this has been neglected in the literature) that under the least favorable prior the marginal distribution is not a normal distribution but is described by Huber's (1964) least favorable distribution.
It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with the Huber psi function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that under contaminated data the estimated LTR (with parameter estimates obtained by M-estimation) is optimal, and that the LTR is an integral part of the estimation methodology (and is not to be regarded as an "add-on" or the like, as is done elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies show. To achieve robustness also in the presence of influential observations in the independent variables, generalized M-estimators for the Fay-Herriot model were developed.
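The limited translation rule itself can be sketched as capped shrinkage: the James-Stein-type pull of the direct estimate toward the synthetic (regression) estimate is limited to at most c standard errors of the direct estimator. The function and numbers below are purely illustrative, not the estimator with M-estimated parameters developed in the thesis.

```python
# Limited translation rule (LTR), sketched for a single small area.
# 'shrink' plays the role of the estimated shrinkage factor, 'se' the
# standard error of the direct estimator, and c the translation limit.

def ltr(direct, synthetic, shrink, se, c=1.0):
    """Shrink 'direct' toward 'synthetic' by 'shrink' in [0, 1],
    but move it by at most c * se (the 'limited translation')."""
    eb = direct - shrink * (direct - synthetic)  # empirical-Bayes estimate
    shift = eb - direct
    max_shift = c * se
    if abs(shift) > max_shift:                   # cap the translation
        eb = direct + max_shift * (1 if shift > 0 else -1)
    return eb

# An outlying area (direct = 10, synthetic = 0) is moved only c*se = 2 units,
# while a typical area (direct = 1) is shrunk in full.
print(ltr(10.0, 0.0, shrink=0.5, se=2.0), ltr(1.0, 0.0, shrink=0.5, se=2.0))
```

The cap is what makes the rule robust toward atypical small areas: an outlying direct estimate cannot be dragged arbitrarily far toward the synthetic value.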
The Role of Dopamine and Acetylcholine as Modulators of Selective Attention and Response Speed
(2015)
The principles of top-down and bottom-up processing are essential to cognitive psychology. At their broadest, most general definition, they denote that processing can be driven either by the salience of the stimulus input or by individual goals and strategies. Selective top-down attention, specifically, consists in the deliberate prioritizing of stimuli that are deemed goal-relevant, while selective bottom-up attention relies on the automatic allocation of attention to salient stimuli (Connor, Egeth, & Yantis, 2004; Schneider, Schote, Meyer, & Frings, 2014). Variations within neurotransmitter systems can modulate cognitive performance in a domain-specific fashion (Greenwood, Fossella, & Parasuraman, 2005). Noudoost and Moore (2011a) proposed that the influence of the dopaminergic neurotransmitter system on selective top-down attention might be greater than the influence of this system on selective bottom-up attention; likewise, they assumed that the cholinergic neurotransmitter system might be more important for selective bottom-up than top-down attention. To test this hypothesis, naturally occurring variations within the two neurotransmitter systems were assessed. Five polymorphisms were selected; two of the dopaminergic system (the COMT Val158Met polymorphism and the DAT1 polymorphism) and three of the cholinergic system (the CHRNA4 rs1044396 polymorphism, the CHRNA5 rs3841324 polymorphism, and the CHRNA5 rs16969968 polymorphism). It was tested whether these polymorphisms modulated the performance in tasks of selective top-down attention (a Stroop task and a Negative priming task) and in a task of selective bottom-up attention (a Posner-Cuing task). Indeed, the dopaminergic polymorphisms influenced selective top-down attention, but exerted no effects on bottom-up attention. This aligned with the hypothesis proposed by Noudoost and Moore (2011a). In contrast, the cholinergic polymorphisms were not found to modulate selective bottom-up attention. 
The three cholinergic polymorphisms, however, affected the general response speed in the Stroop task, Negative priming task, and Posner-Cuing task (irrespective of attentional processing). In sum, the findings of this study provide strong indications that the dopaminergic system modulates selective top-down attention, while the cholinergic system is highly relevant for the general speed of information processing.
Dry tropical forests undergo massive conversion and degradation processes. This also holds true for the extensive Miombo forests that cover large parts of Southern Africa. While the proportionally largest area can be found in Angola, the country still struggles with food shortages, insufficient medical and educational supplies, and the ongoing reconstruction of infrastructure after 27 years of civil war. Especially in rural areas, the local population is therefore still heavily dependent on the consumption of natural resources, as well as on subsistence agriculture. This leads, on the one hand, to large areas of Miombo forests being converted for cultivation purposes and, on the other hand, to degradation processes due to the selective use of forest resources. While forest conversion in south-central rural Angola has already been quantitatively described, information about forest degradation is not yet available. This is due to the history of conflicts and the associated research difficulties, as well as the remote location of this area. We apply an annual time series approach using Landsat data in south-central Angola, not only to assess the current degradation status of the Miombo forests, but also to derive past developments reaching back to times of armed conflict. We use the Disturbance Index, based on the tasseled cap transformation, to exclude external influences like inter-annual variation of rainfall. Based on this time series, linear regression is calculated for forest areas unaffected by conversion, but also for the pre-conversion period of those areas that were used for cultivation purposes during the observation time. Metrics derived from the linear regression are used to classify the study area according to its dominant modification processes. We compare our results to MODIS latent integral trends and to further products to derive information on underlying drivers.
Around 13% of the Miombo forests are affected by degradation processes, especially along streets, in villages, and close to existing agriculture. However, areas in presumably remote and dense forest are also affected to a significant extent. A comparison with MODIS-derived fire ignition data shows that these areas are most likely affected by recurring fires rather than by selective timber extraction. We confirm that areas used for agriculture are more heavily disturbed by selective use beforehand than those that remain unaffected by conversion. The results are substantiated by the MODIS latent integral trends, and we also show that, due to extent and location, assessing forest conversion alone is most likely not sufficient to provide good estimates of the loss of natural resources.
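The per-pixel trend metrics described above reduce, at their core, to an ordinary least-squares slope fitted to the annual Disturbance Index series of each pixel. The sketch below uses made-up values; a sustained positive slope would indicate increasing disturbance.

```python
# Ordinary least-squares slope of an annual time series: the elementary
# trend metric behind per-pixel linear regression on Landsat-derived
# Disturbance Index values. The series below is hypothetical.

def ols_slope(years, values):
    """Slope of the least-squares line through (years, values)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(2000, 2010))
di = [0.10, 0.12, 0.11, 0.15, 0.16, 0.18, 0.17, 0.20, 0.22, 0.23]
print(ols_slope(years, di))  # positive: disturbance increasing over time
```

In practice such slopes (together with intercepts and residual statistics) are computed per pixel over the whole scene and then thresholded or classified into the dominant modification processes.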
Dry tropical forests are facing massive conversion and degradation processes and are the most endangered forest type worldwide. One of the largest dry forest types is the Miombo forest, which stretches across the Southern African subcontinent; the proportionally largest part of this type can be found in Angola. The study site of this thesis is located in south-central Angola. The country still suffers from the consequences of 27 years of civil war (1975-2002), which provide a unique socio-economic setting. The natural characteristics represent a cross section that proved ideal for studying underlying drivers as well as current and retrospective land use change dynamics. The major land change dynamic of the study area is the conversion of Miombo forests to cultivation areas, as well as the modification of forest areas, i.e. degradation, due to the extraction of natural resources. With future predictions of population growth, climate change and large-scale investments, land pressure is expected to increase further. To fully understand the impacts of these dynamics, both conversion and modification of forest areas were assessed. Using the conceptual framework of ecosystem services, the predominant trade-off between food and timber in the study area was analyzed, including retrospective dynamics and impacts. This approach accounts for products that contribute directly or indirectly to human well-being. For this purpose, data from the Landsat archive from 1989 to 2013 was used in different approaches adapted to the study area. The objectives of these approaches were (I) to detect underlying drivers and the temporal and spatial extent of their impact, (II) to describe modification and conversion processes reaching from times of armed conflict over the ceasefire to the post-war period, and (III) to provide an assessment of drivers and impacts in a comparative setting.
It could be shown that major underlying drivers of the conversion processes are resettlement dynamics as well as the location and quality of streets and settlements. Furthermore, forests that are selectively used for resource extraction have a higher chance of being converted to fields. Drivers of forest degradation are, on the one hand, also strongly connected to settlements and infrastructure, but on the other hand, to a large extent, to fire dynamics that occur mostly in more remote and presumably undisturbed forest areas. The loss of woody biomass, as well as its slow recovery after the abandonment of fields, could be quantified and stands in stark contrast to the amount of food that needs to be cultivated. The results of the thesis support the fundamental understanding of drivers and impacts in the study area and can thus contribute to sustainable resource management.
A phenomenon of recent decades is that digital marketplaces on the Internet are establishing themselves for a wide variety of products and services. Recently, it has become possible for private individuals to invest in young and innovative companies (so-called "start-ups"). Via Internet portals, potential investors can examine various start-ups and then directly invest in the start-up of their choice. In return, investors receive a share in the firm's profit, while companies can use the raised capital to finance their projects. This new way of financing is called "equity crowdfunding" (ECF) or "crowdinvesting". The aim of this dissertation is to provide empirical findings on the characteristics of ECF. In particular, it analyzes whether ECF is able to overcome geographic barriers, the interdependence of ECF and capital structure, and the risk of failure for funded start-ups as well as their chances of receiving follow-up funding from venture capitalists or business angels. The results of the first part of this dissertation show that investors in ECF prefer local companies. In particular, investors who invest larger amounts have a stronger tendency to invest in local start-ups. The second part of the dissertation provides first indications of the interdependencies between capital structure and ECF. The analysis makes clear that the capital structure is not a determinant of undertaking an ECF campaign. The third part of the dissertation analyzes the success of companies financed by ECF in a country comparison. The results show that after a successful ECF campaign, German companies have a higher chance of receiving follow-up funding from venture capitalists than British companies. The probability of survival, however, is slightly lower for German companies. The results provide relevant implications for theory and practice.
The existing literature in the area of entrepreneurial finance will be extended by insights into investor behavior, additions to the capital structure theory and a country comparison in ECF. In addition, implications are provided for various actors in practice.
This doctoral thesis examines intergenerational knowledge transfer, its antecedents, and how participation in intergenerational knowledge transfer is related to the performance evaluation of employees. To answer these questions, the thesis builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. By surveying 444 trainees and trainers, this thesis also demonstrates that interpersonal antecedents affect how trainees participate in intergenerational knowledge transfer with their trainers. The results of this study thereby support the relevance of interpersonal antecedents for intergenerational knowledge transfer, yet also emphasize the implications attached to the assigned roles in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet depends on whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis provides insights into the topic by covering a multitude of antecedents of intergenerational knowledge transfer, as well as how participation in intergenerational knowledge transfer may be associated with the performance evaluation of employees.
Large-scale non-parametric applied shape optimization for computational fluid dynamics is considered. Treating a shape optimization problem as a standard optimal control problem by means of a parameterization, the Lagrangian usually requires knowledge of the partial derivative of the shape parameterization and deformation chain with respect to the input parameters. For a variety of reasons, this mesh sensitivity Jacobian is usually quite problematic. For a sufficiently smooth boundary, the Hadamard theorem provides a gradient expression that exists on the surface alone, completely bypassing the mesh sensitivity Jacobian. Building upon this, the gradient computation becomes independent of the number of design parameters, and all surface mesh nodes are used as design unknowns in this work, effectively allowing a free morphing of shapes during optimization. Contrary to a parameterized shape optimization problem, where a smooth surface is usually created independently of the input parameters by construction, regularity is not preserved automatically in the non-parametric case. As part of this work, the shape Hessian is used in an approximative Newton method, also known as the Sobolev method or gradient smoothing, to ensure a certain regularity of the updates; thus a smooth shape is preserved while at the same time the one-shot optimization method is accelerated considerably. For PDE-constrained shape optimization, the Hessian usually is a pseudo-differential operator. Fourier analysis is used to identify the operator symbol both analytically and discretely. Preconditioning the one-shot optimization by an appropriate Hessian symbol is shown to greatly accelerate the optimization. As the correct discretization of the Hadamard form usually requires evaluating certain surface quantities such as tangential divergence and curvature, special attention is also given to discrete differential geometry on triangulated surfaces for evaluating shape gradients and Hessians.
The Hadamard formula and Hessian approximations are applied to a variety of flow situations. In addition to shape optimization of internal and external flows, major focus lies on aerodynamic design such as optimizing two dimensional airfoils and three dimensional wings. Shock waves form when the local flow speed exceeds the speed of sound, and the gradient must be evaluated correctly at discontinuous states. To ensure proper shock resolution, an adaptive multi-level optimization of the Onera M6 wing is conducted using more than 36,000 shape unknowns on a standard office workstation, demonstrating the applicability of the one-shot shape optimization method to industry-size problems.
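The gradient-smoothing idea described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's implementation: it applies an identity-minus-Laplacian operator to a nodal shape gradient on a closed curve, whereas the thesis works with shape Hessian symbols on triangulated surfaces; all names and parameter values here are ours.

```python
import numpy as np

def sobolev_smooth(raw_gradient, eps=1.0):
    """Smooth a nodal shape gradient by solving (I - eps * Laplacian) s = g.

    Sketch of the 'Sobolev method' / gradient smoothing idea: the
    identity-minus-Laplacian operator acts as an approximate shape Hessian
    and damps high-frequency oscillations in the descent direction, so the
    shape stays smooth during free-form updates. Here the Laplacian is a
    1D periodic finite-difference operator (a closed curve of nodes).
    """
    n = len(raw_gradient)
    # Periodic second-difference matrix (discrete Laplacian on a closed curve)
    L = -2.0 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] = 1.0
        L[i, (i + 1) % n] = 1.0
    A = np.eye(n) - eps * L
    return np.linalg.solve(A, raw_gradient)

# A noisy raw gradient: a smooth trend plus high-frequency noise
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
g = np.sin(x) + 0.5 * rng.standard_normal(64)
s = sobolev_smooth(g, eps=5.0)
# The smoothed direction keeps the trend but has far less high-frequency content
```

In the PDE-constrained setting, the toy operator (I − εΔ) is replaced by an approximation of the true shape Hessian, e.g. via its operator symbol, which is what accelerates the one-shot iteration.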
This work is concerned with arbitrage bounds for prices of contingent claims under transaction costs, but in the absence of other conceivable market frictions. Assumptions on the underlying market are kept as weak as is convenient for deducing meaningful results that make good economic sense. In discrete time we also allow for underlying price processes with uncountable state space. In continuous time the underlying price process is modeled by a semimartingale. For the most part we avoid any stronger assumptions. The main problems addressed in this work are the modelling of (proportional) transaction costs, Fundamental Theorems of Asset Pricing under transaction costs, dual characterizations of arbitrage bounds under transaction costs, quantile hedging under transaction costs, and alternatives to the Black-Scholes model in continuous time under transaction costs. The results apply to stock and currency markets.
Academic self-concept (ASC) comprises individual perceptions of one's own academic ability. In a cross-sectional quasi-representative sample of 3,779 German elementary school children in grades 1 to 4, we investigated (a) the structure of ASC, (b) ASC profile formation, an aspect of differentiation that is reflected in lower correlations between domain-specific ASCs with increasing grade level, (c) the impact of (internal) dimensional comparisons of one's own ability in different school subjects on profile formation of ASC, and (d) the role played by differences in school grades between subjects for these dimensional comparisons. The nested Marsh/Shavelson model, with general ASC at the apex and math, writing, and reading ASC as specific factors nested under general ASC, fitted the data at all grade levels. A first-order factor model with math, writing, reading, and general ASCs as correlated factors provided a good fit, too. ASC profile formation became apparent during the first two to three years of school. Dimensional comparisons across subjects contributed to ASC profile formation. School grades enhanced these comparisons, especially when achievement profiles were uneven. In part, findings depended on the assumed structural model of ASCs. Implications for further research are discussed with special regard to factors influencing and moderating dimensional comparisons.
Fostering positive and realistic self-concepts of individuals is a major goal in education worldwide (Trautwein & Möller, 2016). Individuals spend most of their childhood and adolescence in school. Thus, schools are important contexts for individuals to develop positive self-perceptions such as self-concepts. In order to enhance positive self-concepts in educational settings and in general, it is indispensable to have comprehensive knowledge about the development and structure of self-concepts and their determinants. To date, extensive empirical and theoretical work on antecedents and change processes of self-concept has been conducted. However, several research gaps still exist, and several of these are the focus of the present dissertation. Specifically, these research gaps encompass (a) the development of multiple self-concepts from multiple perspectives regarding stability and change, (b) the direction of the longitudinal interplay between self-concept facets over the entire time period from childhood to late adolescence, (c) the evidence that a recently developed structural model of academic self-concept (the nested Marsh/Shavelson model [Brunner et al., 2010]) fits the data of elementary school students, (d) the investigation of structural changes in academic self-concept profile formation within this model, (e) the investigation of dimensional comparison processes as determinants of academic self-concept profile formation in elementary school students within the internal/external frame of reference model (I/E model; Marsh, 1986), (f) the test of moderating variables for dimensional comparison processes in elementary school, (g) the test of the key assumption of the I/E model that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, and (h) the generalizability of the findings regarding the I/E model across different statistical analytic methods.
Thus, the aim of the present dissertation is to contribute to closing these gaps with three studies. To this end, data from German students enrolled in elementary through secondary school were gathered in three projects spanning the developmental period from childhood to adolescence (ages 6 to 20). Three vital self-concept areas in childhood and adolescence were investigated: general self-concept (i.e., self-esteem), academic self-concepts (general, math, reading, writing, native language), and social self-concepts (of acceptance and assertion). In all studies, data were analyzed within a latent variable framework. Findings are discussed with respect to the research aims of acquiring more comprehensive knowledge on the structure and development of significant self-concepts in childhood and adolescence and their determinants. In addition, theoretical and practical implications derived from the findings of the present studies are outlined. Strengths and limitations of the present dissertation are discussed. Finally, an outlook for future research on self-concepts is given.
The demand for reliable statistics has been growing over the past decades, because more and more political and economic decisions are based on statistics, e.g. regional planning, allocation of funds or business decisions. Therefore, it has become increasingly important to develop and to obtain precise regional indicators as well as disaggregated values in order to compare regions or specific groups. In general, surveys provide the information for these indicators only for larger areas like countries or administrative divisions. However, in practice, it is more interesting to obtain indicators for specific subdivisions like the NUTS 2 or NUTS 3 levels. The Nomenclature of Units for Territorial Statistics (NUTS) is a hierarchical system of the European Union used in statistics to refer to subdivisions of countries. In many cases, sample information on such detailed levels is not available. Thus, there are projects such as the European Census which aim to provide precise numbers at the NUTS 3 or even community level. The European Census was conducted in 2011 in Germany and Switzerland, among other countries. Most of the participating countries use sample and register information in a combined form for the estimation process. The classical estimation methods for small areas or subgroups, such as the Horvitz-Thompson (HT) estimator or the generalized regression (GREG) estimator, suffer from small area-specific sample sizes which cause high variances of the estimates. The application of small area methods, for instance the empirical best linear unbiased predictor (EBLUP), reduces the variance of the estimates by including auxiliary information to increase the effective sample size. These estimation methods lead to higher accuracy of the variables of interest. Small area estimation is also used in the context of business data. For example, when estimating the revenues of specific subgroups, such as on the NACE 3 or NACE 4 levels, small sample sizes can occur.
The Nomenclature statistique des activités économiques dans la Communauté européenne (NACE) is a system of the European Union which defines an industry standard classification. Besides small sample sizes, business data have further special characteristics. The main challenge is that business data have skewed distributions, with a few large companies and many small businesses. For instance, in the automotive industry in Germany, there are many small suppliers but only few large original equipment manufacturers (OEM). Altogether, highly influential units and outliers can be observed in business statistics. These extreme values in connection with small sample sizes cause severe problems when standard small area models are applied. These models are generally based on the normality assumption, which does not hold in the case of outliers. One way to address these peculiarities is to apply outlier-robust small area methods. The availability of adequate covariates is important for the accuracy of the small area methods described above. However, in business data, auxiliary variables are rarely available at the population level. One of several reasons for this is the fact that in Germany many enterprises are not reflected in business registers due to truncation limits. Furthermore, only listed enterprises or companies which exceed specific thresholds are obligated to publish their results. This limits the number of potential auxiliary variables for the estimation. Even though there are issues with available covariates, business data often include spatial dependencies which can be used to enhance small area methods. Next to spatial information based on geographic characteristics, group-specific similarities like related industries based on NACE codes can be used. For instance, enterprises from the same NACE 2 level, e.g. sector 47 retail trade, behave more similarly than two companies from different NACE 2 levels, e.g.
sector 05 mining of coal and sector 64 financial services. This spatial correlation can be incorporated by extending the general linear mixed model through the integration of spatially correlated random effects. In business data, outliers as well as geographic or content-wise spatial dependencies between areas or domains are closely linked. The coincidence of these two factors and the resulting consequences have not been fully covered in the relevant literature. The only approach that combines robust small area methods with spatial dependencies is the M-quantile geographically weighted regression model. In the context of EBLUP-based small area models, the combination of robust and spatial methods has not been considered yet. Therefore, this thesis provides a theoretical approach to this scientific and practical problem and shows its relevance in an empirical study.
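To make the estimators named above concrete, here is a minimal sketch of the Horvitz-Thompson estimator and of the area-level shrinkage idea behind EBLUP-type estimates (a Fay-Herriot-style composite). The toy data and function names are ours, not from the thesis.

```python
import numpy as np

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimator of a population total: observed values
    weighted by their inverse inclusion probabilities."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return float(np.sum(y / pi))

def area_level_shrinkage(direct, synthetic, var_direct, var_random_effect):
    """Fay-Herriot-style composite estimate: shrink a noisy direct estimate
    toward a model-based synthetic one. Areas with large sampling variance
    borrow more strength from the model -- the core idea behind EBLUP."""
    gamma = var_random_effect / (var_random_effect + var_direct)
    return gamma * direct + (1.0 - gamma) * synthetic

# Toy check: under an equal-probability design (pi_i = n/N), the HT total
# reduces to N times the sample mean.
N, n = 1000, 50
rng = np.random.default_rng(1)
sample = rng.normal(100.0, 10.0, size=n)
estimate = horvitz_thompson_total(sample, np.full(n, n / N))
```

The shrinkage weight gamma makes explicit why small area-specific sample sizes hurt the direct estimator: as the sampling variance grows, gamma shrinks and the estimate leans on the auxiliary model.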
In order to classify smooth foliated manifolds, i.e. smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute them. For example, we prove a Mayer-Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory for sequences of Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
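For orientation, a Mayer-Vietoris theorem of this kind yields, for a suitable open cover M = U ∪ V, a long exact sequence of the familiar shape (we write H^k_F for the foliated de Rham cohomology; the notation is ours, and the precise hypotheses on the cover are those of the thesis):

```latex
\cdots \longrightarrow H^{k}_{F}(M) \longrightarrow H^{k}_{F}(U) \oplus H^{k}_{F}(V)
\longrightarrow H^{k}_{F}(U \cap V) \overset{\delta}{\longrightarrow} H^{k+1}_{F}(M) \longrightarrow \cdots
```

with δ the connecting homomorphism, exactly as in the classical de Rham case.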
It has been the overall aim of this research work to assess the potential of hyperspectral remote sensing data for the determination of forest attributes relevant to forest ecosystem simulation modeling and forest inventory purposes. A number of approaches for the determination of structural and chemical attributes from hyperspectral remote sensing have been applied to the collected data sets. Many of the methods found in the literature had so far only been applied to broadband multispectral data or to vegetation canopies other than forests, were reported to work only at the leaf level or with modelled data, were not validated with ground truth data, or were not systematically compared to other methods. Attributes that describe the properties of the forest canopy and that are potentially open to remote sensing were identified, appropriate methods for their retrieval were implemented, and field, laboratory and image data (HyMap sensor) were acquired over a number of forest plots. The study on structural attributes compared statistical and physical approaches. In the statistical section, linear predictive models between vegetation indices derived from HyMap data and field measurements of structural forest stand attributes were systematically evaluated. The study demonstrates that for hyperspectral image data, linear regression models can be applied to quantify leaf area index and crown volume with good accuracy. For broadband multispectral data, the accuracy was generally lower. The physically-based approach used the invertible forest reflectance model (INFORM), a combination of the well-established sub-models FLIM, SAIL and LIBERTY. The model was inverted with HyMap data using a neural network approach. In comparison to the statistical approach, it could be shown that the reflectance model inversion works equally well.
In contrast to empirically derived prediction functions that are generally limited to the local conditions at a certain point in time and to a specified sensor type, the calibrated reflectance model can be applied more easily to different optical remote sensing data acquired over central European forests. The study on chemical forest attributes evaluated the information content of HyMap data for the estimation of nitrogen, chlorophyll and water concentration. A number of needle samples of Norway spruce were analysed for their total chlorophyll, nitrogen and water concentrations. The chemical data was linked to needle spectra measured in the laboratory and canopy spectra measured by the HyMap sensor. Wavebands selected in statistical models were often located in spectral regions that are known to be important for chlorophyll detection (red edge, green peak). Predictive models were applied on the HyMap image to compute maps of chlorophyll concentration and nitrogen concentration. Results of map overlay operations revealed coherence between total chlorophyll and zones of stand development stage and between total chlorophyll and zones of soil type. Finally, it can be stated that the hyperspectral remote sensing data generally contains more information relevant to the estimation of the forest attributes compared to multispectral data. Structural forest attributes, except biomass, can be determined with good accuracy from a hyperspectral sensor type like HyMap. Among the chemical attributes, chlorophyll concentration can be determined with good accuracy and nitrogen concentration with moderate accuracy. For future research, additional dimensions have to be taken into account, for instance through exploitation of multi-view angle data. Additionally, existing forest canopy reflectance models should be further improved.
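As an illustration of the statistical approach described above (linear predictive models between vegetation indices and stand attributes), the following sketch computes the classic NDVI and fits a linear model to hypothetical leaf area index values. The numbers are invented for illustration and are not data from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, one of the classic vegetation
    indices usable as a predictor in linear models for stand attributes."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical plot-level data (invented for illustration): per-plot NDVI
# and field-measured leaf area index (LAI).
ndvi_vals = np.array([0.55, 0.62, 0.70, 0.74, 0.81, 0.85])
lai_vals = np.array([2.1, 2.6, 3.4, 3.7, 4.5, 4.8])

# Linear predictive model LAI ~ a + b * NDVI, fitted by least squares
b, a = np.polyfit(ndvi_vals, lai_vals, 1)
predicted = a + b * ndvi_vals
ss_res = np.sum((lai_vals - predicted) ** 2)
ss_tot = np.sum((lai_vals - lai_vals.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # goodness of fit of the predictive model
```

Hyperspectral data simply offer many more candidate bands for such indices than broadband multispectral sensors, which is one reason the statistical models performed better on HyMap data.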
Coastal erosion describes the displacement of land caused by destructive sea waves, currents or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing ocean current patterns, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts are being made to mitigate these effects using groins, breakwaters and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always involve the following three distinct aspects:
The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its particularities. More precisely, in the Eulerian setting, first a steady Helmholtz equation in the form of a scattering problem is investigated, followed by the shallow water equations in classical form, equipped with porosity, sediment portability and further subtleties. Secondly, in the Lagrangian framework, the Lagrangian shallow water equations form the center of interest.
The chosen discretization: depending on the nature and peculiarities of the constraining partial differential equation, we choose between finite elements in conjunction with continuous and discontinuous Galerkin methods for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up, and reverse the order in Lagrangian computations.
Recently, optimization has become an integral part of the aerodynamic design process chain. However, because of uncertainties with respect to the flight conditions and geometrical uncertainties, a design optimized by a traditional design optimization method seeking only optimality may not achieve its expected performance. Robust optimization deals with optimal designs, which are robust with respect to small (or even large) perturbations of the optimization setpoint conditions. The resulting optimization tasks become much more complex than the usual single setpoint case, so that efficient and fast algorithms need to be developed in order to identify, quantify and include the uncertainties in the overall optimization procedure. In this thesis, a novel approach towards stochastic distributed aleatory uncertainties for the specific application of optimal aerodynamic design under uncertainties is presented. In order to include the uncertainties in the optimization, robust formulations of the general aerodynamic design optimization problem based on probabilistic models of the uncertainties are discussed. Three classes of formulations, the worst-case, the chance-constrained and the semi-infinite formulation, of the aerodynamic shape optimization problem are identified. Since the worst-case formulation may lead to overly conservative designs, the focus of this thesis is on the chance-constrained and semi-infinite formulation. A key issue is then to propagate the input uncertainties through the systems to obtain statistics of quantities of interest, which are used as a measure of robustness in both robust counterparts of the deterministic optimization problem. Due to the highly nonlinear underlying design problem, uncertainty quantification methods are used in order to approximate and consequently simplify the problem to a solvable optimization task.
Computationally demanding evaluations of high dimensional integrals arise from the direct approximation of statistics as well as from uncertainty quantification approximations. To overcome the curse of dimensionality, sparse grid methods in combination with adaptive refinement strategies are applied. The reduction of the number of discretization points is an important issue in the context of robust design, since the computational effort of the numerical quadrature is incurred in every iteration of the optimization algorithm. In order to efficiently solve the resulting optimization problems, algorithmic approaches based on multiple-setpoint ideas in combination with one-shot methods are presented. A parallelization approach is provided to handle the additional computational effort introduced by multiple-setpoint optimization problems. Finally, the developed methods are applied to 2D and 3D Euler and Navier-Stokes test cases, verifying their industrial usability and reliability. Numerical results of robust aerodynamic shape optimization under uncertain flight conditions as well as geometrical uncertainties are presented. Further, uncertainty quantification methods are used to investigate the influence of geometrical uncertainties on quantities of interest in a 3D test case. The results demonstrate the significant effect of uncertainties in the context of aerodynamic design and thus the need for robust design to ensure good performance under real-life conditions. The thesis proposes a general framework for robust aerodynamic design that attacks the additional computational complexity of the treatment of uncertainties, thus making robust design in this sense possible.
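The multiple-setpoint idea mentioned above can be illustrated with a toy objective: replace the single-setpoint objective by a weighted sum over several operating conditions and minimize that. The quadratic f and all values below are illustrative stand-ins for an expensive aerodynamic evaluation, not the thesis's formulation.

```python
import numpy as np

def robust_objective(x, setpoints, weights):
    """Weighted multiple-setpoint surrogate for robust design: the objective
    is evaluated at several operating conditions and combined with
    quadrature-like weights. The quadratic per-setpoint term is a toy
    stand-in for an expensive aerodynamic evaluation."""
    return float(np.sum(weights * (x - setpoints) ** 2))

def minimize_1d(setpoints, weights, x0=0.0, lr=0.1, steps=200):
    """Plain gradient descent on the weighted multi-setpoint objective."""
    x = x0
    for _ in range(steps):
        x -= lr * np.sum(weights * 2.0 * (x - setpoints))
    return x

# Three operating conditions (e.g. Mach numbers) and weights summing to one
setpoints = np.array([0.70, 0.75, 0.80])
weights = np.array([0.25, 0.50, 0.25])
x_opt = minimize_1d(setpoints, weights)
# For this quadratic toy the robust optimum is the weighted mean, 0.75
```

The weights play the role of the quadrature weights produced by the (sparse-grid) uncertainty quantification; each gradient step requires one evaluation per setpoint, which is exactly what the parallelization approach exploits.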
Fast and Slow Effects of Cortisol on Several Functions of the Central Nervous System in Humans
(2014)
Cortisol is one of the key substances released during stress to restore homeostasis. Our knowledge of the impact of this glucocorticoid on cognition and behavior in humans is, however, still limited. Two modes of action of cortisol are known, a rapid, nongenomic and a slow, genomic mode. Both mechanisms appear to be involved in mediating the various effects of stress on cognition. Here, three experiments are presented that investigated fast and slow effects of cortisol on several functions of the human brain. The first experiment investigated the interaction between insulin and slow, genomic cortisol effects on resting regional cerebral blood flow (rCBF) in 48 young men. A bilateral, locally distinct increase in rCBF in the insular cortex was observed 37 to 58 minutes after intranasal insulin administration. Cortisol did not influence rCBF, neither alone nor in interaction with insulin. This finding suggests that cortisol does not influence resting cerebral blood flow within a genomic timeframe. The second experiment examined fast cortisol effects on memory retrieval. 40 participants (20 of them female) learned associations between neutral male faces and social descriptions and were tested for recall one week later. Cortisol administered intravenously 8 minutes before retrieval influenced recall performance in an inverted U-shaped dose-response relationship. This study demonstrates a rapid, presumably nongenomic cortisol effect on memory retrieval in humans. The third experiment studied rapid cortisol effects on early multisensory integration. 24 male participants were tested twice in a focused cross-modal choice reaction time paradigm, once after cortisol and once after placebo infusion. Cortisol acutely enhanced the integration of visual targets and startling auditory distractors, when both stimuli appeared in the same sensory hemi-field. The rapidity of effect onset strongly suggests that cortisol changes multisensory integration by a nongenomic mechanism.
The work presented in this thesis highlights the essential role of cortisol as a fast acting agent during the stress response. Both the second and the third experiment provide new evidence of nongenomic cortisol effects on human cognition and behavior. Future studies should continue to investigate the impact of rapid cortisol effects on the functioning of the human brain.
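The inverted U-shaped dose-response relationship reported in the second experiment is commonly summarized by a quadratic fit with a negative curvature term. The sketch below uses invented numbers purely to illustrate the shape; they are not the study's data.

```python
import numpy as np

# Hypothetical dose-response data (invented for illustration): recall
# performance peaks at an intermediate cortisol dose.
dose = np.array([0.0, 3.0, 6.0, 12.0, 24.0])
recall = np.array([10.0, 13.0, 15.0, 13.5, 9.0])

# An inverted U-shaped dose-response is commonly modeled by a quadratic:
# a negative coefficient on the squared term gives the inverted U.
c2, c1, c0 = np.polyfit(dose, recall, 2)
peak_dose = -c1 / (2.0 * c2)   # vertex of the fitted parabola
```

The sign of the curvature coefficient and the location of the vertex are the two quantities that characterize such a relationship: both too little and too much of the substance impair performance relative to the intermediate optimum.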
Theoretical and empirical research assumes a negative development of student achievement motivation over the course of their school careers (i.e., mean-level declines of achievement motivation). However, the exact magnitude of this motivational change remains elusive and it is unclear whether different motivational constructs show similar developmental trends. Furthermore, it is unknown whether motivational declines are related to a particular school stage (i.e., elementary, middle, or high school) or the school transition, and which additional changes are associated with motivational decreases (e.g., changes in student achievement). Finally, previous research has remained inconsistent regarding the question whether ability grouping of students helps prevent motivational declines or results in additional motivational “costs” for students.
This dissertation presents three articles that were designed to address these research questions. In Article 1, a meta-analysis based on 107 independent longitudinal studies investigated student mean-level changes in self-esteem, academic self-concept, academic self-efficacy, intrinsic motivation, and achievement goals from first to 13th grade. Article 2 comprised two longitudinal studies with German adolescents (Study: n = 745 students assessed in four waves in grades 5-7; Study 2: n = 1420 students assessed in four waves in grades 5-8). Both longitudinal studies investigated the separate and the joint development of achievement goals, interest, and achievement in math. In Article 3, a longitudinal study (n = 296 high-ability students assessed in four waves in grades 5-7) investigated the effects of full-time ability grouping on student development of academic self-concept and achievement in math.
The meta-analysis revealed significant decreases in math and language academic self-concept, intrinsic motivation, and mastery and performance-approach goals, whereas no significant changes in self-esteem, general academic self-concept, academic self-efficacy, and performance-avoidance goals were found. Interestingly, motivational declines were not related to school stage or school transition. In Article 2, decreases in interest and mastery, performance-approach, and performance-avoidance goals were indicated by both longitudinal studies. Development of mastery and performance-approach goals was positively related or unrelated to development in interest and achievement, whereas development of performance-avoidance goals was negatively related or unrelated to development of interest and achievement. Finally, the longitudinal study in Article 3 revealed no significant change in student academic self-concept in math over time. Ability grouping showed no positive or negative effects on student academic self-concept. However, high-ability students that were grouped together demonstrated greater gains in their achievement than high-ability students in regular classes.
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that is corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates the gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data. The percentage of male traders in a company is the dependent variable, while the company's industry and fundamental/technical aspects serve as independent variables. The results show that the percentage of women trading a company rises with a company's age, market capitalization, a company's systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigates the relationship between regret and malicious and benign envy. The relationship is analyzed in two different studies.
In experiment 1, subjects filled out psychological scales measuring regret, the two types of envy, core self-evaluation and the Big Five personality traits. In experiment 2, felt regret was measured in a hypothetical scenario, and the subject's felt regret was regressed on the other variables mentioned above. The two experiments revealed that there is a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The relationship can be explained by the common action tendency of self-improvement for regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference comes from the definition of green finance, particularly with regard to coal-fired power plants, and especially from the responsibility of nation-states for emissions abroad. China is promoting fossil fuel projects abroad through its Belt and Road Initiative, but the EU's Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors. On the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels. The role of central banks also differs between the EU and China. China's central bank is actively working towards aligning the financial sector with green finance. A possible new role of the EU's central bank or the priority financing of green sectors through political decision-making is still being debated.
Industrial companies mainly aim at increasing their profit. They therefore intend to reduce production costs without sacrificing quality. Furthermore, in the context of the 2020 energy targets, energy efficiency plays a crucial role. Mathematical modeling, simulation and optimization tools can contribute to the achievement of these industrial and environmental goals. For the process of white wine fermentation, there is considerable potential for saving energy. In this thesis, mathematical modeling, simulation and optimization tools are customized to the needs of this biochemical process and applied to it. Two different models are derived that represent the process as it can be observed in real experiments. One model takes the growth, division and death behavior of the single yeast cell into account. This is modeled by a partial integro-differential equation and additional multiple ordinary integro-differential equations describing the development of the other substrates involved. The other model, described by ordinary differential equations, represents the growth and death behavior of the yeast concentration and the development of the other substrates involved. The more detailed model is investigated analytically and numerically. Thereby, existence and uniqueness of solutions are studied and the process is simulated. These investigations initiate a discussion regarding the value of the additional benefit of this model compared to the simpler one. For optimization, the process is described by the less detailed model. The process is identified by a parameter and state estimation problem. The energy and quality targets are formulated in the objective function of an optimal control or model predictive control problem controlling the fermentation temperature. This means that cooling during the process of wine fermentation is controlled. Parameter and state estimation with nonlinear economic model predictive control is applied in two experiments.
For the first experiment, the optimization problems are solved by multiple shooting with a backward differentiation formula method for the discretization of the problem, and by a sequential quadratic programming method with a line search strategy and a Broyden-Fletcher-Goldfarb-Shanno update for the solution of the constrained nonlinear optimization problems. Different rounding strategies are applied to the resulting post-fermentation control profile, and a quality assurance test is performed. The outcomes of this experiment are remarkable energy savings and tasty wine. For the second experiment, some modifications are made, and the optimization problems are solved using direct transcription via orthogonal collocation on finite elements for the discretization and an interior-point filter line-search method for the solution of the constrained nonlinear optimization problems. The second experiment verifies the results of the first: with this novel control strategy, energy is conserved and production costs are reduced. Tasty white wine can thus be produced at a lower price and with a clearer conscience at the same time.
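The Broyden-Fletcher-Goldfarb-Shanno update mentioned above is the core of a quasi-Newton iteration. A minimal stand-alone sketch of BFGS with a backtracking (Armijo) line search may illustrate the idea; the toy objective, step sizes and tolerances here are invented for the example and have nothing to do with the fermentation problem itself:

```python
# Minimal BFGS quasi-Newton method with backtracking line search.
# Objective and all constants are illustrative, not from the thesis.

def f(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]

def bfgs(x, iters=50, tol=1e-8):
    n = len(x)
    # Inverse Hessian approximation, initialised to the identity.
    H = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    g = grad(x)
    for _ in range(iters):
        if sum(gi * gi for gi in g) < tol:
            break
        # Search direction p = -H g.
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        # Backtracking (Armijo) line search.
        t, fx, slope = 1.0, f(x), sum(gi * pi for gi, pi in zip(g, p))
        while f([xi + t * pi for xi, pi in zip(x, p)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = [xi + t * pi for xi, pi in zip(x, p)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy > 1e-12:  # curvature condition; otherwise skip the update
            # BFGS inverse-Hessian update in its expanded (rank-two) form.
            rho = 1.0 / sy
            Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
            yHy = sum(yi * Hyi for yi, Hyi in zip(y, Hy))
            for i in range(n):
                for j in range(n):
                    H[i][j] += ((1.0 + rho * yHy) * rho * s[i] * s[j]
                                - rho * (s[i] * Hy[j] + Hy[i] * s[j]))
        x, g = x_new, g_new
    return x

x_opt = bfgs([0.0, 0.0])  # converges to the minimiser (1, -0.5)
```

In an SQP method, the same update is applied to an approximation of the Hessian of the Lagrangian of the constrained subproblems rather than to a plain objective.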
The main objective of the present thesis was to investigate whether antibody effects observed in earlier in vitro studies can translate into protection against chemical carcinogenesis in vivo, as the basis of an immunoprophylactic approach against carcinogens. As a model for chemical carcinogenesis, we selected B[a]P, the prototype polycyclic aromatic hydrocarbon (PAH), an environmental pollutant emanating from both natural and anthropogenic sources. Many in vivo models conveniently use high doses of carcinogens, mostly given as a single bolus, which provides simple surrogate readouts but poorly reflects chronic exposure to the low concentrations found in the environment. In addition, these concentrations cannot be matched with equimolar antibody concentrations obtained by immunisation. However, low B[a]P concentrations do not permit direct measurement of chemical carcinogenesis. Therefore, in the present thesis, the pharmacokinetics, metabolism and B[a]P-mediated immunotoxicity were chosen as experimental read-outs. B[a]P conjugate vaccines based on ovalbumin, tetanus toxoid and diphtheria toxoid (DT) as carrier proteins were developed to actively immunise mice against B[a]P. The B[a]P-DT conjugate induced the most robust immune response. The antibodies reacted not only with B[a]P but also with the proximate carcinogen 7,8-diol-B[a]P. Antibodies modulated the bioavailability of B[a]P and its metabolic activation in a dose-dependent manner by sequestration in the blood. In order to further improve the vaccination, we replaced the protein carrier by promiscuous T-helper cell epitopes to induce higher antibody titers with increased specificity for the B[a]P hapten. We hypothesised that a reduction of B cell binding sites on the carrier, compared to a whole protein carrier, should favour the activation of B cells recognising the hapten instead of the carrier protein.
Internal processing of the carrier, cleavage of the B[a]P-BA and subsequent presentation of the carrier peptide by MHC II molecules to the T cell receptor should induce a B cell dependent immune response by activating B cells capable of recognising B[a]P. We demonstrated that a vaccination against B[a]P using promiscuous T-helper cell epitopes as a carrier is feasible, and some of the tested peptide conjugates were more immunogenic than whole protein conjugates, with increased specificity. We showed that vaccination against B[a]P reduces immunotoxicity. B[a]P suppressed the proliferative response of both T and B cells after sub-acute administration, an effect that was completely reversed by vaccination. In immunised mice, the immunotoxic effect of B[a]P on IFN-γ, IL-12 and TNF-α production and on B cell activation was reversed. In addition, specific antibodies inhibited the induction by B[a]P of Cyp1a1 in lymphocytes and Cyp1b1 in the liver, enzymes known to convert the procarcinogen B[a]P to the ultimate DNA-adduct-forming metabolite, a major risk factor of chemical carcinogenesis. In order to replace Freund's adjuvant and to improve the immunisation strategy in terms of antibody quantity and quality, several adjuvants potentially compatible with use in humans were tested. In combination with Freund's adjuvant, the conjugate vaccine induced high levels of B[a]P-specific antibodies. We showed that all adjuvants tested induced specific antibodies against B[a]P and its carcinogenic metabolite 7,8-diol-B[a]P. The highest antibody levels were obtained with Quil A, MF-59 and Alum. Biological activity in terms of enhanced retention of B[a]P was confirmed in mice immunised with Quil A, Montanide, Alum and MF-59. Our findings demonstrate that a vaccination against B[a]P is feasible in combination with adjuvants licensed for use in humans.
Based on these results, and with the current understanding of the mechanisms of chemical carcinogenesis of the ubiquitous carcinogen B[a]P and of the effects of specific antibodies, an immunoprophylactic approach against chemical carcinogenesis is warranted. Nevertheless, the direct effects of B[a]P-specific antibodies on the different stages of carcinogenesis (e.g. adduct formation), and whether these effects translate into long-term protection against tumourigenesis, need to be proven in further experiments.
It is generally assumed that the temperature increase associated with global climate change will lead to increased thunderstorm intensity and associated heavy precipitation events. The present study investigates whether the frequency of thunderstorm occurrence will increase or decrease and how its spatial distribution will change under the A1B scenario. The region of interest is Central Europe, with a special focus on the Saar-Lor-Lux region (Saarland, Lorraine, Luxembourg) and Rhineland-Palatinate. Daily model data of the COSMO-CLM with a horizontal resolution of 4.5 km are used. The simulations were carried out for two different time slices: 1971-2000 (C20) and 2071-2100 (A1B). Thunderstorm indices are applied to detect thunderstorm-prone conditions and differences in their frequency of occurrence between the two thirty-year time spans. The indices used are CAPE (Convective Available Potential Energy), SLI (Surface Lifted Index) and TSP (Thunderstorm Severity Potential). The investigation of present and future thunderstorm-conducive conditions shows a significant increase in non-thunderstorm conditions. Regionally averaged thunderstorm frequencies will generally decrease; only in the Alps is a potential increase in thunderstorm occurrence and intensity found. The comparison between time slices of 10 and 30 years shows that the number of grid points with significant signals increases only slightly. To obtain a robust signal for severe thunderstorms, an extension to more than 75 years would be necessary.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and of ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia, because the U.S.-Western can only ever attempt to naturalize gender by constructing it first, hence inevitably and simultaneously constructing evidence that supports the opposite: the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; however, it invariably explores and negotiates their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the contrary: the U.S.-Western repeatedly – and in a surprisingly diverse and versatile way – reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.
Data fusion is becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to analyse characteristics that were not jointly observed in a single source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to analyse different characteristics jointly. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. The central aim of this thesis is therefore to evaluate a concrete set of candidate fusion algorithms, comprising classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the concrete set of candidate fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only offer univariate imputation solutions and, in the case of metric variables, impute artificially generated values instead of really observed ones. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical-learning-based nearest-neighbour method, which could overcome the distributional disadvantages of DT and RF, offers both univariate and multivariate imputation solutions and, in addition, imputes real, previously observed values for metric characteristics. Any prediction method can form the basis of the new PVM approach; in this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
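The matching idea behind PVM can be illustrated with a deliberately simple stand-in for the prediction step: a one-predictor least-squares line takes the place of the tree ensemble, and all data below are made up. The essential point is that the recipient receives the *observed* value of the donor whose prediction is closest, not the prediction itself:

```python
# Sketch of predictive value matching (PVM): fit any prediction model on the
# donor data, predict for donors and recipients, then impute each recipient
# with the observed value of the donor whose prediction is closest.
# A one-predictor least-squares line stands in for the tree-based learner.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # intercept, slope

def pvm_impute(x_donor, y_donor, x_recipient):
    a, b = fit_line(x_donor, y_donor)
    pred_donor = [a + b * xi for xi in x_donor]
    imputed = []
    for xr in x_recipient:
        pr = a + b * xr
        # nearest donor in terms of *predicted* value
        k = min(range(len(pred_donor)), key=lambda i: abs(pred_donor[i] - pr))
        imputed.append(y_donor[k])  # real observed value, not the prediction
    return imputed

x_d = [1.0, 2.0, 3.0, 4.0, 5.0]   # donor: predictor observed in both sources
y_d = [2.1, 3.9, 6.2, 8.1, 9.8]   # donor: variable to be fused
vals = pvm_impute(x_d, y_d, [2.2, 4.7])  # -> [3.9, 9.8], both observed values
```

Because only observed values are imputed, the marginal distribution of the fused variable cannot drift outside the support of the donor data, which is exactly the property that distinguishes PVM from imputing raw model predictions.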
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach under compliance with the CIA. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and also offers a univariate and multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only in relation to metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful. In contrast, the other methods induce poor results if the CIA is violated. However, if the CIA is fulfilled, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The size ratios of the studies to be fused in turn have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as a donor study, as was previously the case.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
The larval stage of the European fire salamander (Salamandra salamandra) inhabits both lentic and lotic habitats. In the latter, larvae are constantly exposed to unidirectional water flow, which has been shown to cause downstream drift in a variety of taxa. In this study, a closed artificial creek, which allowed us to keep the water flow constant over time and, at the same time, to simulate flood events with predefined water quantities and durations, was used to examine the individual movement patterns of marked fire salamander larvae exposed to unidirectional flow. Movements were tracked by individually marking the larvae with VIAlpha tags and by using downstream and upstream traps. Most individuals were stationary, while downstream drift dominated the overall movement pattern. Upstream movements were rare and occurred only over small distances of about 30 cm; downstream drift distances exceeded 10 m (up to the next downstream trap). The simulated flood events increased drift rates significantly, even several days after the flood simulation experiments. Drift probability increased with decreasing body size and decreasing nutritional status. Our results support the production hypothesis as an explanation for the movements of European fire salamander larvae within creeks.
Recent non-comparative studies diverge in their assessments of the extent to which German and Japanese post-Cold War foreign policies are characterized by continuity or change. While the majority of analyses on Germany find overall continuity in policies and guiding principles, prominent works on Japan see the country undergoing drastic and fundamental change. Using an explicitly comparative framework for analysis based on a role theoretical approach, this study reevaluates the question of change and continuity in the two countries' regional foreign policies, focusing on the time period from 1990 to 2010. Through a qualitative content analysis of key foreign policy speeches, this dissertation traces and compares German and Japanese national role conceptions (NRCs) by identifying policymakers' perceived duties and responsibilities of their country in international politics. Furthermore, it investigates actual foreign policy behavior in two case studies about German and Japanese policies on missile defense and on textbook disputes. The dissertation examines whether the NRCs identified in the content analysis are useful to understand and explain each country's particular conduct. Both qualitative content analysis and case studies demonstrate the influence of normative and ideational variables in foreign policymaking. Incremental adaptations in foreign policy preferences can be found in Germany as well as Japan, but they are anchored in established normative guidelines and represent attempts to harmonize existing preferences with the conditions of the post-Cold War era. The dissertation argues that scholars have overstated and misconstrued the changes underway by asserting that Japan is undergoing a sweeping transformation in its foreign policy.
Hardware bugs can be extremely expensive, financially. Because microprocessors and integrated circuits have become omnipresent in our daily life, and because of their continuously growing complexity, research is driven towards methods and tools that provide higher reliability of hardware designs and their implementations. Over the last decade, Ordered Binary Decision Diagrams (OBDDs) have proven to serve well as a data structure for the representation of combinational and sequential circuits. Their conciseness and their efficient algorithmic properties are responsible for their huge success in formal verification. But, due to Shannon's counting argument, OBDDs cannot always guarantee a concise representation of a given design. In this thesis, Parity Ordered Binary Decision Diagrams (Parity-OBDDs) are presented, a true extension of OBDDs: in addition to the regular branching nodes of an OBDD, functional nodes representing a parity (exclusive-or) operation are integrated into the data structure. Parity-OBDDs are strictly more powerful than OBDDs, but they are no longer a canonical representation. Besides theoretical aspects of Parity-OBDDs, algorithms for their efficient manipulation are the main focus of this thesis. Furthermore, an analysis of the factors that influence the Parity-OBDD representation size paves the way for the development of heuristic algorithms for their minimization. The results of these analyses, as well as the efficiency of the data structure, are supported by experiments. Finally, the algorithmic concept of Parity-OBDDs is extended to Mod-p Decision Diagrams (Mod-p-DDs) for the representation of functions defined over an arbitrary finite domain.
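As background, the plain OBDD baseline that Parity-OBDDs extend can be sketched with a minimal reduced-OBDD builder using a unique table (hash consing); this illustrates only the ordinary branching-node structure, not the thesis's parity nodes, and all names are ours:

```python
# Minimal reduced OBDD construction via a unique table (hash consing).
# Functions are given as Python predicates over a fixed variable order.
# Illustrates the plain OBDD baseline only; the Parity-OBDDs of the
# thesis would add functional xor-nodes on top of this structure.

TRUE, FALSE = "1", "0"
unique = {}  # (var, low, high) -> canonical node

def mk(var, low, high):
    if low == high:                 # redundant test: eliminate the node
        return low
    key = (var, low, high)
    if key not in unique:           # share isomorphic subgraphs
        unique[key] = key
    return unique[key]

def build(f, n, var=0, env=()):
    """Shannon-expand f over variables var..n-1."""
    if var == n:
        return TRUE if f(env) else FALSE
    low = build(f, n, var + 1, env + (0,))
    high = build(f, n, var + 1, env + (1,))
    return mk(var, low, high)

# Odd parity of three variables: x0 xor x1 xor x2.
parity = build(lambda e: (e[0] + e[1] + e[2]) % 2 == 1, 3)
node_count = len(unique)  # 5 internal nodes: 1 + 2 + 2 across the levels
```

The parity function already has a linear-size OBDD; Shannon's counting argument cited above concerns the fact that *most* Boolean functions admit no such concise diagram, which is the motivation for strictly more succinct representations like Parity-OBDDs.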
There is ample evidence for the impact of acute glucocorticoid treatment on hippocampus-dependent explicit learning and memory (memory for facts and events), but few studies have investigated the effect of glucocorticoids on implicit learning and memory. We conducted three studies with different methodologies to investigate the effect of glucocorticoids on different forms of implicit learning. In Study 1, we investigated the effect of cortisol depletion on short-term habituation in 49 healthy subjects: 25 participants received oral metyrapone (1500 mg) to suppress endogenous cortisol production, while 24 controls received oral placebo. Eye-blink electromyogram (EMG) responses to 105 dB acoustic startle stimuli were assessed. Effective endogenous cortisol suppression had no effect on short-term habituation of the startle reflex, but startle eye-blink responses were significantly increased in the metyrapone group. The latter finding is in line with previous human studies showing that excess cortisol, sufficient to fully occupy central nervous system (CNS) corticosteroid receptors, may reduce the startle eye blink; this effect may be mediated by CNS mechanisms controlling cortisol feedback. In Study 2, we investigated delay and trace eyeblink conditioning in a patient group with relative hypocortisolism (30 patients with fibromyalgia syndrome, FMS) compared to 20 healthy control subjects. Conditioned eyeblink response probability was assessed by EMG. Morning cortisol levels, ratings of depression, anxiety and psychosomatic complaints, as well as general symptomatology and psychological distress were assessed. Compared to healthy controls, FMS patients showed lower morning cortisol levels; trace eyeblink conditioning was facilitated, whereas delay eyeblink conditioning was reduced. Cortisol measures correlated significantly only with trace eyeblink conditioning.
Our results are in line with studies of pharmacologically induced hyper- and hypocortisolism, which affected trace eyeblink conditioning. We suggest that endocrine mechanisms affecting hippocampus-mediated forms of associative learning may play a role in the generation of symptoms in these patients. In Study 3, we investigated the effect of excess cortisol on implicit sequence learning in healthy subjects. Oral cortisol (30 mg) was given to 29 participants, whereas 31 control subjects received placebo. All volunteers performed a 5-choice serial reaction time task (SRTT). The reaction speed of every button press was determined, and difference scores were calculated as a measure of learning. Compared to the control group, we found delayed learning in the cortisol group at the very beginning of the task. This study is the first human investigation indicating impaired implicit memory function after exogenous administration of the stress hormone cortisol. Our findings support a previous neuroimaging study which suggested that the medial temporal lobe (including the hippocampus) is also active in implicit sequence learning, but our results may also depend on the engagement of other brain structures.
In recent years, desertification and land degradation have been acknowledged as a major threat to human welfare world-wide, and their environmental and societal implications have sparked the formulation of the UN Convention to Combat Desertification (UNCCD). Any measure taken against desertification, or the design of dedicated early warning systems, must take into account both the spatial and temporal dimensions of process driving factors. Equally important, past and present reactions of ecosystems to physical and socio-economic disturbances or management interventions need to be understood. In this context, remote sensing and geoinformation processing support the required assessment, monitoring and modelling approaches, and hence provide an essential contribution to the scientific component of the struggle against desertification. Supported by DG Research of the European Commission, the Remote Sensing Department of the University of Trier convened RGLDD to promote scientific exchange between specialists working at the interface of remote sensing, geoinformation processing, desertification/land degradation research and its socio-economic implications. Although targeted at the scientific community, contributions with application perspectives were of crucial importance, and both an overview of the current state of the art and operational opportunities were presented. Hosted at the Robert-Schuman Haus in Trier, the conference gained widespread attention and attracted an international audience from all parts of the world, which underlines the global dimension of land degradation and desertification processes. Based on a rigorous review of submitted abstracts, more than 100 contributions were accepted for oral and poster presentation, which are included in this proceedings edition in full-paper form. Please note: this document is optimised for screen resolution; to receive a high-resolution version, please contact the editors.
Two areas were selected to represent major process regimes of Mediterranean rangelands. In the county of Lagadas (Greece), situated east of the city of Thessaloniki, livestock grazing with sheep and goats is a major factor of the rural economy; in suitable areas, it is complemented by agricultural use. The region of Ayora (Spain) is located west of the city of Valencia and is one of the regions most affected by fires in Spain. First, long time series of satellite data were compiled for both regions on the basis of Landsat sensors, reaching back to 1976 (Ayora) and 1984 (Lagadas) with one image per year. Using a rigorous processing scheme, the data were geometrically and radiometrically corrected. Specific attention was given to exact sensor calibration and to the radiometric intercalibration of Landsat-TM and -MSS. The proportional cover of photosynthetically active vegetation was identified as a suitable quantitative indicator for assessing the state of rangelands and was inferred for all data sets using Spectral Mixture Analysis (SMA). The extensive database procured this way made it possible to map fire events in the Ayora area on the basis of sequential diachronic sets and to provide fire dates, perimeters and fire recurrence for each pixel. The increasing fire frequency of the past decades is in large part attributed to the accelerated abandonment of the area, which leads to an encroachment of shrublands and the accumulation of combustible biomass. On the basis of the fire-mapping results, a spatial and temporal stratification of the data set allowed plant recovery dynamics to be assessed at the landscape level through linear trend analysis. The long history of fire events in the Mediterranean frequently leads to processes of auto-succession: following an initial dominance of herbaceous vegetation, this commonly leads to plant communities similar to those present before the fire.
On a temporal axis, this results in typical exponential post-fire trajectories, which could also be shown in this study. The analysis of driving factors of post-fire dynamics confirmed the importance of aspect and slope: locations with lower amounts of solar irradiation and favourable water supply yielded faster recovery rates and higher post-fire vegetation cover levels. In most cases, the vegetation cover levels observed before the fire were not reached within the post-fire observation period. In the area of Lagadas, linear trend analysis and additional statistical parameters were used to derive a degradation index, which illustrates a complex pattern of stability, regeneration and degradation of vegetation cover. These different processes and states are found in close proximity and are clearly determined by topography and elevation. A sequence of analyses revealed that in particular steep, narrow valleys show positive trends, while negative trends are more abundant on flat or gently undulating terrain. Considering the local grazing regime, this spatial differentiation was related to the accessibility of specific locations. Subsequently, animal numbers at the community level were used to calculate efficient stocking rates and to assess the temporal development of their relation with vegetation cover. The calculation of temporal trajectories illustrated that only some communities show the expected negative relation; on the contrary, positive or even changing relation patterns are observed. This signifies recent concentration and intensification processes in the grazing scheme, as a result of which animals are kept in sheds where additional feedstuffs are provided. In these cases, free roaming of livestock is often confined to a few hours per day, which explains the shepherds' spatial preference for easily accessible areas.
Beyond these temporal trends, it was analysed whether the grazing pattern is equally reflected in a spatial trend. Making use of available geospatial information layers, the effort required to reach each location was expressed as a cost. Cost zones could then be defined, and woody vegetation cover, as a grazing indicator, could be inferred for the different zones. Animal sheds, mapped from very high spatial resolution Quickbird image data, were employed as starting features for this piospheric analysis. The result was a clearly structured gradient of woody vegetation cover increasing with cost distance. On the basis of these two pilot studies, the elements of a monitoring and interpretation framework identified at the beginning of the work were evaluated, and a formal interpretation scheme was presented.
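The per-pixel retrieval of vegetation cover by Spectral Mixture Analysis mentioned above reduces, in the two-endmember case, to a small constrained least-squares problem with a closed-form solution. The sketch below projects a pixel spectrum onto the line between the two endmember spectra; all reflectance values are invented for the illustration:

```python
# Two-endmember linear spectral unmixing with a sum-to-one constraint:
#   pixel = f * veg + (1 - f) * soil + noise.
# The least-squares fraction f has a closed form: project (pixel - soil)
# onto the endmember difference (veg - soil).  Reflectances are invented.

def vegetation_fraction(pixel, veg, soil):
    d = [v - s for v, s in zip(veg, soil)]       # endmember difference
    num = sum((p - s) * di for p, s, di in zip(pixel, soil, d))
    den = sum(di * di for di in d)
    f = num / den
    return max(0.0, min(1.0, f))                 # constrain to physical range

veg = [0.05, 0.08, 0.04, 0.45, 0.30]   # green vegetation endmember spectrum
soil = [0.15, 0.20, 0.25, 0.30, 0.35]  # bare soil endmember spectrum
# Pixel mixed as 40 % vegetation, 60 % soil (no noise):
pixel = [0.4 * v + 0.6 * s for v, s in zip(veg, soil)]
frac = vegetation_fraction(pixel, veg, soil)  # recovers 0.4
```

Operational SMA typically uses more endmembers (e.g. vegetation, soil, shade) and solves the resulting overdetermined system per pixel, but the geometric idea is the same.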
This thesis focuses on threats as an experience of stress. Threats are distinguished from challenges and hindrances as a further dimension of stress in challenge-hindrance models (CHM) of work stress (Tuckey et al., 2015). Multiple disciplines of psychology (e.g., stereotype, Fingerhut & Abdou, 2017; identity, Petriglieri, 2011) provide a variety of possible events that can trigger threats (e.g., failure experiences, social devaluation; Leary et al., 2009). However, a systematic consideration of triggers, and thus an overview of when the danger of threat arises, has been lacking to date. The explanation of why events are appraised as threats is related to frustrated needs (e.g., Quested et al., 2011; Semmer et al., 2007), but empirical evidence is rare, and needs can cover a wide range of content (e.g., relatedness, competence, power), depending on the need approach (e.g., Deci & Ryan, 2000; McClelland, 1961). This thesis aims to shed light on the triggers (when) and the need-based mechanism (why) of threats.
In the introduction, I introduce threats as a dimension of stress experience (cf. Tuckey et al., 2015) and give insights into the diverse field of threat triggers (the when of threats). Further, I explain threats in terms of a frustrated need for a positive self-view, before presenting specific needs as possible determinants in the threat mechanism (the why of threats). Study 1 is a literature review based on 122 papers from interdisciplinary threat research and provides a classification of five triggers and five needs identified in explanations and operationalizations of threats. In Study 2, the five triggers and needs are ecologically validated in interviews with police officers (n = 20), paramedics (n = 10), teachers (n = 10), and employees of the German federal employment agency (n = 8). The mediating role of needs in the relationship between triggers and threats is confirmed in a correlative survey design (N = 101 leaders working part-time, Study 3) and in a controlled laboratory experiment (N = 60 two-person student teams, Study 4). The thesis ends with a general discussion of the results of the four studies, providing theoretical and practical implications.
The dissertation deals with methods to improve design-based and model-assisted estimation techniques for surveys in a finite population framework. The focus is on the development of the statistical methodology as well as its implementation by means of tailor-made numerical optimization strategies. In that regard, the developed methods aim at computing statistics for several potentially conflicting variables of interest at aggregated and disaggregated levels of the population on the basis of one single survey. The work can be divided into two main research questions, which are briefly explained in the following sections.
First, an optimal multivariate allocation method is developed taking into account several stratification levels. This approach results in a multi-objective optimization problem due to the simultaneous consideration of several variables of interest. In preparation for the numerical solution, several scalarization and standardization techniques are presented, which represent the different preferences of potential users. In addition, it is shown that by solving the problem scalarized with a weighted sum for all combinations of weights, the entire Pareto frontier of the original problem can be generated. By exploiting the special structure of the problem, the scalarized problems can be efficiently solved by a semismooth Newton method. In order to apply this numerical method to other scalarization techniques as well, an alternative approach is suggested, which traces the problem back to the weighted sum case. To address regional estimation quality requirements at multiple stratification levels, the potential use of upper bounds for regional variances is integrated into the method. In addition to restrictions on regional estimates, the method enables the consideration of box-constraints for the stratum-specific sample sizes, allowing minimum and maximum stratum-specific sampling fractions to be defined.
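As an illustration of the weighted-sum scalarization described above, the following sketch traces a Pareto frontier for a bivariate allocation. All stratum sizes and standard deviations are hypothetical, and the box-constraints and regional variance bounds of the full method are omitted; without them, each scalarized problem has a closed-form solution via its Lagrange conditions, so no semismooth Newton step is needed in this toy case.

```python
import numpy as np

def weighted_sum_allocation(N_h, S, w, n_total):
    """Closed-form optimum of the weighted-sum scalarization.

    Minimizes sum_j w_j * sum_h N_h^2 S_hj^2 / n_h subject to
    sum_h n_h = n_total (box-constraints omitted in this sketch).
    """
    s_comb = np.sqrt((S**2) @ w)                    # combined per-stratum dispersion
    return n_total * N_h * s_comb / np.sum(N_h * s_comb)

def stratified_variances(N_h, S, n_h):
    # variance of the stratified total estimator per variable (fpc ignored)
    return np.array([np.sum(N_h**2 * S[:, j]**2 / n_h) for j in range(S.shape[1])])

# illustrative strata: sizes and std devs for two variables of interest
N_h = np.array([5000.0, 3000.0, 2000.0])
S = np.array([[10.0, 2.0],
              [ 4.0, 8.0],
              [ 6.0, 5.0]])

# sweeping the weight over the simplex generates the Pareto frontier
frontier = []
for w1 in np.linspace(0.01, 0.99, 25):
    w = np.array([w1, 1.0 - w1])
    n_h_opt = weighted_sum_allocation(N_h, S, w, n_total=500.0)
    frontier.append(stratified_variances(N_h, S, n_h_opt))
```

Shifting weight toward one variable reduces that variable's variance at the expense of the other, which is exactly the trade-off the frontier exposes.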
In addition to the allocation method, a generalized calibration method is developed, which aims to achieve coherent and efficient estimates at different stratification levels. The developed calibration method takes into account a very large number of benchmarks at different stratification levels, which may be obtained from different sources such as registers, paradata, or other surveys using different estimation techniques. In order to incorporate the heterogeneous quality and the multitude of benchmarks, a relaxation of selected benchmarks is proposed. In that regard, predefined tolerances are assigned to problematic benchmarks at low aggregation levels in order to avoid requiring their exact fulfillment. In addition, the generalized calibration method allows the use of box-constraints for the correction weights in order to avoid an extremely high variation of the weights. Furthermore, a variance estimation by means of a rescaling bootstrap is presented.
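A minimal sketch of the underlying calibration idea, using plain linear (GREG-type) calibration on synthetic data; the benchmark relaxation and the box-constraints on the weights of the generalized method are omitted, and all auxiliary variables and totals are purely illustrative:

```python
import numpy as np

def linear_calibration(d, X, totals):
    """Linear calibration: w = d * (1 + X @ lam), chosen so that the
    calibrated weights reproduce the benchmark totals exactly."""
    M = (X * d[:, None]).T @ X                 # sum_i d_i x_i x_i'
    lam = np.linalg.solve(M, totals - X.T @ d)
    return d * (1.0 + X @ lam)

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(1, 5, n)])  # intercept + one auxiliary
d = np.full(n, 10.0)                                      # design weights
totals = np.array([2100.0, 6300.0])                       # hypothetical benchmarks
w = linear_calibration(d, X, totals)
```

By construction, `X.T @ w` matches the benchmark totals; the generalized method in the thesis additionally relaxes selected benchmarks and bounds the ratio `w / d`.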
Both developed methods are analyzed and compared with existing methods in extensive simulation studies on the basis of a realistic synthetic data set of all households in Germany. Due to the similar requirements and objectives, both methods can be successively applied to a single survey in order to combine their efficiency advantages. In addition, both methods can be solved in a time-efficient manner using very comparable optimization approaches. These are based on transformations of the optimality conditions. The dimension of the resulting system of equations is ultimately independent of the dimension of the original problem, which enables the application even for very large problem instances.
Energy transition strategies in Germany have led to an expansion of energy crop cultivation in the landscape, with silage maize as the most valuable feedstock. The changes in the traditional cropping systems, with increasing shares of maize, raised concerns about the sustainability of agricultural feedstock production regarding threats to soil health. However, spatially explicit data about silage maize cultivation are missing; thus, implications for soil cannot be estimated precisely. With this study, we first aimed to track the fields cultivated with maize based on remote sensing data. Second, available soil data were processed in a target-specific way to determine the site-specific vulnerability of the soils to erosion and compaction. The generated, spatially explicit data served as the basis for a differentiated analysis of the development of the agricultural biogas sector, the associated maize cultivation, and its implications for soil health. In the study area, located in a low mountain range region in Western Germany, the number and capacity of biogas-producing units increased by 25 installations and 10,163 kW from 2009 to 2016. The remote sensing-based classification approach showed that the maize cultivation area expanded by 16% from 7305 to 8447 hectares. Thus, maize cultivation accounted for about 20% of the arable land use, albeit with distinct local differences. A significant share of about 30% of the maize cultivation took place on fields that show at least high potential for soil erosion, exceeding 25 t soil ha−1 a−1. Furthermore, about 10% of the maize cultivation took place on fields that pedogenetically show an elevated risk of soil compaction. In order to reach more sustainable cultivation systems of feedstock for anaerobic digestion, changes in cultivated crops and management strategies are urgently required, particularly in view of the first signs of climate change.
The presented approach can regionally be modified in order to develop site-adapted, sustainable bioenergy cropping systems.
Perennial energy crops (PECs) are increasingly used as feedstock to produce energy in an environmentally friendly way. Compared to traditional conversion strategies such as thermal use, sophisticated technologies such as biomethanation define different requirements for the feedstock. Whereas the first concept relies on dry, woody material, biomethanation requires a moist feedstock. Thus, over time, the spectrum of species used as PECs has widened. Moreover, harvest dates were adjusted to provide the feedstock at suitable moisture contents. It is well known that perennial, lignocellulose-based energy crops, compared to annual, sugar- and starch-based ones, offer ecological advantages such as, inter alia, improving biodiversity in the landscape, protecting soil against erosion, and protecting groundwater from nutrient inputs. However, one of the main arguments for PEC cultivation was their undemanding nature concerning external inputs. With respect to the broader spectrum of PEC species and changed harvest dates, the question arises whether the concept of PECs being low-input energy crops is still valid. This also implies the question of suitable growing conditions and sustainable management. The aims of this opinion paper were to classify different PECs according to their life-form strategy, to compare nutrient exports when harvested at different maturation stages, and to discuss the results in the context of sustainable PEC cultivation on marginal land. This study revealed that nutrient exports with the yield biomass of PECs harvested in the green state are in the same range as those of annual energy crops, and thus several times higher than those of PECs harvested in the brown state or of woody short rotation coppices. Thus, PECs cannot universally be claimed to be low-input energy crops. These results also have consequences for the cultivation of PECs on marginal land.
Finally, the question has to be raised whether the term PEC should in future be used with greater precision in both written and spoken communication.
Harvesting of silage maize in late autumn on waterlogged soils may result in several ecological problems such as soil compaction and may subsequently be a major threat to soil fertility in Europe. It was hypothesized that perennial energy crops might reduce the vulnerability to soil compaction through earlier harvest dates and improved soil stability. However, the performance of such crops when grown on soils that are periodically waterlogged, and the implications for soil chemical and microbial properties, are currently an open issue. Within the framework of a two-year pot experiment, we investigated the potential of the cup plant (Silphium perfoliatum L.), Jerusalem artichoke (Helianthus tuberosus), giant knotweed (Fallopia japonica x bohemica), tall wheatgrass (Agropyron elongatum), and reed canary grass (Phalaris arundinacea) for cultivation under periodically waterlogged soil conditions during the winter half-year, and the implications for soil chemical and biological properties. The examined perennial energy crops coped with periodic waterlogging and showed yields 50% to 150% higher than the control, which was never waterlogged. Root formation was similar in waterlogged and non-waterlogged soil layers. Soil chemical and microbial properties clearly responded to the different soil moisture treatments. For example, dehydrogenase activity was two to four times higher in the periodically waterlogged treatment than in the control. Despite waterlogging, aerobic microbial activity was significantly elevated, indicating morphological and metabolic adaptation of the perennial crops to withstand waterlogged conditions. Thus, our results provide first evidence of site-adapted biomass production on periodically waterlogged soils through the cultivation of perennial energy crops, and of intense plant-microbe interactions.
Mobile computing poses different requirements on middleware than the more traditional desktop systems interconnected by fixed networks. Not only do the characteristics of mobile network technologies, such as lower bandwidth and unreliability, demand customized support. The devices employed in mobile settings are usually less powerful than their desktop counterparts: slow processors, a fairly limited amount of memory, and smaller displays are typical properties of mobile equipment, again requiring special treatment. Furthermore, user mobility results in additional requirements on appropriate middleware support. As opposed to the quite static environments dominating the world of desktop computing, dynamic aspects gain importance. Suitable strategies and techniques for exploring the environment, e.g. in order to discover locally available services, are only one example. Managing resources in a fault-tolerant manner and reducing the impact ill-behaved clients have on system stability define yet another exemplary prerequisite. Most state-of-the-art middleware has been designed for use in static, resource-rich environments and hence is not immediately applicable in mobile settings as set forth above. The work described throughout this thesis aims at investigating the suitability of different middleware technologies with regard to application design, development, and deployment in the context of mobile networks. Mostly based upon prototypes, shortcomings of those technologies are identified, and possible solutions are proposed and evaluated where appropriate. Besides tailoring middleware to specific communication and device characteristics, the cellular structure of current mobile networks may, and shall, be exploited in favor of more scalable and robust systems. Hence, an additional topic considered within this thesis is to point out and investigate suitable approaches that permit benefiting from such cellular infrastructures.
In particular, a system architecture for the development of applications in the context of mobile networks will be proposed. An evaluation of this architecture employing mobile agents as flexible, network-side representatives for mobile terminals is performed, again based upon a prototype application. In summary, this thesis aims at providing several complementary approaches regarding middleware support tailored for mobile, cellular networks, a field considered to be of rising importance in a world where mobile communication and particularly data services emerge rapidly, augmenting the globally interconnecting, wired Internet.
International private equity development is highly volatile, with increasing global diversification. This thesis examines the transaction patterns of cross-border private equity investment, with a particular focus on the affinity of country pairs. The analysis is based on a comprehensive dataset of 99 countries over 25 years. A three-dimensional gravity model analysis, covering source and host country over time, exposes the effects on the transactions of the country determinants: economic mass, economic distance, the banking system, corporate endowment, and the legal, political, and institutional system. A new method is developed to examine countries in their dual roles as investor and target. This approach verifies their global importance as source and host, and also makes possible an analysis of overall private equity investment. For private equity-specific multi-investor deals, a scheme is designed to measure cross-border activity more precisely by participation, proportional deal participation, and deal flow. The analysis identifies intense levels of affinity between country pairs and reveals that no single country is ideal for private equity activity. Instead, the findings show that the specific push and pull factors within each country constellation define the optimal country as a trading partner. The results verify a correlation between cross-border deals and the economic masses and reduced economic distance of countries. Geographic proximity and cultural similarities, such as language and legal system, intensify the likelihood of initiating transactions. International trade-oriented countries with a high level of development lower the entrance barriers and increase the chances of deal success. A well-funded financial system on the investor side and an efficient and competitive banking system in target countries enhance the probability of investment between countries.
Also relevant for the likelihood of initiating cross-border deals are low corporate tax burdens, advanced scientific competitiveness, and a well-developed stock market in the investor country. Fundamental to the frequency and likelihood of success are well-established, high standards in a country's social, political, and legal systems, with widespread confidence in the rules of society. In particular, the reliability of contract enforcement, with proven quality of regulations that promote private sector development, proves to be crucial for deal success.
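The log-linearized gravity relation underlying this kind of analysis can be sketched as follows. The country-pair data and the elasticities are entirely synthetic, and the thesis's three-dimensional panel structure and further determinants are omitted; the point is only that OLS on the log-linear form recovers the assumed elasticities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# hypothetical country-pair data: economic masses and geographic distance
mass_src  = rng.lognormal(3, 1, n)
mass_host = rng.lognormal(3, 1, n)
dist      = rng.lognormal(1, 0.5, n)

# gravity relation: deals ~ G * M_src^a * M_host^b / D^c  (log-linear form)
a, b, c, logG = 0.8, 0.6, 1.2, 0.5
log_deals = (logG + a * np.log(mass_src) + b * np.log(mass_host)
             - c * np.log(dist) + rng.normal(0, 0.05, n))

# OLS on the log-linearized model recovers the elasticities
Z = np.column_stack([np.ones(n), np.log(mass_src),
                     np.log(mass_host), np.log(dist)])
coef, *_ = np.linalg.lstsq(Z, log_deals, rcond=None)
```

Positive coefficients on the masses and a negative coefficient on distance reproduce the push-and-pull pattern discussed above.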
In this thesis, global surrogate models for responses of expensive simulations are investigated. Computational fluid dynamics (CFD) has become an indispensable tool in the aircraft industry. But simulations of realistic aircraft configurations remain challenging and computationally expensive despite the sustained advances in computing power. With the demand for numerous simulations to describe the behavior of an output quantity over a design space, the need for surrogate models arises. They are easy to evaluate and approximate quantities of interest of a computer code. Only a small number of evaluations of the simulation is stored for determining the behavior of the response over the whole range of the input parameter domain. The Kriging method is capable of interpolating highly nonlinear, deterministic functions based on scattered datasets. Using correlation functions, distinct sensitivities of the response with respect to the input parameters can be considered automatically. Kriging can be extended to incorporate not only evaluations of the simulation but also gradient information, which is called gradient-enhanced Kriging. Adaptive sampling strategies can generate more efficient surrogate models. Contrary to traditional one-stage approaches, the surrogate model is built step by step. In every stage of the adaptive process, the current surrogate is assessed in order to determine new sample locations, where the response is evaluated, and the new samples are added to the existing set of samples. In this way, the sampling strategy learns about the behavior of the response, and a problem-specific design is generated. Critical regions of the input parameter space are identified automatically and sampled more densely in order to reproduce the response's behavior correctly. The number of required expensive simulations is decreased considerably. All these approaches treat the response itself more or less as the unknown output of a black box.
A new approach is motivated by the assumption that for a predefined problem class, the behavior of the response is not arbitrary, but rather related to other instances of the mutual problem class. In CFD, for example, responses of aerodynamic coefficients share structural similarities across different airfoil geometries. The goal is to identify these similarities in a database of responses via principal component analysis and to use them for a generic surrogate model. Characteristic structures of the problem class can be used to increase the approximation quality in new test cases. Traditional approaches still require a large number of response evaluations in order to achieve a globally high approximation quality. Validating the generic surrogate model on industrially relevant test cases shows that it generates efficient surrogates, which are more accurate than common interpolations. Thus, practical, i.e. affordable, surrogates are possible already for moderate sample sizes. So far, interpolation problems were regarded as separate problems. The new approach innovatively uses the structural similarities of a mutual problem class for surrogate modeling. Concepts from response surface methods, variable-fidelity modeling, design of experiments, image registration, and statistical shape analysis are connected in an interdisciplinary way. Generic surrogate modeling is not restricted to aerodynamic simulation. It can be applied whenever expensive simulations can be assigned to a larger problem class in which structural similarities are expected.
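A minimal sketch of Kriging interpolation as described above, with one input dimension and a Gaussian correlation function whose hyperparameter is fixed rather than tuned; gradient enhancement, adaptive sampling, and the generic (PCA-based) extension are all omitted:

```python
import numpy as np

def kriging_fit_predict(X, y, X_new, theta=10.0, nugget=1e-10):
    """Simple Kriging predictor with a Gaussian correlation function.

    Interpolates the scattered samples (X, y); theta is an assumed,
    untuned hyperparameter, and the tiny nugget stabilizes the solve.
    """
    def corr(A, B):
        d2 = (A[:, None] - B[None, :])**2
        return np.exp(-theta * d2)
    R = corr(X, X) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(R, y)
    return corr(X_new, X) @ alpha

# scattered samples of a nonlinear, deterministic "expensive" response
X = np.linspace(0.0, 1.0, 9)
y = np.sin(6 * X) + X
X_new = np.linspace(0.0, 1.0, 50)
y_pred = kriging_fit_predict(X, y, X_new)
```

The defining property of the interpolant is that it reproduces the stored samples exactly (up to the nugget), while the correlation function controls how sensitivities between samples are filled in.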
Energy transport networks are one of the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
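A toy analogue of such an adaptive refinement loop can be sketched with a one-dimensional piecewise-linear model: a midpoint deviation serves as the error measure, the greedy choice of the worst interval is the marking strategy, and the tolerance check plays the role of the switching/stopping rule. Everything here is illustrative and far simpler than the network models in the thesis:

```python
import numpy as np

def adaptive_refine(f, a, b, tol=5e-3, max_iter=200):
    """Greedy adaptive refinement of a piecewise-linear model of f:
    mark the interval with the largest midpoint error and bisect it,
    stopping once every interval satisfies the tolerance."""
    xs = [a, b]
    for _ in range(max_iter):
        # error measure: deviation of f from the linear model at midpoints
        errs = []
        for x0, x1 in zip(xs[:-1], xs[1:]):
            m = 0.5 * (x0 + x1)
            errs.append((abs(f(m) - 0.5 * (f(x0) + f(x1))), m))
        worst, xm = max(errs)
        if worst < tol:               # switching strategy: model is accurate enough
            break
        xs = sorted(xs + [xm])        # marking + refinement step
    return np.array(xs)

grid = adaptive_refine(lambda x: np.exp(3 * x), 0.0, 1.0)
```

The loop automatically places more points where the model is worst, here near the right end where the curvature of exp(3x) is largest, mirroring how the thesis's algorithm refines only where the constraint description is inaccurate.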
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
Climate change and habitat fragmentation modify the natural habitat of many wetland biota and lead to new compositions of biodiversity in these ecosystems. While the direct effects of climate are often well known, indirect effects due to biotic interactions remain poorly understood. The water meadow grasshopper, Chorthippus montanus, is a univoltine habitat specialist adapted to permanently moist habitats. Land use change and drainage have led to highly fragmented populations of this generally flightless species. In large parts of the Palaearctic, Ch. montanus occurs sympatrically with its widespread congener, the meadow grasshopper Chorthippus parallelus. Due to their close relationship and their similar songs, hybridization is likely to occur in syntopic populations. Such a species pair of a habitat specialist and a habitat generalist represents an ideal model system for examining the effects of ongoing climate change and an accumulation of extreme climatic events on life-history strategies, population dynamics, and interspecific interactions. In Chapter I, a laboratory experiment was conducted to identify the impact of environmental factors on intraspecific life-history traits of Ch. montanus. Like other Orthoptera species, Ch. montanus follows a converse temperature-size rule. In line with the dimorphic niche hypothesis, which states that sexual size dimorphism evolved in response to the different sexual reproductive roles, both sexes showed different responses to increasing density at lower temperatures. Males attained smaller body sizes at high densities, whereas females had a prolonged development time. This is the first evidence of sex-specific phenotypic plasticity in Ch. montanus. Females benefit from the prolonged development, as their reproductive success depends on the size and number of egg clutches they may produce.
By contrast, the reproductive success of males depends on the chance to fertilize virgin females, which increases with faster development. This may become a disadvantage for Ch. montanus, as an intraspecific phenology shift may increase the hybridization risk with the sibling species. Despite the widespread assumption that hybridization between two sympatric species is rare due to complete reproductive barriers, the genetic analyses of 16 populations (Chapter II) provided evidence for a wide prevalence of hybridization between both species in the wild. As no complete admixture was found in the examined populations, it is assumed that hybridization only occurs in ecotones between wetlands and drier parts. Reproductive barriers (habitat isolation, behavior, phenology) seem to prevent the genetic swamping of Ch. montanus populations. Although a behavioral experiment showed that mate choice presents an important reproductive barrier between both species, the experiment also revealed that reproductive barriers could be altered by environmental change (e.g. an increasing heterospecific frequency). Chapter III analyzes the impact of extreme climatic events on population dynamics and interspecific hybridization. A mark-recapture analysis combined with weather records over five years provides evidence that the embryonic development of Ch. montanus is vulnerable to extreme climatic events. Strong population declines in Ch. montanus lead to a disequilibrium between Ch. montanus and Ch. parallelus populations and increase the risk of hybridization. The highest hybridization risk was found in the first weeks of a season, when both species had an overlapping phenology. Furthermore, hybrids were generally localized at the edge of the Ch. montanus distribution, where heterospecific encounter probabilities are higher. The hybridization rate reached up to 19.6%. The genetic analyses in Chapters II and III show that hybridization differentially affects specialists and generalists.
While generalists may benefit from hybridization through increasing genetic diversity, such a positive correlation was not found for Ch. montanus. The results underline the importance of reproductive barriers for the co-existence of these sympatric species. However, climate change and other anthropogenic disturbances alter reproductive barriers and promote hybridization, which may threaten small populations by genetic displacement. As anthropogenic hybridization is recognized as a major threat to biodiversity, it should be considered in environmental law and policy. Chapter IV analyzes the role of hybrids and hybridization at three levels of law and the historical background of hybrids becoming a part of legal instruments. Due to legal uncertainties and the complexity of this topic, a legal assessment of hybrids is challenging, which argues for species-specific approaches. Nonetheless, existing legal norms provide a suitable basis, but need to be specified. Finally, this chapter discusses different options for the management of hybrids and hybridization from a conservation perspective, as well as their necessity.
Natural hazards are diverse and unevenly distributed in time and space; therefore, understanding their complexity is key to saving human lives and conserving natural ecosystems. Condensing the outputs obtained from each modelling analysis is key to presenting the results to stakeholders, land managers, and policymakers. The main goal of this study was thus to present a method that synthesizes three natural hazards in one multi-hazard map, and to evaluate it for hazard management and land use planning. To test this methodology, we took the Gorganrood Watershed, located in the Golestan Province (Iran), as study area. First, an inventory map of three different types of hazards, including floods, landslides, and gullies, was prepared using field surveys and different official reports. To generate the susceptibility maps, a total of 17 geo-environmental factors were selected as predictors using the MaxEnt (Maximum Entropy) machine learning technique. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and calculating the area under the ROC curve (AUC). The MaxEnt model not only performed superbly in terms of goodness of fit but also obtained significant results in predictive performance. The variable importance of the three studied types of hazards showed that river density, distance from streams, and elevation were the most important factors for floods. Lithological units, elevation, and annual mean rainfall were relevant for detecting landslides. Annual mean rainfall, elevation, and lithological units were used for gully erosion mapping in this study area. Finally, by combining the flood, landslide, and gully erosion susceptibility maps, an integrated multi-hazard map was created. The results demonstrated that 60% of the area is subject to hazards, with landslides reaching a proportion of up to 21.2% of the whole territory.
We conclude that using this type of multi-hazard map may be a useful tool for local administrators to identify areas susceptible to hazards at large scales as we demonstrated in this research.
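The AUC used for this kind of evaluation can be computed directly from the susceptibility scores via its rank (Mann-Whitney) interpretation, without explicitly drawing the ROC curve; the scores below are purely illustrative:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a hazard (presence) location receives a
    higher susceptibility score than a non-hazard location (ties count 1/2)."""
    wins = 0.0
    for sp in scores_pos:
        wins += np.sum(sp > scores_neg) + 0.5 * np.sum(sp == scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical susceptibility scores at presence / background points
pos = np.array([0.9, 0.8, 0.75, 0.6, 0.55])
neg = np.array([0.7, 0.5, 0.4, 0.3, 0.2])
auc = auc_mann_whitney(pos, neg)  # 23 of 25 pairs ranked correctly -> 0.92
```

An AUC of 0.5 corresponds to random discrimination and 1.0 to perfect separation of hazard from non-hazard locations, which is the scale on which the susceptibility models above are judged.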
Global food security poses large challenges to a fast-changing human society and has been a key topic for scientists, agriculturists, and policy makers in the 21st century. The United Nations predicts a total world population of 9.15 billion in 2050 and lists the provision of food security as the second major point in the UN Sustainable Development Goals. As the capacities of both land and water resources are finite and locally heavily overused, reducing agriculture's environmental impact while meeting the increasing demand for food of a constantly growing population is one of the greatest challenges of our century. Therefore, a multifaceted solution is required, including approaches that use geospatial data to optimize agricultural food production.
The availability of precise and up-to-date information on vegetation parameters is mandatory to fulfill the requirements of agricultural applications. Direct field measurements of such vegetation parameters are expensive and time-consuming. Remote sensing, by contrast, offers a variety of techniques for a cost-effective and non-destructive retrieval of vegetation parameters. Although not widely used, hyperspectral thermal infrared (TIR) remote sensing has proven to be a valuable addition to existing remote sensing techniques for the retrieval of vegetation parameters.
This thesis examined the potential of TIR imaging spectroscopy as an important contribution to the growing need for food security. The main scientific question dealt with the extraction of vegetation parameters from imaging TIR spectroscopy. To this end, two studies impressively demonstrated the ability to extract vegetation-related parameters from leaf emissivity spectra: (i) the discrimination of eight plant species based on their emissivity spectra and (ii) the detection of drought stress in potato plants using temperature measures and emissivity spectra.
The datasets used in these studies were collected using the Telops Hyper-Cam LW, a novel imaging spectrometer. Since this FTIR spectrometer presents some particularities, special attention was paid to the development of dedicated experimental data acquisition setups and data processing chains. The latter include data preprocessing and the development of algorithms for extracting precise surface temperatures, reproducible emissivity spectra and, finally, vegetation parameters.
The spectrometer’s versatility allows the collection of airborne imaging spectroscopy datasets. Since the general availability of airborne TIR spectrometers is limited, the preprocessing and
data extraction methods are underexplored compared to reflective remote sensing. This holds especially for atmospheric correction (AC) and temperature and emissivity separation (TES) algorithms. Therefore, we implemented a powerful simulation environment for the development of preprocessing algorithms for airborne hyperspectral TIR image data. This simulation tool is designed in a modular way and covers the image data acquisition and processing chain from surface temperature and emissivity to the final at-sensor radiance data. It includes a series of available algorithms for TES and AC, as well as combined AC and TES approaches. Using this simulator, one of the most promising algorithms for the preprocessing of airborne TIR data, ARTEMISS, was significantly optimized: the retrieval error of the atmospheric water vapor during the atmospheric characterization was reduced. This improvement in atmospheric characterization accuracy substantially enhanced the subsequent retrieval of surface temperatures and surface emissivities.
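A basic physical building block behind any TES approach is the Planck law and its inversion to brightness temperature. The following sketch is not the ARTEMISS algorithm itself; it only shows the radiance-to-temperature round trip at an illustrative LWIR wavelength, and the emissivity value is an assumption for the example:

```python
import numpy as np

# physical constants (SI)
H  = 6.62607015e-34   # Planck constant, J s
C  = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wl, T):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T))

def brightness_temperature(wl, L):
    """Inverse Planck: temperature of a blackbody emitting radiance L at wl."""
    return H * C / (wl * KB * np.log1p(2 * H * C**2 / (wl**5 * L)))

wl = 10e-6                       # 10 micrometers, within the LWIR window
L = planck_radiance(wl, 300.0)   # radiance of a 300 K blackbody
Tb = brightness_temperature(wl, L)
```

For a graybody with emissivity below one, the same inversion yields a brightness temperature below the true surface temperature, which is exactly the ambiguity that TES algorithms must resolve spectrally.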
Although the potential of hyperspectral TIR applications in ecology, agriculture, and biodiversity has been impressively demonstrated, a serious contribution to the global provision of food security requires the retrieval of vegetation-related parameters with global coverage, high spatial resolution, and high revisit frequencies.
Emerging from the findings of this thesis, the spectral configuration of a spaceborne TIR spectrometer concept was developed. The sensor's spectral configuration aims at the retrieval of precise land surface temperatures and land surface emissivity spectra. Complemented with additional characteristics, i.e. short revisit times and a high spatial resolution, this sensor would potentially allow the retrieval of the valuable vegetation parameters needed for agricultural optimization. The technical feasibility of such a sensor concept underlines its potential contribution to the multifaceted solution required for achieving the challenging goal of guaranteeing global food security in a world of increasing population.
In conclusion, thermal remote sensing, and more precisely hyperspectral thermal remote sensing, has been presented as a valuable technique for a variety of applications contributing to the ultimate goal of global food security.
Religion, churches and religious communities have growing importance in the law of the European Union. For a long time, a distinct law on religion of the European Union has been developing. This collection of those norms of European Union law that directly concern religion mirrors the current status of this dynamic process.
The outbreak of the COVID-19 pandemic has also led to many conspiracy theories. While the origin of the pandemic in China led some, including former US president Donald Trump, to dub the pathogen the “Chinese virus” and to support anti-Chinese conspiracy narratives, it caused Chinese state officials to openly support anti-US conspiracy theories about the “true” origin of the virus. In this article, we study whether nationalism, or more precisely uncritical patriotism, is related to belief in conspiracy theories among ordinary people. Based on group identity theory and motivated reasoning, we hypothesize that for the particular case of conspiracy theories related to the origin of COVID-19, such a relation should be stronger for Chinese than for German respondents. To test this hypothesis, we use survey data from Germany and China, including data from the Chinese community in Germany. We also examine relations to other factors, in particular media consumption and xenophobia.
In a 1996 paper, the British mathematician Graham R. Allan posed the question of whether the product of two stable elements is again stable. Here, stability describes the solvability of a certain infinite system of equations. Using a method from homological algebra, it is proved that in the case of topological algebras with multiplicative webs, and thus in all common locally convex topological algebras occurring in standard analysis, the answer to Allan's question is affirmative.
In this thesis, three studies investigating the impact of stress on the protective startle eye blink reflex are reported. In the first study, a decrease in prepulse inhibition of the startle reflex was observed after intravenous low-dose cortisol application. In the second study, a decrease in startle reflex magnitude was observed after pharmacological suppression of endogenous cortisol production. In the third study, a higher startle reflex magnitude was observed at reduced arterial and central venous blood pressure. These results can be interpreted in terms of an adaptation to hostile environments.
Digital libraries have become a central aspect of our lives. They provide us with immediate access to an amount of data that would have been unthinkable in the past. Computer support and the ability to aggregate data from different libraries enable even small projects to maintain large digital collections on various topics. A central aspect of digital libraries is the metadata, the information that describes the objects in the collection. Metadata are digital and can be processed and studied automatically. In recent years, several studies have considered different aspects of metadata. Many focus on finding defects in the data; in particular, locating errors related to the handling of personal names has drawn attention. In most cases, the studies concentrate on the most recent metadata of a collection, for example looking for errors in the collection at day X. This is a reasonable approach for many applications. However, to answer questions such as when an error was added to the collection, we need to consider the history of the metadata itself. In this work, we study how the history of metadata can be used to improve the understanding of a digital library. To this end, we consider how digital libraries handle and store their metadata. Based on this information, we develop a taxonomy to describe the available historical data, i.e. data on how the metadata records changed over time. We develop a system that identifies changes to metadata over time and groups them into semantically related blocks. We found that historical metadata is often unavailable. However, we were able to apply our system to a set of large real-world collections. A central part of this work is the identification and analysis of changes to metadata which corrected a defect in the collection. These corrections are the accumulated effort to ensure the data quality of a digital library.
In this work, we present a system that automatically extracts corrections of defects from the set of all modifications. We present test collections containing more than 100,000 test cases, created by extracting defects and their corrections from DBLP. These collections can be used to evaluate automatic approaches for error detection. Furthermore, we use these collections to study properties of defects, concentrating on defects related to the person name problem. We show that many defects occur in situations where very little context information is available, which has major implications for automatic defect detection. We also show that properties of defects depend on the digital library in which they occur, and we briefly discuss how corrected defects can be used to detect hidden or future defects. Besides the study of defects, we show that historical metadata can be used to study the development of a digital library over time, and we present several studies as examples. First, we describe the development of the DBLP collection over a period of 15 years. Specifically, we study how the coverage of different computer science subfields changed over time and show that DBLP evolved from a specialized project into a collection that encompasses most parts of computer science. In another study, we analyze the impact of user emails on defect corrections in DBLP. We show that these emails trigger a significant amount of error corrections. Based on these data, we can draw conclusions on why users report a defective entry in DBLP.
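As an illustration of the kind of change extraction described above, the following sketch compares two versions of a metadata record and reports field-level changes. It is a minimal toy example under assumed record structure (flat field-value dicts), not the system built in this work.

```python
def record_changes(old, new):
    """Compare two versions of a metadata record (dicts mapping
    field name -> value) and report field-level changes as
    (field, kind, old_value, new_value) tuples."""
    changes = []
    for field in sorted(set(old) | set(new)):
        if field not in old:
            changes.append((field, "added", None, new[field]))
        elif field not in new:
            changes.append((field, "removed", old[field], None))
        elif old[field] != new[field]:
            changes.append((field, "modified", old[field], new[field]))
    return changes

# A person-name correction shows up as a single modified field:
v1 = {"title": "On Stability", "author": "J. Smiht"}
v2 = {"title": "On Stability", "author": "J. Smith", "year": "1996"}
print(record_changes(v1, v2))
```

Classifying which of these raw changes are actual defect corrections (as opposed to routine additions) is exactly the harder step the system described here addresses.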
The presence of leads in the sea ice cover represents a key feature of the polar regions, as leads control the heat exchange between the relatively warm ocean and the cold atmosphere through increased fluxes of turbulent sensible and latent heat. Sea ice leads contribute to sea ice production and are sources for the formation of dense water, which affects the ocean circulation. Atmospheric and ocean models strongly rely on observational data to describe the state of the sea ice, since numerical models are not able to produce sea ice leads explicitly. For the Arctic, some lead datasets are available, but for the Antarctic, no such data exist yet. Our study presents a new algorithm with which leads are automatically identified in satellite thermal infrared images. A variety of lead metrics is used to distinguish between true leads and detection artefacts by means of fuzzy logic. We evaluate the outputs and provide pixel-wise uncertainties. Our data yield daily sea ice lead maps at a resolution of 1 km² for the winter months November–April 2002/03–2018/19 (Arctic) and April–September 2003–2019 (Antarctic), respectively. The long-term averages of the lead frequency distributions show distinct features related to bathymetric structures in both hemispheres.
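The fuzzy-logic step can be illustrated with a minimal sketch: each lead metric is mapped to a membership value in [0, 1] and the memberships are aggregated into a single lead score. All metric names, thresholds, and weights below are illustrative assumptions, not the published configuration.

```python
def ramp(x, lo, hi):
    """Linear fuzzy membership: 0 at or below lo, 1 at or above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def lead_score(temp_anomaly_k, elongation, cloud_index):
    """Combine per-pixel metrics into a fuzzy lead score in [0, 1].
    Thresholds and weights are hypothetical."""
    warm = ramp(temp_anomaly_k, 0.5, 3.0)       # leads are warm anomalies
    linear = ramp(elongation, 1.5, 5.0)         # leads are elongated
    clear = 1.0 - ramp(cloud_index, 0.2, 0.8)   # clouds produce artefacts
    # Weighted mean as a simple fuzzy aggregation
    return 0.5 * warm + 0.3 * linear + 0.2 * clear

# A warm, strongly elongated, cloud-free pixel scores high:
print(lead_score(2.0, 6.0, 0.1))
```

Thresholding such a score, and calibrating it against manual quality control, is what turns the raw fuzzy output into lead maps with pixel-wise uncertainties.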
The polar regions are characterized by harsh environmental conditions with extremely cold temperatures and strong winds. Particularly during the polar night, temperatures down to -89.2 °C have been observed on the Antarctic Plateau. As a consequence of this strong cooling, the ocean water begins to freeze and ice production sets in. The Antarctic Ocean exhibits a pronounced inter- and intra-annual variability, with the ice cover varying between 2.07 × 10^6 km² in summer and 20.14 × 10^6 km² in winter. Ice production and ice melt influence the atmospheric and oceanic circulation. Dynamic processes lead to the formation of cracks in the ice and ultimately to the emergence of leads. Leads are elongated fractures that are at least several meters wide and can extend from hundreds of meters to hundreds of kilometers in length. Within these leads, the warm ocean water is in contact with the cold atmosphere, which strongly increases the exchange rates of sensible and latent heat, moisture, and gases. Leads contribute to ice production in the polar regions and provide a habitat for numerous animals. Leads, the central subject of the presented study, have so far been insufficiently studied and observed in the Southern Ocean. The aim is therefore to develop an algorithm that automatically identifies leads in remote sensing data. For this purpose, thermal infrared satellite data from the Moderate-Resolution Imaging Spectroradiometer (MODIS) are used, which is mounted on the two satellites Aqua and Terra and has provided satellite imagery since 2000 (Terra) and 2002 (Aqua), respectively. The individual satellite images contain the ice surface temperature of the MOD/MYD 29 product, which is processed in a two-stage algorithm for the period April to September, 2003 to 2019.
In the first step, potential leads are identified on the basis of local positive temperature anomalies. Because of artefacts, additional temperature- and texture-based parameters are derived and merged into daily composites. These are used in the second processing stage to separate cloud artefacts from true lead observations. Here, fuzzy logic is applied and an Antarctic-specific configuration is defined, in which selected input data from the first processing level are used to compute a final proxy, the lead score (LS). The LS is finally converted into an uncertainty by means of manual quality control. The artefacts identified in this way can thus be used in addition to the MODIS cloud mask.
Based on the lead observations, a climatological reference dataset is compiled that shows the representative lead distribution in the Antarctic Ocean for the winter months April to September, 2003 to 2019. It reveals that leads occur more systematically in some areas than in others, above all in the regions along the coast, the continental shelf break, and some rises and channels in the deep sea. The increased lead frequencies along the shelf break are of particular interest, and the role of atmospheric and oceanic influences is investigated. A regional ice-ocean model is used to relate oceanic influences to the increased lead frequencies.
The present study also provides a comprehensive overview of the large-scale variability of Antarctic sea ice. Daily sea ice concentration data derived from passive microwave measurements for the period 1979 to 2018 are used for the classification. The k-means algorithm is applied to identify ten representative ice classes. The geographic distribution of these classes is presented as a map in which the typical annual ice cycle of each class is visible.
Changes in the spatial occurrence of the ice classes are identified and interpreted qualitatively. Positive deviations towards higher ice classes are identified in the Weddell Sea, the Ross Sea, and some regions of East Antarctica. Negative deviations are present in the Amundsen-Bellingshausen Sea. The newly developed Climatological Sea Ice Anomaly Index is used to identify class deviations in the time series. On this basis, three years (1986, 2007, 2014) are selected for a case study and examined in relation to atmospheric data from ERA-Interim and ice drift data. For the years 1986 and 2007, specific atmospheric circulation patterns can be identified that influenced the respective ice classification. For 2014, no particularly pronounced atmospheric anomalies can be found.
In the future, the ice class dataset can be used to complement existing studies and to validate sea ice models. Applications in combination with the lead dataset are particularly promising.
Considering the numerical simulation of mathematical models, it is necessary to have efficient methods for computing special functions. We focus our considerations in particular on the classes of Mittag-Leffler and confluent hypergeometric functions. The PhD thesis can be structured in three parts. In the first part, entire functions are considered. The partial sums of the Taylor series with respect to the origin typically provide a reasonable approximation of the function only in a small neighborhood of the origin; their main disadvantage is the cancellation errors that occur when computing in fixed-precision arithmetic outside this neighborhood. Therefore, our aim is to quantify and then to reduce this cancellation effect. In the second part, we consider the Mittag-Leffler and the confluent hypergeometric functions in detail. Using the method developed in the first part, we can reduce the cancellation problems by "modifying" the functions on several parts of the complex plane. Finally, in the last part, two other approaches to compute Mittag-Leffler type and confluent hypergeometric functions are discussed: to evaluate such functions on unbounded intervals or sectors in the complex plane, methods like asymptotic expansions or continued fractions for arguments z of large modulus have to be considered.
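The cancellation effect can be illustrated with the simplest Mittag-Leffler function, E_1(z) = e^z. The following minimal sketch (illustrative, not the method developed in the thesis) sums the Taylor series in double precision: for negative arguments of large modulus, the large alternating terms cancel catastrophically, while rewriting the computation to avoid cancellation restores full accuracy.

```python
import math

def exp_taylor(x, n_terms=200):
    """Evaluate exp(x) by summing its Taylor series at the origin."""
    total, term = 1.0, 1.0
    for k in range(1, n_terms):
        term *= x / k
        total += term
    return total

# Near the origin the partial sums are accurate. For x = -30 the
# intermediate terms reach ~1e12 in magnitude while the true value is
# ~1e-13, so rounding errors swamp the result:
print(exp_taylor(-30.0))        # dominated by cancellation error
print(math.exp(-30.0))          # true value, about 9.36e-14
# A standard remedy: sum the series for exp(+30), where all terms are
# positive (no cancellation), and take the reciprocal:
print(1.0 / exp_taylor(30.0))   # accurate again
```

Quantifying this error amplification, and "modifying" the function so that the summed series behaves like the positive-term case, is precisely the theme of the first two parts.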
Veterinary antibiotics are released to arable agricultural soil together with manure, including nutrients, organic matter, and microorganisms. Previously, the effects of antibiotic-contaminated manure on soil microbial community activity, function, structure, and resistance have been reported under controlled experimental conditions. This thesis further evaluated the antimicrobial effects as influenced by different manure compositions, soil microhabitats and moisture regimes, plants, and different distances to roots. Microbial community responses were determined by phenotypic phospholipid fatty acid (PLFA) and genotypic 16S rRNA gene fragment analyses. (Chapter 3) demonstrates that medication of pigs with difloxacin (DIF) and sulfadiazine (SDZ) alters the molecular-chemical pattern of slurries, confounding the detection of a consistent antibiotic effect in bulk and respective rhizosphere soil. This was evaluated in a 63-day mesocosm experiment considering typical agricultural manure applications to maize planted soil. Fecal bacteria were detected even 14 days after manure amendment. Manure of DIF- and SDZ-medicated pigs clearly affected the microbial community in mesocosm bulk and rhizosphere soil, temporarily matching antibiotic effects reported in previous studies. (Chapter 4) discusses the influences of different soil microhabitats on antibiotic fate and the effects on soil microflora. Total extractable SDZ was more than two-fold larger in earthworm burrows and soil macroaggregate surfaces compared to bulk soil or the interior fraction of aggregates. Furthermore, soil microbial communities were affected by a combination of soil microhabitat and treatment, which was reflected by different structural and functional community responses to SDZ in laboratory and under field conditions. 
(Chapter 5) evaluates whether SDZ effects on microbial communities are more pronounced in soils that undergo periodic changes in soil moisture through drying-rewetting dynamics than in soils without such moisture fluctuations. This was tested in a 49-day climate chamber soil pot experiment grown with grass. Manure-amended pots without or with SDZ contamination were incubated under a dynamic moisture regime with repeated drying and rewetting changes of more than twenty percent of the maximum water holding capacity compared to the control moisture regime. The microbial biomass, and to a lesser extent the community structure, showed an increased responsiveness to the combined stress of SDZ and dynamic moisture changes in the laboratory. Similar responses were documented under field conditions. (Chapter 6) indicated adverse effects of SDZ on root geotropism, the number of lateral roots, and water uptake by plants in a 40-day greenhouse experiment with willow and maize grown in soil with environmentally relevant and worst-case antibiotic contamination. (Chapter 7) showed that the associated microbial community responded to a combination of plant species, distance to the root, and antibiotic spiking concentration. In highly antibiotic-contaminated soils, the structural and functional responses of the microbial community were dominated by indirect antibiotic effects on plants and roots.
Entrepreneurship has become an essential phenomenon all over the world because it is a major driving force behind the economic growth and development of a country. It is widely accepted that entrepreneurship development in a country creates new jobs, promotes healthy competition through innovation, and benefits the social well-being of individuals and societies. Policymakers in both developed and developing countries focus on entrepreneurship because it helps to alleviate impediments to economic development and social welfare. Therefore, policymakers and academic researchers consider the promotion of entrepreneurship essential for the economy, and research-based support is needed for the further development of entrepreneurship activities.
The impact of entrepreneurial activities on economic and social development varies from country to country, because the level of entrepreneurship activity differs from one region or country to another. To understand these variations, researchers have investigated the determinants of entrepreneurship at different levels, such as the individual, industry, and country levels. Moreover, entrepreneurial behavior is influenced by various personal and environmental factors. However, these personal-level factors cannot be separated from the surrounding environment.
The link between religion and entrepreneurship is well established and can be traced back to Weber (1930). Researchers have analyzed the relationship between religion and entrepreneurship from various perspectives, and the research related to religion and entrepreneurship is diversified and scattered across disciplines. This dissertation seeks to explain the link between religion and entrepreneurship, specifically between the Islamic religion and entrepreneurship. Technically, this dissertation comprises three parts. The first part consists of two chapters that discuss the definition and theories of entrepreneurship (Chapter 2) and the theoretical relationship between religion and entrepreneurship (Chapter 3).
The second part of this dissertation (Chapter 4) provides an overview of the field, with the purpose of gaining a better understanding of its current state of knowledge and of bridging the different views and perspectives. To this end, a systematic literature search was conducted, leading to a descriptive overview of the field based on 270 articles published in 163 journals. Subsequently, bibliometric methods are used to identify thematic clusters, the most influential authors and articles, and how they are connected.
The third part of this dissertation (Chapter 5) empirically evaluates the influence of Islamic values and Islamic religious practices on entrepreneurship intentions within the Islamic community. Using the theory of planned behavior as a theoretical lens, we also take into account that the relationship between religion and entrepreneurial intentions can be mediated by an individual's attitude towards entrepreneurship. A self-administered questionnaire was used to collect responses from a sample of 1895 Pakistani university students. Structural equation modeling was adopted to perform a nuanced assessment of the relationship between Islamic values and practices and entrepreneurship intentions and to account for the mediating effect of the attitude towards entrepreneurship.
Research on religion and entrepreneurship has increased sharply in recent years and is scattered across various academic disciplines and fields. The analysis identifies and characterizes the most important publications, journals, and authors in the area and maps the analyzed religions and regions. The comprehensive overview of previous studies allows us to identify research gaps and to derive avenues for future research in a substantiated way. Moreover, this dissertation helps research scholars to understand the field in its entirety, to identify relevant articles, and to uncover parallels and differences across religions and regions. In addition, the study reveals a lack of empirical research related to specific religions and specific regions, which scholars can take into consideration when conducting empirical research.
Furthermore, the empirical analysis of the influence of Islamic religious values and Islamic religious practices shows that Islamic values served as a guiding principle in shaping people's attitudes towards entrepreneurship in an Islamic community; they had an indirect influence on entrepreneurship intention through attitude. Similarly, the relationship between Islamic religious practices and the entrepreneurship intentions of students was fully mediated by the attitude towards entrepreneurship. This dissertation thus contributes to prior research on entrepreneurship in Islamic communities by applying a more fine-grained approach to capture the link between religion and entrepreneurship. Moreover, it contributes to the literature on entrepreneurship intentions by showing that the influence of religion on entrepreneurship intentions is mainly due to religious values and practices, which shape the attitude towards entrepreneurship and thereby influence entrepreneurship intentions in religious communities. Entrepreneurship research has placed increasing emphasis on assessing the influence of a diverse set of contextual factors; this dissertation introduces Islamic values and Islamic religious practices as critical contextual factors that shape entrepreneurship in countries characterized by the Islamic religion.
The implicit power motive is one of the most researched motives in motivational psychology—at least in adults. Children have rarely been subject to investigation and there are virtually no results on behavioral and affective correlates of the implicit power motive in children. As behavior and affect are important components of conceptual validation, the empirical data in this dissertation focused on identifying three correlates, namely resource control behavior (study 1), power stress (study 2), and persuasive behavior (study 3). In each study, the implicit power motive was measured via the Picture Story Exercise, using an adapted version for children. Children across samples were between 4 and 11 years old.
Results from studies 1 and 2 showed that children's power-related behavior corresponded with evidence from adult samples: children with a high implicit power motive secure attractive resources and show negative reactions to a thwarted attempt to exert influence. Study 3 contradicted existing evidence from adults in that children's persuasive behavior was associated not with nonverbal but with verbal strategies of persuasion. Despite this inconsistency, these results are, together with the validation of a child-friendly Picture Story Exercise version, an important step toward further investigating and confirming the concept of the implicit power motive and how to measure it in children.
Allocating scarce resources efficiently is a major task in mechanism design. One of the most fundamental problems in mechanism design theory is that of selling a single indivisible item to bidders with private valuations for the item. In this setting, the classic Vickrey auction (Vickrey, 1961) describes a simple mechanism that implements a social welfare maximizing allocation.
The Vickrey auction for a single item asks every buyer to report its valuation and allocates the item to the highest bidder for a price of the second highest bid. This auction features some desirable properties, e.g., buyers cannot benefit from misreporting their true value for the item (incentive compatibility) and the auction can be executed in polynomial time.
However, when there is more than one item for sale and buyers' valuations for sets of items are not additive, or the set of feasible allocations is constrained, constructing mechanisms that implement efficient allocations and run in polynomial time can be very challenging. Consider a single seller selling n heterogeneous indivisible items to several bidders. The Vickrey-Clarke-Groves (VCG) auction generalizes the idea of the Vickrey auction to this multi-item setting. Naturally, every bidder has an intrinsic value for every subset of items. As in the Vickrey auction, bidders report their valuations (now, for every subset of items!). Then, the auctioneer computes a social welfare maximizing allocation according to the submitted bids and charges each buyer the social cost that its winning imposes on the rest of the buyers. (This is the analogue of charging the second-highest bid to the winning bidder in the single-item Vickrey auction.) It turns out that the VCG auction is also incentive compatible, but it poses some problems: for n = 40, say, each bidder would have to submit 2^40 - 1 values (one for each nonempty subset of the ground set). Thus, asking every bidder for its full valuation may be impossible due to time complexity. Therefore, even though the VCG auction implements a social welfare maximizing allocation in this multi-item setting, it might be impractical, and there is a need for alternative approaches to implement social welfare maximizing allocations.
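For the single-item case, the mechanism described above can be sketched in a few lines; `vickrey_auction` is an illustrative name, not code from the dissertation.

```python
def vickrey_auction(bids):
    """Single-item second-price (Vickrey) auction.

    bids: dict mapping bidder id -> reported valuation.
    Returns (winner, price): the highest bidder wins the item and
    pays the second-highest bid (0 if there is only one bidder).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

# Because the price is the second-highest bid, no bidder gains from
# misreporting: shading a bid only risks losing the item, and
# overbidding risks paying more than one's true value.
winner, price = vickrey_auction({"a": 10.0, "b": 7.0, "c": 4.0})
print(winner, price)  # a pays 7.0
```

The multi-item VCG auction follows the same template, except that the allocation step becomes a welfare-maximization problem over bundles and the price becomes each winner's externality on the others, which is exactly where the exponential communication burden (2^n - 1 values per bidder) enters.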
This dissertation presents the results of three independent research papers, all of which tackle the problem of implementing efficient allocations in different combinatorial settings.
A sustainable development of forests and their ecosystem services requires monitoring the forests' state and changes as well as predicting their future development. To achieve the latter, eco-physiological forest growth models are usually applied. These models require calibration and validation with forestry reference data. This data includes forest structural parameters, such as tree height or stem diameter, which are easy to measure and can be used to estimate the core model parameters, i.e. the tree biomass pools. The methods traditionally applied to derive the structural parameters are mainly manual and time-consuming. Hence, the in situ data acquisition is inefficient and limited in its ability to capture the vertical and horizontal variability in stand structure. Ground-based remote sensing has the potential to overcome the limitations of the traditional methods. As they can be automated, ground-based remote sensing methods allow a much more efficient data acquisition and a larger spatial coverage. They are also able to capture forest structure in all three dimensions. Nevertheless, further research is required, in particular with respect to the practical integration of ground-based remote sensing data into forest growth models and regarding factors influencing the structural parameter retrieval from these data. Therefore, the goal of this PhD thesis was to investigate the influencing factors of two ground-based remote sensing methods (terrestrial laser scanning and hemispherical photography) that have not or only scarcely been studied to date. In addition, the use of forest structural parameters derived from these methods for the calibration of a forest growth model was assessed. Both goals were achieved. The results of this thesis could contribute significantly to a comprehensive assessment of ground-based remote sensing and its potential to derive forest structural parameters.
However, the use of these methods to calibrate forest growth models proved to be limited. An optimized data sampling design is expected to eliminate the major limitations, though. Furthermore, the combination of ground-based, airborne, and satellite remote sensing sensors was suggested to provide an optimized framework for the general integration of remotely sensed data into forest growth models. This combination of remote sensing observations at different scales will contribute greatly to a modern forest management with the purpose of warranting a sustainable forest development even under growing economic and ecological pressures.
The present work explores how theories of motivation can be used to enhance video game research. Currently, Flow-Theory and Self-Determination Theory are the most common approaches in the field of Human-Computer Interaction. The dissertation provides an in-depth look into Motive Disposition Theory and how to utilize it to explain interindividual differences in motivation. Different players have different preferences and make different choices when playing games, and not every player experiences the same outcomes when playing the same game. I provide a short overview of the current state of the research on motivation to play video games. Next, Motive Disposition Theory is applied in the context of digital games in four different research papers, featuring seven studies, totaling 1197 participants. The constructs of explicit and implicit motives are explained in detail while focusing on the two social motives (i.e., affiliation and power). As dependent variables, behaviour, preferences, choices, and experiences are used in different game environments (i.e., Minecraft, League of Legends, and Pokémon). The four papers are followed by a general discussion about the seven studies and Motive Disposition Theory in general. Finally, a short overview is provided about other theories of motivation and how they could be used to further our understanding of the motivation to play digital games in the future. 
This thesis proposes that 1) Motive Disposition Theory represents a valuable approach to understand individual motivations within the context of digital games; 2) there is a variety of motivational theories that can and should be utilized by researchers in the field of Human-Computer Interaction to broaden the currently one-sided perspective on human motivation; 3) researchers should aim to align their choice of motivational theory with their research goals by choosing the theory that best describes the phenomenon in question and by carefully adjusting each study design to the theoretical assumptions of that theory.
The catechol-O-methyltransferase gene (COMT) plays a crucial role in the metabolism of catecholamines in the frontal cortex. A single nucleotide polymorphism (Val158Met SNP, rs4680) leads to either methionine (Met) or valine (Val) at codon 158, resulting in a three- to fourfold reduction in COMT activity. The aim of the present study was to assess the COMT Val158Met SNP as a risk factor for attention-deficit/hyperactivity disorder (ADHD), ADHD symptom severity and co-morbid conduct disorder (CD) in 166 children with ADHD. The main finding of the present study is that the Met allele of the COMT Val158Met SNP was associated with ADHD and increased ADHD symptom severity. No association with co-morbid CD was observed. In addition, ADHD symptom severity and early adverse familial environment were positive predictors of lifetime CD. These findings support previous results implicating COMT in ADHD symptom severity and early adverse familial environment as risk factors for co-morbid CD, emphasizing the need for early intervention to prevent aggressive and maladaptive behavior progressing into CD, reducing the overall severity of the disease burden in children with ADHD.
There is considerable evidence for an association between chronic dysregulation of the hypothalamus-pituitary-adrenal (HPA) axis, atrophy of the hippocampus (HC) and cognitive and mood changes in clinical populations and in aging. The present thesis investigated this relationship in young healthy male subjects. Special emphasis was put on measures of HC volume and function derived from structural and functional magnetic resonance imaging (MRI). Higher cortisol levels after awakening were observed in subjects with higher levels of depressive symptomatology. Larger HC volume was associated with higher cortisol levels after awakening and in response to acute stress, whereas cognitive performance was impaired in subjects with larger HC volumes. Hippocampal activation during picture encoding was reduced after stress induction, and positive associations between activation and cognitive performance before stress were no longer present afterwards. The present findings underscore the importance of structural and functional brain imaging for psychoneuroendocrinological research. The investigation of the association between cortisol levels and hippocampal integrity in young healthy subjects yielded unexpected results and adds to the understanding of HPA dysfunction and HC atrophy in clinical and aged populations.
Imagery-based techniques have received increasing interest in psychotherapy research. Whereas their effectiveness has been shown for various psychological disorders, their underlying mechanisms remain unclear. Current research predominantly investigates intrapersonal processes, while interpersonal processes have received no attention to date. The aim of the current dissertation was to fill this lacuna. The three interrelated studies comprising this dissertation were the first to examine the effectiveness of imagery-based techniques in the treatment of test anxiety, relate physiological arousal to emotional processing, and investigate the association between physiological synchrony and multiple process measures.
Study I investigated the feasibility of a newly developed protocol, which integrates imagery-based and cognitive-behavioral components, to treat test anxiety in a sample of 31 students. The results indicated the protocol as acceptable, feasible, and effective in the treatment of test anxiety. Additionally, the imagery-based component was positively associated with therapeutic bond, session evaluation, and emotional experience.
Study II shifted the focus from the effectiveness of imagery-based techniques to client-therapist physiological synchrony as a putative mechanism of change in the same sample. The results suggested that physiological synchrony was greater than chance during both imagery-based and cognitive-behavioral components. Variability of physiological synchrony at the session level during the imagery-based components and variability at both levels (session and dyad) during the cognitive-behavioral components were demonstrated. Furthermore, physiological synchrony of the imagery-based segments was positively associated with therapeutic bond. No association was found for the cognitive-behavioral components.
Study III examined both intrapersonal (i.e., clients’ electrodermal activity) and interpersonal (i.e., client-therapist electrodermal activity synchrony) processes and their associations with emotional processing in a sample of 49 client-therapist-dyads. The results suggested that higher client physiological arousal and a moderate level of physiological synchrony were associated with deeper emotional processing.
Taken together, the results highlight the effectiveness of imagery-based techniques in the treatment of test anxiety. Furthermore, the results of Studies II and III support the idea of physiological synchrony as a mechanism of change in imagery with and without rescripting. The current dissertation takes an important step towards optimizing process research within psychotherapy and contributes to a better understanding of the potency and mechanisms of change of imagery-based techniques. We hope that these studies’ implications will support everyday clinical practice.
A model-based temperature adjustment scheme for wintertime sea-ice production retrievals from MODIS
(2022)
Knowledge of the wintertime sea-ice production in Arctic polynyas is an important requirement for estimations of the dense water formation, which drives vertical mixing in the upper ocean. Satellite-based techniques incorporating relatively high resolution thermal-infrared data from MODIS in combination with atmospheric reanalysis data have proven to be a strong tool to monitor large and regularly forming polynyas and to resolve narrow thin-ice areas (i.e., leads) along the shelf-breaks and across the entire Arctic Ocean. However, the selection of the atmospheric data sets has a large influence on derived polynya characteristics due to their impact on the calculation of the heat loss to the atmosphere, which is determined by the local thin-ice thickness. In order to overcome this methodical ambiguity, we present a MODIS-assisted temperature adjustment (MATA) algorithm that yields corrections of the 2 m air temperature and hence decreases differences between the atmospheric input data sets. The adjustment algorithm is based on atmospheric model simulations. We focus on the Laptev Sea region for detailed case studies on the developed algorithm and present time series of polynya characteristics in the winter season 2019/2020. The application of the empirically derived correction significantly decreases the difference between the utilized atmospheric products, from 49% to 23%. Additional filter strategies are applied that aim at increasing the capability to include leads in the quasi-daily and persistence-filtered thin-ice thickness composites. More generally, the winter of 2019/2020 features high polynya activity in the eastern Arctic and less activity in the Canadian Arctic Archipelago, presumably as a result of the particularly strong polar vortex in early 2020.
In recent decades, the Arctic has been undergoing a wide range of fast environmental changes. The sea ice covering the Arctic Ocean not only reacts rapidly to these changes, but also influences and alters the physical properties of the atmospheric boundary layer and the underlying ocean on various scales. In that regard, polynyas, i.e. regions of open water and thin ice within the closed pack ice, play a key role as regions of enhanced atmosphere-ice-ocean interactions and extensive new ice formation during winter. Precise long-term monitoring and increased efforts to employ long-term and high-resolution satellite data are therefore of high interest for the polar scientific community. The retrieval of thin-ice thickness (TIT) fields from thermal infrared satellite data and atmospheric reanalysis, utilizing a one-dimensional energy balance model, allows for the estimation of the heat loss to the atmosphere and hence, ice-production rates. However, an extended application of this approach is inherently connected with severe challenges that originate predominantly from the disturbing influence of clouds and necessary simplifications in the model set-up, which all need to be carefully considered and compensated for. The presented thesis addresses these challenges and demonstrates the applicability of thermal infrared TIT distributions for long-term polynya monitoring, as well as an accurate estimation of ice production in Arctic polynyas at a relatively high spatial resolution. Being written in a cumulative style, the thesis is subdivided into three parts that show the consequent evolution and improvement of the TIT retrieval, based on two regional studies (Storfjorden and North Water (NOW) polynya) and a final large-scale, pan-Arctic study. 
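To make the retrieval idea concrete, a minimal sketch of the underlying balance is given below. It assumes the idealized steady state in which the conductive heat flux through thin ice equals the net heat loss to the atmosphere; the constants, the function name `thin_ice_thickness`, and the example numbers are illustrative assumptions, not the thesis' actual model set-up.

```python
# Hypothetical minimal sketch of the thin-ice-thickness (TIT) idea:
# in an idealized steady state, the conductive heat flux through the
# ice, Q = k_ice * (T_freeze - T_surface) / h, balances the net heat
# loss to the atmosphere, so h can be solved for from a
# satellite-derived surface temperature and a reanalysis-based
# atmospheric flux. Constants and values are illustrative only.

K_ICE = 2.03        # thermal conductivity of sea ice [W m^-1 K^-1]
T_FREEZE = -1.86    # approximate freezing point of sea water [deg C]

def thin_ice_thickness(t_surface_c, q_atm_loss):
    """Ice thickness [m] from surface temperature [deg C] and net
    atmospheric heat loss [W m^-2] (positive upward)."""
    if q_atm_loss <= 0:
        raise ValueError("no net heat loss -> no conductive balance")
    return K_ICE * (T_FREEZE - t_surface_c) / q_atm_loss

# e.g. a surface temperature of -12 degC and 250 W/m^2 heat loss
h = thin_ice_thickness(-12.0, 250.0)
print(round(h, 3))  # ~0.082 m, i.e. thin ice
```

Colder surface temperatures at a fixed heat loss imply thicker ice in this sketch, which is the qualitative behavior the retrieval exploits.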
The first study on the Storfjorden polynya, situated in the Svalbard archipelago, represents the first long-term investigation on spatial and temporal polynya characteristics that is solely based on daily TIT fields derived from MODIS thermal infrared satellite data and ECMWF ERA-Interim atmospheric reanalysis data. Typical quantities such as polynya area (POLA), the TIT distribution, frequencies of polynya events as well as the total ice production are derived and compared to previous remote sensing and modeling studies. The study includes a first basic approach that aims for a compensation of cloud-induced gaps in daily TIT composites. This coverage-correction (CC) is a mathematically simple upscaling procedure that depends solely on the daily percentage of available MODIS coverage and yields daily POLA with an error-margin of 5 to 6 %. The NOW polynya in northern Baffin Bay is the main focus region of the second study, which follows two main goals. First, a new statistics-based cloud interpolation scheme (Spatial Feature Reconstruction - SFR) as well as additional cloud-screening procedures are successfully adapted and implemented in the TIT retrieval for usage in Arctic polynya regions. For a 13-yr period, results on polynya characteristics are compared to the CC approach. Furthermore, an investigation on highly variable ice-bridge dynamics in Nares Strait is presented. Second, an analysis of decadal changes of the NOW polynya is carried out, as the additional use of a suite of passive microwave sensors leads to an extended record of 37 consecutive winter seasons, thereby enabling detailed inter-sensor comparisons. In the final study, the SFR-interpolated daily TIT composites are used to infer spatial and temporal characteristics of 17 circumpolar polynya regions in the Arctic for 2002/2003 to 2014/2015. 
All polynya regions combined cover an average thin-ice area of 226.6 ± 36.1 × 10³ km² during winter (November to March) and yield an average total wintertime accumulated ice production of about 1811 ± 293 km³. Regional differences in derived ice production trends are noticeable. The Laptev Sea on the Siberian shelf is presented as a focus region, as frequently appearing polynyas along the fast-ice edge promote high rates of new ice production. New affirming results on a distinct relation to sea-ice area export rates and hence, the Transpolar Drift, are shown. This new high-resolution pan-Arctic data set can be further utilized and built upon in a variety of atmospheric and oceanographic applications, while still offering room for further improvements such as incorporating high-resolution atmospheric data sets and an optimized lead-detection.
Cognitive performance is contingent upon multiple factors. Beyond the impact of environmental circumstances, the bodily state may hinder or promote cognitive processing. Afferent transmission from the viscera, for instance, is crucial not only for the genesis of affect and emotion, but also exerts significant influences on memory and attention. In particular, afferent cardiovascular feedback from baroreceptors has been shown to exert subcortical and cortical inhibition. Consequences for human cognition and behavior are the impairment of simple perception and sensorimotor functioning. Four studies are presented that investigate the modulatory impact of baro-afferent feedback on selective attention. The first study demonstrates that the modulation of sensory processing by baroreceptor activity applies to the processing of complex stimulus configurations. By the use of a visual masking task in which a target had to be selected against a visual mask, perceptual interference was reduced when target and mask were presented during the ventricular systole compared to the diastole. In study two, selection efficiency was systematically manipulated in a visual selection task in which a target letter was flanked by distracting stimuli. By comparing participants' performance under homogeneous and heterogeneous stimulus conditions, selection efficiency was assessed as a function of the cardiac cycle phase in which the targets and distractors were presented. The susceptibility of selection performance to the stimulus condition at hand was less pronounced during the ventricular systole compared to the diastole. Studies one and two therefore indicate that interference from irrelevant sensory input, resulting from temporally overlapping processing traces or from the simultaneous presentation of distractor stimuli, is reduced during phases of increased baro-afferent feedback. 
Study three experimentally manipulated baroreceptor activity by systematically varying the participants' body position while a sequential distractor priming task was completed. In this study, negative priming and distractor-response binding effects were obtained as indices of controlled and automatic distractor processing, respectively. It was found that only controlled distractor processing was affected by tonic increases in baroreceptor activity. In line with studies one and two, these results indicate that controlled selection processes are more efficient during enhanced baro-afferent feedback, observable in diminished aftereffects of controlled distractor processing. Because previous findings indicated that baro-afferent transmission affects central, rather than response-related, processing stages, study four measured lateralized readiness potentials (LRPs) and reaction times (RTs) while participants, again, had to selectively respond to target stimuli that were surrounded by distractors. The impact of distractor inhibition on stimulus-related, but not on response-related, LRPs suggests that in a sequential distractor priming task, the sensory representations of distractors, rather than motor responses, are targeted by inhibition. Together with the results from studies one through three and the finding of baroreceptor-mediated behavioral inhibition targeting central processing stages, study four corroborates the presumption that baro-afferent signal transmission modulates controlled processes involved in selective attention. In sum, the work presented shows that visual selective attention benefits from increased baro-afferent feedback, as its effects are not confined to simple perception but may facilitate the active suppression of neural activity related to sensory input from distractors. Hence, through noise reduction, baroreceptor-mediated inhibition may promote effective selection in vision.
The role of cortisol and cortisol dynamics in patients after aneurysmal subarachnoid hemorrhage
(2011)
Spontaneous aneurysmal subarachnoid hemorrhage (SAH) is a form of stroke which constitutes a severe trauma to the brain and often leads to serious long-term medical and psychosocial sequelae which persist for years after the acute event. Recently, adrenocorticotrophic hormone deficiency has been identified as one possible consequence of the bleeding and is assumed to occur in around 20% of all survivors. Additionally, a number of studies report a high prevalence of post-SAH symptoms such as lack of initiative, fatigue, loss of concentration, impaired quality of life and psychiatric symptoms such as depression. The overlap of these symptoms with those of patients with untreated partial or complete hypopituitarism leads to the suggestion that neuroendocrine dysregulations may contribute to the psychosocial sequelae of SAH. Therefore, one of the aims of this work is to gain insights into the role of neuroendocrine dysfunction in quality of life and the prevalence of psychiatric sequelae in SAH patients. Additionally, as data on cortisol dynamics after SAH are scarce, diurnal cortisol profiles are investigated in patients in the acute and chronic phase, as well as the cortisol awakening response and feedback sensitivity in the chronic phase after SAH. As a result, it can be shown that some SAH patients exhibit lower serum cortisol levels but at the same time a higher cortisol awakening response in saliva than healthy controls. Also, patients in the chronic phase after SAH have a stable diurnal cortisol rhythm, while there are disturbances in around 50% of all patients in the acute phase, leading to the conclusion that a single baseline measurement of cortisol is of no substantial use for diagnosing cortisol dysregulations in the acute phase after SAH. 
It is assumed that in SAH patients endocrine changes occur over time and that a combination of adrenal exhaustion and a subsequent downregulation of corticosteroid binding globulin may be the most probable causes for the dissociation of serum cortisol concentrations and salivary cortisol profiles in the investigated SAH patients. These changes may be an emergency response after SAH and, as elevated free cortisol levels are connected to a better psychosocial outcome in patients in the chronic phase after SAH, this reaction may even be adaptive.
The thesis studies the question of how universal behavior is inherited by the Hadamard product. The type of universality considered here is universality by overconvergence; a definition will be given in chapter five. The situation can be described as follows: let f be a universal function, and let g be a given function. Is the Hadamard product of f and g universal again? This question will be studied in chapter six. Starting with the Hadamard product for power series, a definition for a more general context must be provided. For plane open sets both containing the origin this has already been done. But in order to answer the above question, it becomes necessary to have a Hadamard product for functions that are not holomorphic at the origin. The elaboration of such a Hadamard product and its properties is the second central part of this thesis; chapter three will be concerned with it. The idea behind the definition of such a Hadamard product follows the case already known: the Hadamard product is defined by a parameter integral. Crucial for this definition is the choice of appropriate integration curves; these will be introduced in chapter two. By means of the Hadamard product's properties it is possible to prove the Hadamard multiplication theorem and the Borel-Okada theorem. A generalization of these theorems will be presented in chapter four.
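For orientation, the classical Hadamard product for power series and its standard parameter-integral representation (the starting point for the generalization described above) can be written as follows; this is the textbook form, not the thesis' generalized definition:

```latex
% Coefficient-wise definition for power series
(f \ast g)(z) = \sum_{n=0}^{\infty} a_n b_n z^n,
\qquad
f(z) = \sum_{n=0}^{\infty} a_n z^n, \quad
g(z) = \sum_{n=0}^{\infty} b_n z^n.

% Parameter-integral representation, for a suitable integration
% curve \gamma separating the singularities of the two factors
(f \ast g)(z) = \frac{1}{2\pi i} \int_{\gamma}
    f(\zeta)\, g\!\left(\frac{z}{\zeta}\right) \frac{d\zeta}{\zeta}.
```

The thesis' contribution lies in choosing integration curves so that this integral remains meaningful for functions that are not holomorphic at the origin.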
The influence of affect on vocal parameters has been well investigated in speech portrayed by actors, but little is known about affect expression in more natural or authentic speech behavior. This is partly due to the difficulty of generating speech samples that represent authentic expression of speaker affect. The present work investigates the influence of speaker affect on the vocal fundamental frequency (F0) in comparatively authentic speech samples. Three well-documented psychophysiological research methods were applied for the induction of affective states in German native speakers in order to obtain speech samples with authentic affect expression: the Cold Pressor Test (CPT), the Stroop Color-Word Test (SCWT) and the presentation of slides from the International Affective Picture System (IAPS). The results reported here show that the influence of affect on F0 is differentially modulated by psychophysiological processes as well as socio-cultural influences. They also indicate that this approach may be useful for future research and for gaining a deeper understanding of authentic vocal affect expression. Moreover, F0 may constitute an additional non-invasive, easy-to-obtain measure for the established psychophysiological research methodology.
This guide is meant to provide some initial bibliographical assistance to those who want to study the historical evolution of ecological thinking in Canada on the basis of poetry. A major theoretical assumption underlying this project is that literature gives privileged access to a nation's cultural memory. Even a cursory survey of Canadian literary history supplies ample evidence for the marked presence of ecological attitudes in Canada's mental history. The origin of these attitudes can be traced back to at least the 18th century. By way of generalising, one could argue that literature reflects, and provides subtle insights into, how both native Canadians and immigrant settlers have responded to their 'eco-sphere'. For many Canadian texts bear witness to a thematic preoccupation with the Canadian oikos-area (oikos signifying 'house' in a narrower sense but also 'habitat' in a wider one), to which its inhabitants have established a meaningful relationship. No doubt, even a preliminary attempt to explore ecological attitudes in Canadian literature more systematically would be a multi-faceted and difficult task. One of the major practical problems that poses itself immediately is: which texts could, and ought to, be examined? For there are innumerable references to environmental attitudes and ideas in all literary genres -- also in a great many fictional texts, both traditional and contemporary. For the purpose of research and study it would be extremely helpful indeed if there were comprehensive bibliographical aids that would enable us to approach, and familiarize ourselves with, all these texts more conveniently. But the challenge of collecting pertinent data of this general kind would have been far beyond my scope and resources. This is why the present guide limits its focus to poetry. The working hypotheses motivating this tentative compilation are: i. Poetry is a more ubiquitous literary genre than fiction and drama. 
According to available evidence, more writers seem to have tried out their skills on poetry than on fiction and drama. Therefore poetry is likely to mirror a greater variety of voices and sentiments. ii. Poems are still a relatively untapped source in the current discussion about the environment. However, a great many poetic texts lend themselves to supplying relevant arguments that could be used in various fields of action such as environmental ethics, environmental education and, last but not least, conservation. iii. Apart from smaller pieces of the "nature writing" variety, poems dealing with nature and environmental issues are comparatively short, aiming as they do at a single focus and effect. This is why they can be opened up for critical inspection more easily than selected passages from, say, a novel, which would have to be related to the context of the whole work. iv. This guide attempts to direct the user's attention to poems that are accessible in anthologies. A strong argument for selecting poems from anthologies rather than from individual writers' collections is that the anthology editors are likely to have selected precisely those poems of whose appeal to their respective readerships they must have been thoroughly convinced. Thus the mere fact that a poem has been anthologized suggests that it can be considered an important element in the process of Canadian culture building. Therefore, the very poems that have been frequently anthologized could perhaps serve as special barometers of the Canadian ecological sensibility at a given historical moment.
Mechanical and Biological Treatment (MBT) generally aims to reduce the amount of solid waste and emissions in landfills and enhance the recoveries. MBT technology has been studied in various countries in Europe and Asia. Techniques of solid waste treatment are distinctly different in the study areas. A better understanding of MBT waste characteristics can lead to an optimization of the MBT technology. For a sustainable waste management, it is essential to determine the characteristics of the final MBT waste, the effectiveness of the treatment system as well as the potential application of the final material regarding future utilization. This study aims to define and compare the characteristics of the final MBT materials at the following sites: Luxembourg (high-technology approach): Fridhaff in Diekirch/Erpeldange; Germany (well-regulated technology): Singhofen in the Rhein-Lahn district; Thailand (low-cost technology): Phitsanulok in Phitsanulok province. The three countries were chosen for this comparative study due to their unique performance in the MBT implementation. The samples were taken from the composting heaps of the final treatment process prior to sending them to landfills, using a standard random sampling strategy from August 2008 onwards. The samples were reduced to manageable sizes before characterization; the size reduction was achieved by the quartering method. The samples were first analyzed for the size fraction on the day of collection. They were screened into three fractions by dry sieving: small (diameter <10 mm), medium (diameter 10-40 mm) and large (diameter >40 mm). These fractions were further analyzed for their physical and chemical parameters such as particle size distribution (in total, 12 size fractions), particle shape, porosity, composition, water content, water retention capacity and respiratory activity. 
The extracted eluate was analyzed for pH-value, heavy metals (lead, cadmium and arsenic), chemical oxygen demand, ammonium, sulfate and chloride. In order to describe and evaluate the potential application of the small size material as a final cover of landfills, the small size fraction was also tested for geotechnical properties. The geotechnical parameters were determined by the compaction test, permeability test and shear strength test. A detailed description of the treatment facilities and methods of the study areas is included in the results. The samples from the three countries are visibly smaller than waste without pretreatment; the maximum particle size is found to be less than 100 mm. The samples are found to consist of dust to coarse fractions. The small size fraction with a diameter of <10 mm was highest in the sample from Germany (average 60% by weight), second highest in the sample from Luxembourg (average 43% by weight) and lowest in the sample from Thailand (average 15% by weight). The content of biodegradable material generally increased with decreasing particle sizes. Primary components are organics, plastics, fibrous materials and inert materials (glass and ceramics). The percentage of each component greatly depends on the MBT process of each country. Other important characteristics are significantly reduced water content, reduced total organic carbon and reduced potential heavy metals. The geotechnical results show that the small fraction is highly compactable, has a low permeability and retains a large amount of water. The utilization of MBT material in this study shows a good trend, as it proved to be a safe material containing very low loadings and concentrations of chemical oxygen demand, ammonium, and heavy metals. The organic part can be developed into a soil conditioner. It is also suitable for use as a bio-filter layer in the final cover of a landfill or as a temporary cover during the MBT process. 
This study showed how to identify the most appropriate technology for municipal solid waste disposal through the study of waste characterization.
Repeatedly encountering a stimulus biases the observer’s affective response and evaluation of the stimuli. Here we provide evidence for a causal link between mere exposure to fictitious news reports and subsequent voting behavior. In four pre-registered online experiments, participants browsed through newspaper webpages and were tacitly exposed to names of fictitious politicians. Exposure predicted voting behavior in a subsequent mock election, with a consistent preference for frequent over infrequent names, except when news items were decidedly negative. Follow-up analyses indicated that mere media presence fuels implicit personality theories regarding a candidate’s vigor in political contexts. News outlets should therefore be mindful to cover political candidates as evenly as possible.
Ability self-concept (SC) and self-efficacy (SE) are central competence-related self-perceptions that affect students’ success in educational settings. Both constructs show conceptual differences, but their empirical differentiation in higher education has not been sufficiently demonstrated. In the present study, we investigated the empirical differentiation of SC and SE in higher education with N = 1,243 German psychology students (81% female; age M = 23.62 years), taking into account central methodological requirements that, in part, have been neglected in prior studies. SC and SE were assessed at the same level of specificity, only cognitive SC items were used, and multiple academic domains were considered. We modeled the structure of SC and SE taking into account a multidimensional and/or hierarchical structure and investigated the empirical differentiation of both constructs on different levels of generality (i.e., domain-specific and domain-general). Results supported the empirical differentiation of SC and SE, with medium-sized positive latent correlations (range r = .57 - .68) between SC and SE on different levels of generality. Knowledge about the internal structure of students’ SC and SE and the differentiation of both constructs can help us to develop construct-specific and domain-specific intervention strategies. Future empirical comparisons of the predictive power of SC and SE can provide further evidence that both represent empirically different constructs.
This thesis sheds light on the heterogeneous hedging behavior of airlines. The focus lies on financial hedging, operational hedging and selective hedging. The unbalanced panel data set includes 74 airlines from 39 countries. The period of analysis is 2005 until 2014, resulting in 621 firm years. The random effects probit and fixed effects OLS models provide strong evidence of a convex relation between derivative usage and a firm’s leverage, opposing the existing financial distress theory. Airlines with lower leverage had higher hedge ratios. In addition, the results show that airlines with interest rate and currency derivatives were more likely to engage in fuel price hedging. Moreover, the study results support the argument that operational hedging is a complement to financial hedging. Airlines with more heterogeneous fleet structures exhibited higher hedge ratios.
Also, airlines which were members of a strategic alliance were more likely to be hedging airlines. As alliance airlines are rather financially sound airlines, the positive relation between alliance membership and hedging reflects the negative results on the leverage ratio. Lastly, the study presents determinants of airlines' selective hedging behavior. Airlines with prior-period derivative losses, recognized in income, changed their hedge portfolios more frequently. Moreover, the sample airlines acted in accordance with herd behavior theory: changes in the regional hedge portfolios influenced the hedge portfolio of the individual airline in the same direction.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method for solving graph problems is to formulate integer linear programs such that their solution yields an optimal solution to the original optimization problem in the graph. To solve integer linear programs, one can start by relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
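The brute-force enumeration mentioned above can be sketched in a few lines; this is an illustrative toy implementation for the maximum stable set problem (names such as `max_stable_set` are chosen here, not taken from the thesis) and is feasible only for very small graphs:

```python
from itertools import combinations

def is_stable(vertices, edges):
    """A vertex set is stable if no edge has both endpoints in it."""
    return all(not (u in vertices and v in vertices) for u, v in edges)

def max_stable_set(n, edges):
    """Enumerate all subsets of {0, ..., n-1}, largest first, and
    return the first stable one. Exponential time: toy graphs only."""
    for size in range(n, -1, -1):
        for subset in combinations(range(n), size):
            if is_stable(set(subset), edges):
                return set(subset)
    return set()

# 5-cycle C5: its maximum stable sets have size 2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(len(max_stable_set(5, edges)))  # 2
```

Enumerating all 2^n subsets is exactly the naive baseline that branch-and-bound and cutting-plane methods improve upon by discarding hopeless parts of the search space.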
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, so the number of odd cycle inequalities for the maximum stable set problem is exponential. It is nevertheless sometimes possible to check in polynomial time whether a given point violates any of the exponentially many inequalities; this is indeed the case for the odd cycle inequalities of the maximum stable set problem. If a polynomial-time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if the point violates none of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part and contains various new results: we present new extended formulations for several optimization problems, namely the maximum stable set problem, the nonconvex quadratic program with box constraints, and the p-median problem. In the second part we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We suggest three alternative versions of this algorithm and compare their strengths and weaknesses against the original version.
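The separation idea mentioned above can be made concrete for a single odd cycle inequality: a cycle C of odd length admits at most (|C| - 1)/2 chosen vertices, so a fractional point violating that bound is cut off. The sketch below checks one given cycle against one given point (a generic illustration, not the thesis's separation routine):

```python
def violates_odd_cycle(x, cycle):
    """Check whether the fractional point x (dict: vertex -> value)
    violates the odd cycle inequality
        sum_{v in C} x_v <= (|C| - 1) / 2."""
    assert len(cycle) % 2 == 1, "inequality is only valid for odd cycles"
    return sum(x[v] for v in cycle) > (len(cycle) - 1) / 2

# The all-0.5 point on a 5-cycle gives 5 * 0.5 = 2.5 > 2,
# so the odd cycle inequality cuts this fractional point off.
x = {v: 0.5 for v in range(5)}
print(violates_odd_cycle(x, [0, 1, 2, 3, 4]))  # True
```

A full separation algorithm would search over all odd cycles (e.g. via shortest paths in an auxiliary bipartite graph) rather than test a single given cycle.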
Educational researchers have intensively investigated students' academic self-concept (ASC) and self-efficacy (SE). Both constructs are part of the competence-related self-perceptions of students and are considered to support students' academic success and career development in a positive manner (e.g., Abele-Brehm & Stief, 2004; Richardson, Abraham, & Bond, 2012; Schneider & Preckel, 2017). However, there is a lack of basic research on ASC and SE in higher education in general, and in undergraduate psychology courses in particular. Therefore, according to the within-network and between-network approaches of construct validation (Byrne, 1984), the present dissertation comprises three empirical studies examining the structure (research question 1), measurement (research question 2), correlates (research question 3), and differentiation (research question 4) of ASC and SE in a total sample of N = 1243 psychology students. Concerning research question 1, results of confirmatory factor analyses (CFAs) implied that students' ASC and SE are domain-specific in the sense of multidimensionality, but also hierarchically structured, with a general factor at the apex, according to the nested Marsh/Shavelson model (NMS model; Brunner et al., 2010). Additionally, psychology students' SE to master specific psychological tasks in different areas of psychological application could be described by a two-dimensional model with six factors according to the multitrait-multimethod (MTMM) approach (Campbell & Fiske, 1959). With regard to research question 2, results revealed that the internal structure of ASC and SE could be validly assessed. However, the assessment of psychology students' SE should follow a task-specific measurement strategy.
Results concerning research question 3 further showed that both constructs of psychology students' competence-related self-perceptions were positively correlated with achievement in undergraduate psychology courses if the specificity of the predictor (ASC, SE) corresponded to the measurement specificity of the criterion (achievement). Overall, ASC showed substantially stronger relations to achievement than SE. Moreover, there was evidence for negative paths (contrast effects) from achievement in one psychological domain to ASC in another psychological domain, as postulated by the internal/external frame of reference (I/E) model (Marsh, 1986). Finally, building on research questions 1 to 3 (structure, measurement, and correlates of ASC and SE), psychology students' ASC and SE could be differentiated on an empirical level (research question 4). Implications for future research practices are discussed. Furthermore, practical implications for enhancing ASC and SE in higher education are proposed to support the academic achievement and career development of psychology students.
The study at hand deals with madness as it is represented in English Canadian fiction. The topic seemed most interesting and fruitful for analysis because the ways madness has been defined, understood, described, judged, and handled differ profoundly from society to society and from era to era, and because the language, ideas, and associations surrounding insanity are both strongly culture-relative and shifting. Madness as a theme of myth and literature has therefore always been an excellent vehicle for mirroring the assumptions and arguments, the aspirations and nostalgia, the beliefs and values, the hopes and fears of its age and society. Thus, while the overall intent of this study is to elucidate some discernible patterns of structure and style which accompany the use of madness in Canadian literature, to investigate the varying sorts of portrayal and the conventions of presentation, to interpret the use of madness as a literary device, and to highlight the different statements which are made, the continuity, variation, and changes in the theme of madness provide an informing principle in terms of certain Canadian experiences and perceptions. By examining madness as it presents itself in Canadian literature and considering the respective explorations of the deranged mind within their historical context, I hope to demonstrate that literary interpretations of madness both reflect and question the cultural, political, religious, and psychological assumptions of their times, and that certain symptoms or usages are characteristic of certain periods. Such an approach, it is hoped, might not only contribute towards an assessment of the wealth of associations which surround madness and the ambivalence with which it is viewed, but also shed some light on the Canadian imagination. As such, this study can be considered not only a history of literary madness, but also a history of Canadian society and the Canadian mind.
The forward testing effect is an indirect benefit of retrieval practice. It refers to the finding that retrieval practice of previously studied information enhances learning and retention of subsequently studied other information in episodic memory tasks. Here, two experiments were conducted that investigated whether retrieval practice influences participants' performance in other, non-episodic tasks, namely arithmetic tasks. Participants studied three lists of words in anticipation of a final recall test. In the testing condition, participants were tested immediately on lists 1 and 2 after study of each list, whereas in the restudy condition, they restudied lists 1 and 2 after initial study. Before and after study of list 3, participants did an arithmetic task. Finally, participants were tested on list 3, list 2, and list 1. Different arithmetic tasks were used in the two experiments: participants did a modular arithmetic task in Experiment 1a and a single-digit multiplication task in Experiment 1b. The results of both experiments showed a forward testing effect, with interim testing of lists 1 and 2 enhancing list 3 recall, but no effect of recall testing of lists 1 and 2 on participants' performance in the arithmetic tasks. The findings are discussed with respect to cognitive load theory and current theories of the forward testing effect.
Long-Term Memory Updating: The Reset-of-Encoding Hypothesis in List-Method Directed Forgetting
(2017)
People's memory for new information can be enhanced by cuing them to forget older information, as is shown in list-method directed forgetting (LMDF). In this task, people are cued to forget a previously studied list of items (list 1) and to learn a new list of items (list 2) instead. Such cuing typically enhances memory for the list 2 items and reduces memory for the list 1 items, which reflects effective long-term memory updating. This review focuses on the reset-of-encoding (ROE) hypothesis as a theoretical explanation of the list 2 enhancement effect in LMDF. The ROE hypothesis is based on the finding that encoding efficacy typically decreases with the number of encoded items, and assumes that providing a forget cue after study of some items (e.g., list 1) resets the encoding process and makes encoding of subsequent items (e.g., early list 2 items) as effective as encoding of previously studied (e.g., early list 1) items. The review provides an overview of current evidence for the ROE hypothesis. The evidence arose from recent behavioral, neuroscientific, and modeling studies that examined LMDF at both the item and the list level. The findings support the view that ROE plays a critical role in the list 2 enhancement effect in LMDF. Alternative explanations of the effect and the generalizability of ROE to other experimental tasks are discussed.
List-method directed forgetting (LMDF) is the demonstration that people can intentionally forget previously studied information when they are asked to forget what they have previously learned and to remember new information instead. In addition, recent research demonstrated that people can forget selectively when cued to forget only a subset of the previously studied information. Both forms of forgetting are typically observed in recall tests in which the to-be-forgotten and to-be-remembered information is tested independently of the original cuing. To date, both LMDF and selective directed forgetting (SDF) have been studied mostly with unrelated item materials (e.g., word lists). The present study examined whether LMDF and SDF generalize to prose material. Participants learned three prose passages, which they were cued to remember or forget after the study of each passage. At the time of testing, participants were asked to recall the three prose passages regardless of original cuing. The results showed no significant differences in recall of the three passages as a function of cuing condition. The findings suggest that LMDF and SDF do not occur with prose material. Future research is needed to replicate and extend these findings with (other) complex and meaningful materials before drawing firm conclusions. If the null effect proves to be robust, this would have implications regarding the ecological validity and generalizability of current LMDF and SDF findings.
The forward testing effect refers to the finding that retrieval practice of previously studied information enhances learning and retention of subsequently studied other information. While most of the previous research on the forward testing effect examined group differences, the present study took an individual differences approach to investigate this effect. Experiment 1 examined whether the forward effect has test-retest reliability between two experimental sessions. Experiment 2 investigated whether the effect is related to participants' working memory capacity. In both experiments (and in each session of Experiment 1), participants studied three lists of items in anticipation of a final cumulative recall test. In the testing condition, participants were tested immediately on lists 1 and 2, whereas in the restudy condition, they restudied lists 1 and 2. In both conditions, participants were tested immediately on list 3. At the group level, the results of both experiments demonstrated a forward testing effect, with interim testing of lists 1 and 2 enhancing immediate recall of list 3. At the individual level, the results of Experiment 1 showed that the forward effect on list 3 recall has moderate test-retest reliability between two experimental sessions. In addition, the results of Experiment 2 showed that the forward effect on list 3 recall does not depend on participants' working memory capacity. These findings suggest that the forward testing effect is reliable at the individual level and affects learners across a wide range of working memory capacities alike. The theoretical and practical implications of the findings are discussed.
The forward effect of testing refers to the finding that retrieval practice of previously studied information increases retention of subsequently studied other information. It has recently been hypothesized that the forward effect (partly) reflects the result of a reset-of-encoding (ROE) process. The proposal is that encoding efficacy decreases as the study material accumulates, but testing of previously studied information resets the encoding process and makes the encoding of subsequently studied information as effective as the encoding of previously studied information. The goal of the present study was to test the ROE hypothesis at the item level. An experiment is reported that examined the effects of testing, in comparison to restudy, on items' serial position curves. Participants studied three lists of items in each condition. In the testing condition, participants were tested immediately on non-target lists 1 and 2, whereas in the restudy condition, they restudied lists 1 and 2. In both conditions, participants were tested immediately on target list 3. Influences of condition and items' serial learning position on list 3 recall were analyzed. The results showed the forward effect of testing and, furthermore, that this effect varies with items' serial list position: early target list items at list primacy positions showed a larger enhancement effect than middle and late target list items at non-primacy positions. The results are consistent with the ROE hypothesis at the item level. The generalizability of the ROE hypothesis to other experimental tasks, such as the list-method directed-forgetting task, is discussed.
Automata theory is the study of abstract machines. It is a theory in theoretical computer science and discrete mathematics (a subject of study in mathematics and computer science). The word automata (the plural of automaton) comes from a Greek word meaning "self-acting". Automata theory is closely related to formal language theory [99, 101]; the theory of formal languages constitutes the backbone of the field now generally known as theoretical computer science. This thesis aims to introduce a few types of automata and to study the classes of languages recognized by them. Chapter 1 is the road map with introduction and preliminaries. In Chapter 2 we consider a few formal languages associated with graphs that have Eulerian trails. We place in the Chomsky hierarchy a few languages that combine the Eulerian property with other properties. In Chapter 3 we consider jumping finite automata, i.e., finite automata in which the input head, after reading and consuming a symbol, can jump to an arbitrary position of the remaining input. We characterize the class of languages described by jumping finite automata in terms of special shuffle expressions and survey other equivalent notions from the existing literature. We also characterize some superclasses of this language class. In Chapter 4 we introduce boustrophedon finite automata, i.e., finite automata working on rectangular arrays (pictures) in a boustrophedon mode, and we also introduce returning finite automata, which read the input line after line but, unlike boustrophedon finite automata, do not alternate direction, i.e., always read from left to right, line after line. We provide close relationships with the well-established class of regular matrix (array) languages. We sketch possible applications to character recognition and kolam patterns. Chapter 5 deals with general boustrophedon finite automata and general returning finite automata that read with different scanning strategies.
We show that all 32 different variants describe only two different classes of array languages. We also introduce Mealy machines working on pictures and show how these can be used in a modular design of picture processing devices. In Chapter 6 we compare three different types of regular grammars for array languages introduced in the literature, namely regular matrix grammars, (regular : regular) array grammars, and isometric regular array grammars, as well as variants thereof, focusing on hierarchical questions. We also refine the presentation of (regular : regular) array grammars in order to clarify the interrelations. In Chapter 7 we provide further directions of research with respect to the studies conducted in each of the chapters.
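The two reading modes described above differ only in how a rectangular picture is linearized. The sketch below illustrates this (it shows only the scanning order on a toy picture, not the formal automaton models from the thesis):

```python
def boustrophedon_scan(picture):
    """Linearize a rectangular array in boustrophedon order:
    even-indexed rows left-to-right, odd-indexed rows right-to-left,
    the way a boustrophedon finite automaton traverses its input."""
    out = []
    for i, row in enumerate(picture):
        out.extend(row if i % 2 == 0 else reversed(row))
    return "".join(out)

def returning_scan(picture):
    """Returning automata read every row left-to-right, line after line."""
    return "".join("".join(row) for row in picture)

pic = ["abc",
       "def"]
print(boustrophedon_scan(pic))  # abcfed
print(returning_scan(pic))      # abcdef
```

An automaton processing the linearized string with an ordinary finite-state control then corresponds to the respective picture-scanning model.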
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic has economic relevance. As retirement entry shifts the labour supply of experienced workers to zero, this issue is particularly relevant for employers, for retirees themselves, and for policymakers in charge of the design of the pension system. Giving birth has comprehensive economic relevance for women: parental leave and subsequent part-time work lead to a direct loss of income, while lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has a decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are key determinants not only in private life but also in the work domain, for example in the area of job performance. Furthermore, partnership quality determines the duration of a partnership, and partnerships in general enable the pooling of (financial) resources compared to being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as from the application of economic theory to non-standard economic research topics. The results of the three chapters show that this multidisciplinary approach yields better predictions of human behaviour than the single disciplines on their own. The results of the first chapter show that both interpersonal conflict with superiors and the individual's health status play a significant role in retirement decisions. The chapter further contributes to the existing literature by showing the moderating role of health within retirement decision-making: on the one hand, all employees are more likely to retire when they are having conflicts with their superior; on the other hand, among healthy employees, the same conflict raises retirement intentions even more.
That means good health is a necessary, but not a sufficient, condition for continued working: conflicts with superiors may raise retirement intentions more if the worker is healthy. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates from US evidence; this chapter contrasts it with evidence from Germany. Additionally, the chapter contributes by integrating miscarriages and abortions rather than limiting the analysis to births, and it benefits from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework for analysing sexual well-being. According to this theory, personality may play a dual role by shaping a person's preferences for sex as well as the person's behaviour in a sexual relationship. Results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
Soil organic matter (SOM) is an indispensable component of terrestrial ecosystems. Soil organic carbon (SOC) dynamics are influenced by a number of well-known abiotic factors such as clay content, soil pH, or pedogenic oxides. These parameters interact with each other and vary in their influence on SOC depending on local conditions. To investigate the latter, the dependence of SOC accumulation on parameters and parameter combinations that vary on a local scale with parent material, soil texture class, and land use was statistically assessed. To this end, topsoils were sampled from arable and grassland sites in south-western Germany in four regions with different soil parent material. Principal component analysis (PCA) revealed a distinct clustering of data according to parent material and soil texture that varied largely between the local sampling regions, while land use explained the PCA results only to a small extent. The PCA clusters were differentiated into total clusters, which contain the entire dataset or major proportions of it, and local clusters, which represent only a smaller part of the dataset. All clusters were analysed for relationships between SOC concentrations (SOC %) and mineral-phase parameters in order to identify specific parameter combinations explaining SOC and its labile fractions, hot-water-extractable C (HWEC) and microbial biomass C (MBC). Analyses focused on soil parameters that are known as possible predictors for the occurrence and stabilization of SOC (e.g. fine silt plus clay and pedogenic oxides). Regarding the total clusters, bivariate models revealed significant relationships between SOC, its labile fractions HWEC and MBC, and the applied predictors. However, the partly low explained variances indicated the limited suitability of bivariate models. Hence, mixed-effect models were used to identify specific parameter combinations that significantly explain SOC and its labile fractions in the different clusters.
Comparing measured and mixed-effect-model-predicted SOC values revealed acceptable to very good coefficients of determination (R² = 0.41–0.91) and low to acceptable root mean square errors (RMSE = 0.20 %–0.42 %). The predictors and predictor combinations clearly differed between models obtained for the whole dataset and for the different cluster groups. At the local scale, site-specific combinations of parameters explained the variability of organic carbon notably better, whereas applying the total models to local clusters resulted in less explained variance and a higher RMSE. Independently of that, the variance explained by marginal fixed effects decreased in the order SOC > HWEC > MBC, showing that the labile fractions depend less on soil properties and presumably more on processes such as organic carbon input and turnover in soil.
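The fit statistics reported above can be computed in a few lines; the sketch below uses hypothetical SOC values for illustration, not the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

def rmse(observed, predicted):
    """Root mean square error, in the units of the observations."""
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

# Hypothetical measured vs. model-predicted SOC concentrations (%).
obs = [1.2, 0.8, 2.1, 1.5]
pred = [1.1, 0.9, 2.0, 1.6]
print(round(r_squared(obs, pred), 2))
print(round(rmse(obs, pred), 2))
```

Because RMSE is expressed in the units of the target variable, the reported RMSE of 0.20 %–0.42 % is directly interpretable as a typical prediction error in SOC percentage points.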
The Covid-19 pandemic and the related border restrictions have had numerous social, economic and political consequences for border regions. The temporary border closures impacted not only the lives of borderlanders whose everyday practices are embedded in cross-border spaces, but also the functioning of institutional actors involved in cross-border activities. The aim here is to investigate the communication surrounding the pandemic and the reactions and (new) strategies of cross-border institutional actors in the context of (re)bordering. Applying the concept of resilience, this paper explores coping mechanisms and modes of adaptation as well as strategies developed to adjust to new circumstances. Against this backdrop, factors that enhanced or hindered the adaptation process were identified. The German-Polish borderland serves here as a case study, although it will be situated within a wider European context.