Refine
Document Type
- Doctoral Thesis (344)
- Article (123)
- Working Paper (19)
- Book (15)
- Conference Proceedings (9)
- Part of Periodical (5)
- Contribution to a Periodical (4)
- Habilitation (3)
- Other (3)
- Master's Thesis (2)
Language
- English (529)
Keywords
- Stress (27)
- Modeling (20)
- Remote sensing (18)
- Optimization (18)
- Germany (17)
- Hydrocortisone (13)
- Satellite remote sensing (13)
- Cortisol (9)
- European Union (9)
- Financing (9)
Institute
- Raum- und Umweltwissenschaften (99)
- Psychologie (94)
- Fachbereich 4 (58)
- Mathematik (47)
- Fachbereich 6 (39)
- Wirtschaftswissenschaften (29)
- Fachbereich 1 (25)
- Informatik (19)
- Anglistik (15)
- Rechtswissenschaft (14)
- Fachbereich 2 (12)
- Medienwissenschaft (4)
- Politikwissenschaft (3)
- Universitätsbibliothek (3)
- Fachbereich 3 (2)
- Fachbereich 5 (2)
- Pädagogik (2)
- Soziologie (2)
- Computerlinguistik und Digital Humanities (1)
- Geschichte, mittlere und neuere (1)
- Japanologie (1)
- Pflegewissenschaft (1)
- Phonetik (1)
- Sinologie (1)
The 22nd annual conference of the European Consortium for Church and State Research took place from 11 to 14 November 2010 in Trier, Germany. Founded in 1989, the Consortium unites experts of law and religion from all Member States of the European Union. In annual meetings, various topics concerning the relations between religions and states within the European Union are discussed. This year's conference was dedicated to the topic "Religion in Public Education". Scholars from 27 European countries discussed inter alia the role of religion in the Member States' educational systems, opting out of school obligations for religious reasons, home schooling, as well as religious dress and symbols in public schools. The present proceedings contain the opening lectures, all country reports and a report on European Union law.
The object of the current Thematic Issue is not to focus on the individuals (the cross-border commuters) but on the organization of the cross-border labor markets. We move from a micro perspective to a macro perspective in order to underline the diversity of the cross-border labor markets (at the French borders, for example) and shed light on the many aspects that impact cross-border supply or demand. Trying to understand the whole system that goes beyond the cross-border flows, the question we address in this thematic issue is about the organization of the labor markets: is the system organized in a cross-border way? Or do the borders still prevent a genuinely integrated cross-border labor market?
B/ordering the Anthropocene: Inter- and Transdisciplinary Perspectives on Nature-Culture Relations
(2020)
In and with this thematic issue we would like to invite you to engage in productive boundary work and to critically examine the relationship between nature and culture in the Anthropocene. A few years ago, the term Anthropocene was proposed by Paul Crutzen as a term for the current geological epoch, in which humankind (the ‘anthropos’) is seen as the central driving force for global changes in ecological systems. This epoch is characterized by the blurring of boundaries between society and nature, science and politics, as well as by the increased drawing of boundaries between social groups, lifestyles, and the Global North and Global South. With this issue, we would like to give an impetus to explore boundary phenomena in the relationship between nature and society, which up to now have not been the focus of Border Studies. The challenges and problems of the Anthropocene require cross-border thinking and research that stimulates a new reflexivity and commitment, to which the multidisciplinary field of Border Studies can contribute.
In recent decades, Border Studies have gained importance and have seen a noticeable increase in development. This manifests itself in an increased institutionalization, a differentiation of the areas of research interest and a conceptual reorientation that is interested in examining processes. So far, however, little attention has been paid to questions about (inter)disciplinary self-perception and methodological foundations of Border Studies and the associated consequences for research activities. This thematic issue addresses these desiderata and brings together articles that deal with their (inter)disciplinary foundations as well as method(olog)ical and practical research questions. The authors also provide sound insights into a disparate field of work, disclose practical research strategies, and present methodologically sophisticated systematizations.
In the course of the COVID-19 pandemic, borders have become relevant (again) in political action and in people's everyday lives within a very short time. This was especially true for the inhabitants of border regions, whose cross-border life worlds were suddenly disrupted by closed borders and police controls. However, the COVID-19 pandemic also made social, cultural, economic, health and mobility boundaries beyond national borders increasingly evident, which raised pressing questions about social inequalities. The authors shed light on these dynamics from the perspective of territorial borders, social boundaries and (dis)continuities in border regions through a variety of thematic and spatial approaches. The critical observations and scientific comments were made during the lockdown in April and May 2020 and provide insights into the events during the global pandemic.
While women's evolving contribution to entrepreneurship is irrefutable, in almost all nations gender disparity is an existing reality of entrepreneurship. Social and economic outcomes make women's entrepreneurship an important area for scholars and governments. In attempts to find reasons for this gender disparity, academic scholars evaluated various factors and recognised perceptual variables as having outstanding explanatory value in understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts in women's entrepreneurial propensity. Therefore, this study examines the impact of perceptual variables in different nations. It also offers connections between entrepreneurial perceptions, women's entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how individuals' perception of entrepreneurial networks influences their decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models one of the most influential factors in women's inclination towards entrepreneurship; thus, a systemized analysis makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systemized overview of prior studies. Subsequently, Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past literature has focused on the different types of role models, the stage of life at which the exposure to role models occurs, and the context of the exposure. Current discourse argues that women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although the research on women entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables form women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers that investigated the influence of perceptual variables, and it showed that a lower rate of women entrepreneurship is associated with a lower level of their entrepreneurial network, perceived entrepreneurial capability, and opportunity evaluation and with a higher fear of entrepreneurial failure. Thus, this dissertation replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. This research has drawn data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables. Women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation and their higher fear of failure lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women's entrepreneurial propensity, two studies are conducted in Chapter 4 based on the replicated research. Extension 1 generalises the results of KMS by applying the same analysis to more recent data. Accordingly, this research implemented the same analysis on 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity became significantly weaker, the gender gap is still in men's favour. However, similarly to the replicated study, this research revealed that perceptual factors explain a larger part of the gender disparity. To strengthen prior empirical evidence, extension 2 conducted the same measures and analysis in a more global setting, utilising a sample of 1,029,863 individuals from 71 countries (2011-2016). By including developing countries, gender disparity in entrepreneurial propensity decreased significantly. The study revealed that the relative significance of the influence of perceptions differs significantly across nations; however, perceptions have a worldwide effect. Moreover, this research found that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is 2. More precisely, a higher level of economic development negatively influences the impact of perceptions on women's entrepreneurial propensity.
Whereas prior scholars increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most of the prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a confirmed key driver of a firm's sustainability, higher competitive capability, and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter points out that entrepreneurial perceptions are the main determinants of women's decision to offer a new product or service. This chapter also finds that women's innovative entrepreneurship is associated with the country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insights into the interaction between perceptions and women's entrepreneurship, and its results have implications for policymakers and practitioners, who may find it helpful to consider women's entrepreneurship in light of its systemic challenges. Formal and informal barriers affect women's entrepreneurial perceptions and can differ from one country to another. In this sense, it is crucial to design operational plans to mitigate formal and stereotypical challenges so that more women will be able to start a business, particularly in developing countries, in which women comprise a significantly smaller portion of the labour markets. This type of policy could write the "rules of the game" such that these rules enhance women's propensity towards entrepreneurship.
The goal of this thesis is to transfer the logarithmic barrier approach, which has led to very efficient interior-point methods for convex optimization problems in recent years, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, by means of the max-term involved, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension makes it possible to deal with the inexact objective values and subgradient information which occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those for the logarithmic barrier approach in finite optimization. In the further course of the thesis the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences converging to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier approach only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as applications in finance and digital filter design.
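To make the reformulation concrete, here is a minimal sketch in generic notation (the notation is supplied for illustration and is not taken from the thesis itself): a convex semi-infinite program and its nondifferentiable log-barrier subproblem read

$$\min_{x}\; f(x) \quad \text{s.t.} \quad g(x,t) \le 0 \;\;\forall\, t \in T \qquad\leadsto\qquad \min_{x}\; f(x) - \mu \log\bigl(-G(x)\bigr), \quad G(x) := \max_{t \in T} g(x,t),$$

with barrier parameter $\mu \downarrow 0$. Since $G$ is a max-term over the (infinite) index set $T$, it is in general nondifferentiable and can only be evaluated inexactly, which is why a bundle method tolerant of inexact function values and subgradients is required.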
Many combinatorial optimization problems on finite graphs can be formulated as conic convex programs, e.g. the stable set problem, the maximum clique problem or the maximum cut problem. Even NP-hard problems can be written as copositive programs. In this case the complexity is moved entirely into the copositivity constraint.
Copositive programming is a quite new topic in optimization. It deals with optimization over the so-called copositive cone, a superset of the positive semidefinite cone, for which the quadratic form x^T Ax only has to be nonnegative for nonnegative vectors x. Its dual cone is the cone of completely positive matrices, which consists of all matrices that can be decomposed into a sum of outer products of nonnegative vectors.
The related optimization problems are linear programs with matrix variables and cone constraints.
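In standard notation (supplied here for illustration, not quoted from the thesis), these two cones are

$$\mathcal{COP}_n = \bigl\{A \in \mathcal{S}^n : x^\top A x \ge 0 \text{ for all } x \ge 0\bigr\}, \qquad \mathcal{CP}_n = \mathcal{COP}_n^{*} = \Bigl\{\textstyle\sum_{k} x_k^{\,} x_k^\top : x_k \ge 0\Bigr\},$$

where $\mathcal{S}^n$ denotes the symmetric $n \times n$ matrices. A copositive program then minimizes a linear objective $\langle C, X\rangle$ subject to linear constraints $\langle A_i, X\rangle = b_i$ and the cone constraint $X \in \mathcal{COP}_n$ (or $X \in \mathcal{CP}_n$).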
However, some optimization problems can be formulated as combinatorial problems on infinite graphs. For example, the kissing number problem can be formulated as a stable set problem on a circle.
In this thesis we will discuss how the theory of copositive optimization can be lifted up to infinite dimension. For some special cases we will give applications in combinatorial optimization.
Arctic and Antarctic polynya systems are of high research interest since extensive new ice formation takes place in these regions. The monitoring of polynyas and of ice production is crucial with respect to the changing sea-ice regime. The thin-ice thickness (TIT) distribution within polynyas controls the amount of heat that is released to the atmosphere and therefore has an impact on ice-production rates. This thesis presents an improved method to retrieve thermal-infrared thin-ice thickness distributions within polynyas. TIT with a spatial resolution of 1 km × 1 km is calculated using the MODIS ice-surface temperature and atmospheric model variables within the Laptev Sea polynya for the winter periods 2007/08 and 2008/09. The improvement of the algorithm is focused on the surface-energy flux parameterizations. Furthermore, a thorough sensitivity analysis is applied to quantify the uncertainty in the thin-ice thickness results. An absolute mean uncertainty of ±4.7 cm is calculated for ice below 20 cm thickness. Furthermore, advantages and drawbacks of using different atmospheric data sets are investigated. Daily MODIS TIT composites are computed to fill the data gaps arising from clouds and shortwave radiation. The resulting maps cover on average 70 % of the Laptev Sea polynya. An intercomparison of MODIS and AMSR-E polynya data indicates that the issue of spatial resolution is essential for accurately deriving polynya characteristics. Monthly fast-ice masks are generated using the daily TIT composites. These fast-ice masks are implemented into the coupled sea-ice/ocean model FESOM. An evaluation of FESOM sea-ice concentrations shows that a prescribed high-resolution fast-ice mask is necessary for an accurate polynya location. However, for a more realistic simulation of other small-scale sea-ice features, further model improvements are required. The retrieval of daily high-resolution MODIS TIT composites is an important step towards a more precise monitoring of thin sea ice and sea-ice production. Future work will address a combined remote sensing and model assimilation method to simulate fully covered thin-ice thickness maps that enable the retrieval of accurate ice-production values.
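As background, thermal-infrared TIT retrievals of this kind typically invert a surface energy balance for the ice thickness h; a generic sketch (the thesis' exact flux parameterizations differ and are its main subject) is

$$\frac{k_i\,(T_f - T_s)}{h} \;=\; -\bigl(Q_{sw} + Q_{lw} + Q_{sh} + Q_{lh}\bigr),$$

where $T_s$ is the satellite-derived ice-surface temperature, $T_f$ the freezing temperature of sea water, $k_i$ the heat conductivity of ice, and the right-hand side collects the shortwave, longwave, sensible and latent heat fluxes computed from atmospheric model variables; solving for $h$ yields the thin-ice thickness.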
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussions on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g. what the optimal common liability is. Other questions arise for the time after the introduction. The impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified, which would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
Digital technologies have become central to social interaction and accessing goods and services. Development strategies and approaches to governance have increasingly deployed self-labelled ‘smart’ technologies and systems at various spatial scales, often promoted as rectifying social and geographic inequalities and increasing economic and environmental efficiencies. These have also been accompanied by similarly digitalized commercial and non-profit offers, particularly within the sharing economy. Concern has grown, however, over possible inequalities linked to their introduction. In this paper we critically analyse sharing economies' contribution to more inclusive, socially equitable and spatially just transitions. Conceptually, this paper brings together literature on sharing economies, smart urbanism and just transitions. Drawing on an explorative database of sharing initiatives within the cross-border region of Luxembourg and Germany, we discuss aspects of sustainability as they relate to distributive justice through spatial accessibility, intended benefits, and their operationalization. The regional analysis shows the diversity of sharing models, how they are appropriated in different ways and how intent and operationalization matter in terms of potential benefits. Results emphasize the need for more fine-grained, qualitative research revealing who is, and is not, participating and benefitting from sharing economies.
This dissertation focuses on the link between labour market institutions and precautionary savings. It evaluates whether private households react to changes in social insurance provision, such as the income replacement in case of unemployment, with increased saving for precautionary reasons. The dissertation consists of three self-contained chapters, each focusing on slightly different aspects of the topic. The first chapter, titled "Precautionary saving and the (in)stability of subjective earnings uncertainty", empirically examines the influence of future income uncertainty on household saving behavior. Numerous cross-section studies on precautionary saving use subjective expectations regarding the income variance one year ahead as a proxy for income uncertainty. Using such proxies observed only at one point in time, however, may give rise to biased estimates of precautionary wealth if expectations are not stable over time. Survey data from the Dutch DNB Household Survey suggest that subjective future income distributions are not stable over the mid-term. Moreover, in this study I contrast estimates of precautionary wealth using the variation coefficient observed at one point in time with those using a simple mid-term average. Estimates of precautionary wealth based on the average are about 40% to 80% higher than the estimates using the variation coefficient observed only once. In addition, wealth accumulation for precautionary reasons is estimated for different parts of the income distribution. The share of precautionary wealth is highest for households at the center of the income distribution. By linking saving behaviour with unemployment insurance, the following chapters then shed some light on an issue that has largely been neglected in the literature on labour market institutions so far. Whereas the third chapter models the relevance of unemployment insurance for income uncertainty and intertemporal decision making during institutional reform processes, chapter 4 seeks to establish empirically a relationship between saving behavior and unemployment insurance. Social insurance, especially unemployment insurance, provides agents with income insurance against non-marketable income risks. Since the early 1990s, reform measures such as the more activating policies suggested by the OECD Jobs Study in 1994 have been observed in Europe. In the third chapter it is argued that such changes in unemployment insurance reduce public insurance and increase income uncertainty. Moreover, a simple three-period model is discussed which links a welfare state reform to agents' saving decisions as one possible reaction to self-insure against income risk. Two sources of uncertainty seem to be important in this context: (1) uncertain results of the reform process concerning the replacement rate, and (2) uncertainty regarding the timing of information about the content of the reform. It can be shown that the precautionary motive for saving explains an increased accumulation of capital in times of reform activity. In addition, early information about the expected replacement rate increases agents' utility and reduces under- and oversaving. Following the argument of the previous chapters that an important feature of labour market institutions in modern welfare states is to provide cash transfers as income replacement in case of unemployment, it is hypothesised that unemployment benefits reduce the motive to save for precautionary reasons.
Based on consumer sentiment data from the European Commission's Consumer Survey, chapter four finally provides some evidence that aggregate saving intentions are significantly influenced by unemployment benefits. It can be shown that higher benefits lower the intention to save.
The impacts of intense urbanization and associated urban land-use change along coastlines are vast and unprecedented. Several coasts of the world have been subjected to human-induced coastal changes, and it is imperative to monitor, assess and quantify them. This paper provides state-of-the-art discourses on the changing dynamics of urban land use driven by the forces of urbanization. Drawing on extant literature, mainly from Web of Science and Google Scholar, the status quo of the spatio-temporal dynamics of urbanization and urban change processes is explored with specific focus on the global level, Africa, Ghana and the actual case of the Accra coast. Findings show that whilst urbanization continues to increase exponentially, urban land also continues to change markedly. Current trends and patterns show that the changing urban dynamics are distinctly different from those of the past. In particular, the rate, magnitude, geographic location, urban forms and functions are changing. In the specific case of the Accra coast, there is a general trend of urbanization moving outwards, i.e. from the core city centre towards the peripheral areas. Additionally, the spatial urban pattern is dominated by urban sprawl, characterized by the cyclical process of diffusion and coalescence. The processes of urbanization are further exacerbated within coastal areas, with a new and unique spatial urban form, “tourism urbanization”, emerging. This new urban form is largely driven by the rapid expansion of tourist infrastructure, developing at the instance of government policy to develop coastal tourism. In addition, the coastal conurbation of Accra-Tema is a powerful hub for industrial and commercial activities, which is drawing a huge “humanline” towards the coastline. The literature illustrates that contemporary approaches and conceptualizations for urbanization and urban land-use change analysis should be extended beyond the mere focus on statistical classifications of cities in different size categories. With the urban fringe spreading outwards, it should be kept in mind that new forms of urban settlements of varying sizes are emerging. Considering the multiple scales, magnitudes and rates involved, as well as the geospatial patterns of urban change processes, experimental case studies that include coastal cities, peri-urban fringes and interconnections with rural areas across a range of urbanization processes are essential and very urgent.
This paper provides an overview of five major shifts in urban water supply governance in relation to changing paradigms in the water sector as a whole and in water-related research: i) the municipal hydraulic paradigm in the Global North; ii) its travel to cities in the Global South; iii) the shift from government to governance; iv) the (private) utility model and v) its contestation. The articulation of each shift in the Ghanaian context is described from the creation of the first water supply system during colonial time to the recent contestation against private corporate sector participation. Current challenges are outlined together with new pathways for researching urban water governance. The paper is based on a literature review conducted in 2015 and serves as a background study for further research within the WaterPower project.
As in many other cities of the Global South, in Accra and its Greater Metropolitan Area (GAMA) water provision for drinking, domestic and productive uses is coproduced by multiple provisioning and delivery modalities. This paper contributes to the overall understanding of sociospatial conditions of urban water (in)security in GAMA. By looking at the geography of infrastructure and inequalities in water access, it seeks to identify patterns of uneven access to water. The first part provides an overview of urban water supply in GAMA, focusing on water infrastructure and the perspective of water providers. In the second part, households’ access strategies are discussed by combining both quantitative and qualitative perspectives. The paper brings together literature research and empirical material collected during fieldwork in the Ghanaian capital city.
Stakeholder Mapping
(2016)
This report presents the results of a stakeholder mapping exercise carried out in the WaterPower project. The mapping was conducted for the following main research areas of the project: water supply, land use planning and management, wetland management and climate change adaptation/disaster risk reduction. The report gives an overview of the stakeholders that play a role in these respective areas and identifies those who have concomitant responsibilities in different sectors. It represents the first step towards further involvement of stakeholders in the WaterPower project.
Stress represents a significant problem for Western societies, inducing costs as high as 3-4 % of the European gross national products, a burden that is continually increasing (WHO Briefing, EUR/04/5047810/B6). The classical stress response system is the hypothalamic-pituitary-adrenal (HPA) axis, which acts to restore homeostasis after disturbances. Two major components within the HPA axis system are the glucocorticoid receptor (GR) and the mineralocorticoid receptor (MR). Cortisol, released from the adrenal glands at the end of the HPA axis, binds to MRs and, with a 10-fold lower affinity, to GRs. Both impairment of the HPA axis and an imbalance in the MR/GR ratio enhance the risk for infection, inflammation and stress-related psychiatric disorders. Major depressive disorder (MDD) is characterised by a variety of symptoms; however, one of the most consistent findings is the hyperactivity of the HPA axis. This may be the result of lower numbers or reduced activity of GRs and MRs. The GR gene consists of multiple alternative first exons resulting in different GR mRNA transcripts, whereas for the MR only two first exons are known to date. Both the human GR promoter 1F and the homologous rat Gr promoter 1.7 seem to be susceptible to methylation during stressful early-life events, resulting in lower 1F/1.7 transcript levels. It was proposed that this is due to methylation of an NGFI-A binding site in both the rat promoter 1.7 and the human promoter 1F. The research presented in this thesis was undertaken to determine the differential expression and methylation patterns of GR and MR variants in multiple areas of the limbic brain system in the healthy and depressed human brain. Furthermore, the transcriptional control of the GR transcript 1F was investigated, as expression changes of this transcript were associated with MDD, childhood abuse and early life stress. The role of NGFI-A and several other transcription factors in 1F regulation was studied in vitro, as well as the effect of Ngfi-a overexpression on the rat Gr promoter 1.7 in vivo. The susceptibility of several GR promoters to epigenetic programming was investigated in MDD. In addition, changes in methylation levels in response to a single acute stressor were determined in rodents. Our results showed that GR and MR first exon transcripts are differentially expressed in the human brain, but this is not due to epigenetic programming. We showed that NGFI-A has no effect on endogenous 1F/1.7 expression in vitro or in vivo. We provide evidence that the transcription factor E2F1 is a major element in the transcriptional complex necessary to drive the expression of GR 1F transcripts. In rats, highly individual methylation patterns in the paraventricular nucleus of the hypothalamus (PVN) suggest that these are not related to the stressor but can rather be interpreted as pre-existing differences. In contrast, the hippocampus showed a much more uniform epigenetic status, but is still susceptible to epigenetic modification even after a single acute stress, suggesting a differential "state" versus "trait" regulation of the GR gene in different brain regions. The results of this thesis have given further insight into the complex transcriptional regulation of GR and MR first exons in health and disease. Epigenetic programming of GR promoters seems to be involved in early life stress and acute stress in adult rats; however, the susceptibility to methylation in response to stress seems to vary between brain regions.
External capital plays an important role in financing entrepreneurial ventures, due to limited internal capital sources. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. Not only do VCs finance companies at their early stages, but they also finance entrepreneurial companies in their later stages, when companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. This dissertation uses both qualitative and quantitative research methods in order to answer how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied at different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as the different types of information available. The decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures. Later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures and that therefore constitutes new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of the decision criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) the value-added of products/services for customers, and (3) the management team's track record, demonstrating differences when compared to decision-making studies in the context of early-stage ventures.
Not only do the characteristics of a venture influence the decision to invest; additional indirect factors, such as individual characteristics or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience have an influence on individual decision-making behavior. This study also examined whether goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. In particular, it investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present. They tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors. CVCs also favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
Internet interventions have gained popularity and the idea is to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of internet interventions.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to model change from screening to registration (pretreatment) and early change (registration to week four of treatment). Three early change patterns were identified; two were characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicated that early change should be closely monitored in internet interventions, as early change may be an important indicator of treatment outcome.
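For orientation, a piecewise latent growth model of the kind described here can be written (in generic notation, supplied for illustration) as

$$y_{it} = \eta_{0i} + \eta_{1i}\, b_{1t} + \eta_{2i}\, b_{2t} + \varepsilon_{it},$$

where $b_{1t}$ loads on the pretreatment phase (screening to registration) and $b_{2t}$ on the early-treatment phase (registration to week four); Growth Mixture Modeling then allows the means of the growth factors $(\eta_{0i}, \eta_{1i}, \eta_{2i})$ to differ across a small number of latent classes, which correspond to the change patterns reported.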
Study II (Chapter 6) picked up on the idea of analyzing change patterns in internet interventions and extended it by using the Muthén-Roy model to identify change-dropout patterns. A slightly bigger sample of the dataset from Study I was analyzed (N = 483). Four change-dropout patterns emerged; a high risk of dropout was associated with both rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated with both deterioration and a good enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome in different participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on the participant group. Also, using specific measures of adherence, such as adherence to treatment components, may be crucial to detect change mechanisms in internet interventions. Fostering adherence to treatment components in participants may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.
Stress position in English words is well-known to correlate with both their morphological properties and their phonological organisation in terms of non-segmental, prosodic categories like syllable structure. While two generalisations capturing this correlation, directionality and stratification, are well established, the exact nature of the interaction of phonological and morphological factors in English stress assignment is a much debated issue in the literature. The present study investigates if and how directionality and stratification effects in English can be learned by means of Naive Discriminative Learning, a computational model that is trained using error-driven learning and that does not make any a priori assumptions about the higher-level phonological organisation and morphological structure of words. Based on a series of simulation studies, we show that neither directionality nor stratification need to be stipulated as a priori properties of words or constraints in the lexicon. Stress can be learned solely on the basis of very flat word representations. Morphological stratification emerges as an effect of the model learning that informativity with regard to stress position is unevenly distributed across all trigrams constituting a word. Morphological affix classes like stress-preserving and stress-shifting affixes are, hence, not predefined classes but sets of trigrams that have similar informativity values with regard to stress position. Directionality, by contrast, emerges as spurious in our simulations; no syllable counting or recourse to abstract prosodic representations seems to be necessary to learn stress position in English.
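To make the learning mechanism concrete, here is a minimal sketch of the error-driven (Rescorla-Wagner) update rule that underlies Naive Discriminative Learning, applied to the kind of flat representation described above: letter trigrams as cues and stress position as outcome. The toy lexicon, the outcome labels, and the parameter values are illustrative assumptions, not data or settings from the study.

```python
from collections import defaultdict

def trigrams(word):
    """Flat cue representation: letter trigrams over the boundary-padded word."""
    padded = "#" + word + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

# Toy lexicon of (word, stress position) pairs -- assumed data for illustration.
lexicon = [("banana", "stress2"), ("canada", "stress1"), ("bandana", "stress2")]
outcomes = {o for _, o in lexicon}

# Cue-outcome association weights, all starting at zero.
weights = defaultdict(float)

ALPHA_BETA = 0.1  # combined learning rate (alpha * beta in Rescorla-Wagner terms)
LAMBDA = 1.0      # maximum activation for a present outcome

for _ in range(100):  # sweep through the lexicon repeatedly
    for word, outcome in lexicon:
        cues = trigrams(word)
        for o in outcomes:
            # Current activation of outcome o from all cues present in this word.
            activation = sum(weights[(c, o)] for c in cues)
            target = LAMBDA if o == outcome else 0.0
            # Error-driven update: distribute the prediction error
            # over all cues present in this learning event.
            delta = ALPHA_BETA * (target - activation)
            for c in cues:
                weights[(c, o)] += delta

def predict(word):
    """Choose the outcome with the highest summed cue-to-outcome activation."""
    cues = trigrams(word)
    return max(outcomes, key=lambda o: sum(weights[(c, o)] for c in cues))

print(predict("banana"))  # expected after training: stress2
```

After training, trigrams that reliably discriminate between stress positions carry high weights (high informativity), while trigrams shared across outcomes stay near zero, which is the sense in which affix classes can emerge as sets of similarly informative trigrams rather than as predefined categories.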
This thesis deals with economic aspects of employees' sickness. In addition to the classical case of sickness absence, in which an employee is completely unable to work and hence stays at home, there is the case of sickness presenteeism, in which the employee comes to work despite being sick. Accordingly, the thesis at hand covers research on both sickness states, absence and presenteeism. The first section covers sickness absence and labour market institutions. Chapter 2 presents theoretical and empirical evidence that differences in the social norm against benefit fraud, the so-called benefit morale, can explain cross-country diversity in the generosity of statutory sick pay entitlements between developed countries. In our political economy model, a stricter benefit morale reduces the absence rate, with counteracting effects on the politically set sick pay replacement rate. On the one hand, less absence caused by a stricter norm makes the tax-financed insurance cheaper, leading to the usual demand-side effect and hence to more generous sick pay entitlements. On the other hand, being less likely to be absent due to a stricter norm, the voters prefer a smaller fee over more insurance. We document both effects in a sample of 31 developed countries, capturing the years from 1981 to 2010. In Chapter 3 we investigate the relationship between the existence of works councils and illness-related absence and its consequences for plants. Using individual data from the German Socio-Economic Panel (SOEP), we find that the existence of a works council is positively correlated with the incidence and the annual duration of absence. Additionally, linked employer-employee data (LIAB) suggest that employers are more likely to expect personnel problems due to absence in plants with a works council. In western Germany, we find significant effects using a difference-in-differences approach, which can be interpreted causally. The second part of this thesis covers two studies on sickness presenteeism. In Chapter 4, we empirically investigate the determinants of the annual duration of sickness presenteeism using the European Working Conditions Survey (EWCS). Work autonomy, workload and tenure are positively related to the number of sickness presenteeism days, while a good working environment comes with less presenteeism. In Chapter 5 we theoretically and empirically analyze sickness absence and presenteeism behaviour with a focus on their interdependence. Specifically, we ask whether work-related factors lead to a substitutive, a complementary or no relationship between sickness absence and presenteeism. In other words, we want to know whether changes in absence and presenteeism behaviour incurred by work-related characteristics point in opposite directions (substitutive), in the same direction (complementary), or whether they only affect either one of the two sickness states (no relationship). Our theoretical model shows that the relationship between sickness absence and presenteeism with regard to work-related characteristics is not necessarily of a substitutive nature. Instead, a complementary or no relationship can emerge as well. Turning to the empirical investigation, we find that only one out of 16 work-related factors, namely supervisor status, leads to a substitutive relationship between absence and presenteeism. A few of the other determinants are complements, while the large majority is related to either sickness absence or presenteeism only.
A basic assumption of standard small area models is that the statistic of interest can be modelled through a linear mixed model with common model parameters for all areas in the study. The model can then be used to stabilize estimation. In some applications, however, there may be different subgroups of areas, with specific relationships between the response variable and auxiliary information. In this case, using a distinct model for each subgroup would be more appropriate than employing one model for all observations. If no suitable natural clustering variable exists, finite mixture regression models may represent a solution that "lets the data decide" how to partition areas into subgroups. In this framework, a set of two or more different models is specified, and the estimation of subgroup-specific model parameters is performed simultaneously with the estimation of subgroup identity, or the probability of subgroup identity, for each area. Finite mixture models thus offer a flexible approach to accounting for unobserved heterogeneity. Therefore, in this thesis, finite mixtures of small area models are proposed to account for the existence of latent subgroups of areas in small area estimation. More specifically, it is assumed that the statistic of interest is appropriately modelled by a mixture of K linear mixed models. Both mixtures of standard unit-level and standard area-level models are considered as special cases. The estimation of mixing proportions, area-specific probabilities of subgroup identity and the K sets of model parameters via the EM algorithm for mixtures of mixed models is described. Eventually, a finite mixture small area estimator is formulated as a weighted mean of the predictions from models 1 to K, with weights given by the area-specific probabilities of subgroup identity.
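In symbols (notation supplied here for illustration), the estimator described in the last sentence is

$$\hat{\theta}_i^{\text{FM}} \;=\; \sum_{k=1}^{K} \hat{p}_{ik}\, \hat{\theta}_i^{(k)},$$

where $\hat{p}_{ik}$ is the estimated probability that area $i$ belongs to subgroup $k$ (with $\sum_{k} \hat{p}_{ik} = 1$) and $\hat{\theta}_i^{(k)}$ is the prediction for area $i$ under the $k$-th linear mixed model.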
Major threats to the Spanish Constitutional Court's independence and authority have come, first, from political parties and the media and, second, from the Catalonian secession movement. The authority and the legitimacy of the Constitutional Court were tested in the stormy proceedings on the Statute of Autonomy of Catalonia of 2006, which ended in 2010, and, above all, in the period of 2013–2017, when successive acts directed at the secession of Catalonia were recurrently challenged before the Court and subsequently overturned, and, to stop the continued disobedience of its rulings, the Court was given extended execution powers for its judgments. These new powers include the temporary replacement of any authority or public official that does not comply with a Court's ruling and the ordering of a substitutive execution through the central government. The Court declared the new powers to be consistent with the Constitution (with three dissenting votes by four constitutional judges) and even used them for the first time to enforce its prohibition of the referendum on the independence of Catalonia of 1 October 2017. Nevertheless, the Venice Commission has raised doubts about the appropriateness of those powers, which are unusual in European constitutional jurisdiction models. In the end, the Court's powers were not enough to stop the Catalonian secession process, and on 27 October 2017 the state government implemented the federal coercion clause and suspended Catalonian autonomy until new elections were held.
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners’ knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. For instance, it is unclear how learners ‘build’ constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of a L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction is comprised of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as ‘target-ing’ construction) or a to-infinitival complement (‘target-to’ construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often show choices of a complement type different from those of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012), as illustrated in (3), and it is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing these by multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. target-to and target-ing construction) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies were able to show a robust effect of frequency on the complement choice. Frequency does not only lead to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclasses of the verb, form-function contingency and other factors, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which bears more resemblance to the form-meaning mappings of native speakers.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
Up until May 2021, the post-election insecurity in Belarus had mostly been a national affair, but with Lukashenka's regime starting to retaliate against foreign actors, the crisis internationalised. This article follows the development of Belarus-Lithuania border dynamics between the 2020 Belarusian presidential election and the start of the 2022 Russian invasion of Ukraine. A qualitative content analysis of English-language articles published by the Lithuanian public broadcaster LRT shows that there were relatively few changes to the border dynamics in the period between 9 August 2020 and 26 May 2021. After 26 May 2021, the border dynamics changed significantly: The Belarusian regime started facilitating migration, and more than 4,200 irregular migrants crossed into Lithuania from Belarus in 2021. In response, Lithuania reinforced its border protection and tried to deal with the irregular migration flows. Calls for action were made, protests were held, and the country received international support.
The positive consequences of performance pay for wages and productivity have been well documented in recent decades. Yet, the increased pressure and work commitment associated with performance pay suggest that performance pay may have unintended negative consequences for workers' health and well-being. As firms worldwide increasingly use performance pay, it becomes crucial to evaluate both its positive and negative consequences. Thus, Chapters 2 – 4 of this doctoral thesis investigate the unintended adverse consequences of performance pay for stress, alcohol consumption, and loneliness, respectively. Chapter 5 investigates the positive role of performance pay in mitigating the overeducation wage penalty and enhancing the labor market position of overeducated workers.
In Chapter 2, together with John S. Heywood and Uwe Jirjahn, I examine the hypothesis that performance pay is positively associated with employee stress. Using unique survey data from the German Socio-Economic Panel, I find performance pay consistently and importantly associates with greater stress even controlling for a long list of economic, social, and personality characteristics. The finding also holds in instrumental variable estimations accounting for the potential endogeneity of performance pay. Moreover, I show that risk tolerance and locus of control moderate the relationship between performance pay and stress. Among workers receiving performance pay, the risk tolerant and those believing they can control their environment suffer to a lesser degree from stress.
Chapter 3 examines the relationship between performance pay and alcohol use. Together with John S. Heywood and Uwe Jirjahn, I examine the hypothesis that alcohol use as “self-medication” is a natural response to the stress and uncertainty associated with performance pay. Using data from the German Socio-Economic Panel, I find that the likelihood of consuming each of four types of alcohol (beer, wine, spirits, and mixed drinks) is higher for those receiving performance pay even controlling for a long list of economic, social, and personality characteristics and in sensible instrumental variable estimates. I also show that the number of types of alcohol consumed is larger for those receiving performance pay and that the intensity of consumption increases. Moreover, I find that risk tolerance and gender moderate the relationship between performance pay and alcohol use.
In Chapter 4, I examine the hypothesis that performance pay increases the risk of employee loneliness due to increased stress, job commitment, and uncooperativeness associated with performance pay. Using the German Socio-Economic Panel, I find that performance pay is positively associated with both the incidence and intensity of loneliness. Correspondingly, performance pay decreases the social life satisfaction of workers. The findings also hold in instrumental variable estimations addressing the potential endogeneity of performance pay and in various robustness checks. Interestingly, investigating the potential role of moderating factors reveals that the association between performance pay and loneliness is particularly large for private sector employees.
Finally, in Chapter 5, I study the association between overeducation, performance pay, and wages. Overeducated workers are more productive and have higher wages in comparison to their adequately educated coworkers in the same jobs. However, they face a series of challenges in the labor market, including lower wages in comparison to their similarly educated peers who are in correctly matched jobs. Yet, less consensus exists over the adjustment mechanisms to overcome the negative consequences of overeducation. In this study, I examine the hypotheses that overeducated workers sort into performance pay jobs as an adjustment mechanism and that performance pay enhances their wages. Using the German Socio-Economic Panel, I show that overeducation associates with a higher likelihood of sorting into performance pay jobs and that performance pay moderates the wages of overeducated workers positively. It also holds in endogenous switching regressions accounting for the potential endogeneity of performance pay. Importantly, I show that the positive role of performance pay is particularly larger for the wages of overeducated women.
The Constitution of Latvia
(2004)
The article offers a concise view of the constitution of the Baltic state of Latvia. After an introduction focusing on constitutional history, the author explores basic principles and human rights in the text of the constitution and explains the main constitutional bodies and their functions in the legislative, executive and judiciary branches. Chapters on citizenship and religious rights round off this introduction to the Latvian Constitution.
English Academic Literary Discourse in South Africa 1958–2004: A Review of 11 Academic Journals
(2007)
This study examines the discipline of English studies in South Africa through a review of articles published in 11 academic journals over the period 1958–2004. The aims are to gain a better understanding of the functions of peer-reviewed journals, to reveal the presence of rules governing discursive production, and to uncover the historical shifts in approach and choice of disciplinary objects. The Foucauldian typology of procedures determining discursive production, that is: exclusionary, internal and restrictive procedures, is applied to the discipline of English studies in order to elucidate the existence of such procedures in the discipline. Each journal is reviewed individually and comparatively. Static and chronological statistical analyses are undertaken on the articles in the 11 journals in order to provide empirical evidence to subvert the contention that the discipline is unruly and its choice of objects random. The cumulative results of this analysis are used to describe the major shifts primarily in ranges of disciplinary objects, but also in metadiscursive and thematic debates. Each of the journals is characterised in relation to what the overall analysis reveals about the mainstream developments. The two main findings are that, during the period under review, South African imaginative written artefacts have moved from a marginal position to the centre of focus of the discipline; and that the conception of what constitutes the "literary" has returned to a pre-Practical criticism definition, broadly inclusive of a variety of types of artefact including imaginative writing, such as autobiography, letters, journals and orature.
This working paper outlines analytical pathways that could contribute to deepening the understanding of water inequalities in cities of the Global South. It brings together the status quo of research on water inequalities in Accra, the capital of Ghana, and studies on Environmental Justice. In doing so, it argues for the need to analytically distinguish between the terms ‘(in)equality’ and ‘(in)justice’. Studying everyday water practices and perspectives on water (in)justice of different stakeholders would be a suitable entry point for an in-depth ethnographic study that analytically separates water inequalities and water injustices but considers their interlinkages. The working paper is based on a literature review conducted in 2015 in the scope of the WaterPower project.
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The considered problems are discretized problems of optimal control that are governed by discretized partial differential equations. Aspects of special interest in this work are inaccurate function evaluations and the ensuing numerical treatment within an optimization algorithm. Methods for noisy functions are appropriate for the considered practical application. Also, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear system. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out in joint work with TGU GmbH, a company of consulting engineers for ground water and water resources.
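As a rough illustration of the kind of linear algebra described above, here is a minimal sketch, under invented dimensions and matrices, of a block-diagonal preconditioner for a symmetric KKT (saddle-point) system used inside the Krylov solver MINRES. The thesis' actual preconditioners exploit the problem structure in more elaborate ways; this only shows the generic pattern.

```python
# Hedged sketch: block-diagonal preconditioning of a KKT (saddle-point)
# system [[A, B^T], [B, 0]] solved with MINRES. All matrices are invented.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 50
A = sp.diags(np.linspace(1.0, 10.0, n))            # SPD (1,1) block (stand-in)
B = sp.random(m, n, density=0.05, format="csr", random_state=0)
K = sp.bmat([[A, B.T], [B, None]], format="csr")   # symmetric indefinite KKT matrix
rhs = np.ones(n + m)

# Preconditioner diag(A, S) with S an approximate Schur complement;
# a tiny shift guards against rank deficiency of the random B.
S = (B @ sp.diags(1.0 / A.diagonal()) @ B.T + 1e-8 * sp.eye(m)).tocsc()
A_solve = spla.factorized(sp.csc_matrix(A))
S_solve = spla.factorized(S)

def apply_prec(v):
    # Apply diag(A, S)^{-1} blockwise.
    return np.concatenate([A_solve(v[:n]), S_solve(v[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, rhs, M=M)
print("converged" if info == 0 else f"minres flag {info}")
```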
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet, their capacity for theory-driven research is still to be constituted. In the present cumulative dissertation project, I deliberate the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research at the intersection of traditional methods and digital behavioural trace data. Firstly, based on a systematic literature review of empirical work, I display the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract hypotheses and constructs from this literature review and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof of concept for combining a theoretical framework from conspiracy theory research (as a specific form of radicalisation) with digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with the semantic meaning of textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle and practical implications.
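The "graph of hypotheses" idea from the second study can be pictured with a small sketch: constructs become nodes, hypothesised determinant-outcome links become directed edges, and a centrality measure serves as a crude proxy for the relative importance of a construct. The construct names below are invented placeholders, not those of the review.

```python
# Hedged sketch of a hypothesis graph; nodes and edges are invented examples.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("grievance", "radical_attitudes"),
    ("online_network_exposure", "radical_attitudes"),
    ("radical_attitudes", "radical_behaviour"),
    ("online_network_exposure", "radical_behaviour"),
])
# Degree centrality as a crude proxy for how many hypotheses touch a construct.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
```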
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In the following dissertation, I address these two issues in a series of ten studies. Firstly, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Secondly, I present the development and empirical testing of pictures designed specifically to assess the implicit autonomy motive, together with an examination of their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of these studies solidify the foundation of the OMT and of how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
When do anorexic patients perceive their body as too fat? Aggravating and ameliorating factors
(2019)
Objective
Our study investigated body image representations in female patients with anorexia nervosa and healthy controls using a size estimation with pictures of their own body. We also explored a method to reduce body image distortions through right-hemispheric activation.
Method
Pictures of participants’ own bodies were shown in the left or right visual field for 130 ms after presentation of neutral, positive, or negative word primes, which could be self-relevant or not, with the task of classifying the picture as “thinner than”, “equal to”, or “fatter than” one’s own body. Subsequently, the left or right hemisphere was activated through right- or left-hand muscle contractions for 3 min, respectively. Finally, participants completed the size estimation task again.
Results
The distorted “fatter than” body image was found only in patients and only when a picture of their own body appeared in the right visual field (left hemisphere) and was preceded by negative self-relevant words. This distorted perception of the patients’ body image was reduced after left-hand muscle contractions (right-hemispheric activation).
Discussion
To reduce body image distortions, it is advisable to find methods that help anorexia nervosa patients to increase their self-esteem. The body image distortions were ameliorated after right-hemispheric activation. A related method to prevent distorted body image representations in these patients may be Eye Movement Desensitization and Reprocessing (EMDR) therapy.
Background: The body-oriented therapeutic approach Somatic Experiencing® (SE) treats posttraumatic symptoms by changing the interoceptive and proprioceptive sensations associated with the traumatic experience. Filling a gap in the landscape of trauma treatments, SE has recently attracted growing interest in research and therapeutic practice.
Objective: To date, there is no literature review of the effectiveness and key factors of SE. This review aims to summarize initial findings on the effectiveness of SE and to outline method-specific key factors of SE.
Method: To gain a first overview of the literature, we conducted a scoping review including studies published until 13 August 2020. We identified 83 articles, of which 16 met the inclusion criteria and were systematically analysed.
Results: Findings provide preliminary evidence for positive effects of SE on PTSD-related symptoms. Moreover, initial evidence suggests that SE has a positive impact on affective and somatic symptoms and measures of well-being in both traumatized and non-traumatized samples. Practitioners and clients identified resource-orientation and use of touch as method-specific key factors of SE. Yet, an overall assessment of study quality as well as a Cochrane analysis of risk of bias indicate that the overall study quality is mixed.
Conclusions: The results concerning the effectiveness and method-specific key factors of SE are promising, yet they require more support from unbiased RCT research. Future research should focus on filling this gap.
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. Finally, PFR gave rise to the precision mental health research movement that is trying to tailor treatments to individual patients by making data-driven and algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as the selection of treatment strategies and adaptation of treatment. The present work summarizes three studies that aim to generate different prediction models for treatment personalization that can be applied to practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrated the potential of ML to enhance dropout predictions, but also highlighted that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I’s findings to enhance the prediction of dropout rates. Data from the initial two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm. The findings demonstrated that the model was significantly more effective at predicting dropout when using observer ratings with a Cohen’s d of up to .65 and more effective than the model in Study I, despite the smaller sample (N = 259). These results indicated that generating models could be improved by employing various data sources, which provide better foundations for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model using data from 794 cases that experienced a SG. A control group of the same size was also used to quantify and relativize the identified predictors by their general influence on therapy outcomes. The results indicated that there are seven key predictors that have varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggested that a directive approach is more likely to lead to better outcomes after an SG, and that alliance ruptures can be effectively compensated for. However, these effects
were reversed in the control group. The results of the three studies are discussed regarding their usefulness to support clinical decision-making and their implications for the implementation of precision mental health.
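For orientation, here is a minimal sketch of the Study I setup: an ensemble of random forest and nearest-neighbour models evaluated by cross-validation. The data, features and settings below are invented stand-ins, not the clinical data or the exact ensemble of the study.

```python
# Hedged sketch of a random forest + k-nearest-neighbour voting ensemble for
# dropout prediction; the data are random stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2543, 20))          # intake variables (invented)
y = rng.integers(0, 2, size=2543)        # 1 = dropout (invented labels)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=25))],
    voting="soft",                        # average predicted probabilities
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="balanced_accuracy").mean())
```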
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability (social, economic, and environmental) in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a niche-market-leading subgroup of Mittelstand firms. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined. A higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed. A distinction is made between symbolic (compensation of CO₂ emissions) and substantive (reduction of CO₂ emissions) decarbonization strategies. Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive decarbonization strategies, and with the relationship being influenced by family ownership.
This study focuses on the representation of British South Asian identities in contemporary British audiovisual media. It attempts to answer the question whether these identities are represented as hybrid, heterogeneous and ambivalent, or whether these contemporary representations follow in the tradition of colonial and postcolonial racialism. Racialised depictions of British South Asians were the norm not only in the colonial but also in the postcolonial era until the rise of the Black British movement, whose successes have also been acknowledged in the field of representation. However, these achievements have to be scrutinized again, especially in the context of the post-9/11 world, rising Islamophobia, and new forms of institutionalized discrimination on the basis of religion. Since the majority of British Muslims are of South Asian origin, this study examines a varied genre of popular audiovisual media texts to ask whether religious origin now serves, alongside racial belonging, i.e. skin colour, as a marker of difference, and whether old stereotypes associated with racialised representation are being perpetuated in current depictions.
Evapotranspiration (ET) is one of the most important variables in hydrological studies. The ET process involves both energy exchange and water transfer. ET consists of transpiration and evaporation, with transpiration dominating; in forest regions in particular, the ratio of transpiration to ET is generally 80-90 %. Meteorological variables, vegetation properties, precipitation and soil moisture are the critical factors influencing ET. The study area is located in the forested part of the Nahe catchment (Rhineland-Palatinate, Germany). The Nahe catchment is highly wooded: about 54.6 % of its area is covered by forest, with deciduous and coniferous forest as the two primary types. The hydrological model WaSiM-ETH was employed for a long-term simulation of the Nahe catchment from 1971 to 2003. In WaSiM-ETH, the potential evapotranspiration (ETP) is first calculated by the Penman-Monteith equation and subsequently reduced according to the soil water content to obtain the actual evapotranspiration (ETA). The Penman-Monteith equation has been widely used and recommended for ETP estimation. The difficulties in applying this equation are the high demand for ground-measured meteorological data and the determination of the surface resistance. A method combining remote sensing images with ground-measured meteorological data was also used to retrieve ETA. This method is based on surface properties such as surface albedo, fractional vegetation cover (FVC) and land surface temperature (LST), from which the latent heat flux (LE, corresponding to ETA) is obtained through the surface energy balance equation. LST, a critical variable for estimating the surface energy components, was retrieved from the TM/ETM+ thermal infrared (TIR) band. Because of the high-quality and cloud-free requirements for TM/ETM+ data selection and the 16-day overpass cycle of the TM/ETM+ sensors, images were available for only five dates during the 1971-2003 model run: May 15, 2000; July 05, 2001; and July 19, August 04 and September 21, 2003. The climate conditions of 2000, 2001 and 2003 were wet, medium wet and dry, respectively. The remote sensing-retrieved observations are therefore few and discontinuous in time but cover multiple climate conditions. Aerodynamic resistance and surface resistance are the two most important parameters in the Penman-Monteith equation. For forest areas, the aerodynamic resistance is calculated in the model as a function of wind speed. Since transpiration and evaporation are calculated separately by the Penman-Monteith equation in the model, the surface resistance was divided into a canopy surface resistance rsc, related to plant transpiration, and a soil surface resistance rse, related to bare soil evaporation. Interception evaporation was not taken into account because of its negligible contribution to the ET rate under dry-canopy (no rainfall) conditions. Based on the remote sensing-retrieved observations, rsc and rse were calibrated in the WaSiM-ETH model for both forest types: for deciduous forest, rsc = 150 sm−1 and rse = 250 sm−1; for coniferous forest, rsc = 300 sm−1 and rse = 650 sm−1. A sensitivity analysis on rsc and rse determined their appropriate value ranges (annual maxima): for deciduous forest, [100, 225] sm−1 for rsc and [50, 450] sm−1 for rse; for coniferous forest, [225, 375] sm−1 for rsc and [350, 1200] sm−1 for rse.
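For reference, the Penman-Monteith equation used in the ETP step is commonly written as follows. The notation here is the standard one (net radiation R_n, soil heat flux G, saturation deficit e_s − e_a, aerodynamic and surface resistances r_a and r_s) and may differ in detail from the thesis.

```latex
% Penman-Monteith equation (common form; notation may differ from the thesis).
% \lambda ET: latent heat flux, \Delta: slope of the saturation vapour
% pressure curve, \rho_a: air density, c_p: specific heat of air at constant
% pressure, \gamma: psychrometric constant.
\lambda ET \;=\; \frac{\Delta\,(R_n - G) \;+\; \rho_a c_p\,\dfrac{e_s - e_a}{r_a}}
                      {\Delta \;+\; \gamma\left(1 + \dfrac{r_s}{r_a}\right)}
```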
Because the observations are few in number but span multiple climate conditions, the statistical indices used for model performance evaluation need to be sensitive to extreme values. In this study, boxplots were found to exhibit the model performance well at both the spatial and the temporal scale. The Nash-Sutcliffe efficiency (NSE), the RMSE-observations standard deviation ratio (RSR), percent bias (PBIAS), mean bias error (MBE), mean variance of the error distribution (S2d), index of agreement (d) and root mean square error (RMSE) were found to be appropriate statistical indices providing additional evaluation information to the boxplots. The model performance can be judged as satisfactory if NSE > 0.5, RSR ≤ 0.7, |PBIAS| < 12, |MBE| < 0.45, S2d < 1.11, d > 0.79 and RMSE < 0.97. rsc played a more important role than rse in the ETP and ETA estimation by the Penman-Monteith equation, which is attributed to the fact that transpiration dominates ET. The ETP estimates correlated most strongly with relative humidity (RH), followed by air temperature (T), relative sunshine duration (SSD) and wind speed (WS). Under wet or medium wet climate conditions, the ETA estimates correlated most strongly with T, followed by RH, SSD and WS. Under water-stress conditions, the correlations between ETA and the individual meteorological variables were very small.
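The goodness-of-fit indices named above follow standard hydrological definitions; here is a minimal sketch of three of them, with definitions as in common usage (e.g. Moriasi et al. 2007), which may differ in detail from the thesis.

```python
# Hedged sketch of common hydrological goodness-of-fit indices; the sample
# observation and simulation values are invented.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, < 0 worse than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [1.2, 2.3, 3.1, 2.8]
sim = [1.0, 2.5, 3.0, 2.9]
print(nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```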
The glucocorticoid (GC) cortisol, the main mediator of the hypothalamic-pituitary-adrenal axis, has many implications in metabolism, stress response and the immune system. GC function is mediated mainly via the glucocorticoid receptor (GR), which binds as a transcription factor to glucocorticoid response elements (GREs). GCs are strong immunosuppressants and are used to treat inflammatory and autoimmune diseases. Long-term usage can lead to several irreversible side effects, which makes improved understanding indispensable and warrants the adaptation of current drugs. Several large-scale gene expression studies have been performed to gain insight into GC signalling; studies at the proteomic level, however, had not yet been performed. The effects of cortisol on monocytes and macrophages were studied in the THP-1 cell line using 2D fluorescence difference gel electrophoresis (2D DIGE) combined with MALDI-TOF mass spectrometry. More than 50 cortisol-modulated proteins were identified, belonging to five functional groups: cytoskeleton, chaperones, immune response, metabolism, and transcription/translation. Multiple GREs were found in the promoters of their corresponding genes (+10 kb/-0.2 kb promoter regions including all alternative promoters available within the Database for Transcription Start Sites (DBTSS)). High-quality GREs were observed mainly in cortisol-modulated genes, corroborating the proteomics results. Differential regulation of selected immune response-related proteins was confirmed by qPCR and immunoblotting. All immune response-related proteins (MX1, IFIT3, SYWC, STAT3, PMSE2, PRS7) that were induced by LPS were suppressed by cortisol and belong mainly to classical interferon target genes. MX1 was selected for detailed expression analysis since new isoforms had been identified by proteomics. FKBP51, known to be induced by cortisol, was identified as the most strongly differentially expressed protein and contained the highest number of strict GREs. Genomic analysis of five alternative FKBP5 promoter regions suggested GC inducibility of all transcripts. 2D DIGE combined with 2D immunoblotting revealed the existence of several previously unknown FKBP51 isoforms, possibly resulting from these transcripts. Additionally, multiple post-translational modifications were found, which could lead to different subcellular localization in monocytes and macrophages, as seen by confocal microscopy. Similar results were obtained for the different cellular subsets of human peripheral blood mononuclear cells (PBMCs). FKBP51 was found to be constitutively phosphorylated, with up to 8 phosphosites in CD19+ B lymphocytes. Differential co-immunoprecipitation for cytoplasm and nucleus allowed us to identify new potential interaction partners. Nuclear FKBP51 was found to interact with myosin 9, and cytosolic FKBP51 with TRIM21 (synonym: Ro52, Sjögren's syndrome antigen). The GR was found to interact with THOC4 and YB1, two proteins implicated in mRNA processing and transcriptional regulation. We also applied proteomics to study rapid non-genomic effects of acute stress in a rat model. The nuclear proteome of the thymus was investigated after 15 min of restraint stress and compared to the non-stressed control. Most of the identified proteins were transcriptional regulators found to be enriched in the nucleus, probably to assist gene expression in an appropriate manner. The proteomic approach allowed us to further understand the cortisol-mediated response in monocytes/macrophages.
We identified several new target proteins, but we also found new protein variants and post-translational modifications that need further investigation. Detailed study of FKBP51 and the GR indicated a complex regulation network, which opened a new field of research. We identified new variants of the anti-viral response protein MX1 that display differential expression and phosphorylation across the cellular compartments. Further, proteomics allowed us to follow the very early effects of acute stress, which occur prior to gene expression. The nuclear thymocyte proteome of restraint-stressed rats revealed an active preparation for subsequent gene expression. Proteomics was successfully applied to study differential protein expression, to identify new protein variants and phosphorylation events, and to follow translocation. New aspects for future research in the field of cortisol-mediated immune modulation have been added.
This work addresses the algorithmic tractability of hard combinatorial problems. Basically, we consider NP-hard problems, for which no polynomial time algorithm is known. Several algorithmic approaches deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice they often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain: exponential time algorithms and parameterized algorithms. The first approach seeks to design exponential time algorithms that are more clever than the trivial one (which simply enumerates all solution candidates). Typically, the naive enumerative approach yields an algorithm with run time O*(2^n). The general task is thus to construct algorithms with a run time of the form O*(c^n) where c < 2. The second approach considers an additional parameter k besides the input size n. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization interprets k as an upper (respectively lower) bound on the solution size of a minimization (respectively maximization) problem. A parameterized algorithm should then solve the problem in time f(k)·n^β, where β is a constant and f is independent of n. In principle, this method aims to confine the combinatorial difficulty of the problem to the parameter k (if possible). The basic hypothesis is that k is small with respect to the overall input size. In both fields, a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way. They typically select an entity of the input and create two new subproblems: one where this entity is considered part of the future solution and one where it is excluded. Fixing this entity may in turn fix other entities in both cases; if so, the number of traversed candidate solutions is smaller than the whole solution space. The visited solutions can be arranged as a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of the search trees. In the field of exponential time algorithms, a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial run time upper bound. In the field of parameterized algorithms, by contrast, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in this field and points out what changes are needed to apply it successfully. Further, exponential time algorithms for hard problems are presented in which Measure&Conquer is applied. Another aspect is a formalization (and generalization) of the notion of a search tree; it is shown that for certain problems such a formalization is extremely useful.
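As a concrete instance of the branching technique described above, here is a minimal sketch of the textbook branching algorithm for Vertex Cover. It illustrates the generic search-tree pattern only; it is not one of the thesis' Measure&Conquer analyses, which refine such bounds with cleverer measures.

```python
# Minimal sketch of the standard branching algorithm for Vertex Cover:
# every edge (u, v) must be covered, so branch on "u in the cover" vs.
# "v in the cover". The search tree has at most 2^k leaves, i.e. O*(2^k).
def vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover of size <= k."""
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    rest_u = [(a, b) for (a, b) in edges if u not in (a, b)]  # u enters the cover
    rest_v = [(a, b) for (a, b) in edges if v not in (a, b)]  # v enters the cover
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# A triangle needs two vertices to cover all three edges:
assert vertex_cover([(1, 2), (2, 3), (1, 3)], 2)
assert not vertex_cover([(1, 2), (2, 3), (1, 3)], 1)
```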
Attitudes are "the most distinctive and indispensable concept in contemporary social psychology" (Allport, 1935, p. 798). This outstanding position of the attitude concept in social cognitive research is reflected not only in the innumerable studies focusing on this concept but also in the huge number of theoretical approaches that have been put forth since then. Yet it is still an open question what attitudes actually are. That is, the question of how attitude objects are represented in memory has not been unequivocally answered to date (e.g., Barsalou, 1999; Gawronski, 2007; Pratkanis, 1989, Chapter 4). In particular, researchers differ strongly with respect to their assumptions on the content, format and structural nature of attitude representations (Ferguson & Fukukura, 2012). This prevailing uncertainty about what actually constitutes our likes and dislikes is strongly dovetailed with the question of which processes result in the formation of these representations. In recent years, this issue has mainly been addressed in evaluative conditioning (EC) research. In a standard EC paradigm, a neutral stimulus (conditioned stimulus, CS) is repeatedly paired with an affective stimulus (unconditioned stimulus, US). The pairing of stimuli then typically results in changes in the evaluation of the CS corresponding to the evaluative response to the US (De Houwer, Baeyens, & Field, 2005). This experimental approach to the formation of attitudes has primarily been concerned with the question of how the representations underlying our attitudes are formed. However, which processes operate on the formation of such an attitude representation is not yet understood (Jones, Olson, & Fazio, 2010; Walther, Nagengast, & Trasselli, 2005). Indeed, there are several ideas on how CS-US pairs might be encoded in memory. Notwithstanding the importance of these theoretical ideas, looking at the existing empirical work in the research area of EC (for reviews see Hofmann, De Houwer, Perugini, Baeyens, & Crombez, 2010; De Houwer, Thomas, & Baeyens, 2001) leaves one with the impression that scientists have skipped the basic processes, by which I especially mean the attentional processes involved in the encoding of CSs and USs as well as of the relation between them. Against the background of this huge gap in current research on attitude formation, the focus of this thesis is to highlight the contribution of selective attention processes to a better understanding of the representation underlying our likes and dislikes. In particular, the present thesis considers the role of selective attention processes in the solution of the representation issue from three different perspectives. Before illustrating these perspectives, Chapter 1 is meant to envision the omnipresence of the representation problem in current theoretical as well as empirical work on evaluative conditioning. Likewise, it emphasizes the critical role of selective attention processes for the representation question in classical conditioning and how this knowledge might be used to bring out the uniqueness of evaluative conditioning as compared to classical conditioning. Chapter 2 then considers the differential influence of attentional resources and goal-directed attention on attitude learning.
The primary objective of the presented experiment was to investigate whether attentional resources and goal-directed attention exert their influence on EC via changes in the encoding of CS-US relations in memory (i.e., contingency memory). Taking the findings from this experiment into account, Chapter 3 focuses on the selective processing of the US relative to the CS. In particular, the two experiments presented in this chapter explore the moderating influence of the selective processing of the US in its relation to the CS on EC. In Chapter 4, the important role of the encoding of the US in relation to the CS, as outlined in Chapter 3, is illuminated in the context of different retrieval processes. Against the background of the findings from the two presented experiments, the interplay between the encoding of CS-US contingencies and the moderation of EC via different retrieval processes is discussed. Finally, a general discussion of the findings, their theoretical implications and future research lines is given in Chapter 5.
The present thesis is devoted to a construction which defies generalisations about the prototypical English noun phrase (NP) to such an extent that it has been termed the Big Mess Construction (Berman 1974). As illustrated by the examples in (1) and (2), the NPs under study involve premodifying adjective phrases (APs) which precede the determiner (always realised in the form of the indefinite article a(n)) rather than following it.
(1) NoS had not been hijacked – that was too strong a word. (BNC: CHU 1766)
(2) He was prepared for a battle if the porter turned out to be as difficult a customer as his wife. (BNC: CJX 1755)
Previous research on the construction is largely limited to contributions from the realms of theoretical syntax and a number of cursory accounts in reference grammars. No comprehensive investigation of its realisations and uses has as yet been conducted. My thesis fills this gap by means of an exhaustive analysis of the construction on the basis of authentic language data retrieved from the British National Corpus (BNC). The corpus-based approach allows me to examine not only the possible but also the most typical uses of the construction. Moreover, while previous work has almost exclusively focused on the formal realisations of the construction, I investigate both its forms and functions.
It is demonstrated that, while the construction is remarkably flexible as concerns its possible realisations, its use is governed by probabilistic constraints. For example, some items occur much more frequently inside the degree item slot than others (as, too and so stand out for their particularly high frequency). Contrary to what is assumed in most previous descriptions, the slot is not restricted in its realisation to a fixed number of items. Rather than representing a specialised structure, the construction is furthermore shown to be distributed over a wide range of possible text types and syntactic functions. On the other hand, it is found to be much less typical of spontaneous conversation than of written language; Big Mess NPs further display a strong preference for the function of subject complement. Investigations of the internal structural complexity of the construction indicate that its obligatory components can optionally be enriched by a remarkably wide range of optional (if infrequent) elements. In an additional analysis of the realisations of the obligatory but lexically variable slots (head noun and head of AP), the construction is highlighted to represent a productive pattern. With the help of the methods of Collexeme Analysis (Stefanowitsch and Gries 2003) and Co-varying Collexeme Analysis (Gries and Stefanowitsch 2004b, Stefanowitsch and Gries 2005), the two slots are, however, revealed to be strongly associated with general nouns and ‘evaluative’ and ‘dimension’ adjectives, respectively. On the basis of an inspection of the most typical adjective-noun combinations, I identify the prototypical semantics of the Big Mess Construction.
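For orientation, the core computation behind the Collexeme Analysis mentioned above can be sketched as a one-tailed Fisher-Yates exact test on a 2x2 contingency table, following Stefanowitsch and Gries (2003). The counts in the example below are invented, and the actual analyses use the BNC frequencies and further association measures.

```python
# Hedged sketch of a collexeme-strength computation; all counts are invented.
from scipy.stats import fisher_exact

def collexeme_strength(freq_in_cxn, freq_total, cxn_size, corpus_size):
    a = freq_in_cxn                      # noun inside the construction
    b = freq_total - freq_in_cxn         # noun elsewhere in the corpus
    c = cxn_size - freq_in_cxn           # other nouns inside the construction
    d = corpus_size - freq_total - c     # other nouns elsewhere
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return p                             # small p = strong attraction to the slot

# e.g. how strongly is a noun attracted to the "too ADJ a N" slot?
print(collexeme_strength(freq_in_cxn=120, freq_total=9000,
                         cxn_size=2500, corpus_size=100_000_000))
```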
The analyses of the constructional functions centre on two distinct functional areas. First, I investigate Bolinger’s (1972) hypothesis that the construction fulfils functions in line with the Principle of Rhythmic Alternation (e.g. Selkirk 1984: 11, Schlüter 2005). It is established that rhythmic preferences co-determine the use of the construction to some extent, but that they clearly do not suffice to explain the phenomenon under study. In a next step, the discourse-pragmatic functions of the construction are scrutinised. Big Mess NPs are demonstrated to perform distinct information-structural functions in that the non-canonical position of the AP serves to highlight focal information (compare De Mönnink 2000: 134-35). Additionally, the construction is shown to place emphasis on acts of evaluation. I conclude the construction to represent a contrastive focus construction.
My investigations of the formal and functional characteristics of Big Mess NPs each include analyses which compare individual versions of the construction to one another (e.g. the As Big a Mess, Too Big a Mess and So Big a Mess Constructions). It is revealed that the versions are united by a shared core of properties while differing from one another at more abstract levels of description. The question of the status of the constructional versions as separate constructions further receives special emphasis as part of a discussion in which I integrate my results into the framework of usage-based Construction Grammar (e.g. Goldberg 1995, 2006).
Mankind has dramatically influenced the nitrogen (N) fluxes between soil, vegetation, water and atmosphere, i.e. the global N cycle. Increasing intensification of agricultural land use, caused by the growing demand for agricultural products, has had major impacts on ecosystems worldwide. Emissions of nitrogenous gases such as ammonia (NH3) in particular have increased, mainly due to industrial livestock farming. Countries with high N deposition rates require a variety of deposition measurements and effective N monitoring networks to assess N loads. Due to high costs, conventional deposition measurement stations are not widespread and therefore provide only a patchy picture of the real extent of the prevailing N deposition status over large areas. One tool that allows quantification of the exposure and the effects of atmospheric N impacts on an ecosystem is the use of bioindicators. Due to their specific physiology and ecology, lichens and mosses in particular are suitable to reflect the atmospheric N input at the ecosystem level. The present doctoral project began by investigating the general ability of epiphytic lichens to qualify and quantify N deposition, analysing total N and δ15N in lichens along a gradient of different N emission sources and severity. The results showed that this was a viable monitoring method, and a grid-based monitoring system with nitrophytic lichens was set up in the western part of Germany. Finally, a critical appraisal of three different monitoring techniques (lichens, mosses and tree bark) was carried out to compare them with nationally relevant N deposition assessment programmes. In total, 1057 lichen samples, 348 tree bark samples, 153 moss samples and 24 deposition water samples were analysed in this dissertation at different investigation scales in Germany. The study identified the species-specific ability and tolerance of various epiphytic lichens to accumulate N. Samples of tree bark were also collected, and their N accumulation ability was found to be connected with the increased intensity of agriculture and with the presence of reduced N compounds (NHx) in the atmosphere. Nitrophytic lichens (Xanthoria parietina, Physcia spp.) show the strongest correlations with high agriculture-related N deposition. In addition, the main N sources were revealed with the help of δ15N values along a gradient of altitude and across areas affected by different types of land use (NH3 density classes, livestock units and various deposition types). Furthermore, the first nationwide survey in Germany to compare lichens, mosses and tree bark samples as biomonitors for N deposition revealed that lichens are clearly the most meaningful monitor organisms in highly N-affected regions. The study also shows that dealing with different biomonitors is a difficult task due to the variety of their N responses. The specific receptor surfaces of the indicators, and therefore their different strategies of N uptake, are responsible for the tissue N concentration of each organism group. It was also shown that δ15N values depend on the N origin and on the specific N transformations in each organism system, so that a direct comparison between atmosphere and ecosystems is not possible. In conclusion, biomonitors, and especially epiphytic lichens, may serve as an alternative means to obtain a spatially representative picture of N deposition conditions. Furthermore, bioindication with lichens is a cost-efficient alternative to physico-chemical measurements for comprehensively assessing the prevailing N doses and sources of N pools on a regional scale. At the very least, they can support on-site deposition instruments by qualifying and quantifying N deposition.
N-acetylation by N-acetyltransferase 1 (NAT1) is an important biotransformation pathway of the human skin and is involved in the deactivation of the arylamine and well-known contact allergen para-phenylenediamine (PPD). Here, NAT1 expression and activity were analyzed in antigen-presenting cells (monocyte-derived dendritic cells, MoDCs, a model for epidermal Langerhans cells) and human keratinocytes. The latter were used to study exogenous and endogenous modulations of NAT1 activity. Within this thesis, MoDCs were found to express metabolically active NAT1. Activities were between 23.4 and 26.6 nmol/mg/min and thus comparable to peripheral blood mononuclear cells. These data suggest that epidermal Langerhans cells contribute to the cutaneous N-acetylation capacity. Keratinocytes, which are known for their efficient N-acetylation, were analyzed in a comparative study using primary keratinocytes (NHEK) and different shipments of the immortalized keratinocyte cell line HaCaT, in order to investigate the ability of the cell line to model epidermal biotransformation. N-acetylation of the substrate para-aminobenzoic acid (PABA) was 3.4-fold higher in HaCaT than in NHEK and varied between the HaCaT shipments (range 12.0–44.5 nmol/mg/min). Since B[a]P-induced cytochrome P450 1 (CYP1) activities were also higher in HaCaT than in NHEK, the cell line can be considered an in vitro tool to qualitatively model epidermal metabolism with regard to NAT1 and CYP1. The HaCaT shipment with the highest NAT1 activity showed only minimal reduction of cell viability after treatment with PPD and was subsequently used to study interactions between NAT1 and PPD in keratinocytes. Treatment with PPD induced the expression of cyclooxygenases (COX) in HaCaT, but in parallel, PPD N-acetylation was found to saturate with increasing PPD concentration. This saturation explains the PPD-induced COX induction despite the high N-acetylation capacities. A detailed analysis of the effect of PPD on NAT1 revealed that the saturation of PPD N-acetylation was caused by a PPD-induced decrease of NAT1 activity. This inhibition was found in HaCaT as well as in primary keratinocytes after treatment with PPD and PABA. Regarding the mechanism, reduced NAT1 protein levels and unaffected NAT1 mRNA expression after PPD treatment provided clear evidence of substrate-dependent NAT1 downregulation. These results extend the existing knowledge about substrate-dependent NAT1 downregulation to human epithelial skin cells and demonstrate that NAT1 activity in keratinocytes can be modulated by exogenous factors. Further analysis of HaCaT cells from different shipments revealed an accelerated progression through the cell cycle in HaCaT cells with high NAT1 activities. These findings suggest an association between NAT1 and proliferation in keratinocytes, as has been proposed earlier for tumor cells. In conclusion, the N-acetylation capacities of MoDCs and keratinocytes contribute to the overall N-acetylation capacity of human skin. NAT1 activity of keratinocytes, and consequently the detoxification capacity of human skin, can be modulated exogenously by the presence of NAT1 substrates and endogenously by the cell proliferation status of keratinocytes.
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
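For orientation, the flavour of such a Müntz-type condition is that of the classical Müntz-Szász theorem on the interval [0, 1], stated below. The thesis' condition for sectors involves the opening angle 2α and is not reproduced here.

```latex
% Classical Müntz-Szász theorem on [0,1] (with 0 assumed to lie in Lambda):
% the monomials with exponents in Lambda span a dense subspace of C([0,1])
% if and only if the reciprocal exponents diverge.
\overline{\operatorname{span}}\,\{\, x^{\lambda} : \lambda \in \Lambda \,\} = C([0,1])
\quad\Longleftrightarrow\quad
\sum_{\lambda \in \Lambda,\ \lambda \ge 1} \frac{1}{\lambda} = \infty
```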
Statistical matching offers a way to broaden the scope of analysis without increasing respondent burden and costs. These would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B referring to the same target population in order to analyse variables, say Y and Z, together, that initially were not jointly observed. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B. Furthermore, Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. Therefore, to yield a theoretical foundation for statistical matching, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
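Written out, the CIA states that the joint distribution of (X, Y, Z) factorises through the matching variables alone, which is what licenses fusing the two files; a standard formulation (notation as above):

```latex
% Conditional independence assumption (CIA): given X, Y and Z are independent,
% so the joint density factorises and can be estimated although Y and Z are
% never observed together.
Y \perp\!\!\!\perp Z \mid X
\quad\Longleftrightarrow\quad
f(x, y, z) = f(y \mid x)\, f(z \mid x)\, f(x)
```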
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determine the validity of the obtained matched file, the accuracy of statistical inference is determined by the suitability of the assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching by relying on graphical representations in form of directed acyclic graphs. These graphs are particularly useful in representing dependencies and independencies which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed in which the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA corresponds to: (a) Z is independent of a subset X’ of X given Y and X*, where X* = X\X’; and (b) Y is correlated with X’ given X*. The proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and Z models within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subjected to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, which is an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework offers the possibility to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Within this simulation study, two related aspects are therefore of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage of knowing the whole setting, i.e. the whole data X, Y and Z. Special interest lies in investigating the assumptions by adding and excluding auxiliary variables in order to enhance conditional independence and assess the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the method relying on BART that performs best over all coefficients.
In conclusion, this work constitutes a major contribution to the clarification of assumptions essential to any statistical matching procedure. By introducing graphical models to identify existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset by offering an alternative to the CIA.
This thesis deals with REITs, their capital structure and the effects that regulatory requirements might have on leverage. The data used result from a combination of Thomson Reuters data with hand-collected data regarding REIT status, regulatory information and law variables. Overall, leverage is analysed across 20 countries for the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reportings, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Statistically significant differences in means between non-REITs and REITs motivate further investigation. My results show that variables beyond the traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and on the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from the EPRA reportings are mandatory. I test various combinations of regulatory variables, which show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation specifying a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Further, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analysis. Results based on an event study show that REITs have statistically lower leverage ratios than non-REITs. Based on a structural break model, the following effect becomes apparent: REITs increase their leverage ratios in the years prior to obtaining REIT status. The ex ante time frame is consequently characterised by a bunkering and adaptation process, followed by the transformation at the event. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
In a case of robbery, some people actually use violence to steal - but others may supply information or weapons, make the plans, act as lookouts, provide transport. Certainly the actual robbers are guilty - but what of the others? How does Hong Kong's version of the common law answer this question now? How should the question be answered in the future?
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indications, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still controversy, both among policy-makers and within the maker community itself, about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings of this dissertation expand research in the field of the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics and motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved. In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
Up-to-date information about the type and spatial distribution of forests is an essential element in both sustainable forest management and environmental monitoring and modelling. The OpenStreetMap (OSM) database contains vast amounts of spatial information on natural features, including forests (landuse=forest). The OSM data model includes descriptive tags for its contents, e.g. the leaf type of forest areas (leaf_type=broadleaved). Although the leaf type tag is common, the vast majority of forest areas are tagged with the leaf type mixed, amounting to 87% of the total landuse=forest area in the OSM database. These areas comprise an important information source from which forest type maps can be derived and updated. In order to leverage this information content, a methodology for the stratification of leaf types inside these areas has been developed, using image segmentation on aerial imagery and subsequent classification of leaf types. The presented methodology achieves an overall classification accuracy of 85% for the leaf types needleleaved and broadleaved in the selected forest areas. The resulting stratification demonstrates that, through approaches such as the one presented, the derivation of forest type maps from OSM would be feasible with an extended and improved methodology. It also suggests that an improved methodology might be able to provide updates of leaf type to the OSM database with contributor participation.
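The tag analysis underlying such figures can be sketched in a few lines of Python: tally the leaf_type values over forest polygons and report area shares (the polygons below are hypothetical stand-ins for objects retrieved from the OSM database):

from collections import defaultdict

# hypothetical OSM forest polygons: (tag dictionary, area in ha)
forests = [
    ({"landuse": "forest", "leaf_type": "mixed"}, 120.0),
    ({"landuse": "forest", "leaf_type": "broadleaved"}, 15.5),
    ({"landuse": "forest", "leaf_type": "needleleaved"}, 9.3),
    ({"landuse": "forest"}, 8.2),  # leaf_type tag missing
]

area_by_leaf_type = defaultdict(float)
for tags, area in forests:
    area_by_leaf_type[tags.get("leaf_type", "untagged")] += area

total = sum(area_by_leaf_type.values())
for leaf_type, area in sorted(area_by_leaf_type.items()):
    print(f"{leaf_type}: {100 * area / total:.1f}% of forest area")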
Extension of an Open GEOBIA Framework for Spatially Explicit Forest Stratification with Sentinel-2
(2022)
Spatially explicit information about forest cover is fundamental for operational forest management and forest monitoring. Although open satellite-based earth observation data of spatially high resolution (i.e., Sentinel-2, ≤10 m) can cover some information needs, spatially very high-resolution imagery (i.e., aerial imagery, ≤2 m) is needed to generate maps at a scale suitable for regional and local applications. In this study, we present the development, implementation, and evaluation of a Geographic Object-Based Image Analysis (GEOBIA) framework to stratify forests (needleleaved, broadleaved, non-forest) in Luxembourg. The framework is exclusively based on open data and free and open-source geospatial software. While aerial imagery is used to derive image objects with a 0.05 ha minimum size, Sentinel-2 scenes of 2020 are the basis for random forest classifications in different single-date and multi-temporal feature setups. These setups are compared with each other and used to evaluate the framework against classifications based on features derived from aerial imagery. The highest overall accuracy (89.3%) was achieved with a classification on a Sentinel-2-based vegetation index time series (n = 8). Similar accuracies were achieved with classifications based on two (88.9%) or three (89.1%) Sentinel-2 scenes in the greening phase of broadleaved forests. A classification based on color-infrared aerial imagery and derived texture measures only achieved an accuracy of 74.5%. The integration of the texture measures into the Sentinel-2-based classification did not improve its accuracy. Our results indicate that high-resolution image objects can successfully be stratified based on lower-spatial-resolution Sentinel-2 single-date and multi-temporal features, and that those setups outperform classifications based on aerial imagery only. The conceptual framework of spatially high-resolution image objects enriched with features from lower-resolution imagery facilitates the delivery of frequent and reliable updates due to the higher spectral and temporal resolution. The framework additionally holds the potential to derive further information layers (e.g., forest disturbance) as derivatives of the features attached to the image objects, thus providing up-to-date information on the state of the observed forests.
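The classification step of such a framework can be sketched as follows: each image object carries a multi-temporal vegetation index time series as its feature vector and is classified with a random forest. A minimal, self-contained sketch on synthetic data (with real Sentinel-2 features this is where the accuracies reported above would be assessed):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_objects, n_dates = 1000, 8       # e.g. an 8-scene NDVI time series per image object
X = rng.uniform(0.2, 0.9, size=(n_objects, n_dates))   # synthetic NDVI features
y = rng.choice(["needleleaved", "broadleaved", "non-forest"], n_objects)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print(f"overall accuracy: {clf.score(X_test, y_test):.3f}")   # near chance on random data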
The forensic application of phonetics relies on individuality in speech. In the forensic domain, individual patterns of verbal and paraverbal behavior are of interest which are readily available, measurable, consistent, and robust to disguise and to telephone transmission. This contribution is written from the perspective of the forensic phonetic practitioner and seeks to establish a more comprehensive concept of disfluency than previous studies have. A taxonomy of possible variables forming part of what can be termed disfluency behavior is outlined. It includes the “classical” fillers, but extends well beyond these, covering, among others, additional types of fillers as well as prolongations, but also the way in which fillers are combined with pauses. In the empirical section, the materials collected for an earlier study are re-examined and subjected to two different statistical procedures in an attempt to approach the issue of individuality. Recordings consist of several minutes of spontaneous speech by eight speakers on three different occasions. Beyond the established set of hesitation markers, additional aspects of disfluency behavior which fulfill the criteria outlined above are included in the analysis. The proportion of various types of disfluency markers is determined. Both statistical approaches suggest that these speakers can be distinguished at a level far above chance using the disfluency data. At the same time, the results show that it is difficult to pin down a single measure which characterizes the disfluency behavior of an individual speaker. The forensic implications of these findings are discussed.
Mental processes are filters which intervene in the literary presentation of nature. This article will take you on a journey through literary landscapes, starting from Joseph Furphy and ending with Gerald Murnane. It will try to show the development of Australian literary landscape depiction. The investigation of this extensive topic will show that the perception of the Australian landscape as foreign and threatening is a coded expression of the protagonists' crisis of identity due to their estrangement from European cultural roots. Only a feeling of being at home enables the characters to perceive landscapes in a positive way and allows the author to depict intimate and familiar views of nature. This topic will be investigated with a range of novels to reveal the development of this theme from the turn of the nineteenth century (the time of Furphy's novel Such is Life) up to the present (i.e. novels by Malouf, Foster, Hall, Murnane).
The temporal stability of psychological test scores is one prerequisite for their practical usability. This is especially true for intelligence test scores. In educational contexts, high stakes decisions with long-term consequences, such as placement in special education programs, are often based on intelligence test results. There are four different types of temporal stability: mean-level change, individual-level change, differential continuity, and ipsative continuity. We present statistical methods for investigating each type of stability. Where necessary, the methods were adapted for the specific challenges posed by intelligence research (e.g., controlling for general intelligence in lower order test scores). We provide step-by-step guidance for the application of the statistical methods and apply them to a real data set of 114 gifted students tested twice with a test-retest interval of 6 months.
• Four different types of stability need to be investigated for a full picture of temporal stability in psychological research
• Selection and adaptation of the methods for use in intelligence research
• Complete protocol of the implementation
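Two of the four stability types lend themselves to a compact illustration: differential continuity as the test-retest correlation and mean-level change as a paired t-test across the two occasions. A minimal sketch on synthetic scores (the actual protocol in the article is considerably more detailed):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
t1 = rng.normal(120, 12, 114)             # scores at the first testing
t2 = 0.8 * t1 + rng.normal(24, 7, 114)    # scores at the 6-month retest

r, _ = stats.pearsonr(t1, t2)             # differential continuity
t, p = stats.ttest_rel(t1, t2)            # mean-level change
print(f"test-retest r = {r:.2f}; mean-level change: t = {t:.2f}, p = {p:.3f}")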
The COVID-19 pandemic has affected schooling worldwide. In many places, schools closed for weeks or months, only part of the student body could be educated at any one time, or students were taught online. Previous research discloses the relevance of schooling for the development of cognitive abilities. We therefore compared the intelligence test performance of 424 German secondary school students in Grades 7 to 9 (42% female) tested after the first six months of the COVID-19 pandemic (i.e., 2020 sample) to the results of two highly comparable student samples tested in 2002 (n = 1506) and 2012 (n = 197). The results revealed substantially and significantly lower intelligence test scores in the 2020 sample than in both the 2002 and 2012 samples. We retested the 2020 sample after another full school year of COVID-19-affected schooling in 2021. We found mean-level changes of typical magnitude, with no signs of catching up to previous cohorts or further declines in cognitive performance. Perceived stress during the pandemic did not affect changes in intelligence test results between the two measurements.
Spatial Queues
(2000)
In the present thesis, a theoretical framework for the analysis of spatial queues is developed. Spatial queues are a generalization of the classical concept of queues, as they provide the possibility of assigning properties to the users. These properties may influence the queueing process, but may also be of interest in themselves. As a field of application, mobile communication networks are modeled by spatial queues in order to demonstrate the advantage of including user properties in the queueing model. In this application, the property of main interest is the user's position in the network. After a short introduction, the second chapter contains an examination of the class of Markov-additive jump processes, including expressions for the transition probabilities and the expectation as well as laws of large numbers. Chapter 3 contains the definition and analysis of the central concept of spatial Markovian arrival processes (SMAPs for short) as a special case of Markov-additive jump processes, but also as a natural generalization of the well-known concept of BMAPs. In chapters 4 and 5, SMAPs serve as arrival streams for the analyzed periodic SMAP/M/c/c and SMAP/G/infinity queues, respectively. These types of queues find application as models or planning tools for mobile communication networks. The analysis of these queues involves new methods such that even for the special cases of BMAP inputs (i.e. non-spatial queues) new results are obtained. In chapter 6, a procedure for statistical parameter estimation is proposed along with its numerical results. The thesis is concluded by an appendix which collects necessary results from the theories of Markov jump processes and stochastic point fields. For special classes of Markov jump processes, new results have been obtained as well.
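For orientation, the loss probability of the non-spatial M/M/c/c special case is given by the classical Erlang-B formula, which the SMAP/M/c/c model generalises. A standard recursion, shown here only as background (not a result of the thesis):

def erlang_b(c: int, a: float) -> float:
    """Blocking probability of an M/M/c/c loss system, offered load a = lambda/mu."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)   # B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1
    return b

print(f"blocking probability: {erlang_b(10, 7.0):.4f}")  # 10 channels, 7 Erlang load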
Globalization and the emergence of global value chains have not only changed the way we live, but also the way economists study international economics. These changes are visible in various areas and dimensions. This dissertation deals, mostly empirically, with some of these issues related to global value chains. It starts by critically examining the political economy forces determining the occurrence and the extent of trade liberalization conditions in World Bank lending agreements. The focal point is whether these are affected by the World Bank's most influential member countries. Afterwards, the thesis moves on to describe the trade of the European Union member countries at each stage of the value chain. The description is based on a new classification of goods into parts, components and final products, as well as a newly developed measure describing the average level of development of a country's trading partners. This descriptive exercise is followed by a critical examination of discrepancies between gross trade and trade in value added with respect to comparative advantage. A gravity model is employed to contrast results when studying the institutional determinants of comparative advantage. Finally, the thesis deals with determinants of regional location choices for foreign direct investment. The analysis is based on a theoretical new economic geography model and employs a newly developed index that accounts for the presence of potentially all suppliers and buyers at all stages of the value chain.
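The gravity estimation mentioned above is nowadays typically run as a Poisson pseudo-maximum-likelihood (PPML) regression of bilateral trade on gravity covariates. A minimal sketch on synthetic data (variable names and coefficients are hypothetical, not the dissertation's actual specification):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "log_gdp_o": rng.normal(12, 1, n),    # log GDP, origin country
    "log_gdp_d": rng.normal(12, 1, n),    # log GDP, destination country
    "log_dist": rng.normal(8, 0.5, n),    # log bilateral distance
})
df["trade"] = np.exp(0.8 * df["log_gdp_o"] + 0.7 * df["log_gdp_d"]
                     - 1.0 * df["log_dist"] - 10 + rng.normal(0, 0.3, n))

ppml = smf.glm("trade ~ log_gdp_o + log_gdp_d + log_dist",
               data=df, family=sm.families.Poisson()).fit()
print(ppml.params)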
Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG) are tools used to investigate the functioning of the working brain in both human and animal studies. Both methods are increasingly combined in separate or simultaneous measurements, under the assumption that they benefit from each other's individual strengths while compensating for their particular weaknesses. However, little attention has been paid to how statistical analysis strategies can influence the information that can be retrieved from a combined EEG-fMRI study. Two independent studies in healthy student volunteers were conducted in the context of emotion research to demonstrate two approaches to combining MRI and EEG data of the same participants. The first study (N = 20) applied a visual search paradigm and found that the assumed effects were absent in both measurements when their results were not statistically combined. The second study (N = 12) applied a novelty P300 paradigm and found that only the statistical combination of MRI and EEG measurements was able to disentangle the functional effects of brain areas involved in emotion processing. In conclusion, the observed results demonstrate that there are added benefits to statistically combining EEG-fMRI data acquisitions by assessing both the inferential statistical structure and the intra-individual correlations of the EEG and fMRI signals.
In the first overview lecture, we take a look at conceptualizations of water – from the hydrological cycle to socio-political perspectives on water. During the 20th century, water management developed from traditional uses and local industrial schemes to the “hydraulic paradigm” and finally, to the concept of modern water governance at the turn of the millennium. We will raise the question of whether there has truly been a paradigm shift from the natural, science-based hydraulic paradigm to water governance and how dualisms of culture/society and nature are still being reproduced. With this in mind, we will also take an introductory look at the much talked about global water crisis.
This working paper examines the concept of metabolism and its potential as a critical analytical lens to study the contemporary city from a political perspective. The paper illustrates how the metabolism concept has been used historically, both as a metaphor to describe the technological, social, political and economic dimensions of human-environment relations, and as a concrete analytical tool to quantify and better understand how flows of matter and energy shape the territorial and spatial configurations of cityscapes. Drawing on the example of the urban water metabolism of the Greater Accra Metropolitan Area (GAMA), it is argued that contemporary approaches to metabolic analysis should be extended in two ways to increase the integrative potential of the urban water metabolism concept. On the one hand, the paper demonstrates that a political ecology approach is particularly well-suited to illuminate the contested production of urban environments and move beyond a narrow technical, managerial and state-centric focus in research on urban metabolic relations. On the other hand, the paper advocates for an approach to metabolic analysis that views the urban environment not simply as a relatively static exteriority that is produced by dynamic flows of matter, energy and information, but rather as a dynamic, nested and co-evolutionary network of complex biosocial and material relations, which in itself shapes how various metabolisms interact across scales. The paper then concludes by briefly discussing how a combination of metabolic analysis and political ecology research can inform urban water governance. In sum, the paper emphasizes the need for metabolic analysis to remain open to a plurality of different knowledge forms and perspectives, and to remain attentive to the inherently political nature of material and technological phenomena in order to allow for mutually beneficial exchanges between various scholarly communities.
Background
Identifying pain-related response patterns and understanding functional mechanisms of symptom formation and recovery are important for improving treatment.
Objectives
We aimed to replicate pain-related avoidance-endurance response patterns associated with the Fear-Avoidance Model and its extension, the Avoidance-Endurance Model, and to examine their differences in secondary measures of stress, action control (i.e., dispositional action vs. state orientation), coping, and health.
Methods
Latent profile analysis (LPA) was conducted on self-report data from 536 patients with chronic non-specific low back pain at the beginning of an inpatient rehabilitation program. Measures of stress (i.e., pain, life stress) and action control were analyzed as covariates regarding their influence on the formation of different pain response profiles. Measures of coping and health were examined as dependent variables.
Results
Partially in line with our assumptions, we found three pain response profiles (distress-avoidance, eustress-endurance, and low-endurance responses) that depend on the level of perceived stress and action control. Distress-avoidance responders emerged as the most burdened, dysfunctional patient group with respect to measures of stress, action control, maladaptive coping, and health. Eustress-endurance responders showed one of the highest levels of action versus state orientation, as well as the highest levels of adaptive coping and physical activity. Low-endurance responders reported lower levels of stress, and levels of action versus state orientation, maladaptive coping, and health equal to those of eustress-endurance responders; however, their levels of adaptive coping and physical activity were as low as those of distress-avoidance responders.
Conclusions
Apart from the partially supported assumptions of the Fear-Avoidance and Avoidance-Endurance Model, perceived stress and dispositional action versus state orientation may play a crucial role in the formation of pain-related avoidance-endurance response patterns that vary in their degree of adaptiveness. The results suggest tailoring interventions based on behavioral and functional analyses of pain responses in order to improve patients' quality of life more effectively.
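Latent profile analysis of this kind is closely related to fitting a finite Gaussian mixture to the standardized indicators and selecting the number of profiles by an information criterion. A minimal sketch on synthetic data (indicator set and sample size only loosely modelled on the study):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = rng.normal(size=(536, 5))   # e.g. 5 standardized avoidance/endurance indicators

# compare 1- to 5-profile solutions via BIC, then assign profile membership
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
profiles = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print(f"best solution by BIC: {best_k} profile(s)")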
The visualization of relational data is at the heart of information visualization. The prevalence of visual representations for this kind of data is based on many real-world examples spread over many application domains: protein-protein interaction networks in the field of bioinformatics, hyperlinked documents in the World Wide Web, call graphs in software systems, or co-author networks are just four instances of a rich source of relational datasets. The most common visual metaphor for this kind of data is definitely the node-link approach, which typically suffers from visual clutter caused by many edge crossings. Many sophisticated algorithms have been developed to lay out a graph efficiently and with respect to a list of aesthetic graph drawing criteria. Relations between objects normally change over time. Visualizing the dynamics poses an additional challenge for graph visualization researchers. Applying the same layout algorithms used for static graphs to intermediate states of dynamic graphs may be one strategy to compute layouts for an animated graph sequence that shows the dynamics. The major drawback of this approach is the high cognitive effort required of a viewer of the animation to preserve their mental map. To tackle this problem, a sophisticated layout algorithm has to inspect the whole graph sequence and compute a layout with as few changes as possible between subsequent graphs. The main contribution and ultimate goal of this thesis is the visualization of dynamic compound weighted multi-directed graphs as a static image that targets visual clutter reduction and mental map preservation. To achieve this goal, we use a radial space-filling visual metaphor to represent the dynamics in relational data. As a side effect, the obtained pictures are aesthetically appealing. In this thesis we first describe static graph visualizations for rule sets obtained by extracting knowledge from software archives under version control. In a different work we apply animated node-link diagrams to code-developer relationships to show the dynamics in software systems. An underestimated visualization paradigm is the radial representation of data. Though this kind of representation has a long history going back to centuries-old statistical graphics, only little effort has been made to fully explore the benefits of this paradigm. We evaluated a Cartesian and a radial counterpart of a visualization technique for visually encoding transaction sequences and dynamic compound digraphs with both an eyetracking and an online study. We found some interesting phenomena, apart from the fact that even laymen in graph theory can understand the novel approach in a short time and apply it to datasets. The thesis is concluded by an aesthetic dimensions framework for dynamic graph drawing, future work, and currently open issues.
In politics and economics, and thus in official statistics, the precise estimation of indicators for small regions or parts of populations, the so-called Small Areas or domains, is discussed intensively. The design-based estimation methods currently used are mainly based on asymptotic properties and are thus reliable for large sample sizes. With small sample sizes, however, these design-based considerations often do not apply, which is why special model-based estimation methods have been developed for this case: the Small Area methods. While these may be biased, they often have a smaller mean squared error (MSE) than the unbiased design-based estimators. In this work, both classical design-based estimation methods and model-based estimation methods are presented and compared. The focus lies on the suitability of the various methods for use in official statistics. First, theory and algorithms suitable for the required statistical models are presented, which form the basis for the subsequent model-based estimators. Sampling designs apt for Small Area applications are then presented. Based on these fundamentals, both design-based and model-based estimation methods are developed. Particular consideration is given to the area-level empirical best predictor for binomial variables. Numerical and Monte Carlo estimation methods are proposed and compared for this analytically unsolvable estimator. Furthermore, MSE estimation methods are proposed and compared. A very popular and flexible resampling method that is widely used in the field of Small Area statistics is the parametric bootstrap. One major drawback of this method is its high computational intensity. To mitigate this disadvantage, a variance reduction method for the parametric bootstrap is proposed. On the basis of theoretical considerations, the enormous potential of this proposal is proved. A Monte Carlo simulation study shows the immense variance reduction that can be achieved with this method in realistic scenarios; it can be up to 90%. This actually enables the use of the parametric bootstrap in applications in official statistics. Finally, the presented estimation methods are examined in a large Monte Carlo simulation study in a specific application to the Swiss structural survey. Here, problems of high relevance for official statistics are discussed, in particular: (a) How small can the areas be without the estimates becoming inappropriate or too imprecise? (b) Are the accuracy measures for the Small Area estimators reliable enough to be used for publication? (c) Do very small areas interfere with the modeling of the variables of interest, and could they thus cause a deterioration of the estimates for larger and therefore more important areas? (d) How can covariates at different levels of aggregation be used appropriately to improve the estimates? The data basis is the Swiss census of 2001. The main result is that, in the author's view, the use of Small Area estimators for producing estimates for areas with very small sample sizes is advisable in spite of the modeling effort. The MSE estimates provide a useful measure of precision, but do not reach the level of reliability of the variance estimates for design-based estimators in all Small Areas.
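The core of a parametric bootstrap MSE estimate can be sketched in a few lines: simulate replicates from the fitted model, re-estimate on each, and average the squared deviations. A deliberately simplified sketch for a single area mean under a normal model (the thesis works with far richer models and adds the proposed variance reduction on top):

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_mse(theta_hat: float, sigma: float, n: int, B: int = 1000) -> float:
    """Parametric bootstrap MSE of the sample mean under a fitted normal model."""
    mse = 0.0
    for _ in range(B):
        y_star = rng.normal(theta_hat, sigma, n)   # simulate from the fitted model
        mse += (y_star.mean() - theta_hat) ** 2    # squared error of the re-estimate
    return mse / B

print(bootstrap_mse(theta_hat=50.0, sigma=10.0, n=25))   # approx. sigma^2 / n = 4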
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality and keep fieldwork costs low at the same time is a difficult task. The aim of examining survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) identifying the gaps in meta-analytic evidence in survey methodology by a systematic review of the existing evidence along the dimensions of a central framework in the field,
(2) filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items,
(3) assessing the robustness and sufficiency of the results of the two meta-analyses, and
(4) proposing a publication format for the accumulation and dissemination of meta-analytic evidence.
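A random-effects meta-analysis of the kind produced in step (2) can be sketched with the classical DerSimonian-Laird estimator: effect sizes are pooled with inverse-variance weights after estimating the between-study variance tau^2. A minimal sketch on illustrative numbers (not data from the thesis):

import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2 estimator."""
    y, v = np.asarray(effects), np.asarray(variances)
    w = 1.0 / v                                   # fixed-effect weights
    theta_fe = (w * y).sum() / w.sum()
    q = (w * (y - theta_fe) ** 2).sum()           # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    return (w_re * y).sum() / w_re.sum(), tau2

print(dersimonian_laird([0.30, 0.10, 0.45, 0.20], [0.02, 0.03, 0.05, 0.01]))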
Food waste is the origin of major social and environmental issues. In industrial societies, domestic households are the biggest contributors to this problem. But why do people waste food although they buy and value it? Answering this question is essential for designing effective interventions against food waste. So far, however, many interventions have not been based on theoretical knowledge. Integrating the food waste literature and ambivalence research, we propose that domestic food waste can be understood via the concept of ambivalence, i.e. the simultaneous presence of positive and negative associations towards the same attitude object. In support of this notion, we demonstrated in three pre-registered experiments that people experienced ambivalence towards non-perishable food products with expired best-before dates. The experience of ambivalence was in turn associated with an increased willingness to waste food. However, two informational interventions aiming to prevent people from experiencing ambivalence did not work as intended (Experiment 3). We hope that the outlined conceptualization inspires theory-driven research on why and when people dispose of food and on how to design effective interventions.
People are increasingly concerned about how meat affects the environment, human health, and animal welfare, yet eating and enjoying meat remains a norm. Unsurprisingly, many people are ambivalent about meat—evaluating it as both positive and negative. Here, we propose that meat-related conflict is multidimensional and depends on people’s dietary group: Omnivores’ felt ambivalence relates to multiple negative associations that oppose a predominantly positive attitude towards meat, and veg*ans’ ambivalence relates to various positive associations that oppose a predominantly negative attitude. A qualitative study (N = 235; German) revealed that omnivores and veg*ans experience meat-related ambivalence due to associations with animals, sociability, sustainability, health, and sensory experiences. To quantify felt ambivalence in these domains, we developed the Meat Ambivalence Questionnaire (MAQ). We validated the MAQ in four pre-registered studies using self-report and behavioral data (N = 3,485; German, UK, representative US). Both omnivores and veg*ans reported meat-related ambivalence, but with differences across domains and their consequences for meat consumption. Specifically, ambivalence was associated with less meat consumption in omnivores (especially sensory-/animal-based ambivalence) and more meat consumption in veg*ans (especially sensory-/socially-based ambivalence). Network analyses shed further light on the nomological net of the MAQ while controlling for a comprehensive set of determinants of meat consumption. By introducing the MAQ, we hope to provide researchers with a tool to better understand how ambivalence accompanies behavior change and maintenance.
Although it has been demonstrated that nociceptive processing can be modulated by heterotopically and concurrently applied noxious stimuli, the nature of the brain processes involved in this percept modulation in healthy subjects remains elusive. Using functional magnetic resonance imaging (fMRI) we investigated the effect of noxious counter-stimulation on pain processing. fMRI scans (1.5 T; block design) were performed in 34 healthy subjects (median age: 23.5 years; range: 20-31 years) during combined and single application (duration: 15 s; ISI = 36 s incl. 6 s rating time) of noxious interdigital-web pinching (intensity range: 6-15 N) and contact heat (45-49 °C), presented in pseudo-randomized order during two runs separated by approx. 15 min with individually adjusted equi-intense stimuli. In order to control for attention artifacts, subjects were instructed to maintain their focus either on the mechanical or on the thermal pain stimulus. Changes in subjective pain intensity were computed as percent differences (∆%) in pain ratings between single and heterotopic stimulation for both fMRI runs, resulting in two subgroups showing a relative pain increase (subgroup P-IN, N=10) vs. decrease (subgroup P-DE, N=12). Second-level and region-of-interest analyses conducted for both subgroups separately revealed that during heterotopic noxious counter-stimulation, subjects with relative pain decrease showed stronger and more widespread brain activations than subjects with relative pain increase, in pain processing regions as well as in a fronto-parietal network. Median-split regression analyses revealed a modulatory effect of prefrontal activation on connectivity between the thalamus and midbrain/pons, supporting the proposed involvement of prefrontal cortex regions in pain modulation. Furthermore, the mid-sagittal size of the total corpus callosum and five of its subareas were measured from the in vivo magnetic resonance imaging (MRI) recordings. A significantly larger relative truncus size (P=.04) was identified in participants reporting a relative decrease of subjective pain intensity during counter-stimulation, compared to subjects experiencing a relative pain increase. The above subgroup differences observed in functional and structural imaging data are discussed with consideration of potential differences in cognitive and emotional aspects of pain modulation.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveal the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
The global spread of the coronavirus pandemic has particularly dramatic consequences for the lives of migrants and refugees living in already marginalised and restricted conditions, whose ongoing crisis is at risk of being overlooked. But refugees are not only extremely vulnerable and at risk of infection; as several reports show, they also quickly develop their own protection measures, such as the production of hygienic products, the publication of their situation, and calls for action and help. Therefore, this paper aims to research the effects of the coronavirus crisis on refugees in camp settings with a special ethnographic focus on how refugees actively deal with this crisis, and whether they, through already developed resilience, are capable of adapting to the restrictions as well as inventing strategies to cope with the difficult situation. To account for the variety of refugee camps as well as the different living conditions due to their locality, history and national asylum politics, we will look at three different locations, namely refugee asylum homes in Germany, hotspots on the Greek islands, and one refugee camp in Kenya. The main question will be how, under the structurally and institutionally framed conditions of power and victimisation in refugee camps, forms of agency are established, made possible or limited. The goal is to show which strategies refugees apply to cope with the enhanced restrictions and exclusion, how they act to protect themselves and others from the virus, and how they present and reflect on their situation during the coronavirus pandemic. Finally, this discussion offers a new perspective that considers refugees not only as vulnerable victims, but also as actively engaged individuals.
Aggression is one of the most researched topics in psychology. This is understandable, since aggressive behavior does a lot of harm to individuals and groups. A lot is known already about the biology of aggression, but one system that seems to be of vital importance in animals has largely been overlooked: the hypothalamic-pituitary-adrenal (HPA) axis. Menno Kruk and József Haller and their research teams developed rodent models of adaptive, normal, and abnormal aggressive behavior. They found the acute HPA axis (re)activity, but also chronic basal levels, to be causally relevant in the elicitation and escalation of aggressive behavior. As a mediating variable, changes in the processing of relevant social information are proposed, although this could not be tested in animals. In humans, not a lot of research has been done, but there is evidence for associations of both acute and basal cortisol levels with (abnormal) aggression. However, not many of these studies have been experimental in nature.
Our aim was to add to the understanding of the role of both basal chronic levels of HPA axis activity and acute levels in the formation of aggressive behavior. Therefore, we conducted two experiments, both with healthy student samples. In both studies we induced aggression with a well-validated paradigm from social psychology: the Taylor Aggression Paradigm. Half of the subjects, however, only went through a non-provoking control condition. We measured trait basal levels of HPA axis activity on three days prior. We took several cortisol samples before, during, and after the task. After the induction of aggression, we measured the behavioral and electrophysiological brain response to relevant social stimuli, i.e., emotional facial expressions embedded in an emotional Stroop task. In the second study, we pharmacologically manipulated cortisol levels 60 min before the beginning of the experiment. To do that, half of the subjects were administered 20 mg of hydrocortisone, which elevates circulating cortisol levels (cortisol group); the other half were administered a placebo (placebo group). Results showed that acute HPA axis activity is indeed relevant for aggressive behavior. In Study 1 we found a difference in cortisol levels after the aggression induction in the provoked group compared to the non-provoked group (i.e., a heightened reactivity of the HPA axis). However, this could not be replicated in Study 2. Furthermore, the pharmacological elevation of cortisol levels led to an increase in aggressive behavior in women compared to the placebo group. There were no effects in men, so that while men were significantly more aggressive than women in the placebo group, they were equally aggressive in the cortisol group. Furthermore, there was an interaction of cortisol treatment with block of the Taylor Aggression Paradigm, in that the cortisol group was significantly more aggressive in the third block of the task. Concerning basal HPA axis activity, we found an effect on aggressive behavior in both studies, albeit more consistently in women and in the provoked and non-provoked groups. However, the effect was not apparent in the cortisol group. After the aggressive encounter, information processing patterns were changed in the provoked compared to the non-provoked group for all facial expressions, especially anger. These results indicate that the HPA axis plays an important role in the formation of aggressive behavior in humans as well.
Importantly, different changes within the system, be they basal or acute, are associated with the same outcome in this task. More studies are needed, however, to better understand the role that each plays in different kinds of aggressive behavior, and the role information processing plays as a possible mediating variable. This extensive knowledge is necessary for better behavioral interventions.
The brain is the central coordinator of the human stress reaction. At the same time, peripheral endocrine and neural stress signals act on the brain modulating brain function. Here, three experimental studies are presented demonstrating this dual role of the brain in stress. Study I shows that centrally acting insulin, an important regulator of energy homeostasis, attenuates the stress related cortisol secretion. Studies II and III show that specific components of the stress reaction modulate learning and memory retrieval, two important aspects of higher-order brain function.
Tropospheric ozone (O3) is known to have various detrimental effects on plants, such as visible leaf injury, reduced growth and premature senescence. Flux models offer the determination of the harmful ozone dose entering the plant through the stomata. This dose can then be related to the phytotoxic effects mentioned above to obtain dose-response relationships, which are a helpful tool for the formulation of abatement strategies for ozone precursors. Ozone flux models depend on the correct estimation of stomatal conductance (gs). Based on measurements of gs, an ozone flux model for two white clover clones (Trifolium repens L. cv Regal; NC-S (ozone-sensitive) and NC-R (ozone-resistant)) differing in their sensitivity to ozone was developed with the help of artificial neural networks (ANNs). White clover is an important species in various European grassland communities. The clover plants were exposed to ambient air at three sites in the Trier region (West Germany) during five consecutive growing seasons (1997 to 2001). The response parameters visible leaf injury and the biomass ratio of the NC-S/NC-R clones were regularly assessed. gs measurements of both clones served as output of the ANN-based gs model, while the corresponding climate parameters (i.e. temperature, vapour pressure deficit (VPD) and photosynthetically active radiation (PAR)) and various ozone concentration indices were inputs. The development of the model is documented in detail and various model evaluation techniques (e.g. sensitivity analysis) are applied. The resulting gs model was used as a basis for ozone flux calculations, which were related to the above-mentioned response parameters. The results showed that the ANNs were capable of revealing and learning the complex relationship between gs and key meteorological parameters and ozone concentration indices. The dose-response relationships between ozone fluxes and visible leaf injury were reasonably strong, while those between ozone fluxes and the NC-S/NC-R biomass ratio were fairly weak. The results are discussed in detail with respect to the suitability of the chosen experimental methods and model type.
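The ANN-based gs model can be sketched with a small feed-forward network mapping the named climate inputs to measured stomatal conductance. A minimal, self-contained sketch on synthetic data (the original study used purpose-built networks trained on real gs measurements; all numbers here are illustrative):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 800
X = np.column_stack([
    rng.uniform(5, 35, n),      # temperature (deg C)
    rng.uniform(0.1, 3.0, n),   # VPD (kPa)
    rng.uniform(0, 2000, n),    # PAR (umol m-2 s-1)
])
gs = 400 * X[:, 2] / (X[:, 2] + 500) * np.exp(-0.5 * X[:, 1]) + rng.normal(0, 10, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, gs)
print(f"R^2 on training data: {model.score(X, gs):.2f}")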
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In the context of this thesis, the usability of the PGM is assessed to investigate the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRatTM). We show that these rats mount a convergent IGH CDR3 response towards measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity to mine IG repertoire datasets for past antigen exposures, ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We could determine that in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (UCLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations. Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline including ImMunoGeneTics (IMGT) database error detection. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
The spatial development of cities and regions is influenced by trends such as climate change, demographic change and structural change, which do not stop at administrative boundaries but shape the development of large areas. In addition, border areas often exhibit functional and thematic interdependencies that extend across national borders. These are associated with regular exchange and mutual dependencies between border regions and their inhabitants. The coordination of cross-border spatial development is therefore crucial for a future-oriented and sustainable spatial development. Owing to its great importance, this topic is examined from various perspectives by European scholars in the first issue of the thematic series Borders in Perspective.
This dissertation includes three research articles on the portfolio risks of private investors. In the first article, we analyze a large data set of private banking portfolios in Switzerland of a major bank, with the unique feature that parts of the portfolios were managed by the bank and parts were advisory portfolios. To account for the heterogeneity of individual investors, we apply a mixture model and a cluster analysis. Our results suggest that there is indeed a substantial group of advised individual investors that outperform the bank-managed portfolios, at least after fees. However, a simple passive strategy that invests in the MSCI World and a risk-free asset significantly outperforms both the better advisory and the bank-managed portfolios. The new regulation of the EU for financial products (UCITS IV) prescribes Value at Risk (VaR) as the benchmark for assessing the risk of structured products. The second article discusses the limitations of this approach and shows that, in theory, the expected return of structured products can be unbounded while the VaR requirement for the lowest risk class can still be satisfied. Real-life examples of large returns within the lowest risk class are then provided. The results demonstrate that the new regulation could lead to new seemingly safe products that hide large risks. Behavioral investors who choose products based only on their official risk classes and their expected returns will therefore invest in suboptimal products. To overcome these limitations, we suggest a new risk-return measure for financial products based on the martingale measure that could erase such loopholes. Under the mean-VaR framework, the third article discusses the impacts of the underlying's first four moments on the structured product. By expanding the expected return and the VaR of a structured product with its underlying moments, it is possible to investigate each moment's impact on them simultaneously. Results are tested by Monte Carlo simulation and historical simulation. The findings show that for the majority of structured products, underlyings with large positive skewness are preferred. The preferences for variance and for kurtosis are ambiguous.
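The VaR critique in the second article rests on the definition of VaR as a quantile of the loss distribution, which is blind to whatever happens beyond that quantile. A minimal sketch of historical VaR on illustrative returns makes the point concrete:

import numpy as np

def historical_var(returns, alpha=0.99):
    """Historical Value at Risk: the alpha-quantile of the loss distribution."""
    return np.quantile(-np.asarray(returns), alpha)

rng = np.random.default_rng(11)
returns = rng.normal(0.0005, 0.01, 2500)   # roughly ten years of daily returns
print(f"99% one-day VaR: {historical_var(returns):.4f}")
# VaR says nothing about the size of losses beyond the quantile, which is
# exactly the loophole discussed above for seemingly low-risk products.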
The complicated human alternative GR promoter region plays a pivotal role in the regulation of GR levels. In this thesis, both genomic and environmental factors linked with GR expression are covered. This research showed that GR promoters are susceptible to silencing by methylation and that the activity of the individual promoters is also modulated by SNPs. E2F1 is a major element driving the expression of GR 1F transcripts, and single CpG dinucleotide methylation cannot mediate the inhibition of transcription in vitro. Also, GR first exons and 3' splice variants (GRα and GR-P) are expressed throughout the human brain with no region-specific alternative first exon usage. These data mirror the consistently low levels of methylation in the brain and the observed homogeneity throughout the studied regions. Taken together, the research presented in this thesis explored several layers of complexity in GR transcriptional regulation.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster which minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and also three different choices for ||.||, which results in a total of nine variants of the general problem. Especially with the idea of using the concept of parameterised approximation, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, due to remaining NP-hardness even for the restriction to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction to what we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in Euclidean metric space. Considering the effect of computational hardness caused by high dimensionality of the input for other related problems (curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. Remaining NP-hardness for the restriction to small constant dimensionality, however, disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
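The structural parameter "number of conflicts" can be computed directly: iterate over all index triples of a symmetric distance matrix and record every pair for which the triangle inequality fails. A minimal sketch (cubic in the number of objects, for illustration only):

import itertools
import numpy as np

def count_conflicts(d: np.ndarray) -> int:
    """Count pairs {i, j} with d[i, j] > d[i, k] + d[k, j] for some k."""
    n = d.shape[0]
    conflicts = set()
    for i, j, k in itertools.permutations(range(n), 3):
        if d[i, j] > d[i, k] + d[k, j]:
            conflicts.add((min(i, j), max(i, j)))
    return len(conflicts)

d = np.array([[0, 1, 5],
              [1, 0, 1],
              [5, 1, 0]])    # d[0,2] = 5 > d[0,1] + d[1,2] = 2
print(count_conflicts(d))    # -> 1 conflict, involving the pair {0, 2}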
Finding behavioral parameterization for a 1-D water balance model by multi-criteria evaluation
(2019)
Evapotranspiration is often estimated by numerical simulation. However, to produce accurate simulations, these models usually require on-site measurements for parameterization or calibration. We have to make sure that the model realistically reproduces both the temporal patterns of soil moisture and evapotranspiration. In this study, we combine three sources of information: (i) measurements of sap velocities; (ii) soil moisture; and (iii) expert knowledge on local runoff generation and water balance to define constraints for a “behavioral” forest stand water balance model. Aiming for a behavioral model, we adjusted soil moisture at saturation, bulk resistance parameters and the parameters of the water retention curve (WRC). We found that the shape of the WRC substantially influences the behavior of the simulation model. Here, only one model realization could be referred to as “behavioral”. All other realizations failed for at least one of our evaluation criteria: not only must transpiration and soil moisture be simulated consistently with our observations, but also the total water balance and runoff generation processes. The introduction of a multi-criteria evaluation scheme for the detection of unrealistic outputs made it possible to identify a well-performing parameter set. Our findings indicate that measuring different fluxes and state variables instead of just one, together with expert knowledge concerning runoff generation, facilitates the parameterization of a hydrological model.
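The multi-criteria selection of behavioral parameter sets can be sketched as a conjunction of boolean masks over Monte Carlo realizations: only runs passing every criterion are retained. A schematic sketch (criteria, metrics and thresholds are hypothetical):

import numpy as np

rng = np.random.default_rng(9)
n_runs = 1000
# hypothetical goodness-of-fit measures per Monte Carlo parameter set
fit_transpiration = rng.uniform(0, 1, n_runs)   # fit to sap-velocity-based transpiration
fit_soil_moisture = rng.uniform(0, 1, n_runs)   # fit to observed soil moisture
runoff_fraction = rng.uniform(0, 0.5, n_runs)   # simulated runoff share of rainfall

behavioral = (
    (fit_transpiration > 0.7)
    & (fit_soil_moisture > 0.7)
    & (runoff_fraction < 0.1)   # expert knowledge: little surface runoff at the site
)
print(f"{behavioral.sum()} of {n_runs} realizations classified as behavioral")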
This doctoral thesis includes five studies that deal with the topics work, well-being, and family formation, as well as their interaction. The studies aim to find answers to the following questions: Do workers’ personality traits determine whether they sort into jobs with performance appraisals? Does job insecurity result in lower quality and quantity of sleep? Do public smoking bans affect subjective well-being by changing individuals’ use of leisure time? Can risk preferences help to explain non-traditional family forms? And finally, are differences in out-of-partnership birth rates between East and West Germany driven by cultural characteristics that have evolved in the two separate politico-economic systems? To answer these questions, the following chapters use basic economic subjects such as working conditions, income, and time use, but also employ a range of sociological and psychological concepts such as personality traits and satisfaction measures. Furthermore, all five studies use data from the German Socio-Economic Panel (SOEP), a representative longitudinal panel of private households in Germany, and apply state-of-the-art microeconometric methods. The findings of this doctoral thesis are important for individuals, employers, and policymakers. Workers and employers benefit from knowing the determinants of occupational sorting, as vacancies can be filled more accurately. Moreover, knowing which job-related problems lead to lower well-being and potentially higher sickness absence likely increases efficiency in the workplace. The research on smoking bans and family formation in chapters 4, 5, and 6 is particularly interesting for policymakers. The results on the effects of smoking bans on subjective well-being presented in chapter 4 suggest that the impacts of tobacco control policies could be weighed more carefully. Additionally, understanding why women are willing to take the risks associated with single motherhood can help to improve policies targeting single mothers.
The dissertation includes three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation as to the nature of a comprehensive search for health information and enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify most fundamental source characteristics to provide a point of reference when it comes to relating to the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals have when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, based on which four goals (understanding, action planning, hope and reassurance) were classified. The final version of the questionnaire comprised four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples, and the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate goals of health information seeking (HIS) which are likely to inform the intention building on how to organize the search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information. Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem and emotion focused coping goals and the intention to seek comprehensively (as outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good and hypotheses with regard to specific direct and indirect effects were confirmed. Last, the findings of all three articles are synthesized, the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are determined.
Institutional and cultural determinants of speed of government responses during COVID-19 pandemic
(2021)
This article examines institutional and cultural determinants of the speed of government responses during the COVID-19 pandemic. We define speed as the marginal rate of change of the stringency index. Based on cross-country data, we find that collectivism is associated with a higher speed of government response. We also find a moderating role of trust in government: the association between individualism-collectivism and speed is stronger in countries with higher levels of trust in government. We do not find significant predictive power of democracy, media freedom, or power distance on the speed of government responses.
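To make the definition concrete, here is a minimal sketch (with hypothetical index values and an illustrative averaging window, not the paper's exact specification) of how speed as the marginal rate of stringency-index change can be computed from a daily series:

    import pandas as pd

    # Hypothetical daily stringency-index values for one country
    stringency = pd.Series(
        [0, 0, 11, 11, 28, 45, 71, 71, 80],
        index=pd.date_range("2020-03-01", periods=9, freq="D"),
    )

    # Speed as the marginal rate of stringency change: the day-to-day
    # first difference of the index
    speed = stringency.diff()

    # One summary figure per country, e.g. the mean daily change during
    # the ramp-up to the maximum (an illustrative choice)
    ramp_up = speed[stringency < stringency.max()]
    print("mean daily speed during ramp-up:", round(ramp_up.mean(), 2))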
This dissertation develops a rationale for how to use fossil data in solving biogeographical and ecological problems. It is argued that large amounts of high-quality fossil data can be used to document the evolutionary processes (the origin, development, formation and dynamics) of Arealsystems, which can be divided into six stages in North America: the Refugium Stage (before 15,000 years ago: > 15 ka), the Dispersal Stage (from 8,000 to 15,000 years ago: 8.0 - 15 ka), the Developing Stage (from 3,000 to 8,000 years ago: 3.0 - 8.0 ka), the Transitional Stage (from 1,000 to 3,000 years ago: 1 - 3 ka), the Primitive Stage (from 500 to 1,000 years ago: 0.5 - 1 ka) and the Human Disturbing Stage (during the last 500 years: < 0.5 ka). The division into these six stages is based on geostatistical analysis of the FAUNMAP database, which contains 43,851 fossil records collected from 1860 to 1994 in North America. Fossil data are among the best materials for testing the glacial refugia theory. Glacial refugia are areas where flora and fauna were preserved during the glacial period; at present they are characterized by richness in species and endemic species. This means that these (endemic) species should have been distributed purely or primarily in these areas during the glacial period, so the refugia can be identified by fossil records of that period. If this is not the case, the richness in (endemic) species may not be the result of glacial refugia. By exploring where mammals lived during the Refugium Stage (> 15 ka), seven refugia in North America can be identified: the California Refugium, the Mexico Refugium, the Florida Refugium, the Appalachia Refugium, the Great Basin Refugium, the Rocky Mountain Refugium and the Great Lake Refugium. The first five refugia coincide well with De Lattin's dispersal centers, recognized by biogeographical methods using data on modern distributions. The individuals of a species are not evenly distributed over its Arealsystem. Brown's Hot Spots Model shows that in most cases there is enormous variation in abundance within the areal of a species: in a census, zero or only a very few individuals occur at most sample locations, but tens or hundreds are found at a few sample sites. Locations where only a few individuals can be sampled in a survey are called "cool spots", and sites where tens or hundreds of individuals can be observed in a survey are called "hot spots". Many areas within the areal are uninhabited; these are called "holes". This model has direct implications for analyzing fossil data: hot spots have a much higher local population density than cool spots, so the chances of discovering fossil individuals of a species are much higher in sediments located in a "hot spot" area than in a "cool spot" area. Therefore, much higher MNIs (Minimum Numbers of Individuals) of the species should be found in fossil localities located in hot spot areas than in cool spot areas. There are only a few hot spots but many cool spots within the areal of a single species; consequently, only a few fossil sites should yield very high MNIs, whereas most other sites should yield only very low MNIs. This prediction proved true in an analysis of the 70 species in FAUNMAP with over 100 fossil records each. The temporal and spatial variation in abundance can be reconstructed from the temporospatial distribution of the MNIs of a species over its Arealsystem.
Areas with no fossil records from the last few thousand years may be holes, sites with much higher MNIs may be hot spots, and locations with low MNIs may be cool spots. Although the hot spots of many species can remain unchanged in an area over thousands of years, this study shows that a large shift of hot spots occurred mainly around 1,500-1,000 years ago, in three directions: from the west side to the east side of the Rockies, from the East of the USA to the east side of the Rockies, and from the west side of the Rockies to the Southwest of the USA. The first two directions of shift are called the Lewis and Clark pattern, which can be verified against the observations made by Lewis and Clark during their expedition in 1805-1806. The historical process behind this pattern may well explain the 200-year-old puzzle, noted by modern ecologists and biogeographers, of why big game was then abundant on the east side of the Rocky Mountains but rare on the west side. The third direction of shift is called the Bayham pattern. This pattern can be tested against the model of Late Holocene resource intensification first described by Frank E. Bayham, and the historical process creating the Bayham pattern challenges the classic explanation of Late Holocene resource intensification. An environmental change model is proposed to account for the shift of hot spots. Implications of glacial refugia and hot spot areas for wildlife management and effective conservation are discussed, and suggestions are given as to how paleontologists and zooarchaeologists can provide more valuable information for other disciplines in their future excavations and research.
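As an illustration of how such a hole/cool-spot/hot-spot classification might be operationalized on MNI data, consider the following sketch; the site names, counts, and the fixed cutoff are hypothetical, and the thesis's geostatistical analysis is not reproduced here.

    import pandas as pd

    # Hypothetical MNI counts per fossil locality for one species
    sites = pd.DataFrame({
        "site": ["A", "B", "C", "D", "E", "F"],
        "mni":  [212, 3, 0, 7, 1, 154],
    })

    HOT_CUTOFF = 100  # illustrative threshold, not a criterion from the thesis

    def classify(mni: int) -> str:
        # no records -> candidate hole; few individuals -> cool spot;
        # far above typical counts -> hot spot
        if mni == 0:
            return "hole?"
        return "hot spot" if mni >= HOT_CUTOFF else "cool spot"

    sites["class"] = sites["mni"].apply(classify)
    print(sites)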
Background and rationale: Changing working conditions demand adaptation, resulting in higher stress levels in employees. In consequence, decreased productivity, increasing rates of sick leave, and cases of early retirement lead to higher direct, indirect, and intangible costs. Aims of the Research Project: The aim of the study was to test the usefulness of a novel translational diagnostic tool, Neuropattern, for early detection, prevention, and personalized treatment of stress-related disorders. The trial was designed as a pilot study with a wait list control group. Materials and Methods: In this study, 70 employees of the Forestry Department Rhineland-Palatinate, Germany, were enrolled. Subjects were block-randomized according to the functional group of their career field and underwent Neuropattern diagnostics either immediately or after a waiting period of three months. After the diagnostic assessment, their physicians received the Neuropattern Medical Report, including the diagnostic results and treatment recommendations. Participants were informed by the Neuropattern Patient Report and were eligible for an individualized Neuropattern Online Counseling account. Results: The application of Neuropattern diagnostics significantly improved mental health and health-related behavior, and reduced perceived stress, emotional exhaustion, overcommitment, and possibly presenteeism. Additionally, Neuropattern sensitively detected functional changes in stress physiology at an early stage, thus allowing timely personalized interventions to prevent and treat stress pathology. Conclusion: The present study encouraged the application of Neuropattern diagnostics to early intervention in non-clinical populations. However, further research is required to determine the best operating conditions.
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: initially partitioning the data set into clusters and subsequently replacing all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic to obtain an integer solution, which is based on the dual information. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives. To this end, we take errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem to a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program, which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
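To make the two-step structure concrete, the following sketch shows the generic microaggregation skeleton for nominal data: greedily partition sorted rows into clusters of at least k, then replace each cluster by a representative (here the column-wise mode). The thesis replaces both steps with column generation and mixed-integer models; this toy version only illustrates the k-anonymity mechanics.

    from collections import Counter
    import pandas as pd

    def k_anonymize_nominal(df: pd.DataFrame, k: int) -> pd.DataFrame:
        """Toy microaggregation: sort-based grouping into clusters of >= k rows,
        then replace every row of a cluster by the cluster's column-wise mode.
        This greedy skeleton stands in for the thesis's column-generation
        partitioning and mixed-integer aggregation."""
        rows = df.sort_values(list(df.columns)).reset_index(drop=True)
        out = rows.copy()
        n, start = len(rows), 0
        while start < n:
            # the last cluster absorbs the remainder so every cluster has >= k rows
            end = n if n - start < 2 * k else start + k
            block = rows.iloc[start:end]
            representative = {
                col: Counter(block[col]).most_common(1)[0][0] for col in df.columns
            }
            out.iloc[start:end] = pd.DataFrame([representative] * (end - start)).values
            start = end
        return out

    data = pd.DataFrame({
        "sector": ["A", "A", "B", "B", "B", "C"],
        "region": ["N", "S", "N", "N", "S", "S"],
    })
    print(k_anonymize_nominal(data, k=3))  # each output row equals >= 2 others

After this replacement, every row coincides with at least k - 1 other rows on the published columns, which is exactly the k-anonymity requirement described above; the price is the information loss in the aggregated cells.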
Acute social and physical stress interact to influence social behavior: the role of social anxiety
(2018)
Stress has been shown to have detrimental effects on physical and mental health. Owing to differing tasks and study designs, the reported direct consequences of acute stress are wide-reaching: some studies report prosocial effects, others report increases in antisocial behavior, and still others report no effect. To control for specific effects of different stressors and to consider the role of social anxiety in stress-related social behavior, we investigated the effects of social versus physical stress on the behavior of male participants with different levels of social anxiety. In a randomized, controlled two-by-two design, we investigated the impact of social and physical stress on behavior in healthy young men. Physical and social stress each significantly increased subjective stress measures, with no interaction effect. Cortisol was significantly increased by physical stress, and heart rate was modulated by physical and social stress as well as their combination. Social anxiety modulated the subjective stress response but not the cortisol or heart rate response. With respect to behavior, our results show that social and physical stress interacted to modulate trust, trustworthiness, and sharing. While social stress and physical stress alone reduced prosocial behavior, a combination of the two stressor modalities could restore prosociality. Social stress alone reduced nonsocial risk behavior, regardless of physical stress. Social anxiety was associated with higher subjective stress responses and higher levels of trust. As a consequence, future studies will need to investigate further stressors and clarify their effects on social behavior in healthy individuals and in social anxiety disorders.
During and after application, pesticides enter the atmosphere by volatilisation and by wind erosion of particles on which the pesticide is sorbed. Measurements at application sites have revealed that sometimes more than half of the amount applied is lost into the atmosphere within a few days. The atmosphere is an important part of the hydrologic cycle that can transport pesticides from their point of application and deposit them in aquatic and terrestrial ecosystems far from their point of use. In the region of Trier, pesticides are widely used: to protect crops from pests and increase crop yields in viniculture, six to eight pesticide applications take place between May and August. The impact of these applications on the environmental pollution of the region is not yet well understood. The present study was designed to characterize the atmospheric presence, temporal patterns, transport and deposition of a variety of pesticides in the atmosphere of the area of Trier. To this purpose, rain samples were collected weekly at eight sites during the growing seasons 2000, 2001 and 2002, and air samples (gas and particle phases) were collected during the growing season 2002. Multiresidue analysis methods were developed to determine multiple classes of pesticides in rain water and in particle- and gas-phase samples. Altogether, 24 active ingredients and 3 metabolites were chosen as representative substances, with a focus on fungicides. Twenty-four of the 27 measured pesticides were detected in the rain samples; seventeen pesticides were detected in the air samples. The pesticides detected most frequently, and at the highest concentrations, both in rain and air, were compounds belonging to the class of fungicides. The insecticide methyl parathion was also detected in several rain samples, as were two substances that are banned in Germany, the herbicides atrazine and simazine. Concentration levels varied during the growing season, with the highest concentrations measured in the late spring and summer months, coinciding with application times and warmer months. Concentration levels measured in the rain samples were generally on the order of ng l-1. Though average concentrations for single substances were less than 100 ng l-1, total concentrations were considerable and in some instances well above the EU drinking water quality standard of 500 ng l-1 for total pesticides. Compared to the amounts applied for pest control, the amounts deposited by rain came to between 0.004% and 0.10% of the maximum application rates. These low pesticide inputs from precipitation to surface-water bodies are not of concern in vinicultural areas, where other sources, such as surface runoff from the treated areas and the cleaning of field crop sprayers, are more important. However, the potential impacts of these aerial pesticide inputs on non-target sites, such as organic crops and sensitive ecosystems, are as yet not known. Concentration levels in the air samples were on the order of ng m-3 at sites close to the fields where pesticides were applied, while lower values, on the order of pg m-3, were detected at the site located further away from treated fields. The measured air concentration levels found in this study do not represent a concern for human health in terms of acute risk: inhalation toxicity studies have shown that an acute potential risk only arises at air concentrations several orders of magnitude higher.
Finally, it must be kept in mind that only a small number of the chemicals applied in the area were analysed in this study. In order to better evaluate the local atmospheric load of pesticides, a wider spectrum of applied substances (including metabolites) needs to be investigated.
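To see how such deposition fractions arise, a worked unit conversion with hypothetical round numbers (not the study's measurements) is instructive: a concentration c in rain and a rainfall depth h give an areal deposition D that can be set against an application rate R.

    \[
    D = c \cdot h
      = 100\,\mathrm{ng\,L^{-1}} \times 10\,\mathrm{L\,m^{-2}}
      = 1000\,\mathrm{ng\,m^{-2}}
      = 10\,\mathrm{mg\,ha^{-1}},
    \qquad
    \frac{D}{R} = \frac{10\,\mathrm{mg\,ha^{-1}}}{1\,\mathrm{kg\,ha^{-1}}}
                = 0.001\,\%
    \]
    % 10 mm of rain equals 10 L per square metre; 1 ha = 10^4 m^2.

A single rain event with these illustrative values thus deposits on the order of a thousandth of a percent of a typical application rate, consistent in magnitude with the 0.004-0.10% range reported above for whole seasons.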
My study attempts to illustrate the generic development of the family novel in the second half of the twentieth century. At its beginning stands a preliminary classification of the various types of family fiction as they are referred to in secondary literature, which is then followed by a definition of the family novel proper. With its microscopic approach to novels featuring the American family and its (post-)postmodern variations, my study marks a first step into as yet uncharted territory. Assuming that the family novel has emerged as a result of the twentieth century's emphasis on the modern nuclear family, focuses on the family as a gestalt rather than on a single protagonist, and is concerned with issues of social and cultural significance, this study examines how the family, its forms and its conflicts are functionalized for the respective author's cultural critique. From post-war to post-millennium, family novelists have sketched the American family in various precarious conditions, and their texts are critical assessments of contemporary socioeconomic and cultural conditions. My close reading of John Cheever's The Wapshot Chronicle (1957), Don DeLillo's White Noise (1985) and Jonathan Franzen's The Corrections (2001) intends to reveal shared values as well as significant differences on both a formal and a thematic level. As my examination of the respective novels shows, authors react to social and cultural change with new functionalizations of the family in fiction. Contrary to the general assumption of literary criticism, family novels do not approach new cultural developments in a conventional or even traditionalist manner. A comparison of White Noise with The Wapshot Chronicle demonstrates that DeLillo's postmodern family novel transcends the rather nostalgic perspective of Cheever's 1950s work. Similarly, Jonathan Franzen's fin de millennium family novel The Corrections holds a post-postmodern position, which can be aptly described by Franzen's own term 'tragical realism'. The significant changes and developments of the family novel in the past five decades demonstrate the need for a continuous reassessment of the genre, and in this respect, my study is merely a beginning.
Objective: Only 20-25% of the variance for the two to four-fold increased risk of developing breast cancer among women with family histories of the disease can be explained by known gene mutations. Other factors must exist. Here, a familial breast cancer model is proposed in which overestimation of risk, general distress, and cancer-specific distress constitute the type of background stress sufficient to increase unrelated acute stress reactivity in women at familial risk for breast cancer. Furthermore, these stress reactions are thought to be associated with central adiposity, an independent, well-established risk factor for breast cancer. Hence, stress, through its hormonal correlates and possible associations with central adiposity, may play a crucial role in the etiology of breast cancer in women at familial risk for the disease. Methods: Participants were 215 healthy working women with first-degree relatives diagnosed before (high familial risk) or after age 50 (low familial risk), or without breast cancer in first-degree relatives (no familial risk). Participants completed self-report measures of perceived lifetime breast cancer risk, intrusive thoughts and avoidance about breast cancer (Impact of Event Scale), negative affect (Profile of Mood States), and general distress (Brief Symptom Inventory). Anthropometric measurements were taken. Urine samples were collected during work, at home, and during sleep for the assessment of cortisol responses in the naturalistic setting, where work was conceptualized as the stressful time of the day. Results: A series of analyses indicated a gradient increase of cortisol levels in response to the work environment from no, to low, to high familial risk of breast cancer. When adding breast cancer intrusions to the model with familial risk status predicting work cortisol levels, significant intrusion effects emerged, rendering the familial risk group non-significant. However, due to a lack of association between intrusions and cortisol in the low and high familial risk groups separately, as well as a significant difference between low and high familial risk on intrusions but not on work cortisol levels, full mediation of familial risk group effects on work cortisol by intrusions could not be established. A separate analysis indicated increased levels of central but not general adiposity in women at high familial risk of breast cancer compared to the low and no risk groups. There were no significant associations between central adiposity and cortisol excretion. Conclusion: A hyperactive hypothalamus-pituitary-adrenal axis with a more pronounced excretion of its end product cortisol, as well as elevated levels of central but not overall adiposity, in women at high familial risk for breast cancer may indicate an increased health risk that extends beyond the increased breast cancer risk for these women.
The startle response in psychophysiological research: modulating effects of contextual parameters
(2013)
Startle reactions are fast, reflexive, and defensive responses which protect the body from injury in the face of imminent danger. The underlying reflex is basic and can be found in many species. Even though it consists of only a few synapses located in the brain stem, the startle reflex offers a valuable research method for human affective, cognitive, and psychological research. This is because of the moderating effects of higher mental processes, such as attention and emotion, on the response magnitude: affective foreground stimulation and directed attention are validated paradigms in startle-related research. This work presents findings from three independent research studies that deal with (1) the application of the established "affective modulation of startle" paradigm to the novel setting of attractiveness and human mating preferences, (2) the question of how different components of the startle response are affected by a physiological stressor, and (3) how startle stimuli affect visual attention towards emotional stimuli. While the first two studies treat the startle response as a dependent variable by measuring its response magnitude, the third study uses startle stimuli as an experimental manipulation and investigates their potential effects on a behavioural measure. The first chapter of this thesis describes the basic mechanisms of the startle response as well as the body of research that sets the foundation of startle research in psychophysiology. It provides the rationale for the presented studies and offers a short summary of the obtained results. Chapters two to four present primary research articles that are published or in press. At the beginning of each chapter, the contribution of all authors is explained. The references for all chapters are listed at the end of this thesis. The overall scope of this thesis is to show how the human startle response is modulated by a variety of factors, such as the attractiveness of a potential mating partner or the exposure to a stressor. In conclusion, the magnitude of the startle response can serve as a measure for such psychological states and processes. Beyond the involuntary, physiological startle reflex, startle stimuli also affect intentional behavioural responses, which we could demonstrate for eye movements in a visual attention paradigm.
The digital progress of recent decades rests to a large extent on the innovative power of young, aspiring companies. While these companies are united by their high degree of innovativeness, they simultaneously face a high demand for financial resources in order to put their planned innovation and growth objectives into practice. Since such companies can often show few or no assets, revenues, or profits, raising external capital is frequently difficult or even impossible. Out of this circumstance, the business model of risk financing, so-called venture capital, emerged in the middle of the twentieth century. Venture capitalists invest in promising young companies, support them in their growth, and after a fixed period sell their shares, ideally at a multiple of their original value. Numerous young companies apply for investments from these venture capitalists, but only a very small number receive them. To identify the most promising companies, investors screen the applications against various criteria, so that numerous companies already drop out of the pool of potential investment targets in the first step of the application phase. Previous research discusses which criteria move investors to invest. Building on this, this dissertation pursues the goal of gaining a deeper understanding of which factors influence investors' decision-making. In particular, it examines how personal characteristics of the investors, as well as of the company founders, affect the investment decision. These investigations are complemented by an analysis of the effect of founders' digital presence on venture capitalists' decision-making. As its second goal, this dissertation seeks insights into the effects of a successful investment on the founder. In total, this dissertation comprises four studies, which are described in more detail below.
Chapter 2 examines the extent to which certain human capital characteristics of an investor affect his or her decision behavior. With the help of prior interviews and literature reviews, a total of seven criteria were identified that venture capital investors use in their decision-making. Subsequently, 229 investors took part in a conjoint experiment, by means of which it could be shown how important the respective criteria are for the investment decision. Of particular interest is how the importance of the criteria differs depending on the investors' human capital characteristics. It can be shown that the importance of the criteria varies with the investors' educational background and experience. For example, investors with a higher educational degree and investors with entrepreneurial experience place considerably more weight on the international scalability of the companies. The importance of the criteria also differs depending on the field of education: investors trained in the natural sciences, for instance, place a much stronger focus on the value added of the product or service. Furthermore, investors with more investment experience rate the experience of the management team as considerably more important than investors with less investment experience. These results enable founders to target their applications for venture capital financing more precisely, for instance by analyzing the professional background of potential investors and adapting the application documents accordingly, e.g. by placing greater emphasis on particularly relevant criteria.
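For readers unfamiliar with conjoint analysis, the following sketch shows, with entirely hypothetical profiles and ratings, how part-worth utilities and relative attribute importances can be recovered from such an experiment; the chapter's actual seven criteria and estimation procedure are not reproduced here.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical conjoint data: each row is a start-up profile rated by an
    # investor; attributes are dummy-coded levels (a tiny invented subset of
    # criteria in the spirit of the chapter).
    profiles = pd.DataFrame({
        "scalability_high": [1, 0, 1, 0, 1, 0, 1, 0],
        "team_experienced": [1, 1, 0, 0, 1, 1, 0, 0],
        "value_added_high": [1, 1, 1, 1, 0, 0, 0, 0],
    })
    ratings = np.array([9, 7, 6, 4, 7, 6, 4, 2])

    # Part-worth utilities from a simple main-effects model; the relative
    # importance of an attribute is its share of the total utility range.
    model = LinearRegression().fit(profiles, ratings)
    ranges = np.abs(model.coef_)
    importance = ranges / ranges.sum()
    for name, imp in zip(profiles.columns, importance):
        print(f"{name}: {imp:.0%}")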
The study presented in Chapter 3 draws on data from the same conjoint experiment as Chapter 2, but focuses on the difference between investors from the USA and investors from continental Europe. To this end, subsamples were created in which 128 experiment participants are located in the USA and 302 in continental Europe. The analysis shows that US investors, compared to investors in continental Europe, place a significantly stronger focus on the companies' revenue growth, whereas continental European investors place a considerably stronger focus on the companies' international scalability. To better interpret the results of the analysis, they were subsequently discussed with four American and seven European investors. The European investors confirmed the importance of high international scalability, pointing to the sometimes small size of European countries and the resulting pressure to scale internationally quickly in order to achieve satisfactory growth rates. The comparatively weaker focus on revenue growth in Europe was attributed to a lack of funds for rapid expansion, while the strong focus of US investors on revenue growth was explained by the higher tendency toward an IPO in the USA, where high revenues serve as a value driver. The results of this chapter put founders in a position to align their applications more closely with the most important criteria of potential investors and thus to increase the probability of a successful investment decision. Furthermore, the results offer investors who participate in cross-border syndicated investments the opportunity to better understand the preferences of the other investors and to better align the investment criteria with potential partners.
Chapter 4 examines whether certain character traits of the so-called Schumpeterian entrepreneur influence the probability of a second venture capital investment. For this purpose, messages posted by founders on Twitter were used, together with information on funding rounds available on the platform Crunchbase. In total, more than two million tweets by 3,313 founders were analyzed with the help of text analysis software. The results of the study indicate that some traits typical of Schumpeterian founders increase the chances of a further investment, while others have no or negative effects. Founders who display strong optimism and their entrepreneurial vision on Twitter increase their chances of a second venture capital financing, whereas an excessive striving for achievement reduces them. These results are of high practical relevance for founders in search of venture capital: they can manage their virtual appearance ("digital identity") in a more targeted way in order to increase the probability of a further investment.
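A minimal sketch of the analysis logic, with hypothetical linguistic scores, a tiny invented sample, and plain logistic regression rather than the study's exact specification, could look as follows.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-founder linguistic scores (e.g. shares of optimism-,
    # vision- and achievement-striving-related words in their tweets) and an
    # indicator of whether a second VC round followed.
    X = np.array([
        [0.04, 0.03, 0.01],
        [0.01, 0.00, 0.04],
        [0.05, 0.04, 0.02],
        [0.02, 0.01, 0.05],
        [0.06, 0.02, 0.01],
        [0.01, 0.01, 0.06],
    ])
    second_round = np.array([1, 0, 1, 0, 1, 0])

    # Sign of each association, in the spirit of the chapter's finding:
    # optimism and vision raise, excessive achievement striving lowers the odds.
    model = LogisticRegression().fit(X, second_round)
    for name, coef in zip(["optimism", "vision", "achievement"], model.coef_[0]):
        print(f"{name}: {coef:+.2f}")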
Finally, Chapter 5 examines how founders' digital identity changes after they have received a successful venture capital investment. For this purpose, both the Twitter data and the Crunchbase data collected for the study in Chapter 4 were used. Using text analysis and panel data regressions, the tweets of 2,094 founders before and after receipt of the investment were examined. It can be shown that receiving a venture capital investment increases founders' self-confidence, positive emotions, professionalization, and leadership qualities, while at the same time the authenticity of the messages written by the founders decreases. Using interaction effects, it can further be shown that the increase in self-confidence is positively moderated by the reputation of the investor, while the amount of the investment negatively moderates authenticity. These findings give investors the opportunity to better understand founders' development process after a successful investment, putting them in a position to better monitor their founders' activities on social media platforms and, if necessary, to support them in adjusting these activities.
The studies of this dissertation presented in Chapters 2 to 5 thus contribute to a better understanding of decision-making in the venture capital process. The current state of research is extended by insights concerning the influence of the characteristics of both investors and founders, and it is also shown how the investment can affect the founder himself or herself. The implications of the results, as well as limitations and opportunities for future research, are described in more detail in Chapter 6. Since the methods and data used in this dissertation have only been employed in the context of venture capital research, or indeed been available at all, for a few years, the dissertation lends itself as a basis for further research.
Stress has been considered one of the most relevant factors promoting aggressive behavior. Animal and human pharmacological studies have revealed the stress hormones corticosterone (in rodents) and cortisol (in humans) to constitute a particularly important neuroendocrine determinant in facilitating aggression and, beyond that, presumably in its continuation and escalation. Moreover, cortisol-induced alterations of social information processing, as well as of cognitive control processes, have been hypothesized as possible influencing factors in the stress-aggression link. So far, the immediate impact of a preceding stressor, and thereby a stress-induced rise of cortisol, on aggressive behavior, as well as on higher-order cognitive control processes and social information processing in this context, has gone mostly unheeded. The present thesis aimed to extend the hitherto findings on stress and aggression in this regard. For this purpose, two psychophysiological studies with healthy adults were carried out, both using the socially evaluated cold pressor test as an acute stress induction. In addition to behavioral data and subjective reports, event-related potentials were measured, and acute levels of salivary cortisol were collected, on the basis of which stressed participants were divided into cortisol responders and nonresponders. Study 1 examined the impact of an acute stress-induced cortisol increase on inhibitory control and its neural correlates. 41 male participants were randomly assigned to the stress procedure or to a non-stressful control condition. Beforehand and afterwards, participants performed a Go/Nogo task with visual letters to measure response inhibition. The effect of an acute stress-induced cortisol increase on covert and overt aggressive behavior, and on the processing of provoking stimuli within the aggressive encounter, was investigated in Study 2. Moreover, this experiment examined the combined impact of stress and aggression on ensuing affective information processing. 71 male and female participants were exposed either to the stress or to the control condition. Following this, half of each group received high or low levels of provocation during the Taylor Aggression Paradigm. At the end of the experiment, a passive viewing paradigm with affective pictures depicting positive, negative, or aggressive scenes with either humans or objects was administered. The results revealed that men were not affected by a stress-induced rise in cortisol on the behavioral level, showing neither impaired response inhibition nor enhanced aggressive behavior. In contrast, women showed enhanced overt and covert aggressive behavior under a surge of endogenous cortisol, confirming previous results, albeit only in the case of high provocation and only up to the level of the control group. Unlike this rather moderate impact on behavior, cortisol showed a distinct impact on the neural correlates of information processing during inhibitory control, the processing of aggression-eliciting stimuli, and the processing of emotional pictures, in both men and women. Here, a stress-induced increase of cortisol resulted in enhanced N2 amplitudes to Go stimuli, whereas P2 amplitudes to both stimulus types and N2 amplitudes to Nogo stimuli remained unchanged, indicating an overcorrection and increased caution of response activation in favor of successful inhibitory control. The processing of aggression-eliciting stimuli during the aggressive encounter was altered by stress in a complex manner, differently for women and men.
Under increased cortisol levels, the frontal or parietal P3 amplitude patterns were either diminished or reversed in the case of high provocation compared to the control group and to cortisol nonresponders, indicating a desensitization towards aggression-eliciting stimuli in men, but a more elaborate processing of such stimuli in women. Moreover, stress-induced cortisol and provocation jointly altered subsequent affective information processing at early as well as later stages of the information processing stream. Again, increased levels of cortisol led to oppositely directed amplitudes in the case of high provocation relative to the control group and cortisol nonresponders, with enhanced N2 amplitudes in men and reduced P3 and LPP amplitudes in men and women for all affective pictures, suggesting an initially enhanced emotional reactivity in men, but subsequently reduced motivational attention and enhanced emotion regulation in both men and women. As a result, these findings confirm the relevance of HPA activity in the elicitation and persistence of human aggressive behavior. Moreover, they reveal the significance of compensatory and emotion-regulatory strategies and mechanisms in response to stress and provocation, endorsing the relevance of social information and cognitive control processes. Still, more research is needed to clarify the conditions that lead to the facilitation of aggression and the compensatory mechanisms by which this is prevented.
In the splitting theory of locally convex spaces we investigate evaluable characterizations of the pairs (E, X) of locally convex spaces such that each exact sequence 0 -> X -> G -> E -> 0 of locally convex spaces splits, i.e. either X -> G has a continuous linear left inverse or G -> E has a continuous linear right inverse. In the thesis at hand we deal with the splitting of short exact sequences of so-called PLH spaces, which are defined as projective limits of strongly reduced spectra of strong duals of Fréchet-Hilbert spaces. This class of locally convex spaces contains most of the spaces of interest for applications in the theory of partial differential operators, such as the space of Schwartz distributions, the space of real analytic functions, and various spaces of ultradifferentiable functions and ultradistributions. It also contains non-Schwartz spaces such as B(2,k,loc)(Ω) and spaces of smooth and square integrable functions that are not covered by the current theory for PLS spaces. We prove a complete characterization of the above problem in the case of X being a PLH space and E either being a Fréchet-Hilbert space or the strong dual of one, by conditions of type (T). To this end, we establish the full homological toolbox of Yoneda Ext functors in exact categories for the category of PLH spaces, including the long exact sequence, which in particular involves a thorough discussion of the proper concept of exactness. Furthermore, we exhibit the connection to the parameter dependence problem via the Hilbert tensor product for hilbertizable locally convex spaces. We show that the Hilbert tensor product of two PLH spaces is again a PLH space, which in particular proves the positive answer to Grothendieck's problème des topologies. In addition, we give a complete characterization of the vanishing of the first derivative of the functor proj for tensorized PLH spectra if one of the PLH spaces E and X meets some nuclearity assumptions. To apply our results to concrete cases, we establish sufficient conditions of (DN)-(Ω) type and apply them to the parameter dependence problem for partial differential operators with constant coefficients on B(2,k,loc)(Ω) spaces, as well as to the smooth and square integrable parameter dependence problem. Concluding, we give a complete solution of all the problems under consideration for PLH spaces of Köthe type.
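For reference, the splitting condition described in the first sentence can be stated in display form; this is the standard formulation of what the abstract describes in words, not additional material from the thesis:

    \[
    0 \longrightarrow X \xrightarrow{\ \iota\ } G \xrightarrow{\ q\ } E \longrightarrow 0
    \]
    % The sequence splits iff there is a continuous linear left inverse
    % L : G -> X with L \circ \iota = \mathrm{id}_X, or equivalently a
    % continuous linear right inverse R : E -> G with q \circ R = \mathrm{id}_E.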
Chapter 2: Using data from the German Socio-Economic Panel, this study examines the relationship between immigrant residential segregation and immigrants' satisfaction with the neighborhood. The estimates show that immigrants living in segregated areas are less satisfied with the neighborhood. This is consistent with the hypothesis that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Our result holds true even when controlling for other influences such as household income and quality of the dwelling. It also holds true in fixed effects estimates that account for unobserved time-invariant influences. Chapter 3: Using survey data from the German Socio-Economic Panel, this study shows that immigrants living in segregated residential areas are more likely to report discrimination because of their ethnic background. This applies both to segregated areas where most neighbors are immigrants from the same country of origin as the surveyed person and to segregated areas where most neighbors are immigrants from other countries of origin. The results suggest that housing discrimination rather than self-selection plays an important role in immigrant residential segregation. Chapter 4: Using data from the German Socio-Economic Panel (SOEP) and administrative data from 1996 to 2009, I investigate whether right-wing extremism of German residents is affected by the ethnic concentration of foreigners living in the same residential area. My results show a positive but insignificant relationship between ethnic concentration at the county level and the probability of extreme right-wing voting behavior for West Germany. However, due to potential endogeneity issues, I additionally instrument the share of foreigners in a county with the share of foreigners in each federal state (following an approach of Dustmann/Preston 2001). I find evidence for the interethnic contact theory, which predicts a negative relationship between the foreigners' share and right-wing voting. Moreover, I analyze the moderating role of education and the influence of cultural traits on this relationship. Chapter 5: Using data from the Socio-Economic Panel from 1998 to 2009 and administrative data on regional ethnic diversity, I show that ethnic diversity significantly inhibits people's political interest and participation in political organizations in West Germany. People seem to withdraw from political participation if exposed to more ethnic diversity, which is particularly relevant with respect to the ongoing integration process of the European Union and the increasing transfer of legislative power from the national to the European level. The results are robust when an instrumental variable strategy suggested by Dustmann and Preston (2001) is used to take into account that ethnic diversity measured at a local spatial level could be endogenous due to residential sorting. Interestingly, participation in non-political organizations is positively affected by ethnic diversity once selection bias is corrected for.
The main achievement of this thesis is an analysis of the accuracy of computations with Loader's algorithm for the binomial density. This analysis could later be used for a theorem about the numerical accuracy of algorithms that compute rectangle probabilities for scan statistics of a multinomially distributed random variable. An example that illustrates the practical use of probabilities for scan statistics arises in epidemiology: let n patients arrive at a clinic in d = 365 days, each patient arriving with probability 1/d on each of these d days, independently of all other patients. Knowing the probability that there exist 3 adjacent days in which, together, more than k patients arrive helps to decide, after observing the data, whether there is a cluster that we would not expect to have occurred randomly, i.e. for which we suspect there must be a reason. Formally, this epidemiological example can be described by a multinomial model. As multinomially distributed random variables are examples of Markov increments, a fact already used implicitly by Corrado (2011) to compute the distribution function of the multinomial maximum, we can use a generalized version of Corrado's algorithm to compute the probability described in our example. To compute its result, the algorithm for rectangle probabilities for Markov increments always uses transition probabilities of the corresponding Markov chain. In the multinomial case, these transition probabilities are binomial probabilities. Therefore, we analyze the accuracy of Loader's algorithm for the binomial density, which is used, for example, by the statistical software R. With the help of accuracy bounds for the binomial density we would be able to derive accuracy bounds for the computation of rectangle probabilities for scan statistics of multinomially distributed random variables. To assess how sharp the derived accuracy bounds are, they can be compared in examples to rigorous upper and lower bounds obtained by interval-arithmetic computations.
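The epidemiological example can also be checked by brute-force simulation. The following sketch estimates the cluster probability by Monte Carlo; it is a plausibility tool, not Corrado's transition-probability algorithm, and the value of k is an arbitrary illustrative choice.

    import numpy as np

    rng = np.random.default_rng(0)

    # n patients arrive over d = 365 days, uniformly and independently;
    # estimate the probability that some window of 3 adjacent days
    # receives more than k patients in total.
    n, d, k, reps = 365, 365, 6, 20_000          # k chosen for illustration

    counts = rng.multinomial(n, [1.0 / d] * d, size=reps)       # (reps, d)
    windows = counts[:, :-2] + counts[:, 1:-1] + counts[:, 2:]  # 3-day sums
    prob = (windows.max(axis=1) > k).mean()
    print(f"estimated P(some 3-day cluster exceeds {k}): {prob:.4f}")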
In the first part of this work we generalize a method of building optimal confidence bounds provided in Buehler (1957) by specializing an exhaustive class of confidence regions inspired by Sterne (1954). The resulting confidence regions, also called Buehlerizations, are valid in general models and depend on a 'designated statistic' that can be chosen according to some desired monotonicity behaviour of the confidence region. For a fixed designated statistic, the family of confidence regions thus obtained, indexed by their confidence level, is nested. Buehlerizations furthermore have the optimality property of being the smallest (w.r.t. set inclusion) confidence regions that are increasing in their designated statistic. The theory is eventually applied to normal, binomial, and exponential samples. The second part deals with the statistical comparison of pairs of diagnostic tests and establishes relations 1. between the sets of lower confidence bounds, 2. between the sets of pairs of comparable lower confidence bounds, and 3. between the sets of admissible lower confidence bounds in various models for diverse parameters of interest.
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influence of person-related variables. Furthermore, the transfer of knowledge is another crucial domain in educational research. In this dissertation, I investigated these three processes through secondary analyses. Secondary analyses accommodate the breadth of each field and allow for more general interpretations. The dissertation includes three meta-analyses: the first reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer: knowledge types are interrelated, motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications as well as methodological and content-related ideas for further research are provided.
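As a point of reference for how meta-analytic aggregation works in general, the textbook inverse-variance pooling step is shown below; this is the generic formula, not the dissertation's specific cross-lagged or mediation models.

    \[
    \hat{\theta} = \frac{\sum_{i=1}^{m} w_i \,\hat{\theta}_i}{\sum_{i=1}^{m} w_i},
    \qquad w_i = \frac{1}{v_i},
    \qquad \operatorname{Var}\bigl(\hat{\theta}\bigr) = \frac{1}{\sum_{i=1}^{m} w_i}
    \]
    % \hat{\theta}_i is the effect size of study i and v_i its sampling
    % variance; a random-effects model would add a between-study variance
    % component \tau^2 to each v_i.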
Laboratory landslide experiments enable the observation of specific properties of these natural hazards. However, these observations are limited by traditional techniques: frequently used high-speed video analysis and wired sensors (e.g. displacement). These techniques lead to the drawback that either only the surface and 2D profiles can be observed or wires confine the motion behaviour. In contrast, an unconfined observation of the total spatiotemporal dynamics of landslides is needed for an adequate understanding of these natural hazards.
The present study introduces an autonomous and wireless probe to characterize motion features of single clasts within laboratory-scale landslides. The Smartstone probe is based on an inertial measurement unit (IMU) and records acceleration and rotation at a sampling rate of 100 Hz. The recording ranges are ±16 g (accelerometer) and ±2000° s−1 (gyroscope). The plastic tube housing is 55 mm long with a diameter of 10 mm. The probe is controlled, and its data are read out, via active radio frequency identification (active RFID) technology. Due to this technique, the probe works under low-power conditions, enabling the use of small button cell batteries and minimizing its size.
Using the Smartstone probe, the motion of single clasts (gravel size, median particle diameter d50 of 42 mm) within approx. 520 kg of a uniformly graded pebble material was observed in a laboratory experiment. Single pebbles were equipped with probes and placed embedded and superficially in or on the material. In a first analysis step, the data of one pebble are interpreted qualitatively, allowing for the determination of different transport modes, such as translation, rotation and saltation. In a second step, the motion is quantified by means of derived movement characteristics: the analysed pebble moves mainly in the vertical direction during the first motion phase with a maximal vertical velocity of approx. 1.7 m s−1. A strong acceleration peak of approx. 36 m s−2 is interpreted as a pronounced hit and leads to a complex rotational-motion pattern. In a third step, displacement is derived and amounts to approx. 1.0 m in the vertical direction. The deviation compared to laser distance measurements was approx. −10 %. Furthermore, a full 3D spatiotemporal trajectory of the pebble is reconstructed and visualized supporting the interpretations. Finally, it is demonstrated that multiple pebbles can be analysed simultaneously within one experiment. Compared to other observation methods Smartstone probes allow for the quantification of internal movement characteristics and, consequently, a motion sampling in landslide experiments.
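As an illustration of how such movement characteristics follow from raw IMU data, the following sketch double-integrates a hypothetical gravity-corrected vertical acceleration trace sampled at the probe's 100 Hz; real processing would additionally require rotation of sensor axes into world coordinates and drift correction, and the trace itself is invented, not the experiment's data.

    import numpy as np

    FS = 100.0          # Smartstone sampling rate (Hz)
    dt = 1.0 / FS

    # Hypothetical gravity-corrected vertical acceleration of one pebble:
    # ~0.17 s of unobstructed downward motion (free-fall-like) as a stand-in
    # for a recorded trace; real data would also show impact peaks such as
    # the ~36 m/s^2 hit described above.
    a_z = -9.81 * np.ones(17)

    # Velocity and displacement by cumulative trapezoidal integration
    v_z = np.concatenate([[0.0], np.cumsum((a_z[1:] + a_z[:-1]) / 2.0) * dt])
    s_z = np.concatenate([[0.0], np.cumsum((v_z[1:] + v_z[:-1]) / 2.0) * dt])

    print(f"peak vertical speed: {abs(v_z).max():.2f} m/s")   # ~1.6 m/s
    print(f"vertical displacement: {abs(s_z[-1]):.2f} m")     # ~0.13 m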
"Culture", in addition to its ethnic signification, can also express various groups' and communities' political and economic situation in society. As well as signifying the accommodation of ethnic diversity, the integration of dissimilar cultures in South Africa has to do with both the former oppressors and the formerly oppressed coming to terms with the oppression of the past, and with the equitable distribution of material means. Constitutional and other legal means have been designed to facilitate a process of integration dealing with the abovementioned issues. Some of these measures will be looked at. The speaker will argue that the integration of different cultures in South Africa cannot and will not be achieved if the law is invoked, in a strong arm fashion, trying to concoct a melting pot. The law can do no more than aiding the facilitation of a process of consolidation as precondition to nation building. Deep-seated, cultural differences among various sections of the population cannot and should not be denied or simply thought away.
As the oldest genre in New Zealand literature written in English, poetry always played a significant role in the country's literary debate and was generally considered to be an indicator of the country's cultural advancement. Throughout the 20th century, the question of home, of where it is and what it entails, became a crucial issue in discussing a distinct New Zealand sense of identity and in strengthening its independent cultural status. The establishment of a national sense of home was thus of primary concern, and poetry was regarded as the cultural marker of New Zealand's independence as a nation. In this politically motivated cultural debate, the writing of women was only considered on the margin, largely because their writing was considered too personal and too intimately tied together with daily life, especially domestic life, as to be able to contribute to a larger cultural statement. Such criticism built on gender role stereotypes, like for instance women's roles as mothers and housewives in the 1950s. The strong alignment of women with the home environment is not coincidental but a construct that was, and still is, predominantly shaped by white patriarchal ideology. However, it is in particular women's, both Pakeha and Maori, thorough investigation into the concept of home from within New Zealand's society that bears the potential for revealing a more profound relationship between actual social reality and the poetic imagination. The close reading of selected poems by Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm in this thesis reveals the ways in which New Zealand women of different backgrounds subvert, transcend and deconstruct such paradigms through their poetic imagination. Bethell, Stanley, Edmond and Sturm position their concepts of home at the crossroads between the public and the private realm. Their poems explore the correspondence between personal and national concerns and assess daily life against the backdrop of New Zealand's social development. Such complex socio-cultural interdependence has not been paid sufficient attention to in literary criticism, largely because a suitable approach to capturing the complexity of this kind of interconnectedness was lacking. With Spaces of Overlap and Spaces of Mediation this thesis presents two critical models that seek to break the tight critical frames in the assessment of poetic concepts of home. Both notions are based on a contextualised approach to the poetic imagination in relation to social reality and seek to carve out the concept of home in its interconnected patterns. Eventually, this approach helps to comprehend the ways in which women's intimate negotiations of home translate into moments of cultural insight and transcend the boundaries of the individual poets' concerns. The focus on women's (re)negotiations of home counteracts the traditionally male perspective on New Zealand poetry and provides a more comprehensive picture of New Zealand's cultural fabric. In highlighting the works of Ursula Bethell, Mary Stanley, Lauris Edmond and J.C. Sturm, this thesis not only emphasises their individual achievements but makes clear that a traditional line of New Zealand women's poetry exists that has been neglected far too long in the estimation of New Zealand's literary history.
Estimation, and therefore prediction, in both traditional statistics and machine learning, often encounters problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of the additional noise on the estimation procedure depend on the specific probability law for subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein: First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in order to decide on the reliability of survey data. When the survey design is complex and an estimated total is required for a large number of variables, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied here for the first time both theoretically and empirically.
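A minimal sketch of the generalized-variance-function idea is shown below, with hypothetical totals and relative variances and one classical functional form (relvar ≈ a + b/t); the thesis's superpopulation-based synthesis is not reproduced.

    import numpy as np

    # Hypothetical (total, relative variance) pairs for many survey variables,
    # e.g. obtained once from a design-based or resampling variance estimator
    totals = np.array([1.2e4, 5.0e4, 3.1e5, 9.0e5, 2.4e6])
    relvar = np.array([8.1e-3, 2.2e-3, 4.0e-4, 1.6e-4, 7.0e-5])

    # One classical GVF form: relvar(t) ~= a + b / t, fitted by least squares
    X = np.column_stack([np.ones_like(totals), 1.0 / totals])
    a, b = np.linalg.lstsq(X, relvar, rcond=None)[0]

    # Predict the variance of a new estimated total without a fresh
    # design-based computation
    t_new = 4.0e5
    var_new = (a + b / t_new) * t_new**2
    print(f"GVF-predicted variance for total {t_new:.0f}: {var_new:.3g}")

Once fitted on a subset of variables, the same function serves all remaining totals, which is exactly the labor-saving appeal described above.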
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated in terms of a Monte-Carlo simulation and then applied to real world business data.
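For orientation, the core of the self-organizing map algorithm (winner search plus neighborhood-weighted update on a toy one-dimensional lattice) can be sketched as follows; the thesis's Markov-random-field link and density estimator go beyond this sketch, and all data and hyperparameters here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data and a small 1D map of code-book vectors
    data = rng.normal(size=(500, 2))
    grid = np.arange(10)                 # node positions on the map lattice
    codebook = rng.normal(size=(10, 2))  # one weight vector per node

    def train_som(data, codebook, epochs=20, lr0=0.5, sigma0=3.0):
        for epoch in range(epochs):
            lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
            sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
            for x in data:
                # best-matching unit: node whose weights are closest to x
                bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
                # Gaussian neighborhood on the lattice pulls nearby nodes along
                h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))
                codebook += lr * h[:, None] * (x - codebook)
        return codebook

    codebook = train_som(data, codebook)
    print(codebook.round(2))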