While women's growing contribution to entrepreneurship is irrefutable, gender disparity in entrepreneurship remains a reality in almost all nations. Its social and economic consequences make women's entrepreneurship an important area for scholars and governments. In attempts to explain this gender disparity, academic scholars have evaluated various factors and recognised perceptual variables as having outstanding explanatory value for understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts for women's entrepreneurial propensity. To this end, it examines the impact of perceptual variables in different nations. It also proposes the connections between entrepreneurial perceptions, women's entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how individuals' perception of entrepreneurial networks influences their decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models one of the most influential factors in women's inclination towards entrepreneurship; a systematic analysis therefore makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systematic overview of prior studies. Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past work has focused on the different types of role models, the stage of life at which exposure to role models occurs, and the context of the exposure. Current discourse argues that women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although research on women's entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables shape women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers investigating the influence of perceptual variables; it showed that a lower rate of women's entrepreneurship is associated with lower levels of entrepreneurial networks, perceived entrepreneurial capability and opportunity evaluation, and with a higher fear of entrepreneurial failure. This dissertation therefore replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. The research draws on data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables: women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation, and their higher fear of failure, lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women's entrepreneurial propensity, Chapter 4 conducts two studies based on the replicated research. Extension 1 generalises the results of KMS by applying the same analysis to more recent data, covering 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity has become significantly weaker, the gender gap still favours men. Similarly to the replicated study, however, this research reveals that perceptual factors explain a large part of the gender disparity. To strengthen the prior empirical evidence, Extension 2 applies the same measures and analysis in a more global setting, using a sample of 1,029,863 individuals from 71 countries (2011-2016). With developing countries included, gender disparity in entrepreneurial propensity decreases significantly. The study reveals that while the relative importance of the perceptions' influence differs significantly across nations, perceptions have a worldwide effect. Moreover, this research finds that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is about two to one; more precisely, a higher level of economic development weakens the impact of perceptions on women's entrepreneurial propensity.
Whereas prior scholars have increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a well-established key driver of a firm's sustainability, competitive capability and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter shows that entrepreneurial perceptions are main determinants of women's decision to offer a new product or service, and that women's innovative entrepreneurship is associated with a country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insight into the interaction between perceptions and women's entrepreneurship. Its results have implications for policymakers and practitioners, who may find it helpful to view women's entrepreneurship in terms of its systemic challenges. Formal and informal barriers affect women's entrepreneurial perceptions and can differ from one country to another. It is therefore crucial to design operational plans that mitigate formal and stereotypical challenges so that more women are able to start a business, particularly in developing countries, where women comprise a significantly smaller portion of the labour market. Such policies could write the "rules of the game" in a way that enhances women's propensity towards entrepreneurship.
The goal of this thesis is to transfer the logarithmic barrier approach, which has led to very efficient interior-point methods for convex optimization problems in recent years, to convex semi-infinite programming problems. Based on a reformulation of the constraints into a nondifferentiable form, this can be done directly for convex semi-infinite programming problems with nonempty compact sets of optimal solutions. However, by means of the involved max-term, this reformulation leads to nondifferentiable barrier problems, which can be solved with an extension of a bundle method of Kiwiel. This extension makes it possible to deal with the inexact objective values and inexact subgradient information that occur due to the inexact evaluation of the maxima. Nevertheless, we are able to prove convergence results similar to those for the logarithmic barrier approach in finite optimization. In the further course of the thesis, the logarithmic barrier approach is coupled with the proximal point regularization technique in order to solve ill-posed convex semi-infinite programming problems as well. Moreover, this coupled algorithm generates sequences converging to an optimal solution of the given semi-infinite problem, whereas the pure logarithmic barrier method only produces sequences whose accumulation points are such optimal solutions. If certain additional conditions are fulfilled, we are further able to prove convergence rate results up to linear convergence of the iterates. Finally, besides hints for the implementation of the methods, we present numerous numerical results for model examples as well as applications in finance and digital filter design.
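In symbols (with notation chosen here for illustration: convex objective f, constraint function g over the index set T), the reformulation and the resulting barrier problems described above read:

    \[
    \min_{x}\; f(x) \quad \text{s.t.} \quad g(x,t) \le 0 \;\; \forall t \in T
    \qquad\Longleftrightarrow\qquad
    \min_{x}\; f(x) \quad \text{s.t.} \quad G(x) := \max_{t \in T} g(x,t) \le 0,
    \]
    \[
    \min_{x}\; f(x) - \mu \ln\bigl(-G(x)\bigr), \qquad \mu \downarrow 0,
    \]

where the max-term makes G, and hence the barrier function, nondifferentiable; this is the nonsmoothness that the extended bundle method has to handle.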
Many combinatorial optimization problems on finite graphs can be formulated as conic convex programs, e.g. the stable set problem, the maximum clique problem or the maximum cut problem. In particular, many NP-hard problems can be written as copositive programs. In this case the complexity is moved entirely into the copositivity constraint.
Copositive programming is a relatively new topic in optimization. It deals with optimization over the so-called copositive cone, a superset of the positive semidefinite cone, consisting of the matrices A for which the quadratic form x^T A x is nonnegative for all nonnegative vectors x. Its dual cone is the cone of completely positive matrices, which consists of all matrices that can be decomposed as a sum of symmetric outer products of nonnegative vectors.
The related optimization problems are linear programs with matrix variables and cone constraints.
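In symbols (notation chosen here for illustration), the copositive cone, its dual and a generic copositive program read:

    \[
    \mathcal{C} = \{ A \in \mathcal{S}^n : x^{T} A x \ge 0 \;\; \forall x \ge 0 \},
    \qquad
    \mathcal{C}^{*} = \Bigl\{ \textstyle\sum_{i} x_i x_i^{T} : x_i \ge 0 \Bigr\},
    \]
    \[
    \min \; \langle C, X \rangle \quad \text{s.t.} \quad \langle A_i, X \rangle = b_i \;\; (i = 1, \dots, m), \quad X \in \mathcal{C}^{*},
    \]

with the associated dual program optimizing over the copositive cone \(\mathcal{C}\) itself.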
However, some optimization problems can be formulated as combinatorial problems on infinite graphs. For example, the kissing number problem can be formulated as a stable set problem on a sphere.
In this thesis we will discuss how the theory of copositive optimization can be lifted to infinite dimensions. For some special cases we will give applications in combinatorial optimization.
Arctic and Antarctic polynya systems are of high research interest since extensive new ice formation takes place in these regions. Monitoring polynyas and their ice production is crucial with respect to the changing sea-ice regime. The thin-ice thickness (TIT) distribution within polynyas controls the amount of heat that is released to the atmosphere and therefore affects ice-production rates. This thesis presents an improved method to retrieve thermal-infrared thin-ice thickness distributions within polynyas. TIT with a spatial resolution of 1 km × 1 km is calculated from the MODIS ice-surface temperature and atmospheric model variables within the Laptev Sea polynya for the winter periods 2007/08 and 2008/09. The improvement of the algorithm focuses on the surface-energy flux parameterizations. Furthermore, a thorough sensitivity analysis is applied to quantify the uncertainty in the thin-ice thickness results: an absolute mean uncertainty of ±4.7 cm is calculated for ice below 20 cm thickness. Advantages and drawbacks of using different atmospheric data sets are also investigated. Daily MODIS TIT composites are computed to fill the data gaps arising from clouds and shortwave radiation. The resulting maps cover on average 70 % of the Laptev Sea polynya. An intercomparison of MODIS and AMSR-E polynya data indicates that spatial resolution is essential for accurately deriving polynya characteristics. Monthly fast-ice masks are generated from the daily TIT composites and implemented into the coupled sea-ice/ocean model FESOM. An evaluation of FESOM sea-ice concentrations shows that a prescribed high-resolution fast-ice mask is necessary for accurate polynya locations; however, for a more realistic simulation of other small-scale sea-ice features, further model improvements are required. The retrieval of daily high-resolution MODIS TIT composites is an important step towards a more precise monitoring of thin sea ice and sea-ice production. Future work will address a combined remote sensing and model assimilation method to simulate fully covered thin-ice thickness maps that enable the retrieval of accurate ice-production values.
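To illustrate the underlying retrieval principle, the following is a minimal sketch of the common flux-balance approach to thin-ice thickness (e.g. Yu & Rothrock, 1996); the simplified flux treatment and the constants are illustrative assumptions, not the parameterizations calibrated in this thesis:

    import numpy as np

    def thin_ice_thickness(t_surface, q_atm, t_freezing=271.35, k_ice=2.03):
        """Thin-ice thickness from a linear heat-conduction profile.

        Assumes the conductive flux through the ice, Qc = k_ice * (Tf - Ts) / h,
        balances the net atmospheric heat flux q_atm (W/m^2, positive upward).
        Illustrative constants, not the thesis' calibrated parameterizations.
        t_surface : ice-surface temperature in K (e.g. from MODIS IST)
        """
        t_surface = np.asarray(t_surface, dtype=float)
        q_atm = np.asarray(q_atm, dtype=float)
        h = k_ice * (t_freezing - t_surface) / q_atm
        return np.clip(h, 0.0, None)  # negative values mean no thin-ice solution

    # Example: Ts = 260 K and a 150 W/m^2 upward flux give roughly 15 cm of ice.
    print(thin_ice_thickness(260.0, 150.0))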
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussions on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g. the optimal degree of common liability. Other questions arise for the time after the introduction: the impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified, which would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
This dissertation focuses on the link between labour market institutions and precautionary savings. It evaluates whether private households react to changes in social insurance provision, such as the income replacement in case of unemployment, with increased saving for precautionary reasons. The dissertation consists of three self-contained chapters, each focusing on slightly different aspects of the topic. The first chapter, titled "Precautionary saving and the (in)stability of subjective earnings uncertainty", empirically examines the influence of future income uncertainty on household saving behaviour. Numerous cross-section studies on precautionary saving use subjective expectations regarding the income variance one year ahead as a proxy for income uncertainty. Using such proxies observed only at one point in time, however, may give rise to biased estimates of precautionary wealth if expectations are not stable over time. Survey data from the Dutch DNB Household Survey suggest that subjective future income distributions are not stable over the mid-term. In this study I therefore contrast estimates of precautionary wealth using the variation coefficient observed at one point in time with those using a simple mid-term average. Estimates of precautionary wealth based on the average are about 40% to 80% higher than estimates using the variation coefficient observed only once. In addition, wealth accumulation for precautionary reasons is estimated for different parts of the income distribution; the share of precautionary wealth is highest for households at the centre of the income distribution. By linking saving behaviour with unemployment insurance, the following chapters then shed light on an issue that has largely been neglected in the literature on labour market institutions so far. Whereas the third chapter models the relevance of unemployment insurance for income uncertainty and intertemporal decision making during institutional reform processes, chapter 4 seeks to establish empirically a relationship between saving behaviour and unemployment insurance. Social insurance, especially unemployment insurance, provides agents with insurance against non-marketable income risks. Since the early 1990s, reform measures such as the more activating policies suggested by the OECD Jobs Study in 1994 have been observed in Europe. The third chapter argues that such changes in unemployment insurance reduce public insurance and increase income uncertainty. It then discusses a simple three-period model that links a welfare state reform to agents' saving decisions as one possible reaction to self-insure against income risk. Two sources of uncertainty are important in this context: (1) uncertain results of the reform process concerning the replacement rate, and (2) uncertainty regarding the timing of information about the content of the reform. It can be shown that the precautionary motive for saving explains an increased accumulation of capital in times of reform activity. In addition, early information about the expected replacement rate increases agents' utility and reduces under- and oversaving. Following the argument of the previous chapters that an important feature of labour market institutions in modern welfare states is to provide cash transfers as income replacement in case of unemployment, it is hypothesised that unemployment benefits reduce the motive to save for precautionary reasons.
Based on consumer sentiment data from the European Commission's Consumer Survey, chapter four finally provides some evidence that aggregate saving intentions are significantly influenced by unemployment benefits. It can be shown that higher benefits lower the intention to save.
Stress represents a significant problem for Western societies, inducing costs as high as 3-4 % of the European gross national products, a burden that is continually increasing (WHO Briefing, EUR/04/5047810/B6). The classical stress response system is the hypothalamic-pituitary-adrenal (HPA) axis, which acts to restore homeostasis after disturbances. Two major components within the HPA axis system are the glucocorticoid receptor (GR) and the mineralocorticoid receptor (MR). Cortisol, released from the adrenal glands at the end of the HPA axis, binds to MRs and, with a 10-fold lower affinity, to GRs. Both impairment of the HPA axis and an imbalance in the MR/GR ratio enhance the risk for infection, inflammation and stress-related psychiatric disorders. Major depressive disorder (MDD) is characterised by a variety of symptoms; however, one of the most consistent findings is hyperactivity of the HPA axis, which may be the result of lower numbers or reduced activity of GRs and MRs. The GR gene consists of multiple alternative first exons resulting in different GR mRNA transcripts, whereas for the MR only two first exons are known to date. Both the human GR promoter 1F and the homologous rat Gr promoter 1.7 seem to be susceptible to methylation during stressful early life events, resulting in lower 1F/1.7 transcript levels. It was proposed that this is due to methylation of an NGFI-A binding site in both the rat promoter 1.7 and the human promoter 1F. The research presented in this thesis was undertaken to determine the differential expression and methylation patterns of GR and MR variants in multiple areas of the limbic brain system in the healthy and depressed human brain. Furthermore, the transcriptional control of the GR transcript 1F was investigated, as expression changes of this transcript have been associated with MDD, childhood abuse and early life stress. The role of NGFI-A and several other transcription factors in 1F regulation was studied in vitro, and the effect of Ngfi-a overexpression on the rat Gr promoter 1.7 in vivo. The susceptibility of several GR promoters to epigenetic programming was investigated in MDD. In addition, changes in methylation levels were determined in response to a single acute stressor in rodents. Our results showed that GR and MR first exon transcripts are differentially expressed in the human brain, but that this is not due to epigenetic programming. We showed that NGFI-A has no effect on endogenous 1F/1.7 expression in vitro or in vivo. We provide evidence that the transcription factor E2F1 is a major element in the transcriptional complex necessary to drive the expression of GR 1F transcripts. In rats, highly individual methylation patterns in the paraventricular nucleus of the hypothalamus (PVN) suggest that these are not related to the stressor but can rather be interpreted as pre-existing differences. In contrast, the hippocampus showed a much more uniform epigenetic status, yet is susceptible to epigenetic modification even after a single acute stress, suggesting a differential "state" versus "trait" regulation of the GR gene in different brain regions. The results of this thesis have given further insight into the complex transcriptional regulation of GR and MR first exons in health and disease. Epigenetic programming of GR promoters seems to be involved in early life stress and acute stress in adult rats; however, the susceptibility to methylation in response to stress seems to vary between brain regions.
External capital plays an important role in financing entrepreneurial ventures, due to limited internal capital sources. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. VCs finance companies not only at their early stages but also at later stages, when companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. It uses both qualitative and quantitative research methods in order to answer how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that different decision criteria are applied at different stages of venture development. This is attributed to the different risks and goals of ventures at different stages, as well as the different types of information available. The decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures. Later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures and therefore constitutes new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of decision criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) value-added of products/services for customers, and (3) management team track record, demonstrating differences when compared to decision-making studies in the context of early-stage ventures.
Not only do the characteristics of a venture influence the decision to invest; additional indirect factors, such as individual characteristics or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience influence individual decision-making behavior. This study also examined whether the goals, incentive structures, resources, and governance of the investment firm influence decision making in the context of later-stage ventures. In particular, it investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present; they tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors, and that they favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
Internet interventions have gained popularity and the idea is to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of internet interventions.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to model pretreatment change (screening to registration) and early change (registration to week four of treatment). Three early change patterns were identified: two characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicate that early change should be closely monitored in internet interventions, as it may be an important indicator of treatment outcome.
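Schematically, and in generic notation not taken from the study itself, such a piecewise growth mixture model for the symptom score of person i at time t can be written as

    \[
    y_{ti} \mid (c_i = k) \;=\; \beta_{0k} + \beta_{1k}\, t^{(1)} + \beta_{2k}\, t^{(2)} + u_{0i} + \varepsilon_{ti},
    \]

where \(t^{(1)}\) codes the pretreatment segment (screening to registration), \(t^{(2)}\) the early-change segment (registration to week four), and the latent class \(c_i\) captures the distinct change patterns.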
Study II (Chapter 6) picked up the idea of analyzing change patterns in internet interventions and extended it by using the Muthén-Roy model to identify combined change-dropout patterns. A slightly larger sample of the dataset from Study I was analyzed (N = 483). Four change-dropout patterns emerged; a high risk of dropout was associated with both rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated both with deterioration and with a good-enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome in different participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, the outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on participant group, and that using specific measures of adherence, such as adherence to treatment components, may be crucial to detect change mechanisms in internet interventions. Fostering adherence to treatment components may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.
This thesis deals with economic aspects of employees' sickness. In addition to the classical case of sickness absence, in which an employee is completely unable to work and hence stays at home, there is the case of sickness presenteeism, in which the employee comes to work despite being sick. Accordingly, the thesis at hand covers research on both sickness states, absence and presenteeism. The first section covers sickness absence and labour market institutions. Chapter 2 presents theoretical and empirical evidence that differences in the social norm against benefit fraud, the so-called benefit morale, can explain cross-country differences in the generosity of statutory sick pay entitlements between developed countries. In our political economy model, a stricter benefit morale reduces the absence rate, with counteracting effects on the politically set sick pay replacement rate. On the one hand, less absence caused by a stricter norm makes the tax-financed insurance cheaper, leading to the usual demand-side effect and hence to more generous sick pay entitlements. On the other hand, voters who are less likely to be absent due to the stricter norm prefer a smaller fee over more insurance. We document both effects in a sample of 31 developed countries covering the years 1981 to 2010. In Chapter 3 we investigate the relationship between the existence of works councils and illness-related absence and its consequences for plants. Using individual data from the German Socio-Economic Panel (SOEP), we find that the existence of a works council is positively correlated with the incidence and the annual duration of absence. Additionally, linked employer-employee data (LIAB) suggest that employers are more likely to expect personnel problems due to absence in plants with a works council. In western Germany, we find significant effects using a difference-in-differences approach, which can be interpreted causally. The second part of this thesis covers two studies on sickness presenteeism. In Chapter 4, we empirically investigate the determinants of the annual duration of sickness presenteeism using the European Working Conditions Survey (EWCS). Work autonomy, workload and tenure are positively related to the number of sickness presenteeism days, while a good working environment comes with less presenteeism. In Chapter 5 we theoretically and empirically analyse sickness absence and presenteeism behaviour with a focus on their interdependence. Specifically, we ask whether work-related factors lead to a substitutive, a complementary or no relationship between sickness absence and presenteeism; in other words, whether changes in absence and presenteeism behaviour induced by work-related characteristics point in opposite directions (substitutive), in the same direction (complementary), or affect only one of the two sickness states (no relationship). Our theoretical model shows that the relationship between sickness absence and presenteeism with regard to work-related characteristics is not necessarily substitutive; a complementary or no relationship can emerge as well. Turning to the empirical investigation, we find that only one out of 16 work-related factors, namely supervisor status, leads to a substitutive relationship between absence and presenteeism. A few of the other determinants are complements, while the large majority is related to either sickness absence or presenteeism alone.
A basic assumption of standard small area models is that the statistic of interest can be modelled through a linear mixed model with common model parameters for all areas in the study. The model can then be used to stabilize estimation. In some applications, however, there may be different subgroups of areas with specific relationships between the response variable and the auxiliary information. In this case, using a distinct model for each subgroup would be more appropriate than employing one model for all observations. If no suitable natural clustering variable exists, finite mixture regression models may represent a solution that "lets the data decide" how to partition areas into subgroups. In this framework, a set of two or more different models is specified, and the estimation of subgroup-specific model parameters is performed simultaneously with the estimation of subgroup identity, or the probability of subgroup identity, for each area. Finite mixture models thus offer a flexible approach to accounting for unobserved heterogeneity. Therefore, in this thesis, finite mixtures of small area models are proposed to account for the existence of latent subgroups of areas in small area estimation. More specifically, it is assumed that the statistic of interest is appropriately modelled by a mixture of K linear mixed models. Both mixtures of standard unit-level and standard area-level models are considered as special cases. The estimation of mixing proportions, area-specific probabilities of subgroup identity and the K sets of model parameters via the EM algorithm for mixtures of mixed models is described. Eventually, a finite mixture small area estimator is formulated as a weighted mean of the predictions from models 1 to K, with weights given by the area-specific probabilities of subgroup identity.
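In the notation just described (symbols chosen here for illustration), the resulting estimator for area i reads

    \[
    \hat{\theta}_i \;=\; \sum_{k=1}^{K} \hat{p}_{ik}\, \hat{\theta}_i^{(k)},
    \]

where \(\hat{\theta}_i^{(k)}\) is the prediction from mixture component k and \(\hat{p}_{ik}\) the estimated probability that area i belongs to subgroup k.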
The Second Language Acquisition of English Non-Finite Complement Clauses – A Usage-Based Perspective
(2022)
One of the most essential hypotheses of usage-based theories and many constructionist approaches to language is that language entails the piecemeal learning of constructions on the basis of general cognitive mechanisms and exposure to the target language in use (Ellis 2002; Tomasello 2003). However, there is still a considerable lack of empirical research on the emergence and mental representation of constructions in second language (L2) acquisition. One crucial question that arises, for instance, is whether L2 learners' knowledge of a construction corresponds to a native-like mapping of form and meaning and, if so, to what extent this representation is shaped by usage. It is unclear, for example, how learners 'build' constructional knowledge, i.e. which pieces of frequency-, form- and meaning-related information become relevant for the entrenchment and schematisation of an L2 construction.
To address these issues, the English catenative verb construction was used as a testbed phenomenon. This idiosyncratic complex construction consists of a catenative verb and a non-finite complement clause (see Huddleston & Pullum 2002), which is prototypically a gerund-participial (henceforth referred to as 'target-ing' construction) or a to-infinitival complement ('target-to' construction):
(1) She refused to do her homework.
(2) Laura kept reading love stories.
(3) *He avoids to listen to loud music.
This construction is particularly interesting because learners often choose a complement type different from that of native speakers (e.g. Gries & Wulff 2009; Martinez‐Garcia & Wulff 2012), as illustrated in (3), and because the construction is commonly claimed to be difficult to teach by explicit rules (see e.g. Petrovitz 2001).
By triangulating different types of usage data (corpus and elicited production data) and analysing these by multivariate statistical tests, the effects of different usage-related factors (e.g. frequency, proficiency level of the learner, semantic class of verb, etc.) on the representation and development of the catenative verb construction and its subschemas (i.e. target-to and target-ing construction) were examined. In particular, it was assessed whether they can predict a native-like form-meaning pairing of a catenative verb and non-finite complement.
First, all studies showed a robust effect of frequency on complement choice. Frequency not only leads to the entrenchment of high-frequency exemplars of the construction but is also found to motivate a taxonomic generalisation across related exemplars and the representation of a more abstract schema. Second, the results indicate that the target-to construction, due to its higher type and token frequency, has a higher degree of schematicity and productivity than the target-ing construction for the learners, which allows for analogical comparisons and pattern extension with less entrenched exemplars. This schema is likely to be overgeneralised to (less frequent) target-ing verbs because the learners perceive formal and semantic compatibility between the unknown/infrequent verb and this pattern.
Furthermore, the findings present evidence that less advanced learners (A2-B2) make more coarse-grained generalisations, which are centred around high-frequency and prototypical exemplars/low-scope patterns. In the case of high-proficiency learners (C1-C2), not only does the number of native-like complement choices increase, but relational information, such as the semantic subclass of the verb and form-function contingency, also becomes relevant for a target-like choice. Thus, the results suggest that with increasing usage experience learners gradually develop a more fine-grained, interconnected representation of the catenative verb construction, which comes to resemble the form-meaning mappings of native speakers more closely.
Taken together, these insights highlight the importance for language learning and teaching environments to acknowledge that L2 knowledge is represented in the form of highly interconnected form-meaning pairings, i.e. constructions, that can be found on different levels of abstraction and complexity.
This work is concerned with the numerical solution of optimization problems that arise in the context of ground water modeling. Both ground water hydraulic and quality management problems are considered. The considered problems are discretized problems of optimal control that are governed by discretized partial differential equations. Aspects of special interest in this work are inaccurate function evaluations and the ensuing numerical treatment within an optimization algorithm. Methods for noisy functions are appropriate for the considered practical application. Also, block preconditioners are constructed and analyzed that exploit the structure of the underlying linear system. Specifically, KKT systems are considered, and the preconditioners are tested for use within Krylov subspace methods. The project was financed by the foundation Stiftung Rheinland-Pfalz für Innovation and carried out in joint work with TGU GmbH, a company of consulting engineers for ground water and water resources.
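For orientation, a saddle-point (KKT) system and a block preconditioner of the kind considered here have the generic form (notation chosen for illustration, not necessarily that of the thesis):

    \[
    K = \begin{pmatrix} H & A^{T} \\ A & 0 \end{pmatrix},
    \qquad
    P = \begin{pmatrix} \hat{H} & 0 \\ 0 & \hat{S} \end{pmatrix},
    \]

where \(H\) is the Hessian block, \(A\) the constraint Jacobian, and \(\hat{H}\), \(\hat{S}\) cheap approximations of \(H\) and of the Schur complement \(S = A H^{-1} A^{T}\); the preconditioned system \(P^{-1}K\) is then passed to a Krylov subspace method.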
Behavioural traces from interactions with digital technologies are diverse and abundant, yet their capacity for theory-driven research is still to be established. In the present cumulative dissertation project, I deliberate the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included set out to discern the state of the art of methods and constructs employed in radicalisation research at the intersection of traditional methods and digital behavioural trace data. Firstly, based on a systematic literature review of empirical work, I display the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract hypotheses and constructs from this literature review and integrate them into a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof of concept for combining a theoretical framework from conspiracy theory research (as a specific form of radicalisation) with digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour with semantic meaning from textual content, resting on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages of the research cycle and practical implications.
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In the following dissertation, I address these two issues in a series of ten studies. Firstly, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Secondly, I developed and empirically tested pictures to specifically assess the implicit autonomy motive and examined their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of the studies solidified the foundation of the OMT and how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. PFR gave rise to the precision mental health research movement, which tries to tailor treatments to individual patients by making data-driven and algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as the selection of treatment strategies and the adaptation of treatment. The present work summarizes three studies that aim to generate different prediction models for treatment personalization that can be applied in practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest-neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrate the potential of ML to enhance dropout predictions, but also highlight that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I's findings to enhance the prediction of dropout. Data from the initial two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm. The findings demonstrated that the model was significantly more effective at predicting dropout when using observer ratings, with a Cohen's d of up to .65, and more effective than the model in Study I despite the smaller sample (N = 259). These results indicate that model generation can be improved by employing various data sources, which provide better foundations for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model using data from 794 cases that experienced an SG. A control group of the same size was used to quantify and relativize the identified predictors by their general influence on therapy outcome. The results indicated seven key predictors with varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggested that a directive approach is more likely to lead to better outcomes after an SG, and that alliance ruptures can be effectively compensated for; however, these effects were reversed in the control group. The results of the three studies are discussed regarding their usefulness in supporting clinical decision-making and their implications for the implementation of precision mental health.
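To make the kind of ensemble described in Study I more tangible, the following is a minimal sketch of a random-forest-plus-nearest-neighbour soft-voting ensemble in scikit-learn; the data, feature set and hyperparameters are placeholders, not those of the studies:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import balanced_accuracy_score

    # Placeholder intake data: rows = patients, columns = pre-treatment predictors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2543, 20))
    y = rng.integers(0, 2, size=2543)          # 1 = dropout, 0 = completer

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=15)),
        ],
        voting="soft",                          # average predicted probabilities
    )
    ensemble.fit(X_train, y_train)
    print(balanced_accuracy_score(y_test, ensemble.predict(X_test)))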
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability - social, economic, and environmental - in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a subgroup of Mittelstand firms that lead their niche markets. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined: a higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed. A distinction is made between symbolic (compensation of CO₂ emissions) and substantive (reduction of CO₂ emissions) decarbonization strategies. Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive strategies, and with the relationship being shaped by family ownership.
This study focuses on the representation of British South Asian identities in contemporary British audiovisual media. It attempts to answer the question whether these identities are represented as hybrid, heterogeneous and ambivalent, or whether contemporary representations follow in the tradition of colonial and postcolonial racialism. Racialised depictions of British South Asians were the norm not only in the colonial but also in the postcolonial era, until the rise of the Black British movement, whose successes have also been acknowledged in the field of representation. However, these achievements have to be scrutinised again, especially in the context of the post-9/11 world, rising Islamophobia, and new forms of institutionalised discrimination on the basis of religion. Since the majority of British Muslims are of South Asian origin, this study examines a varied range of popular audiovisual media texts to ask whether the marker of religious origin is conflated with racial belonging, i.e. skin colour, and whether old stereotypes associated with racialised representation are being perpetuated in current depictions.
Evapotranspiration (ET) is one of the most important variables in hydrological studies. The ET process involves both energy exchange and water transfer. ET consists of transpiration and evaporation, with transpiration dominating; in forest regions in particular, the ratio of transpiration to ET is generally 80-90 %. Meteorological variables, vegetation properties, precipitation and soil moisture are critical influencing factors for ET generation. The study area is located in the forested part of the Nahe catchment (Rhineland-Palatinate, Germany). The Nahe catchment is highly wooded: about 54.6 % of its area is covered by forest, with deciduous and coniferous forest as the two primary types. A hydrological model, WaSiM-ETH, was employed for a long-term simulation of the Nahe catchment from 1971 to 2003. In WaSiM-ETH, the potential evapotranspiration (ETP) is first calculated by the Penman-Monteith equation and subsequently reduced according to the soil water content to obtain the actual evapotranspiration (ETA). The Penman-Monteith equation has been widely used and recommended for ETP estimation; the difficulties in applying it are the high demand for ground-measured meteorological data and the determination of the surface resistance. A method combining remote sensing images with ground-measured meteorological data was also used to retrieve ETA. This method is based on surface properties such as surface albedo, fractional vegetation cover (FVC) and land surface temperature (LST), from which the latent heat flux (LE, corresponding to ETA) is obtained through the surface energy balance equation. LST is a critical variable for estimating the surface energy components. It was retrieved from the TM/ETM+ thermal infrared (TIR) band. Due to the high-quality and cloud-free requirements for TM/ETM+ data selection, and because the overpass cycle of the TM/ETM+ sensor is 16 days, images for only five dates are available within the 1971-2003 model run: May 15, 2000; July 05, 2001; and July 19, August 04 and September 21, 2003. The climate conditions of 2000, 2001 and 2003 were wet, medium wet and dry, respectively. The remote sensing-retrieved observations are therefore noncontinuous and limited in number over time, but cover multiple climate conditions. Aerodynamic resistance and surface resistance are the two most important parameters in the Penman-Monteith equation. In the model, the aerodynamic resistance for forest is calculated as a function of wind speed. Since transpiration and evaporation are calculated separately by the Penman-Monteith equation in the model, the surface resistance is divided into a canopy surface resistance rsc, related to plant transpiration, and a soil surface resistance rse, related to bare soil evaporation. Interception evaporation was not taken into account due to its negligible contribution to the ET rate under dry-canopy (no rainfall) conditions. Based on the remote sensing-retrieved observations, rsc and rse were calibrated in the WaSiM-ETH model for both forest types: for deciduous forest, rsc = 150 sm−1 and rse = 250 sm−1; for coniferous forest, rsc = 300 sm−1 and rse = 650 sm−1. We also carried out a sensitivity analysis on rsc and rse and determined their appropriate value ranges (annual maximum): for deciduous forest, [100, 225] sm−1 for rsc and [50, 450] sm−1 for rse; for coniferous forest, [225, 375] sm−1 for rsc and [350, 1200] sm−1 for rse.
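For reference, the Penman-Monteith equation in its standard form reads

    \[
    \lambda ET = \frac{\Delta\,(R_n - G) + \rho_a c_p\,(e_s - e_a)/r_a}{\Delta + \gamma\,(1 + r_s/r_a)},
    \]

where \(R_n\) is the net radiation, \(G\) the soil heat flux, \(e_s - e_a\) the vapour pressure deficit of the air, \(\Delta\) the slope of the saturation vapour pressure curve, \(\gamma\) the psychrometric constant, \(\rho_a\) and \(c_p\) the air density and specific heat, and \(r_a\) and \(r_s\) the aerodynamic and surface resistances (the latter split here into rsc and rse).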
Because the observations are limited in number but span multiple climate conditions, the statistical indices used for model performance evaluation need to be sensitive to extreme values. In this study, boxplots were found to exhibit the model performance well at both the spatial and the temporal scale. The Nash-Sutcliffe efficiency (NSE), the RMSE-observations standard deviation ratio (RSR), the percent bias (PBIAS), the mean bias error (MBE), the mean variance of the error distribution (S2d), the index of agreement (d) and the root mean square error (RMSE) were found to be appropriate statistical indices providing evaluation information additional to the boxplots (three of them are sketched after this paragraph). The model performance can be judged as satisfactory if NSE > 0.5, RSR ≤ 0.7, |PBIAS| < 12, |MBE| < 0.45, S2d < 1.11, d > 0.79 and RMSE < 0.97. rsc played a more important role than rse in the ETP and ETA estimation by the Penman-Monteith equation, which is attributed to the fact that transpiration dominates ET. The ETP estimates correlated most strongly with relative humidity (RH), followed by air temperature (T), relative sunshine duration (SSD) and wind speed (WS). Under wet or medium wet climate conditions, the ETA estimates correlated most strongly with T, followed by RH, SSD and WS. Under water-stress conditions, the correlations between ETA and the individual meteorological variables were very small.
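A minimal sketch of three of these indices, using their standard definitions (cf. Moriasi et al., 2007) and written for illustration rather than taken from the thesis:

    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 = perfect, <= 0 = no better than the mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rsr(obs, sim):
        """RMSE divided by the standard deviation of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

    def pbias(obs, sim):
        """Percent bias; positive values indicate underestimation by the model."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)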
The glucocorticoid (GC) cortisol, the main mediator of the hypothalamic-pituitary-adrenal axis, has many implications in metabolism, the stress response and the immune system. GC function is mediated mainly via the glucocorticoid receptor (GR), which binds as a transcription factor to glucocorticoid response elements (GREs). GCs are strong immunosuppressants and are used to treat inflammatory and autoimmune diseases. Long-term usage can lead to several irreversible side effects, which makes improved understanding indispensable and warrants the adaptation of current drugs. Several large-scale gene expression studies have been performed to gain insight into GC signalling; studies at the proteomic level, however, had not yet been made. The effects of cortisol on monocytes and macrophages were studied in the THP-1 cell line using 2D fluorescence difference gel electrophoresis (2D DIGE) combined with MALDI-TOF mass spectrometry. More than 50 cortisol-modulated proteins were identified, belonging to five functional groups: cytoskeleton, chaperones, immune response, metabolism, and transcription/translation. Multiple GREs were found in the promoters of the corresponding genes (+10 kb/-0.2 kb promoter regions including all alternative promoters available within the Database for Transcription Start Sites (DBTSS)). High-quality GREs were observed mainly in cortisol-modulated genes, corroborating the proteomics results. The differential regulation of selected immune response-related proteins was confirmed by qPCR and immunoblotting. All immune response-related proteins (MX1, IFIT3, SYWC, STAT3, PMSE2, PRS7) that were induced by LPS were suppressed by cortisol and belong mainly to the classical interferon target genes. MX1 was selected for detailed expression analysis, since new isoforms had been identified by proteomics. FKBP51, known to be induced by cortisol, was identified as the most strongly differentially expressed protein and contained the highest number of strict GREs. Genomic analysis of five alternative FKBP5 promoter regions suggested GC inducibility of all transcripts. 2D DIGE combined with 2D immunoblotting revealed the existence of several previously unknown FKBP51 isoforms, possibly resulting from these transcripts. Additionally, multiple post-translational modifications were found, which could lead to different subcellular localization in monocytes and macrophages, as seen by confocal microscopy. Similar results were obtained for the different cellular subsets of human peripheral blood mononuclear cells (PBMCs). FKBP51 was found to be constitutively phosphorylated, with up to 8 phosphosites in CD19+ B lymphocytes. Differential co-immunoprecipitation for cytoplasm and nucleus allowed us to identify new potential interaction partners: nuclear FKBP51 was found to interact with myosin 9, and cytosolic FKBP51 with TRIM21 (synonym: Ro52, Sjögren's syndrome antigen). The GR was found to interact with THOC4 and YB1, two proteins implicated in mRNA processing and transcriptional regulation. We also applied proteomics to study rapid non-genomic effects of acute stress in a rat model. The nuclear proteome of the thymus was investigated after 15 min of restraint stress and compared to the non-stressed control. Most of the identified proteins were transcriptional regulators found to be enriched in the nucleus, probably to assist gene expression in an appropriate manner. The proteomic approach allowed us to further understand the cortisol-mediated response in monocytes/macrophages.
We identified several new target proteins, but we also found new protein variants and post-translational modifications which need further investigation. Detailed study of FKBP51 and GR indicated a complex regulation network which opened a new field of research. We identified new variants of the anti-viral response protein MX1, displaying differential expression and phosphorylation in the cellular compartments. Further, proteomics allowed us to follow the very early effects of acute stress, which happen prior to gene expression. The nuclear thymocyte proteome of restraint stressed rats revealed an active preparation for subsequent gene expression. Proteomics was successfully applied to study differential protein expression, to identify new protein variants and phosphorylation events as well as to follow translocation. New aspects for future research in the field of cortisol-mediated immune modulation have been added.
This work addresses the algorithmic tractability of hard combinatorial problems; specifically, we consider NP-hard problems, for which no polynomial-time algorithm is known (and none exists unless P = NP). Several algorithmic approaches deal with this dilemma, among them (randomized) approximation algorithms and heuristics. Even though in practice they often run in reasonable time, they usually do not return an optimal solution. If we insist on optimality, only two methods remain: exponential-time algorithms and parameterized algorithms. In the first approach, we seek to design algorithms that, while still taking exponentially many steps, are more clever than the trivial algorithm which simply enumerates all solution candidates. Typically, the naive enumerative approach yields an algorithm with run time $\mathcal{O}^*(2^n)$, so the general task is to construct algorithms obeying a run time of the form $\mathcal{O}^*(c^n)$ with $c<2$. The second approach considers an additional parameter $k$ besides the input size $n$. This parameter should provide more information about the problem and capture a typical characteristic. The standard parameterization is to view $k$ as an upper (respectively lower) bound on the solution size in the case of a minimization (respectively maximization) problem. A parameterized algorithm should then solve the problem in time $f(k)\cdot n^\beta$, where $\beta$ is a constant and $f$ is independent of $n$. In principle this method aims to confine the combinatorial difficulty of the problem to the parameter $k$ (if possible), the basic hypothesis being that $k$ is small with respect to the overall input size. In both fields a frequent standard technique is the design of branching algorithms. These algorithms solve the problem by traversing the solution space in a clever way: they repeatedly select an entity of the input and create two new subproblems, one in which this entity is part of the future solution and another in which it is excluded. In both cases, fixing this entity may fix further entities, so that the number of traversed solutions is smaller than the whole solution space. The visited solutions can be arranged as a search tree. To estimate the run time of such algorithms, a method is needed to obtain tight upper bounds on the size of the search trees. In the field of exponential-time algorithms, a powerful technique called Measure&Conquer has been developed for this purpose. It has been applied successfully to many problems, especially to problems where other algorithmic attacks could not break the trivial upper bound on the run time. In the field of parameterized algorithms, on the other hand, Measure&Conquer is almost unknown. This work presents examples where the technique can be used in that field and points out which adjustments are necessary to apply it successfully. Further, exponential-time algorithms for hard problems in which Measure&Conquer is applied are presented. Finally, a formalization (and generalization) of the notion of a search tree is given, and it is shown that for certain problems such a formalization is extremely useful.
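To make the branching idea concrete, here is a minimal sketch (not taken from the thesis) of the classical two-way branching for Minimum Vertex Cover: either a chosen vertex joins the cover, or all of its neighbours do. Bounding how much each branch shrinks the instance is precisely the kind of measure that Measure&Conquer refines.

```python
def min_vertex_cover(edges):
    """Naive branching algorithm for Minimum Vertex Cover.

    Branch on an endpoint u of some uncovered edge:
      branch 1: u joins the cover;
      branch 2: u does not, so all neighbours of u must.
    Bounding the resulting search tree is what Measure&Conquer-style
    analyses do.
    """
    edges = {frozenset(e) for e in edges}
    if not edges:
        return set()
    u = next(iter(next(iter(edges))))  # endpoint of some uncovered edge
    # Branch 1: take u into the cover; all edges at u are covered.
    c1 = {u} | min_vertex_cover({e for e in edges if u not in e})
    # Branch 2: exclude u; every neighbour of u must join the cover.
    nbrs = {w for e in edges if u in e for w in e if w != u}
    c2 = nbrs | min_vertex_cover({e for e in edges if not (e & nbrs)})
    return c1 if len(c1) <= len(c2) else c2

print(min_vertex_cover([(1, 2), (2, 3), (3, 4)]))  # a minimum cover of size 2
```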
Attitudes are "the most distinctive and indispensable concept in contemporary social psychology" (Allport, 1935, p. 798). This outstanding position of the attitude concept in social cognitive research is not only reflected in the innumerous studies focusing on this concept but also in the huge number of theoretical approaches that have been put forth since then. Yet, it is still an open question, what attitudes actually are. That is, the question of how attitude objects are represented in memory cannot be unequivocally answered until now (e.g., Barsalou, 1999; Gawronski, 2007; Pratkanis, 1989, Chapter 4). In particular, researchers strongly differ with respect to their assumptions on the content, format and structural nature of attitude representations (Ferguson & Fukukura, 2012). This prevailing uncertainty on what actually constitutes our likes and dislikes is strongly dovetailed with the question of which processes result in the formation of these representations. In recent years, this issue has mainly been addressed in evaluative conditioning research (EC). In a standard EC-paradigm a neutral stimulus (conditioned stimulus, CS) is repeatedly paired with an affective stimulus (unconditioned stimulus, US). The pairing of stimuli then typically results in changes in the evaluation of the CS corresponding to the evaluative response of the US (De Houwer, Baeyens, & Field, 2005). This experimental approach on the formation of attitudes has primarily been concerned with the question of how the representations underlying our attitudes are formed. However, which processes operate on the formation of such an attitude representation is not yet understood (Jones, Olson, & Fazio, 2010; Walther, Nagengast, & Trasselli, 2005). Indeed, there are several ideas on how CS-US pairs might be encoded in memory. Notwithstanding the importance of these theoretical ideas, looking at the existing empirical work within the research area of EC (for reviews see Hofmann, De Houwer, Perugini, Baeyens, & Crombez, 2010; De Houwer, Thomas, & Baeyens, 2001) leaves one with the impression that scientists have skipped the basic processes. Basic processes hereby especially refer to the attentional processes being involved in the encoding of CSs and USs as well as the relation between them. Against the background of this huge gap in current research on attitude formation, the focus of this thesis will be to highlight the contribution of selective attention processes to a better understanding of the representation underlying our likes and dislikes. In particular, the present thesis considers the role of selective attention processes for the solution of the representation issue from three different perspectives. Before illustrating these different perspectives, Chapter 1 is meant to envision the omnipresence of the representation problem in current theoretical as well as empirical work on evaluative conditioning. Likewise, it emphasizes the critical role of selective attention processes for the representation question in classical conditioning and how this knowledge might be used to put forth the uniqueness of evaluative conditioning as compared to classical conditioning. Chapter 2 then considers the differential influence of attentional resources and goal-directed attention on attitude learning. 
The primary objective of the presented experiment was to investigate whether attentional resources and goal-directed attention exert their influence on EC via changes in the encoding of CS-US relations in memory (i.e., contingency memory). Taking the findings from this experiment into account, Chapter 3 focuses on the selective processing of the US relative to the CS. In particular, the two experiments presented in this chapter explore the moderating influence of the selective processing of the US in its relation to the CS on EC. In Chapter 4, the important role of the encoding of the US in relation to the CS, as outlined in Chapter 3, is illuminated in the context of different retrieval processes. Against the background of the findings from the two presented experiments, the interplay between the encoding of CS-US contingencies and the moderation of EC via different retrieval processes is discussed. Finally, a general discussion of the findings, their theoretical implications and future research lines is given in Chapter 5.
The present thesis is devoted to a construction which defies generalisations about the prototypical English noun phrase (NP) to such an extent that it has been termed the Big Mess Construction (Berman 1974). As illustrated by the examples in (1) and (2), the NPs under study involve premodifying adjective phrases (APs) which precede the determiner (always realised in the form of the indefinite article a(n)) rather than following it.
(1) NoS had not been hijacked – that was too strong a word. (BNC: CHU 1766)
(2) He was prepared for a battle if the porter turned out to be as difficult a customer as his wife. (BNC: CJX 1755)
Previous research on the construction is largely limited to contributions from the realms of theoretical syntax and a number of cursory accounts in reference grammars. No comprehensive investigation of its realisations and uses has as yet been conducted. My thesis fills this gap by means of an exhaustive analysis of the construction on the basis of authentic language data retrieved from the British National Corpus (BNC). The corpus-based approach allows me to examine not only the possible but also the most typical uses of the construction. Moreover, while previous work has almost exclusively focused on the formal realisations of the construction, I investigate both its forms and functions.
It is demonstrated that, while the construction is remarkably flexible as concerns its possible realisations, its use is governed by probabilistic constraints. For example, some items occur much more frequently inside the degree item slot than others (as, too and so stand out for their particularly high frequency). Contrary to what is assumed in most previous descriptions, the slot is not restricted in its realisation to a fixed number of items. Rather than representing a specialised structure, the construction is furthermore shown to be distributed over a wide range of possible text types and syntactic functions. On the other hand, it is found to be much less typical of spontaneous conversation than of written language; Big Mess NPs further display a strong preference for the function of subject complement. Investigations of the internal structural complexity of the construction indicate that its obligatory components can be enriched by a remarkably wide range of optional (if infrequent) elements. In an additional analysis of the realisations of the obligatory but lexically variable slots (head noun and head of AP), the construction is shown to represent a productive pattern. With the help of the methods of Collexeme Analysis (Stefanowitsch and Gries 2003) and Co-varying Collexeme Analysis (Gries and Stefanowitsch 2004b, Stefanowitsch and Gries 2005), the two slots are, however, revealed to be strongly associated with general nouns and ‘evaluative’ and ‘dimension’ adjectives, respectively. On the basis of an inspection of the most typical adjective-noun combinations, I identify the prototypical semantics of the Big Mess Construction.
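For readers unfamiliar with Collexeme Analysis: it asks, for each lexeme, whether that lexeme is attracted to a constructional slot more strongly than its overall corpus frequency would predict, an association typically quantified with the Fisher-Yates exact test. A minimal sketch with invented counts (the cited papers define the exact construction of the table):

```python
from scipy.stats import fisher_exact

def collostruction_strength(a, b, c, d):
    """2x2 table: a = lexeme in the slot, b = lexeme elsewhere in the corpus,
    c = other lexemes in the slot, d = other lexemes elsewhere.
    Returns the one-sided Fisher exact p-value (often reported as -log10 p)."""
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    return p

# Hypothetical counts for one noun in the head-noun slot (invented numbers):
print(collostruction_strength(120, 4000, 880, 995000))
```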
The analyses of the constructional functions centre on two distinct functional areas. First, I investigate Bolinger’s (1972) hypothesis that the construction fulfils functions in line with the Principle of Rhythmic Alternation (e.g. Selkirk 1984: 11, Schlüter 2005). It is established that rhythmic preferences co-determine the use of the construction to some extent, but that they clearly do not suffice to explain the phenomenon under study. In a next step, the discourse-pragmatic functions of the construction are scrutinised. Big Mess NPs are demonstrated to perform distinct information-structural functions in that the non-canonical position of the AP serves to highlight focal information (compare De Mönnink 2000: 134-35). Additionally, the construction is shown to place emphasis on acts of evaluation. I conclude that the construction represents a contrastive focus construction.
My investigations of the formal and functional characteristics of Big Mess NPs each include analyses which compare individual versions of the construction to one another (e.g. the As Big a Mess, Too Big a Mess and So Big a Mess Constructions). It is revealed that the versions are united by a shared core of properties while differing from one another at more abstract levels of description. The question of the status of the constructional versions as separate constructions further receives special emphasis as part of a discussion in which I integrate my results into the framework of usage-based Construction Grammar (e.g. Goldberg 1995, 2006).
Mankind has dramatically influenced the nitrogen (N) fluxes between soil, vegetation, water and atmosphere, the global N cycle. Increasing intensification of agricultural land use, caused by the growing demand for agricultural products, has had major impacts on ecosystems worldwide. Emissions of nitrogenous gases such as ammonia (NH3) in particular have increased, mainly due to industrial livestock farming. Countries with high N deposition rates require a variety of deposition measurements and effective N monitoring networks to assess N loads. Due to high costs, conventional deposition measurement stations are not widespread and therefore provide only a patchy picture of the real extent of the prevailing N deposition status over large areas. One tool that allows quantification of the exposure and the effects of atmospheric N impacts on an ecosystem is the use of bioindicators. Due to their specific physiology and ecology, lichens and mosses in particular are suitable to reflect the atmospheric N input at ecosystem level. The present doctoral project began by investigating the general ability of epiphytic lichens to qualify and quantify N deposition, analysing their total N content and δ15N signatures along a gradient of N emission sources of differing type and severity. The results showed that this was a viable monitoring method, and a grid-based monitoring system with nitrophytic lichens was set up in the western part of Germany. Finally, a critical appraisal of three different monitoring techniques (lichens, mosses and tree bark) was carried out to compare them with nationally relevant N deposition assessment programmes. In total, 1057 lichen samples, 348 tree bark samples, 153 moss samples and 24 deposition water samples were analysed in this dissertation at different investigation scales in Germany. The study identified species-specific abilities and tolerances of various epiphytic lichens to accumulate N. Samples of tree bark were also collected, and their N accumulation was found to reflect the intensity of agriculture and the presence of reduced N compounds (NHx) in the atmosphere. Nitrophytic lichens (Xanthoria parietina, Physcia spp.) showed the strongest correlations with high agriculture-related N deposition. In addition, the main N sources were revealed with the help of δ15N values along a gradient of altitude and areas affected by different types of land use (NH3 density classes, livestock units and various deposition types). Furthermore, the first nationwide survey in Germany to compare lichens, mosses and tree bark samples as biomonitors for N deposition revealed that lichens are clearly the most meaningful monitor organisms in highly N-affected regions. Additionally, the study shows that dealing with different biomonitors is a difficult task due to their variety of N responses. The specific receptor surfaces of the indicators, and therefore their different strategies of N uptake, are responsible for the tissue N concentration of each organism group. It was also shown that δ15N values depend on their N origin and the specific N transformations in each organism system, so that a direct comparison between atmosphere and ecosystems is not possible. In conclusion, biomonitors, and especially epiphytic lichens, may serve as an alternative means of obtaining a spatially representative picture of N deposition conditions. Furthermore, bioindication with lichens is a cost-efficient alternative to physico-chemical measurements for comprehensively assessing the prevailing N doses and sources of N pools on a regional scale; at the very least, lichens can complement on-site deposition instruments in qualifying and quantifying N deposition.
N-acetylation by N-acetyltransferase 1 (NAT1) is an important biotransformation pathway of the human skin, and it is involved in the deactivation of the arylamine and well-known contact allergen para-phenylenediamine (PPD). Here, NAT1 expression and activity were analyzed in antigen-presenting cells (monocyte-derived dendritic cells, MoDCs, a model for epidermal Langerhans cells) and human keratinocytes. The latter were used to study exogenous and endogenous modulations of NAT1 activity. Within this thesis, MoDCs were found to express metabolically active NAT1. Activities were between 23.4 and 26.6 nmol/mg/min and thus comparable to peripheral blood mononuclear cells. These data suggest that epidermal Langerhans cells contribute to the cutaneous N-acetylation capacity. Keratinocytes, which are known for their efficient N-acetylation, were analyzed in a comparative study using primary keratinocytes (NHEK) and different shipments of the immortalized keratinocyte cell line HaCaT, in order to investigate the ability of the cell line to model epidermal biotransformation. N-acetylation of the substrate para-aminobenzoic acid (PABA) was 3.4-fold higher in HaCaT compared to NHEK and varied between the HaCaT shipments (range 12.0–44.5 nmol/mg/min). Since B[a]P-induced cytochrome P450 1 (CYP1) activities were also higher in HaCaT compared to NHEK, the cell line can be considered an in vitro tool to qualitatively model epidermal metabolism with regard to NAT1 and CYP1. The HaCaT shipment with the highest NAT1 activity showed only minimal reduction of cell viability after treatment with PPD and was subsequently used to study interactions between NAT1 and PPD in keratinocytes. Treatment with PPD induced expression of cyclooxygenases (COX) in HaCaT, but in parallel, PPD N-acetylation was found to saturate with increasing PPD concentration. This saturation explains the PPD-induced COX induction despite the high N-acetylation capacities. A detailed analysis of the effect of PPD on NAT1 revealed that the saturation of PPD N-acetylation was caused by a PPD-induced decrease of NAT1 activity. This inhibition was found in HaCaT as well as in primary keratinocytes after treatment with PPD and PABA. Regarding the mechanism, reduced NAT1 protein levels and unaffected NAT1 mRNA expression after PPD treatment provided clear evidence of substrate-dependent NAT1 downregulation. These results extend the existing knowledge about substrate-dependent NAT1 downregulation to human epithelial skin cells and demonstrate that NAT1 activity in keratinocytes can be modulated by exogenous factors. Further analysis of HaCaT cells from different shipments revealed an accelerated progression through the cell cycle in HaCaT cells with high NAT1 activities. These findings suggest an association between NAT1 and proliferation in keratinocytes, as has been proposed earlier for tumor cells. In conclusion, the N-acetylation capacity of MoDCs as well as keratinocytes contributes to the overall N-acetylation capacity of human skin. NAT1 activity of keratinocytes, and consequently the detoxification capacity of human skin, can be modulated exogenously by the presence of NAT1 substrates and endogenously by the cell proliferation status of keratinocytes.
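The reported saturation of PPD N-acetylation follows, in outline, the textbook saturation shape of enzyme kinetics, although the thesis traces the effect to substrate-dependent downregulation of NAT1 rather than to simple saturation of a fixed enzyme pool. A generic Michaelis-Menten illustration with invented parameters, shown for the shape only:

```python
# Generic Michaelis-Menten curve; Vmax and Km are invented for illustration
# and are not the NAT1/PPD parameters measured in the thesis.
def mm_velocity(s_uM, vmax=30.0, km=50.0):
    """Reaction velocity (nmol/mg/min) at substrate concentration s_uM (µM)."""
    return vmax * s_uM / (km + s_uM)

for s in (10, 50, 200, 1000):
    print(f"{s:>5} µM -> {mm_velocity(s):5.1f} nmol/mg/min")  # growth flattens out
```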
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that which subspaces of P are dense in A(K) is mostly determined by the geometry of K and its location in the complex plane. In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
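For orientation (the precise condition of Chapter 4 is not reproduced here), the classical Müntz-Szász theorem is the prototype of such results: for exponents 0 ∈ Λ ⊂ ℕ₀, the span of the monomials $x^\lambda$, $\lambda \in \Lambda$, is dense in $C[0,1]$ if and only if
$$\sum_{\lambda \in \Lambda \setminus \{0\}} \frac{1}{\lambda} = \infty.$$
Sector results of Anderson's type couple a gap condition of this kind with the opening angle of the sector containing K.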
Statistical matching offers a way to broaden the scope of analysis without increasing respondent burden and costs, which would result from conducting a new survey or adding variables to an existing one. Statistical matching aims at combining two datasets A and B referring to the same target population in order to jointly analyse variables, say Y and Z, that were not initially observed together. The matching is performed based on matching variables X that correspond to common variables present in both datasets A and B. Furthermore, Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. Therefore, to yield a theoretical foundation for statistical matching, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determines the validity of the obtained matched file, the accuracy of statistical inference is determined by the suitability of the assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching by relying on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful in representing dependencies and independencies, which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed, in which the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA corresponds to: (a) Z is independent of a subset X’ of X given Y and X*, where X* = X\X’, and (b) Y is correlated with X’ given X*. The proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution is provided in this thesis. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the investigation of the interplay of the Y and the Z model within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subjected to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, a very large, openly available synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework offers the possibility to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Thus, within this simulation study two related aspects are of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage of knowing the whole setting, i.e. the whole data X, Y and Z. Special interest lies in investigating the assumptions by adding and excluding auxiliary variables in order to enhance conditional independence, and in assessing the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the method relying on BART that performs best across all coefficients.
In conclusion, this work constitutes a major contribution to the clarification of assumptions essential to any statistical matching procedure. By introducing graphical models to categorise existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are not observed together), the integrated approach is a valuable asset, as it offers an alternative to the CIA.
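As a rough illustration of the classical approach, and of where the CIA enters, the following sketch performs a nearest-neighbour hot-deck match on X: each record in A receives the Y value of the closest donor in B. All names and data are invented; inference about (Y, Z) from the resulting file is only justified under the CIA.

```python
import numpy as np

def nearest_neighbour_match(x_a, x_b, y_b):
    """Classical statistical matching: impute Y into dataset A by donating,
    for each record in A, the Y of the record in B whose matching
    variables X are closest. Joint (X, Y, Z) inference on the matched
    file rests on the conditional independence assumption (CIA)."""
    x_a, x_b = np.atleast_2d(x_a), np.atleast_2d(x_b)
    dists = np.linalg.norm(x_a[:, None, :] - x_b[None, :, :], axis=2)
    donors = dists.argmin(axis=1)          # index of closest donor in B
    return y_b[donors]

rng = np.random.default_rng(1)
x_b = rng.normal(size=(500, 2))            # donors (dataset B)
y_b = x_b @ np.array([1.0, -0.5]) + rng.normal(size=500)
x_a = rng.normal(size=(200, 2))            # recipients (dataset A)
y_imputed = nearest_neighbour_match(x_a, x_b, y_b)  # Y now available in A
```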
This thesis deals with REITs, their capital structure and the effects that regulatory requirements might have on leverage. The data used result from a combination of Thomson Reuters data with hand-collected data regarding the REIT status, regulatory information and law variables. Overall, leverage is analysed across 20 countries in the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reportings, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Statistically significant differences in mean leverage between NON-REITs and REITs motivate further investigation. My results show that variables beyond traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and on the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from the EPRA reportings are mandatory. I test various combinations of regulatory variables, which show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation that specifies a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Further, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that the regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analyses. Results based on an event study show that REITs have statistically lower leverage ratios than NON-REITs. A structural break model reveals the following effect: REITs increase their leverage ratios in the years prior to obtaining REIT status. The ex ante time frame is thus characterised by a bunkering and adaptation process, followed by the transformation at the event itself. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
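A minimal sketch of the kind of difference-in-means comparison behind the first step of the analysis, with hypothetical leverage data (the actual analysis controls for the standard capital structure determinants):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical leverage ratios (debt / total assets); not the study's data.
leverage_reit = rng.normal(0.45, 0.10, size=300)
leverage_non_reit = rng.normal(0.52, 0.12, size=300)

# Welch's t-test for a difference in mean leverage between the two groups.
t, p = stats.ttest_ind(leverage_reit, leverage_non_reit, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```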
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indicators, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how Makers discover, evaluate and exploit entrepreneurial opportunities. Moreover, there is still controversy, both among policy-makers and within the maker community itself, about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings within this dissertation expand research in the field of the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics and motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved. In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
Spatial Queues
(2000)
In the present thesis, a theoretical framework for the analysis of spatial queues is developed. Spatial queues are a generalization of the classical concept of queues in that they provide the possibility of assigning properties to the users. These properties may influence the queueing process, but may also be of interest in themselves. As a field of application, mobile communication networks are modeled by spatial queues in order to demonstrate the advantage of including user properties in the queueing model. In this application, the property of main interest is the user's position in the network. After a short introduction, the second chapter contains an examination of the class of Markov-additive jump processes, including expressions for the transition probabilities and the expectation as well as laws of large numbers. Chapter 3 contains the definition and analysis of the central concept of spatial Markovian arrival processes (SMAPs for short) as a special case of Markov-additive jump processes, but also as a natural generalization of the well-known concept of BMAPs. In Chapters 4 and 5, SMAPs serve as arrival streams for the analyzed periodic SMAP/M/c/c and SMAP/G/∞ queues, respectively. These types of queues find application as models or planning tools for mobile communication networks. The analysis of these queues involves new methods, so that even for the special case of BMAP inputs (i.e. non-spatial queues) new results are obtained. In Chapter 6, a procedure for statistical parameter estimation is proposed along with numerical results. The thesis is concluded by an appendix which collects necessary results from the theories of Markov jump processes and stochastic point fields. For special classes of Markov jump processes, new results have been obtained here, too.
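For the non-spatial baseline of the SMAP/M/c/c model, the blocking probability of the classical M/M/c/c loss system is given by the Erlang-B formula; the sketch below evaluates it with the standard stable recursion. This is only the textbook special case that the spatial model generalises.

```python
def erlang_b(c: int, a: float) -> float:
    """Blocking probability of an M/M/c/c loss system with offered load
    a = lambda/mu, via the recursion
    B(0) = 1,  B(j) = a*B(j-1) / (j + a*B(j-1))."""
    b = 1.0
    for j in range(1, c + 1):
        b = a * b / (j + a * b)
    return b

print(erlang_b(10, 7.0))  # ~0.079: about 8% of arrivals find all 10 servers busy
```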
Globalization and the emergence of global value chains have not only changed the way we live, but also the way economists study international economics. These changes are visible in various areas and dimensions. This dissertation deals, mostly empirically, with some of these issues related to global value chains. It starts by critically examining the political economy forces determining the occurrence and the extent of trade liberalization conditions in World Bank lending agreements. The focal point is whether these are affected by the World Bank's most influential member countries. Afterwards, the thesis moves on to describe the trade of the European Union member countries at each stage of the value chain. The description is based on a new classification of goods into parts, components and final products, as well as a newly developed measure describing the average level of development of a country's trading partners. This descriptive exercise is followed by a critical examination of discrepancies between gross trade and trade in value added with respect to comparative advantage. A gravity model is employed to contrast results when studying the institutional determinants of comparative advantage. Finally, the thesis deals with determinants of regional location choices for foreign direct investment. The analysis is based on a theoretical new economic geography model and employs a newly developed index that accounts for the presence of potentially all suppliers and buyers at all stages of the value chain.
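The gravity model mentioned above is, in its simplest log-linear form, a regression of bilateral trade on the economic mass of both partners and the distance between them. A hedged sketch with simulated data (the dissertation's actual specification, controls and estimator may well differ):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400  # hypothetical country pairs
log_gdp_i = rng.normal(10, 1, n)
log_gdp_j = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
# Simulated log exports following a textbook gravity structure:
log_trade = log_gdp_i + log_gdp_j - 1.1 * log_dist + rng.normal(0, 1, n)

# OLS of log trade on log GDPs and log distance recovers the elasticities.
X = sm.add_constant(np.column_stack([log_gdp_i, log_gdp_j, log_dist]))
print(sm.OLS(log_trade, X).fit().params)  # const, gdp_i, gdp_j, distance
```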
Magnetic Resonance Imaging (MRI) and Electroencephalography (EEG) are tools used to investigate the functioning of the working brain in both human and animal studies. Both methods are increasingly combined in separate or simultaneous measurements, with the aim of benefiting from their individual strengths while compensating for their particular weaknesses. However, little attention has been paid to how statistical analysis strategies can influence the information that can be retrieved from a combined EEG-fMRI study. Two independent studies in healthy student volunteers were conducted in the context of emotion research to demonstrate two approaches to combining MRI and EEG data of the same participants. The first study (N = 20) applied a visual search paradigm and found that, without a statistical combination of the two measurements, the assumed effects were absent in both. The second study (N = 12) applied a novelty P300 paradigm and found that only the statistical combination of MRI and EEG measurements was able to disentangle the functional effects of brain areas involved in emotion processing. In conclusion, the observed results demonstrate that there are added benefits of statistically combining EEG-fMRI data acquisitions by assessing both the inferential statistical structure and the intra-individual correlations of the EEG and fMRI signals.
The visualization of relational data is at the heart of information visualization. The prevalence of visual representations for this kind of data is based on many real-world examples spread over many application domains: protein-protein interaction networks in the field of bioinformatics, hyperlinked documents in the World Wide Web, call graphs in software systems, or co-author networks are just four instances of a rich source of relational datasets. The most common visual metaphor for this kind of data is certainly the node-link approach, which typically suffers from visual clutter caused by many edge crossings. Many sophisticated algorithms have been developed to lay out a graph efficiently and with respect to a list of aesthetic graph drawing criteria. Relations between objects normally change over time, and visualizing these dynamics poses an additional challenge for graph visualization researchers. Applying the layout algorithms developed for static graphs to the intermediate states of dynamic graphs is one strategy to compute layouts for an animated graph sequence that shows the dynamics. The major drawback of this approach is the high cognitive effort required of a viewer of the animation to preserve the mental map. To tackle this problem, a sophisticated layout algorithm has to inspect the whole graph sequence and compute a layout with as few changes as possible between subsequent graphs. The main contribution and ultimate goal of this thesis is the visualization of dynamic compound weighted directed multigraphs as a static image that aims at visual clutter reduction and mental map preservation. To achieve this goal, we use a radial space-filling visual metaphor to represent the dynamics in relational data; as a side effect, the obtained pictures are aesthetically appealing. In this thesis we first describe static graph visualizations for rule sets obtained by extracting knowledge from software archives under version control. In a different work we apply animated node-link diagrams to code-developer relationships to show the dynamics in software systems. An underestimated visualization paradigm is the radial representation of data. Though this kind of representation has a long history going back to centuries-old statistical graphics, little effort has been made to fully explore the benefits of this paradigm. We evaluated a Cartesian and a radial counterpart of a visualization technique for visually encoding transaction sequences and dynamic compound digraphs with both an eye-tracking and an online study. Apart from the finding that even laymen in graph theory can understand the novel approach in a short time and apply it to datasets, we observed some interesting phenomena. The thesis is concluded by an aesthetic dimensions framework for dynamic graph drawing, future work, and currently open issues.
In politics and economics, and thus in official statistics, the precise estimation of indicators for small regions or parts of populations, the so-called Small Areas or domains, is discussed intensively. The design-based estimation methods currently used are mainly based on asymptotic properties and are thus reliable for large sample sizes. With small sample sizes, however, these design-based considerations often do not apply, which is why special model-based estimation methods have been developed for this case, the Small Area methods. While these may be biased, they often have a smaller mean squared error (MSE) than the unbiased design-based estimators. In this work, both classical design-based estimation methods and model-based estimation methods are presented and compared. The focus lies on the suitability of the various methods for use in official statistics. First, theory and algorithms suitable for the required statistical models are presented, which form the basis for the subsequent model-based estimators. Sampling designs apt for Small Area applications are then presented. Based on these fundamentals, both design-based estimators and model-based estimation methods are developed. Particular consideration is given to the area-level empirical best predictor for binomial variables. Numerical and Monte Carlo estimation methods are proposed and compared for this analytically intractable estimator. Furthermore, MSE estimation methods are proposed and compared. A very popular and flexible resampling method that is widely used in the field of Small Area statistics is the parametric bootstrap. One major drawback of this method is its high computational intensity. To mitigate this disadvantage, a variance reduction method for the parametric bootstrap is proposed. On the basis of theoretical considerations, the enormous potential of this proposal is demonstrated, and a Monte Carlo simulation study shows the immense variance reduction, up to 90%, that can be achieved with this method in realistic scenarios. This actually enables the use of the parametric bootstrap in official statistics applications. Finally, the presented estimation methods are examined in a large Monte Carlo simulation study based on a specific application to the Swiss structural survey. Here, problems of high relevance for official statistics are discussed, in particular: (a) How small can the areas be without the estimates losing an adequate level of precision? (b) Are the accuracy specifications for the Small Area estimators reliable enough to be used for publication? (c) Do very small areas interfere with the modelling of the variables of interest, and could they thus cause a deterioration of the estimates for larger and therefore more important areas? (d) How can covariates available at different levels of aggregation be used in an appropriate way to improve the estimates? The data basis is the Swiss census of 2001. The main result is that, in the author's view, the use of Small Area estimators for the production of estimates for areas with very small sample sizes is advisable in spite of the modelling effort. The MSE estimates provide a useful measure of precision, but do not reach the level of reliability of the variance estimates for design-based estimators in all Small Areas.
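The parametric bootstrap referred to above estimates the MSE of a model-based estimator by repeatedly simulating from the fitted model and re-estimating; the B re-fits are the costly step that the proposed variance reduction attacks. A bare-bones generic sketch (the concrete fit/simulate/estimate functions here are placeholders, not the thesis's models):

```python
import numpy as np

def parametric_bootstrap_mse(fit, simulate, estimate, data, B=200, seed=0):
    """Generic parametric bootstrap MSE estimate.

    fit(data)            -> fitted model parameters theta
    simulate(theta, rng) -> (bootstrap dataset, true target quantity)
    estimate(data)       -> point estimate from one dataset
    The B re-estimations are the computationally expensive step.
    """
    rng = np.random.default_rng(seed)
    theta = fit(data)
    errs = [
        (estimate(boot) - truth) ** 2
        for boot, truth in (simulate(theta, rng) for _ in range(B))
    ]
    return float(np.mean(errs))

# Toy usage: MSE of the sample mean under a fitted normal model.
data = np.random.default_rng(1).normal(5.0, 2.0, size=30)
fit = lambda d: (d.mean(), d.std(ddof=1))
simulate = lambda th, rng: (rng.normal(th[0], th[1], size=30), th[0])
estimate = lambda d: d.mean()
print(parametric_bootstrap_mse(fit, simulate, estimate, data))  # ~ sigma^2 / n
```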
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality and keep fieldwork costs low at the same time is a difficult task. The aim of examining survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic review of the existing evidence along the dimensions of a central framework in the field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items (a pooling sketch is given below)
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
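As an illustration of step (2), the pooling step of a random-effects meta-analysis can be sketched with the DerSimonian-Laird estimator; the effect sizes below are invented, and the thesis's meta-analyses may use different models or estimators.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis (DerSimonian-Laird).
    Returns the pooled effect, its standard error, and tau^2."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                          # fixed-effect weights
    y_fe = (w * y).sum() / w.sum()       # fixed-effect pooled estimate
    q = (w * (y - y_fe) ** 2).sum()      # Cochran's Q
    df = len(y) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)              # random-effects weights
    pooled = (w_re * y).sum() / w_re.sum()
    return pooled, np.sqrt(1.0 / w_re.sum()), tau2

# Invented effects (e.g. log odds ratios) from five hypothetical studies:
print(dersimonian_laird([0.2, 0.35, 0.1, 0.5, 0.25],
                        [0.04, 0.03, 0.05, 0.06, 0.02]))
```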
Although it has been demonstrated that nociceptive processing can be modulated by heterotopically and concurrently applied noxious stimuli, the nature of the brain processes involved in this percept modulation in healthy subjects remains elusive. Using functional magnetic resonance imaging (fMRI), we investigated the effect of noxious counter-stimulation on pain processing. fMRI scans (1.5 T; block design) were performed in 34 healthy subjects (median age: 23.5 years; range: 20–31 years) during combined and single application (duration: 15 s; ISI = 36 s incl. 6 s rating time) of noxious interdigital-web pinching (intensity range: 6–15 N) and contact heat (45–49 °C), presented in pseudo-randomized order during two runs separated by approx. 15 min, with individually adjusted equi-intense stimuli. In order to control for attention artifacts, subjects were instructed to maintain their focus either on the mechanical or on the thermal pain stimulus. Changes in subjective pain intensity were computed as percent differences (∆%) in pain ratings between single and heterotopic stimulation for both fMRI runs, resulting in two subgroups showing a relative pain increase (subgroup P-IN, N=10) vs. decrease (subgroup P-DE, N=12). Second-level and region-of-interest analyses conducted separately for both subgroups revealed that during heterotopic noxious counter-stimulation, subjects with a relative pain decrease showed stronger and more widespread brain activations than subjects with a relative pain increase, in pain-processing regions as well as in a fronto-parietal network. Median-split regression analyses revealed a modulatory effect of prefrontal activation on the connectivity between the thalamus and midbrain/pons, supporting the proposed involvement of prefrontal cortex regions in pain modulation. Furthermore, the mid-sagittal size of the total corpus callosum and five of its subareas were measured from the in vivo magnetic resonance imaging (MRI) recordings. A significantly larger relative truncus size (P=.04) was identified in participants reporting a relative decrease of subjective pain intensity during counter-stimulation, compared to subjects experiencing a relative pain increase. The above subgroup differences observed in functional and structural imaging data are discussed with consideration of potential differences in cognitive and emotional aspects of pain modulation.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveal the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
Aggression is one of the most researched topics in psychology. This is understandable, since aggressive behavior does a lot of harm to individuals and groups. A lot is known already about the biology of aggression, but one system that seems to be of vital importance in animals has largely been overlooked: the hypothalamic-pituitary-adrenal (HPA) axis. Menno Kruk and József Haller and their research teams developed rodent models of adaptive, normal, and abnormal aggressive behavior. They found acute HPA axis (re)activity, but also chronic basal levels, to be causally relevant in the elicitation and escalation of aggressive behavior. As a mediating variable, changes in the processing of relevant social information are proposed, although this could not be tested in animals. In humans, not a lot of research has been done, but there is evidence for associations of both acute and basal cortisol levels with (abnormal) aggression. However, few of these studies have been experimental in nature.

Our aim was to add to the understanding of the role of both basal chronic levels of HPA axis activity and acute levels in the formation of aggressive behavior. Therefore, we conducted two experiments, both with healthy student samples. In both studies we induced aggression with a well-validated paradigm from social psychology: the Taylor Aggression Paradigm. Half of the subjects, however, only went through a non-provoking control condition. We measured trait basal levels of HPA axis activity on three days prior to the experiment. We took several cortisol samples before, during, and after the task. After the induction of aggression, we measured the behavioral and electrophysiological brain response to relevant social stimuli, i.e., emotional facial expressions embedded in an emotional Stroop task. In the second study, we pharmacologically manipulated cortisol levels 60 min before the beginning of the experiment. To do that, half of the subjects were administered 20 mg of hydrocortisone, which elevates circulating cortisol levels (cortisol group); the other half were administered a placebo (placebo group). Results showed that acute HPA axis activity is indeed relevant for aggressive behavior. In Study 1 we found a difference in cortisol levels after the aggression induction in the provoked group compared to the non-provoked group (i.e., a heightened reactivity of the HPA axis). However, this could not be replicated in Study 2. Furthermore, the pharmacological elevation of cortisol levels led to an increase in aggressive behavior in women compared to the placebo group. There were no effects in men, so that while men were significantly more aggressive than women in the placebo group, they were equally aggressive in the cortisol group. Furthermore, there was an interaction of cortisol treatment with block of the Taylor Aggression Paradigm, in that the cortisol group was significantly more aggressive in the third block of the task. Concerning basal HPA axis activity, we found an effect on aggressive behavior in both studies, albeit more consistently in women and in the provoked and non-provoked groups. However, the effect was not apparent in the cortisol group. After the aggressive encounter, information processing patterns were changed in the provoked compared to the non-provoked group for all facial expressions, especially anger. These results indicate that the HPA axis plays an important role in the formation of aggressive behavior in humans as well.
Importantly, different changes within the system, be they basal or acute, are associated with the same outcome in this task. More studies are needed, however, to better understand the role that each plays in different kinds of aggressive behavior, and the role information processing plays as a possible mediating variable. This extensive knowledge is necessary for better behavioral interventions.
The brain is the central coordinator of the human stress reaction. At the same time, peripheral endocrine and neural stress signals act on the brain modulating brain function. Here, three experimental studies are presented demonstrating this dual role of the brain in stress. Study I shows that centrally acting insulin, an important regulator of energy homeostasis, attenuates the stress related cortisol secretion. Studies II and III show that specific components of the stress reaction modulate learning and memory retrieval, two important aspects of higher-order brain function.
Tropospheric ozone (O3) is known to have various detrimental effects on plants, such as visible leaf injury, reduced growth and premature senescence. Flux models allow the determination of the harmful ozone dose entering the plant through the stomata. This dose can then be related to the phytotoxic effects mentioned above to obtain dose-response relationships, which are a helpful tool for the formulation of abatement strategies for ozone precursors.

Ozone flux models depend on the correct estimation of stomatal conductance (gs). Based on measurements of gs, an ozone flux model for two white clover clones (Trifolium repens L. cv Regal; NC-S (ozone-sensitive) and NC-R (ozone-resistant)) differing in their sensitivity to ozone was developed with the help of artificial neural networks (ANNs). White clover is an important species of various European grassland communities. The clover plants were exposed to ambient air at three sites in the Trier region (West Germany) during five consecutive growing seasons (1997 to 2001). The response parameters, visible leaf injury and the NC-S/NC-R biomass ratio, were regularly assessed. gs measurements of both clones served as the output of the ANN-based gs model, while the corresponding climate parameters (i.e. temperature, vapour pressure deficit (VPD) and photosynthetically active radiation (PAR)) and various ozone concentration indices were the inputs. The development of the model is documented in detail, and various model evaluation techniques (e.g. sensitivity analysis) were applied. The resulting gs model was used as a basis for ozone flux calculations, which were related to the above-mentioned response parameters.

The results showed that the ANNs were capable of revealing and learning the complex relationship between gs and key meteorological parameters and ozone concentration indices. The dose-response relationships between ozone fluxes and visible leaf injury were reasonably strong, while those between ozone fluxes and the NC-S/NC-R biomass ratio were fairly weak. The results are discussed in detail with respect to the suitability of the chosen experimental methods and model type.
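A minimal sketch of an ANN-based gs model of this general type: a small multilayer perceptron regressing stomatal conductance on temperature, VPD, PAR and an ozone index. Data, units, architecture and the response surface below are invented placeholders, not the networks trained in this work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
# Hypothetical drivers: temperature (°C), VPD (kPa), PAR (µmol m-2 s-1), O3 (ppb)
X = np.column_stack([
    rng.uniform(5, 35, n), rng.uniform(0.2, 3.5, n),
    rng.uniform(0, 2000, n), rng.uniform(10, 90, n),
])
# Invented response surface standing in for measured gs (mmol m-2 s-1):
gs = 300 * np.tanh(X[:, 2] / 800) * np.exp(-0.4 * X[:, 1]) + rng.normal(0, 15, n)

# Scale inputs, then fit a small MLP mapping climate drivers to gs.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, gs)
print(model.predict(X[:3]))  # predicted gs for the first three records
```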
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In the context of this thesis, the usability of the PGM is assessed to investigate the IG heavy chain (IGH) repertoires of animal models. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRat™). We show that these rats mount a convergent IGH CDR3 response towards the measles virus hemagglutinin protein and tetanus toxoid, with high similarity to their human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We could determine that in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (UCLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations. Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline including error detection against the ImMunoGeneTics (IMGT) database. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
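The UID-based error correction mentioned above boils down, in essence, to grouping reads by their molecular identifier and taking a per-position majority vote. A simplified sketch (equal-length reads, no alignment or quality weighting, unlike the actual pipeline):

```python
from collections import Counter, defaultdict

def uid_consensus(reads):
    """Collapse (uid, sequence) pairs into one consensus sequence per UID
    by per-position majority vote; random sequencing errors scattered
    across reads of the same molecule are voted away."""
    groups = defaultdict(list)
    for uid, seq in reads:
        groups[uid].append(seq)
    return {
        uid: "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))
        for uid, seqs in groups.items()
    }

# Toy reads: two molecules, one read of the first carrying an error.
reads = [("AAT", "ACGTAC"), ("AAT", "ACGTAC"), ("AAT", "ACCTAC"),
         ("GGC", "TTGACA")]
print(uid_consensus(reads))  # {'AAT': 'ACGTAC', 'GGC': 'TTGACA'}
```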
This dissertation includes three research articles on the portfolio risks of private investors. In the first article, we analyze a large data set of private banking portfolios at a major bank in Switzerland, with the unique feature that some portfolios were managed by the bank while others were advisory portfolios. To account for the heterogeneity of individual investors, we apply a mixture model and a cluster analysis. Our results suggest that there is indeed a substantial group of advised individual investors who outperform the bank-managed portfolios, at least after fees. However, a simple passive strategy that invests in the MSCI World and a risk-free asset significantly outperforms both the better advisory and the bank-managed portfolios. The new EU regulation for financial products (UCITS IV) prescribes Value at Risk (VaR) as the benchmark for assessing the risk of structured products. The second article discusses the limitations of this approach and shows that, in theory, the expected return of structured products can be unbounded while the VaR requirement for the lowest risk class is still satisfied. Real-life examples of large returns within the lowest risk class are then provided. The results demonstrate that the new regulation could lead to new, seemingly safe products that hide large risks. Behavioral investors who choose products based only on their official risk classes and their expected returns will therefore invest in suboptimal products. To overcome these limitations, we suggest a new risk-return measure for financial products based on the martingale measure that could close such loopholes. Under the mean-VaR framework, the third article discusses the impact of the underlying's first four moments on a structured product. By expanding the expected return and the VaR of a structured product in terms of the underlying's moments, it is possible to investigate each moment's impact on them simultaneously. The results are tested by Monte Carlo simulation and historical simulation. The findings show that for the majority of structured products, underlyings with large positive skewness are preferred. The preferences for variance and kurtosis are ambiguous.
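The loophole described in the second article can be made concrete with a small Monte Carlo sketch: a hypothetical payoff that is benign 99.5% of the time but crashes rarely keeps its 99% VaR low, even though the tail hides a severe loss that a complementary measure such as expected shortfall reveals. All parameters are invented for illustration.

```python
# Sketch: a low 99% VaR can coexist with catastrophic tail risk.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
returns = rng.normal(0.02, 0.01, n)   # benign outcomes most of the time
crash = rng.random(n) < 0.005         # 0.5% chance of a severe loss,
returns[crash] = -0.80                # which sits below the 1% quantile's radar

var_99 = -np.quantile(returns, 0.01)  # 99% Value at Risk (loss quantile)
es_99 = -returns[returns <= np.quantile(returns, 0.01)].mean()
print(f"99% VaR:            {var_99:.3f}")  # looks harmless
print(f"Expected shortfall: {es_99:.3f}")   # exposes the hidden tail
```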
The complex human alternative GR promoter region plays a pivotal role in the regulation of GR levels. In this thesis, both genomic and environmental factors linked with GR expression are covered. This research showed that the GR promoters were susceptible to silencing by methylation and that the activity of the individual promoters was also modulated by SNPs. E2F1 is a major factor driving the expression of GR 1F transcripts, and methylation of a single CpG dinucleotide cannot mediate the inhibition of transcription in vitro. Furthermore, the GR first exons and 3' splice variants (GRα and GR-P) are expressed throughout the human brain with no region-specific alternative first exon usage. These data mirror the consistently low levels of methylation in the brain and the observed homogeneity throughout the studied regions. Taken together, the research presented in this thesis explored several layers of complexity in GR transcriptional regulation.
This thesis considers the general task of computing a partition of a set of given objects such that each set of the partition has a cardinality of at least a fixed number k. Among such partitions, which we call k-clusters, the objective is to find the k-cluster that minimises a certain cost derived from a given pairwise difference between objects which end up in the same set. As a first step, this thesis introduces a general problem, denoted by (||.||,f)-k-cluster, which models the task of finding a k-cluster of minimum cost given by an objective function computed with respect to specific choices for the cost functions f and ||.||. In particular, this thesis considers three different choices for f and three different choices for ||.||, which results in a total of nine variants of the general problem. With the concept of parameterised approximation in mind, we first investigate the role of the lower bound on the cluster cardinalities and find that k is not a suitable parameter, because the problem remains NP-hard even when k is fixed to the constant 3. The reductions presented to show this hardness yield the even stronger result that polynomial-time approximations with some constant performance ratio for any of the nine variants of (||.||,f)-k-cluster require a restriction to instances for which the pairwise distance on the objects satisfies the triangle inequality. For this restriction, which we informally refer to as metric instances, constant-factor approximation algorithms for eight of the nine variants of (||.||,f)-k-cluster are presented. While two of these algorithms yield the provably best approximation ratio (assuming P != NP), the others can only guarantee a performance which depends on the lower bound k. With the positive effect of the triangle inequality and applications to facility location in mind, we discuss the further restriction to the setting where the given objects are points in Euclidean metric space. Considering that, for other related problems, computational hardness is caused by high dimensionality of the input (the curse of dimensionality), we check whether this is also the source of intractability for (||.||,f)-k-cluster. However, the problem remains NP-hard even when restricted to small constant dimensionality, which disproves this theory. We then use parameterisation to develop approximation algorithms for (||.||,f)-k-cluster without restriction to metric instances. In particular, we discuss structural parameters which reflect how much the given input differs from a metric. This idea results in parameterised approximation algorithms with parameters such as the number of conflicts (our name for pairs of objects for which the triangle inequality is violated) or the number of conflict vertices (objects involved in a conflict). The performance ratios of these parameterised approximations are in most cases identical to those of the approximations for metric instances. This shows that for most variants of (||.||,f)-k-cluster, efficient and reasonable solutions are also possible for non-metric instances.
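For intuition about the task itself, the following sketch shows a simple greedy baseline for k-clustering Euclidean points: group each unassigned point with its k-1 nearest unassigned neighbours, then fold any leftovers into their nearest cluster. This is an illustrative heuristic with no performance guarantee, not one of the algorithms developed in the thesis.

```python
# Greedy baseline for partitions with cluster cardinality >= k.
import numpy as np

def greedy_k_cluster(points, k):
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    unassigned = set(range(n))
    clusters = []
    while len(unassigned) >= k:
        seed = unassigned.pop()
        others = sorted(unassigned, key=lambda j: dist[seed, j])[: k - 1]
        for j in others:
            unassigned.remove(j)
        clusters.append([seed] + others)
    for j in unassigned:  # fewer than k left over: merge into nearest cluster
        best = min(range(len(clusters)),
                   key=lambda c: min(dist[j, m] for m in clusters[c]))
        clusters[best].append(j)
    return clusters

pts = np.random.default_rng(2).random((10, 2))
print(greedy_k_cluster(pts, 3))  # every cluster has size >= 3
```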
This doctoral thesis includes five studies that deal with the topics of work, well-being, and family formation, as well as their interaction. The studies aim to answer the following questions: Do workers’ personality traits determine whether they sort into jobs with performance appraisals? Does job insecurity result in lower quality and quantity of sleep? Do public smoking bans affect subjective well-being by changing individuals’ use of leisure time? Can risk preferences help to explain non-traditional family forms? And finally, are differences in out-of-partnership birth rates between East and West Germany driven by cultural characteristics that evolved in the two separate politico-economic systems? To answer these questions, the following chapters draw on core economic variables such as working conditions, income, and time use, but also employ a range of sociological and psychological concepts such as personality traits and satisfaction measures. Furthermore, all five studies use data from the German Socio-Economic Panel (SOEP), a representative longitudinal panel of private households in Germany, and apply state-of-the-art microeconometric methods. The findings of this doctoral thesis are important for individuals, employers, and policymakers. Workers and employers benefit from knowing the determinants of occupational sorting, as vacancies can be filled more accurately. Moreover, knowing which job-related problems lead to lower well-being and potentially higher sickness absence is likely to increase efficiency in the workplace. The research on smoking bans and family formation in chapters 4, 5, and 6 is particularly interesting for policymakers. The results on the effects of smoking bans on subjective well-being presented in chapter 4 suggest that the impacts of tobacco control policies should be weighed more carefully. Additionally, understanding why women are willing to take the risks associated with single motherhood can help to improve policies targeting single mothers.
The dissertation includes three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation regarding the nature of a comprehensive search for health information and on enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify the most fundamental source characteristics and thereby provide a point of reference for describing the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals pursue when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, and four goals (understanding, action planning, hope and reassurance) were classified within them. The final version of the questionnaire comprised four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples; the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate the goals of health information seeking (HIS), which likely shape the intention regarding how to organize the search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information. Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem- and emotion-focused coping goals, and the intention to seek comprehensively (as the outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good, and the hypotheses regarding specific direct and indirect effects were confirmed. Finally, the findings of all three articles are synthesized, and the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are derived.
This dissertation develops a rationale for how to use fossil data in solving biogeographical and ecological problems. It is argued that large amounts of high-quality fossil data can be used to document the evolutionary processes (the origin, development, formation and dynamics) of Arealsystems, which can be divided into six stages in North America: the Refugium Stage (before 15,000 years ago: > 15 ka), the Dispersal Stage (from 8,000 to 15,000 years ago: 8.0 - 15 ka), the Developing Stage (from 3,000 to 8,000 years ago: 3.0 - 8.0 ka), the Transitional Stage (from 1,000 to 3,000 years ago: 1 - 3 ka), the Primitive Stage (from 500 to 1,000 years ago: 0.5 - 1 ka) and the Human Disturbing Stage (during the last 500 years: < 0.5 ka). The division into these six stages is based on geostatistical analysis of the FAUNMAP database, which contains 43,851 fossil records collected from 1860 to 1994 in North America. Fossil data are among the best materials with which to test the glacial refugia theory. Glacial refugia are areas where flora and fauna were preserved during the glacial period; at present they are characterized by richness in species and endemic species. This means that these (endemic) species should have been distributed purely or primarily in these areas during the glacial period, so the refugia can be identified by fossil records of that period. If this is not the case, the richness in (endemic) species may not be the result of glacial refugia. By exploring where mammals lived during the Refugium Stage (> 15 ka), seven refugia in North America can be identified: the California Refugium, the Mexico Refugium, the Florida Refugium, the Appalachia Refugium, the Great Basin Refugium, the Rocky Mountain Refugium and the Great Lake Refugium. The first five refugia coincide well with De Lattin's dispersal centers recognized by biogeographical methods using data on modern distributions. The individuals of a species are not evenly distributed over its Arealsystem. Brown's Hot Spots Model shows that in most cases there is an enormous variation in abundance within the areal of a species: in a census, zero or only a very few individuals occur at most sample locations, but tens or hundreds are found at a few sample sites. Locations where only a few individuals can be sampled in a survey are called "cool spots", and sites where tens or hundreds of individuals can be observed in a survey are called "hot spots". Many areas within the areal are uninhabited; these are called "holes". This model has direct implications for analyzing fossil data: hot spots have a much higher local population density than cool spots, so the chances of discovering fossil individuals of a species are much higher in sediments located in a hot spot area than in a cool spot area. Therefore, much higher MNIs (Minimum Number of Individuals) of the species should be found at fossil localities located in hot spot areas than in cool spot areas. Since there are only a few hot spots but many cool spots within the areal of a single species, only a few fossil sites should yield very high MNIs, whereas most other sites should yield only very low MNIs. This prediction proved true in an analysis of the 70 species in FAUNMAP with over 100 fossil records. The temporal and spatial variation in abundance can thus be reconstructed from the temporospatial distribution of the MNIs of a species over its Arealsystem.
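The hot spot / cool spot / hole terminology translates directly into a simple classification of fossil localities by their MNI counts; the data and thresholds below are invented for illustration and are not taken from the FAUNMAP analysis.

```python
# Sketch: classify fossil localities by MNI into holes, cool spots and hot spots.
sites = {"site_A": 0, "site_B": 3, "site_C": 145, "site_D": 7, "site_E": 98}

def classify(mni, hot_threshold=50):
    # hot_threshold is an illustrative cut-off, not a value from the thesis
    if mni == 0:
        return "hole"
    return "hot spot" if mni >= hot_threshold else "cool spot"

for name, mni in sites.items():
    print(f"{name}: MNI={mni} -> {classify(mni)}")
```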
Areas with no fossil records from the last few thousand years may be holes, sites with much higher MNIs may be hot spots, and locations with low MNIs may be cool spots. Although the hot spots of many species can remain unchanged in an area over thousands of years, our study shows that a large shift of hot spots occurred mainly around 1,500-1,000 years ago. There are three directions of movement: from the west side to the east side of the Rockies, from the East of the USA to the east side of the Rockies, and from the west side of the Rockies to the Southwest of the USA. The first two directions of shift are called the Lewis and Clark pattern, which can be verified with the observations made by Lewis and Clark during their expedition in 1805-1806. The historical process behind this pattern may well explain the 200-year-old puzzle, noted by modern ecologists and biogeographers, of why big game was then abundant on the east side of the Rocky Mountains but rare on the west side. The third direction of shift is called the Bayham pattern. This pattern can be tested against the model of Late Holocene resource intensification first described by Frank E. Bayham, and the historical process creating it challenges the classic explanation of Late Holocene resource intensification. An environmental change model is proposed to account for the shift of hot spots. Implications of glacial refugia and hot spot areas for wildlife management and effective conservation are discussed, and suggestions are given for how paleontologists and zooarchaeologists can provide more valuable information for other disciplines in their future excavations and research.
Background and rationale: Changing working conditions demand adaptation, resulting in higher stress levels in employees. As a consequence, decreased productivity, rising rates of sick leave, and early retirement lead to higher direct, indirect, and intangible costs. Aims of the Research Project: The aim of the study was to test the usefulness of a novel translational diagnostic tool, Neuropattern, for the early detection, prevention, and personalized treatment of stress-related disorders. The trial was designed as a pilot study with a waitlist control group. Materials and Methods: In this study, 70 employees of the Forestry Department Rhineland-Palatinate, Germany, were enrolled. Subjects were block-randomized according to the functional group of their career field and underwent Neuropattern diagnostics either immediately or after a waiting period of three months. After the diagnostic assessment, their physicians received the Neuropattern Medical Report, including the diagnostic results and treatment recommendations. Participants were informed via the Neuropattern Patient Report and were eligible for an individualized Neuropattern Online Counseling account. Results: The application of Neuropattern diagnostics significantly improved mental health and health-related behavior and reduced perceived stress, emotional exhaustion, overcommitment and, possibly, presenteeism. Additionally, Neuropattern sensitively detected functional changes in stress physiology at an early stage, thus allowing timely personalized interventions to prevent and treat stress pathology. Conclusion: The present study encourages the application of Neuropattern diagnostics to early intervention in non-clinical populations. However, further research is required to determine the best operating conditions.
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard, which justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to the family of microaggregation methods and specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in the resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: it initially partitions the data set into clusters and subsequently replaces all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic, based on the dual information, to obtain an integer solution. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives, taking errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem as a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program which combines the partitioning and the aggregation steps and aims to minimize the sum of chi-squared errors in frequency tables.
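The two-step structure of microaggregation can be sketched as follows: partition the rows into clusters of size at least k, then replace every row by a cluster representative (here the per-column mode, a natural choice for nominal attributes). The trivial sequential partitioning below merely stands in for the column generation scheme; the data and helper names are illustrative.

```python
# Sketch of two-step microaggregation for nominal data achieving k-anonymity.
from collections import Counter

def mode(values):
    return Counter(values).most_common(1)[0][0]

def microaggregate(rows, k):
    # Step 1 (stand-in partitioning): consecutive chunks of size >= k
    clusters = [rows[i:i + k] for i in range(0, len(rows) - len(rows) % k, k)]
    if len(rows) % k:                       # fold the remainder into the last cluster
        clusters[-1].extend(rows[-(len(rows) % k):])
    # Step 2 (aggregation): replace each row by the per-column mode of its cluster
    out = []
    for cluster in clusters:
        rep = tuple(mode(col) for col in zip(*cluster))
        out.extend([rep] * len(cluster))    # each output row equals >= k-1 others
    return out

data = [("DE", "m"), ("DE", "f"), ("FR", "f"), ("DE", "f"), ("FR", "m")]
print(microaggregate(data, 2))
```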
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
During and after application, pesticides enter the atmosphere by volatilisation and by wind erosion of particles on which the pesticide is sorbed. Measurements at application sites revealed that sometimes more than half of the amount applied is lost into the atmosphere within a few days. The atmosphere is an important part of the hydrologic cycle that can transport pesticides away from their point of application and deposit them into aquatic and terrestrial ecosystems far from their point of use. In the region of Trier, pesticides are widely used; in order to protect crops from pests and increase crop yields in viniculture, six to eight pesticide applications take place between May and August. The impact that these applications have on the environmental pollution of the region is not yet well understood. The present study was designed to characterize the atmospheric presence, temporal patterns, transport and deposition of a variety of pesticides in the atmosphere of the Trier area. To this purpose, rain samples were collected weekly at eight sites during the growing seasons 2000, 2001 and 2002, and air samples (gas and particle phases) were collected during the growing season 2002. Multiresidue analysis methods were developed to determine multiple classes of pesticides in rain water and in particle- and gas-phase samples. Altogether, 24 active ingredients and 3 metabolites were chosen as representative substances, focussing mainly on fungicides. Twenty-four of the 27 measured pesticides were detected in the rain samples; seventeen pesticides were detected in the air samples. The pesticides detected most frequently and at the highest concentrations, both in rain and air, were fungicides. The insecticide methyl parathion was also detected in several rain samples, as well as two substances that are banned in Germany, the herbicides atrazine and simazine. Concentration levels varied during the growing season, with the highest concentrations measured in the late spring and summer months, coinciding with application times and warmer months. Concentration levels measured in the rain samples were generally in the order of ng l-1. Though average concentrations for single substances were less than 100 ng l-1, total concentrations were considerable and in some instances well above the EU drinking water quality standard of 500 ng l-1 for total pesticides. Compared to the amounts applied for pest control, the amounts deposited by rain corresponded to between 0.004% and 0.10% of the maximum application rates. These low pesticide inputs from precipitation to surface-water bodies are not of concern in vinicultural areas, where the impact of other sources, such as surface runoff from the treated areas and the cleaning of field crop sprayers, is more important. However, the potential impacts of these aerial pesticide inputs on non-target sites, such as organic crops and sensitive ecosystems, are as yet not known. Concentration levels in the air samples were in the order of ng m-3 at sites close to the fields where pesticides were applied, while lower values, in the order of pg m-3, were detected at the site located further away from fields where applications were performed. The air concentration levels found in this study do not represent a concern for human health in terms of acute risk. Inhalation toxicity studies have shown that an acute potential risk only arises at air concentrations in the range of g m-3.
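The order of magnitude of the reported deposition fractions can be checked with simple arithmetic relating rainfall depth, rain concentration and application rate; the figures below are invented for illustration, not measurements from the study.

```python
# Back-of-the-envelope check of rain deposition as a fraction of application.
applied_g_per_ha = 1500.0          # hypothetical maximum application rate (g/ha)
rain_mm = 300.0                    # hypothetical seasonal rainfall depth (mm)
conc_ng_per_l = 100.0              # hypothetical pesticide concentration in rain (ng/l)

litres_per_ha = rain_mm * 10_000   # 1 mm of rain over 1 ha equals 10,000 l
deposited_g = conc_ng_per_l * litres_per_ha * 1e-9
print(f"deposited: {deposited_g:.3f} g/ha "
      f"({100 * deposited_g / applied_g_per_ha:.3f}% of application)")
# -> about 0.3 g/ha, i.e. 0.02% of the application rate, consistent in
#    magnitude with the 0.004%-0.10% range reported above.
```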
Finally, it must be kept in mind that only a small number of the chemicals applied in the area were analysed in this study. In order to better evaluate the local atmospheric load of pesticides, a wider spectrum of applied substances (including metabolites) needs to be investigated.