This dissertation investigates corporate acquisition decisions, which represent important corporate development activities for family and non-family firms. Its main research objective is to generate insights into the subjective decision-making behavior of corporate decision-makers from family and non-family firms and their weighting of M&A decision criteria during the early pre-acquisition target screening and selection process. The main methodology chosen for investigating M&A decision-making preferences and the weighting of M&A decision criteria is a choice-based conjoint analysis. The overall sample consists of 304 decision-makers from 264 private and public family and non-family firms, mainly from Germany and the DACH region. In the first empirical part of the dissertation, the relative importance of strategic, organizational, and financial M&A decision criteria for corporate acquirers in acquisition target screening is investigated. In addition, the author uses a cluster analysis to explore whether distinct decision-making patterns exist in acquisition target screening. In the second empirical part, the dissertation explores whether there are differences in investment preferences in acquisition target screening between family and non-family firms and within the group of family firms. With regard to the heterogeneity of family firms, the dissertation generates insights into how family-firm-specific characteristics such as family management, the generational stage of the firm, and non-economic goals such as transgenerational control intention influence the weighting of different M&A decision criteria in acquisition target screening. The dissertation contributes to strategic management research, specifically to the M&A literature, and to family business research.
The results of this dissertation generate insights into the weighting of M&A decision criteria and facilitate a better understanding of corporate M&A decisions in family and non-family firms. The findings show that decision-making preferences (that is, the weighting of M&A decision criteria) are influenced by characteristics of the individual decision-maker, of the firm, and of the environment in which the firm operates.
In the modeling context, non-linearities and uncertainty go hand in hand: the curvature of the utility function determines the degree of risk aversion. This concept is exploited in the first article of this thesis, which incorporates uncertainty into a small-scale DSGE model. More specifically, this is done via a second-order approximation, with the derivation carried out in great detail and the more formal aspects carefully discussed. Moreover, the consequences of this method for calibrating the equilibrium condition are discussed. The second article considers the essential model part of the first paper and focuses on the (forward-looking) data needed to meet the model's requirements. A large number of uncertainty measures are utilized to explain a possible approximation bias. The last article keeps to the same topic but uses statistical distributions instead of actual data. In addition, theoretical (model) and calibrated (data) parameters are used to produce more general statements. In this way, several relationships are revealed with regard to a biased interpretation of this class of models. The dissertation explains the respective approaches in full detail, as well as how they build on each other.
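The stated link between the utility function's curvature and risk aversion can be made precise via the standard Arrow-Pratt measure; the following generic CRRA example (illustrative notation, not necessarily the thesis's own model) shows how a single curvature parameter governs risk aversion:

```latex
% Arrow-Pratt coefficient of relative risk aversion for utility u(c):
R(c) = -\,\frac{c\,u''(c)}{u'(c)}
% For CRRA utility (an illustrative assumption), the curvature parameter
% \gamma is exactly the degree of relative risk aversion:
u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \qquad
u'(c) = c^{-\gamma}, \qquad
u''(c) = -\gamma\,c^{-\gamma-1}, \qquad
R(c) = \gamma .
```

A second-order approximation of the equilibrium conditions retains exactly the terms through which this curvature, and hence uncertainty, affects the solution; a first-order (certainty-equivalent) approximation would discard them.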
In summary, the question remains whether the exact interpretation of model equations should play a role in macroeconomics. If we answer this in the affirmative, this work shows to what extent practical use of these models can lead to biased results.
Internet interventions have gained popularity, the idea being to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders, with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence, and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of them.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to capture change from screening to registration (pretreatment) and early change (registration to week four of treatment). Three early change patterns were identified: two characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicate that early change should be closely monitored in internet interventions, as it may be an important indicator of treatment outcome.
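A generic two-piece latent growth specification of the kind described can be written as follows (illustrative notation; the study's exact parameterization may differ):

```latex
% y_{it}: outcome of person i at occasion t; t^* marks registration
y_{it} = \eta_{0i} + \eta_{1i}\,s_1(t) + \eta_{2i}\,s_2(t) + \varepsilon_{it},
\qquad s_1(t) = \min(t, t^*), \quad s_2(t) = \max(t - t^*,\, 0),
```

where \(\eta_{1i}\) is the pretreatment slope (screening to registration) and \(\eta_{2i}\) the early-change slope (registration to week four); Growth Mixture Modeling lets the means of these growth factors differ across latent classes, yielding the distinct change patterns.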
Study II (Chapter 6) picked up the idea of analyzing change patterns in internet interventions and extended it by using the Muthén-Roy model to identify combined change-dropout patterns. A slightly larger sample from the dataset of Study I was analyzed (N = 483). Four change-dropout patterns emerged; a high risk of dropout was associated with both rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated both with deterioration and with a good-enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome across participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on participant group. Moreover, using specific measures of adherence, such as adherence to treatment components, may be crucial for detecting change mechanisms in internet interventions. Fostering adherence to treatment components may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.
Based on a questionnaire study, different elements of a beneficial handling of health information were examined and their relationships with person-specific characteristics were analyzed. As central aspects of the information processes, the three elements health information literacy, health interest, and health-specific information habits were conceptually separated from one another. Building on the current state of research, a theoretical model of the handling of health information was first developed; it highlights the importance of literacy and interest for health-related information habits, relates individual levels of these three elements to sociodemographic factors, personality traits, beliefs, and health status, and describes links to health-relevant behaviors. This model was then tested empirically on a sample of 352 vocational school students from three occupational fields (business/administration, technology, and health). Multiple regression analyses identified significant predictors of the three main elements literacy, interest, and information habits; logistic regressions and correlations examined their relationships with health behavior. In addition, linear structural equation models were developed to predict information behavior. The results confirm the conceptual separation of the three factors, each of which was associated with different predictors. Based on the findings, starting points for further research and for promoting a competent handling of health information are discussed.
Albert Dietz and Bernhard Grothe were two important architects in the Franco-Saarland border region. In 1952 they founded a working partnership with the aim of realizing, on a collaborative basis and as effectively as possible, the post-war demand for secular and sacred reconstruction and new building projects in their works. This study deals exclusively with the sacred buildings, which vividly demonstrate and document their artistic and architectural achievements.
Global food security poses large challenges to a fast-changing human society and has been a key topic for scientists, agriculturists, and policymakers in the 21st century. The United Nations predicts a total world population of 9.15 billion in 2050 and names the provision of food security as the second of its Sustainable Development Goals. As the capacities of both land and water resources are finite and locally heavily overused, reducing agriculture's environmental impact while meeting the increasing food demand of a constantly growing population is one of the greatest challenges of our century. A multifaceted solution is therefore required, including approaches that use geospatial data to optimize agricultural food production.
The availability of precise and up-to-date information on vegetation parameters is mandatory to fulfill the requirements of agricultural applications. Direct field measurements of such vegetation parameters are expensive and time-consuming. Remote sensing, in contrast, offers a variety of techniques for cost-effective and non-destructive retrieval of vegetation parameters. Although not yet widely used, hyperspectral thermal infrared (TIR) remote sensing has been demonstrated to be a valuable addition to existing remote sensing techniques for the retrieval of vegetation parameters.
This thesis examined the potential of TIR imaging spectroscopy as a contribution to the growing need for food security. The main scientific question concerned the extraction of vegetation parameters from imaging TIR spectroscopy. To this end, two studies demonstrated the ability to extract vegetation-related parameters from leaf emissivity spectra: (i) the discrimination of eight plant species based on their emissivity spectra and (ii) the detection of drought stress in potato plants using temperature measures and emissivity spectra.
The datasets used in these studies were collected using the Telops Hyper-Cam LW, a novel imaging spectrometer. Since this FTIR spectrometer presents some particularities, special attention was paid to the development of dedicated experimental data acquisition setups and data processing chains. The latter include data preprocessing and the development of algorithms for extracting precise surface temperatures, reproducible emissivity spectra and, ultimately, vegetation parameters.
The spectrometer's versatility allows the collection of airborne imaging spectroscopy datasets. Since the general availability of airborne TIR spectrometers is limited, the preprocessing and data extraction methods are underexplored compared to reflective remote sensing. This holds especially for atmospheric correction (AC) and temperature and emissivity separation (TES) algorithms. Therefore, we implemented a powerful simulation environment for developing preprocessing algorithms for airborne hyperspectral TIR image data. This simulation tool is designed in a modular way and covers the image data acquisition and processing chain from surface temperature and emissivity to the final at-sensor radiance data. It includes a series of available algorithms for TES and AC as well as combined AC-TES approaches. Using this simulator, one of the most promising algorithms for the preprocessing of airborne TIR data, ARTEMISS, was significantly optimized: the retrieval error for atmospheric water vapor during atmospheric characterization was reduced, and this improvement in turn substantially enhanced the subsequent retrieval of surface temperatures and surface emissivities.
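The AC and TES steps operate on the standard at-sensor radiance model of the thermal infrared, given here in its common textbook form (whether the simulator uses exactly this notation is an assumption):

```latex
L_{\mathrm{sens}}(\lambda)
  = \tau(\lambda)\Bigl[\varepsilon(\lambda)\,B(\lambda, T_s)
    + \bigl(1-\varepsilon(\lambda)\bigr)\,L^{\downarrow}(\lambda)\Bigr]
  + L^{\uparrow}(\lambda),
```

where \(\tau\) is the atmospheric transmittance, \(B\) the Planck blackbody radiance at surface temperature \(T_s\), \(\varepsilon\) the surface emissivity, and \(L^{\downarrow}\), \(L^{\uparrow}\) the down- and upwelling atmospheric radiances. AC estimates \(\tau\), \(L^{\uparrow}\), and \(L^{\downarrow}\); TES must then resolve the remaining ill-posedness of N spectral measurements against N emissivity values plus one temperature.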
Although the potential of hyperspectral TIR applications in ecology, agriculture, and biodiversity has been convincingly demonstrated, a serious contribution to the global provision of food security requires the retrieval of vegetation-related parameters with global coverage, high spatial resolution, and high revisit frequencies.
Building on the findings of this thesis, the spectral configuration of a spaceborne TIR spectrometer concept was developed. The sensor's spectral configuration aims at the retrieval of precise land surface temperatures and land surface emissivity spectra. Complemented with additional characteristics, i.e., short revisit times and a high spatial resolution, such a sensor would potentially allow the retrieval of the valuable vegetation parameters needed for agricultural optimization. The technical feasibility of such a sensor concept underlines its potential contribution to the multifaceted solution required for the challenging goal of guaranteeing global food security in a world of increasing population.
In conclusion, thermal remote sensing, and more precisely hyperspectral thermal remote sensing, has been presented as a valuable technique for a variety of applications contributing to the overarching goal of global food security.
This dissertation deals with consistent estimates in household surveys. Household surveys are often drawn via cluster sampling, with households sampled at the first stage and persons selected at the second stage. The collected data provide information for estimation at both the person and the household level. Consistent estimates are desirable in the sense that the estimated household-level totals should coincide with the estimated totals obtained at the person level. Current practice in statistical offices is to use integrated weighting, in which consistent estimates are guaranteed by assigning equal weights to all persons within a household and to the household itself. However, due to the forced equality of weights, the individual patterns of persons are lost and the heterogeneity within households is not taken into account. In order to avoid these negative consequences of integrated weighting, the first part of this dissertation proposes alternative weighting methods that ensure both consistent estimates and individual person weights within a household. The underlying idea is to limit the consistency conditions to variables that appear in both the person-level and household-level data sets. These common variables are included in the person- and household-level estimators as additional auxiliary variables. Consistency is thus achieved directly and only for the relevant variables, rather than indirectly by forcing equal weights on all persons within a household. Further decisive advantages of the proposed alternative weighting methods are that original individual auxiliaries are utilized rather than constructed aggregated ones, and that the variable selection process is more flexible, because different auxiliary variables can be incorporated in the person-level estimator than in the household-level estimator.
In the second part of this dissertation, the variances of a person-level GREG estimator and an integrated estimator are compared in order to quantify the effects of the consistency requirements in the integrated weighting approach. One of the challenges is that the estimators to be compared are of different dimensions. The proposed solution is to decompose the variance of the integrated estimator into the variance of a reduced GREG estimator, whose underlying model has the same dimensions as the person-level GREG estimator, plus a constructed term that captures the effects disregarded by the reduced model. Subsequently, further fields of application for the derived decomposition are proposed, such as the variable selection process in econometrics or survey statistics.
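For reference, a person-level GREG estimator of a total \(t_y\) has the standard form (general survey-sampling notation, not necessarily the thesis's own):

```latex
\hat{t}_{y}^{\,\mathrm{GREG}}
  = \hat{t}_{y}^{\,\mathrm{HT}}
  + \bigl(t_{x} - \hat{t}_{x}^{\,\mathrm{HT}}\bigr)^{\top}\hat{B},
\qquad
\hat{t}_{y}^{\,\mathrm{HT}} = \sum_{k \in s} d_k\, y_k,
```

where \(t_x\) are known auxiliary totals, \(d_k\) the design weights, and \(\hat{B}\) the estimated regression coefficient of \(y\) on \(x\). The consistency conditions discussed above enter through the choice of the auxiliary vector \(x_k\): including the common person-household variables there enforces agreement between person- and household-level totals.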
The subject of this dissertation is the German self-portrait in the 17th century. The aim of the study was to establish the German self-portrait as a genre in its own right. To this end, the self-portraits of German painters of the 17th century were selected, as this period is still regarded as a 'dead century'. The investigation was based on a collection of 148 objects, which were subjected to a thorough analysis. The earliest self-portrait in this collection dates from 1600; the latest was produced around 1700. Artists from the entire Old Empire are represented, whether from Silesia and Bohemia, northern or southern Germany, or the Austrian and Swiss lands. The self-portraits come from painters across the full breadth of their careers: self-portraits by journeymen and masters, by court painters and free masters alike. It was particularly important to include in the investigation not only self-portraits in paintings or engravings, but also entries in alba amicorum (Stammbücher).
The detailed examination and comparison of the German self-portraits with those of their European colleagues has shown that German painters also followed the common types of representation, such as the virtuoso. But the German painters did not merely imitate; they experimented and engaged playfully with their models. They naturally also followed the trends of self-presentation, expressing in their self-portraits their desire for the social emancipation of the entire profession. The German self-portrait was thus an independent expression of German artists' departure into a new era.
Competitive analysis is a well-known method for analyzing online algorithms.
Two online optimization problems, scheduling and list accessing, are considered in Yida Zhu's thesis with respect to this method.
For both problems, several existing online and offline algorithms are studied, and their performances are compared with those of the corresponding optimal offline algorithms.
In particular, the list accessing algorithm BIT is carefully reviewed.
The classical proof of its worst-case performance is simplified by drawing on knowledge about the optimal offline algorithm.
With regard to average-case analysis, a new closed formula is developed that determines the performance of BIT on a specific class of instances.
All algorithms considered in this thesis are also implemented in Julia.
Their empirical performances are studied and compared directly.
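As a concrete illustration of the algorithm under review, the following minimal Python sketch implements BIT in the full-cost model (the thesis's implementations are in Julia; the move-to-front-iff-the-bit-becomes-1 rule follows the common textbook description and is an assumption of this sketch):

```python
import random

def bit_serve(initial_list, requests, rng=None):
    """Serve a request sequence with the randomized BIT list-accessing
    algorithm (full-cost model: accessing position i, 1-indexed, costs i).

    Each item carries one bit, initialized uniformly at random; on each
    access the item's bit is flipped, and the item is moved to the front
    iff the flipped bit equals 1.
    """
    rng = rng or random.Random()
    lst = list(initial_list)
    bit = {x: rng.randint(0, 1) for x in lst}
    cost = 0
    for x in requests:
        pos = lst.index(x)      # 0-indexed position of the requested item
        cost += pos + 1         # full-cost model
        bit[x] ^= 1             # flip the item's bit
        if bit[x] == 1:         # move to front on a 1-bit
            lst.pop(pos)
            lst.insert(0, x)
    return cost
```

BIT is known to be 1.75-competitive against an oblivious adversary, improving on the factor 2 of deterministic move-to-front.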
This doctoral thesis includes five studies that deal with the topics work, well-being, and family formation, as well as their interaction. The studies aim to find answers to the following questions: Do workers’ personality traits determine whether they sort into jobs with performance appraisals? Does job insecurity result in lower quality and quantity of sleep? Do public smoking bans affect subjective well-being by changing individuals’ use of leisure time? Can risk preferences help to explain non-traditional family forms? And finally, are differences in out-of-partnership birth rates between East and West Germany driven by cultural characteristics that have evolved in the two separate politico-economic systems? To answer these questions, the following chapters use basic economic variables such as working conditions, income, and time use, but also employ a range of sociological and psychological concepts such as personality traits and satisfaction measures. Furthermore, all five studies use data from the German Socio-Economic Panel (SOEP), a representative longitudinal panel of private households in Germany, and apply state-of-the-art microeconometric methods. The findings of this doctoral thesis are important for individuals, employers, and policymakers. Workers and employers benefit from knowing the determinants of occupational sorting, as vacancies can be filled more accurately. Moreover, knowing which job-related problems lead to lower well-being and potentially higher sickness absence likely increases efficiency in the workplace. The research on smoking bans and family formation in chapters 4, 5, and 6 is particularly interesting for policymakers. The results on the effects of smoking bans on subjective well-being presented in chapter 4 suggest that the impacts of tobacco control policies could be weighed more carefully.
Additionally, understanding why women are willing to take the risks associated with single motherhood can help to improve policies targeting single mothers.
This publication-based dissertation examines the significance of social movements for the development of social work as a profession and discipline in the USA and Germany at the end of the 19th century and in the first decades of the 20th century. The emerging social work is understood as a 'form-taking' (Formbildung) of social movements, and the study asks how the movements inscribed themselves into the establishing and institutionalizing profession and science of social work, which concerns were pursued in the process, and how knowledge in social work thereby circulated, including across national borders.
The investigation concentrates on processes of pedagogization, i.e., different 'pedagogical form-takings' that make the movements' concerns a matter of enlightenment, (self-)education, and pedagogy, and on processes of scientification, which aim at building a knowledge base for addressing social problems and in doing so develop alternative forms of knowledge production. These processes are examined in greater detail in three sub-studies (on the Charity Organization Movement and the Settlement House Movement in the USA and on the bourgeois women's movement in Germany) comprising seven individual contributions. The focus is on the methods of action and the understanding of practice as well as on the research concepts and projects of selected socially-engaged initiatives of social work. Among other things, unintended effects are examined, which can consist, for example, in the preservation of normative ideas and ideologies within approaches designed to be democratizing, but also in 'difference-reinforcing' effects.
This thesis discusses revue as a significantly intercultural genre in the history of global theatre. During the period of 'modernisation' in Europe, America, and Japan, most major cities experienced a boom in revue venues and performances. Yet few studies of revue have been undertaken in theatre studies or in urban cultural studies. This thesis attempts to reevaluate and redefine revue as a highly intercultural theatre genre by using the concept of liminality. In other words, the aim is to examine revue as a genre built on a 'modern composition of betweenness', bridging seemingly opposing elements such as the foreign and the domestic, the classic and the innovative, the traditional and the modern, the professional and the amateur, high and low culture, and the feminine and the masculine. The goal is to regard revue as a liminal genre constructed amidst the negotiations between these binaries, existing in a state of constant flux.
The purpose of this approach is to capture revue as a transitory phenomenon in five dimensions: conceptual, spatial, temporal, categorical, and physical. Over the course of six chapters, this interdisciplinary discussion reveals the reasons why, and the ways in which, revue came to establish its prominent position in the Japanese theatre industry. The overall structure is also an attempt to provide plausible ways of applying sociological considerations to theatre studies.
Nonlocal operators are used in a wide variety of models and applications because many natural phenomena are driven by nonlocal dynamics. Nonlocal operators are integral operators that allow for interactions between two distinct points in space. The nonlocal models investigated in this thesis involve kernels that are assumed to have a finite range of nonlocal interactions. Kernels of this type are used in nonlocal elasticity and convection-diffusion models as well as in finance and image analysis. They also arouse great interest within mathematical theory, as they are asymptotically related to fractional and classical differential equations.
The results in this thesis can be grouped according to three aspects: modeling and analysis, discretization, and optimization.
Mathematical models demonstrate their true usefulness when put into numerical practice. For computational purposes, it is important that the support of the kernel is clearly determined. Therefore, nonlocal interactions are typically assumed to occur within a Euclidean ball of finite radius. In this thesis we consider more general interaction sets, including norm-induced balls as special cases, and extend established results on well-posedness and asymptotic limits.
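In its common form in the literature (notation may differ from the thesis), a nonlocal operator with finite interaction horizon \(\delta\) acts as

```latex
-\mathcal{L}u(x)
  = \int_{B_{\delta}(x)} \bigl(u(x) - u(y)\bigr)\,\gamma(x,y)\,\mathrm{d}y,
\qquad \gamma(x,y) = 0 \ \text{ for } \ |x - y| > \delta,
```

so that each point interacts only with its \(\delta\)-neighborhood; replacing the Euclidean ball \(B_{\delta}(x)\) by a norm-induced ball yields the more general interaction sets studied here.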
The discretization of integral equations is a challenging endeavor. In particular, kernels that are truncated by Euclidean balls require carefully designed quadrature rules for the implementation of efficient finite element codes. In this thesis we investigate the computational benefits of polyhedral interaction sets as well as of geometrically approximated interaction sets. In addition, we outline the computational advantages of sufficiently structured problem settings.
Shape optimization methods have proven useful for identifying interfaces in models governed by partial differential equations. Here we consider a class of shape optimization problems constrained by nonlocal equations that involve interface-dependent kernels. We derive the shape derivative associated with the nonlocal system model and solve the problem by established numerical techniques.
In this thesis, we study the sampling allocation problem of survey statistics under uncertainty. The stratum-specific variances are generally not known precisely, and no information about the distribution of the uncertainty is available. The cost of interviewing each person in a stratum is also a highly uncertain parameter, as people are sometimes unavailable for the interview. We propose robust allocations to deal with uncertainty in both the stratum-specific variances and the costs. In real-life situations, however, it may be that only the variances or only the costs are uncertain, so we propose three different robust formulations covering these cases. To the best of our knowledge, robust allocation has not previously been considered for the sampling allocation problem.
The first robust formulation for linear problems was proposed by Soyster (1973); Bertsimas and Sim (2004) later proposed a less conservative robust formulation for linear problems. We study these formulations and extend them to the nonlinear sampling allocation problem. Since it is very unlikely that all stratum-specific variances and costs are uncertain, the robust formulations are constructed so that one can select how many strata are uncertain, which we refer to as the level of uncertainty. We prove that an upper bound on the probability of violating the nonlinear constraints can be calculated before solving the robust optimization problem. We consider various kinds of datasets and compute robust allocations, performing multiple experiments to assess the quality of the robust allocations and to compare them with existing allocation techniques.
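To fix ideas, the Bertsimas-Sim counterpart of a single linear constraint illustrates how such a level of uncertainty enters (generic linear form; the thesis extends the idea to the nonlinear allocation constraints):

```latex
% nominal constraint \sum_j a_j x_j \le b with
% a_j \in [\bar{a}_j - \hat{a}_j,\; \bar{a}_j + \hat{a}_j] for j in the
% uncertain index set J, and budget of uncertainty \Gamma:
\sum_{j} \bar{a}_j x_j
  \;+\; \max_{S \subseteq J,\; |S| \le \Gamma}\; \sum_{j \in S} \hat{a}_j\,|x_j|
  \;\le\; b .
```

Choosing \(\Gamma = |J|\) recovers Soyster's fully conservative formulation, while \(\Gamma = 0\) gives the nominal problem; intermediate values protect against at most \(\Gamma\) coefficients deviating simultaneously.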
We consider a linear regression model in which some of the observed variables are assumed to be irrelevant for prediction. Including the wrong variables in the statistical model can lead either to having too little information to properly estimate the statistic of interest, or to having too much information and consequently describing fictitious connections. This thesis uses discrete optimization to conduct variable selection; in this light, the subset selection regression method is analyzed. The approach has gained considerable interest in recent years due to its promising predictive performance. A major challenge associated with subset selection regression is its computational difficulty. In this thesis, we propose several improvements to the efficiency of the method. Novel bounds on the coefficients of the subset selection regression are developed, which help to tighten the relaxation of the associated mixed-integer program, which relies on a Big-M formulation. Moreover, a novel mixed-integer linear formulation for the subset selection regression, based on a bilevel optimization reformulation, is proposed. Finally, it is shown that the perspective formulation of the subset selection regression is equivalent to a state-of-the-art binary formulation. We use this insight to develop novel bounds for the subset selection regression problem, which prove highly effective in combination with the proposed linear formulation.
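The Big-M mixed-integer formulation referred to above has, in generic notation, the following standard structure:

```latex
\min_{\beta \in \mathbb{R}^{p},\; z \in \{0,1\}^{p}}
  \; \lVert y - X\beta \rVert_2^2
\quad \text{s.t.} \quad
  -M z_j \le \beta_j \le M z_j \ \ (j = 1,\dots,p),
\qquad
  \sum_{j=1}^{p} z_j \le k,
```

where \(z_j\) indicates whether variable \(j\) enters the model and \(k\) is the cardinality bound. The quality of the continuous relaxation hinges on how small \(M\) can be chosen while remaining valid, which is exactly what coefficient bounds of the kind developed here target.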
In the second part of this thesis, we examine the statistical conception of the subset selection regression and conclude that it is misaligned with its intention. The subset selection regression uses the training error to decide which variables to select, thus conducting validation on the training data, which is often a poor estimate of the prediction error; it also requires a predetermined cardinality bound. Instead, we propose to select variables with respect to the cross-validation value. The process is formulated as a mixed-integer program in which the sparsity itself becomes subject to optimization. Usually, cross-validation is used to select the best model out of a few options; with the proposed program, the best model out of all possible models is selected. Since the cross-validation value is a much better estimate of the prediction error, the model can select the best sparsity itself.
The thesis concludes with an extensive simulation study, which provides evidence that discrete optimization can be used to produce highly valuable predictive models, with the cross-validation subset selection regression almost always producing the best results.
The subject of this dissertation is the history and manifestation of the national theatre in Japan: the transfer of a European cultural institution to Japan and the process of its implementation there, which began with Japan's modernization from the mid-19th century onwards and ended only a hundred years later with the opening of the first national theatre in 1966. To this end, developments in theatre history and changes in theatre production and architecture are examined with regard to the genesis of a Japanese national theatre. The results show that, in its Japanese cultural translation and manifestation, the institution of the national theatre differs substantially in content, organizational structure and production structure from the European counterparts that Japan itself had envisaged as models. Only the shell of the European institution was culturally translated. The first national theatre in Japan manifests itself as a specifically Japanese variant of a national theatre, initiated and shaped by the government within the framework of the law for the protection of cultural properties; managed by state employees and civil servants, its task is the preservation of the traditional arts on stages designed for that purpose. There are no national theatre ensembles; productions are realized with actors from commercial theatre companies. The length of this genesis is rooted in the absence of government funding for theatre and in the theatre world's rather reserved attitude towards a state-run theatre. The shell concept of the first national theatre, like its management by civil servants, served as the prototype for the five further national theatres opened in Japan by 2004, which, as theatres each dedicated to particular genres, accommodate the specifically Japanese diversity of theatrical forms, including in their stage architecture.
The aim of the present study was to show, on the basis of Luther's Table Talk (Tischreden), which kinds of artworks Martin Luther mentions and what function they had in the Colloquia. It systematically examines the corpus of more than 7,000 table talks for statements relating to artworks in the broadest sense, subjecting the Tischreden to text-critical analysis. The study set out to identify these works and to place them in their historical and art-historical context. Since numerous parallel passages could be found and consulted, many errors and misinterpretations could be uncovered and corrected. After evaluating the material and examining the relevant passages, the focus fell on the almost leitmotif-like theme of superbia in Luther's talk. This area of the deadly sins has so far received little attention in Luther scholarship, as it was considered insignificant for the Reformation. This study, however, demonstrates how important the deadly sins, and above all superbia, are in Luther's Table Talk and in his work as a whole. The study also addresses the performativity of images in Luther's work. It further contributes to the question of the intention with which Luther commissioned his own portraits, showing how Luther pursued his pictorial strategies. Finally, the study shows how valuable a digital edition of the Table Talk, enriched with metadata and thus searchable by semantic criteria, would be for its effective, interdisciplinary use as a source.
A huge number of clinical studies and meta-analyses have shown that psychotherapy is effective on average. However, not every patient profits from psychotherapy and some patients even deteriorate in treatment. Due to this result and the restricted generalization of clinical studies to clinical practice, a more patient-focused research strategy has emerged. The question whether a particular treatment works for an individual case is the focus of this paradigm. The use of repeated assessments and the feedback of this information to therapists is a major ingredient of patient-focused research. Improving patient outcomes and reducing dropout rates by the use of psychometric feedback seems to be a promising path. Therapists seem to differ in the degree to which they make use of and profit from such feedback systems. This dissertation aims to better understand therapist differences in the context of patient-focused research and the impact of therapists on psychotherapy. Three different studies are included, which focus on different aspects within the field:
Study I (Chapter 5) investigated how therapists use psychometric feedback in their work with patients and how much therapists differ in their usage. Data from 72 therapists treating 648 patients were analyzed. It could be shown that therapists used the psychometric feedback for most of their patients. Substantial variance in the use of feedback (between 27% and 52%) was attributable to therapists. Therapists were more likely to use feedback when they reported being satisfied with the graphical information they received. The results therefore indicated that not only patient characteristics or treatment progress affected the use of feedback.
Study II (Chapter 6) picked up on the idea of analyzing systematic differences in therapists and applied it to the criterion of premature treatment termination (dropout). To answer the question whether therapist effects occur in terms of patients’ dropout rates, data from 707 patients treated by 66 therapists were investigated. It was shown that approximately six percent of variance in dropout rates could be attributed to therapists, even when initial impairment was controlled for. Other predictors of dropout were initial impairment, sex, education, personality styles, and treatment expectations.
Study III (Chapter 7) extends the dissertation by investigating the impact of a transfer from one therapist to another within ongoing treatments. Data from 124 patients who agreed to and experienced a transfer during their treatment were analyzed. A significant drop in patient-rated as well as therapist-rated alliance levels could be observed after a transfer. On average, there seemed to be no difficulties establishing a good therapeutic alliance with the new therapist, although differences between patients were observed. There was no increase in symptom severity due to therapy transfer. Various predictors of alliance and symptom development after transfer were investigated. Impacts on clinical practice were discussed.
Results of the three studies are discussed and general conclusions are drawn. Implications for future research as well as their utility for clinical practice and decision-making are presented.
In this thesis, we consider the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Due to the exponentially increasing computational complexity in the number of dimensions—the so-called curse of dimensionality—they present a considerable computational challenge and become infeasible even for moderate problem sizes.
Multilinear algebra and tensor numerical methods have a wide range of applications in the fields of data science and scientific computing. Due to the typically large problem sizes in practical settings, efficient methods, which exploit low-rank structures, are essential. In this thesis, we consider an application each in both of these fields.
Tensor completion, i.e. the imputation of unknown values in partially known multiway data, is an important problem which appears in statistics, mathematical imaging science and data science. Under the assumption of redundancy in the underlying data, this is a well-defined problem, and methods of mathematical optimization can be applied to it.
Due to the fact that tensors of fixed rank form a Riemannian submanifold of the ambient high-dimensional tensor space, Riemannian optimization is a natural framework for these problems, which is both mathematically rigorous and computationally efficient.
We present a novel Riemannian trust-region scheme, which compares favourably with the state of the art on selected application cases and outperforms known methods on some test problems.
Optimization problems governed by partial differential equations form an area of scientific computing which has applications in a variety of areas, ranging from physics to financial mathematics. Due to the inherent high dimensionality of optimization problems arising from discretized differential equations, these problems present computational challenges, especially in the case of three or more dimensions. An even more challenging class of optimization problems has operators of integral instead of differential type in the constraint. These operators are nonlocal, and therefore lead to large, dense discrete systems of equations. We present a novel solution method, based on separation of spatial dimensions and provably low-rank approximation of the nonlocal operator. Our approach allows the solution of multidimensional problems with a complexity which is only slightly larger than linear in the univariate grid size; this improves the state of the art for a particular test problem by at least two orders of magnitude.
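The payoff of separating spatial dimensions can be sketched on a toy 2-D example (my own illustration with arbitrary data, not the thesis's nonlocal operator): applying a Kronecker-sum operator to a vectorized grid function via plain matrix products avoids ever assembling the large matrix.

```python
# Toy sketch of dimension separation: applying A (x) I + I (x) A to vec(X)
# via the identity (A (x) I + I (x) A) vec(X) = vec(A X + X A^T) costs
# O(n^3) in the univariate grid size n, instead of the O(n^4) needed to
# multiply with the assembled n^2 x n^2 matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

# Direct application of the assembled operator to vec(X) (row-major vec) ...
I = np.eye(n)
assembled = np.kron(A, I) + np.kron(I, A)
direct = assembled @ X.ravel()

# ... agrees with the separated, matrix-only evaluation.
separated = (A @ X + X @ A.T).ravel()
print(np.allclose(direct, separated))  # True
```

In d dimensions the assembled matrix has n^(2d) entries, so working with the separated factors is what keeps the complexity close to linear in n.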
Academic achievement is a central outcome in educational research, both in and outside higher education; it has direct effects on individuals' professional and financial prospects and a high individual and public return on investment. Theories comprise cognitive as well as non-cognitive influences on achievement. Two examples frequently investigated in empirical research are knowledge (as a cognitive determinant) and stress (as a non-cognitive determinant) of achievement. However, knowledge and stress are not stable, which raises questions as to how temporal dynamics in knowledge on the one hand and in stress on the other contribute to achievement. To study these contributions in the present doctoral dissertation, I used meta-analysis, latent profile transition analysis, and latent state-trait analysis. The results support the idea of knowledge acquisition as a cumulative and long-term process that forms the basis for academic achievement, and of conceptual change as an important mechanism for the acquisition of knowledge in higher education. Moreover, the findings suggest that students' stress experiences in higher education are subject to stable, trait-like influences as well as situational and/or interactional, state-like influences, which are differentially related to achievement and health. The results imply that investigating the causal networks between knowledge, stress, and academic achievement is a promising strategy for better understanding academic achievement in higher education. For this purpose, future studies should use longitudinal designs, randomized controlled trials, and meta-analytical techniques. Potential practical applications include taking account of students' prior knowledge in higher education teaching and decreasing stress among higher education students.
With the advent of high-throughput sequencing (HTS), profiling immunoglobulin (IG) repertoires has become an essential part of immunological research. The dissection of IG repertoires promises to transform our understanding of adaptive immune system dynamics. Advances in sequencing technology now also allow the use of the Ion Torrent Personal Genome Machine (PGM) to cover the full length of IG mRNA transcripts. The applications of this benchtop-scale HTS platform range from the identification of new therapeutic antibodies to the deconvolution of malignant B cell tumors. In the context of this thesis, the usability of the PGM for investigating the IG heavy chain (IGH) repertoires of animal models is assessed. First, an innovative bioinformatics approach is presented to identify antigen-driven IGH sequences from bulk-sequenced bone marrow samples of transgenic humanized rats expressing a human IG repertoire (OmniRat™). We show that these rats mount a convergent IGH CDR3 response towards the measles virus hemagglutinin protein and tetanus toxoid, with high similarity to human counterparts. In the future, databases could contain all IGH CDR3 sequences with known specificity, allowing IG repertoire datasets to be mined for past antigen exposures and ultimately reconstructing the immunological history of an individual. Second, a unique molecular identifier (UID) based HTS approach and network property analysis are used to characterize the CLL-like CD5+ B cell expansion of A20BKO mice overexpressing a natural short splice variant of the CYLD gene (A20BKOsCYLDBOE). We determined that, in these mice, overexpression of sCYLD leads to an unmutated subvariant of CLL (U-CLL). Furthermore, we found that this short splice variant is also seen in human CLL patients, highlighting it as an important target for future investigations.
Third, the UID-based HTS approach is improved by adapting it to the PGM sequencing technology and applying a custom-made data processing pipeline including ImMunoGeneTics (IMGT) database error detection. In this way, we were able to obtain correct IGH sequences with over 99.5% confidence and correct CDR3 sequences with over 99.9% confidence. Taken together, the results, protocols and sample processing strategies described in this thesis will improve the usability of animal models and the Ion Torrent PGM HTS platform in the field of IG repertoire research.
External capital plays an important role in financing entrepreneurial ventures, due to limited internal capital sources. Important external capital providers for entrepreneurial ventures are venture capitalists (VCs). VCs worldwide are often confronted with thousands of proposals from entrepreneurial ventures per year and must choose in which of these companies to invest. Not only do VCs finance companies at their early stages; they also finance entrepreneurial companies at later stages, when the companies have secured their first market success. That is why this dissertation focuses on the decision-making behavior of VCs when investing in later-stage ventures. It uses both qualitative and quantitative research methods in order to answer the question of how the decision-making behavior of VCs that invest in later-stage ventures can be described.
Based on qualitative interviews with 19 investment professionals, the first insight gained is that for different stages of venture development, different decision criteria are applied. This is attributed to different risks and goals of ventures at different stages, as well as the different types of information available. These decision criteria in the context of later-stage ventures contrast with results from studies that focus on early-stage ventures. Later-stage ventures possess meaningful information on financials (revenue growth and profitability), the established business model, and existing external investors that is not available for early-stage ventures and therefore constitute new decision criteria for this specific context.
Following this identification of the most relevant decision criteria for investors in the context of later-stage ventures, a conjoint study with 749 participants was carried out to understand the relative importance of decision criteria. The results showed that investors attribute the highest importance to (1) revenue growth, (2) value-added of products/services for customers, and (3) management team track record, demonstrating differences when compared to decision-making studies in the context of early-stage ventures.
Not only do the characteristics of a venture influence the decision to invest; additional indirect factors, such as individual characteristics or characteristics of the investment firm, can also influence individual decisions. Relying on cognitive theory, this study investigated the influence of various individual characteristics on screening decisions and found that both investment experience and entrepreneurial experience influence individual decision-making behavior. This study also examined whether the goals, incentive structures, resources, and governance of the investment firm influence decision-making in the context of later-stage ventures. It particularly investigated two distinct types of investment firms, family offices and corporate venture capital funds (CVCs), which have unique structures, goals, and incentive systems. Additional quantitative analysis showed that family offices put less focus on high-growth firms and on whether reputable investors are present; they tend to focus more on the profitability of a later-stage venture in the initial screening. The analysis also showed that CVCs place greater importance on product and business model characteristics than other investors, and that CVCs favor later-stage ventures with lower revenue growth rates, indicating a preference for less risky investments. The results provide various insights for theory and practice.
Many combinatorial optimization problems on finite graphs can be formulated as conic convex programs, e.g. the stable set problem, the maximum clique problem or the maximum cut problem. In particular, NP-hard problems can be written as copositive programs. In this case the complexity is moved entirely into the copositivity constraint.
Copositive programming is a relatively new topic in optimization. It deals with optimization over the so-called copositive cone, a superset of the positive semidefinite cone, consisting of the matrices A for which the quadratic form x^T A x is nonnegative for all nonnegative vectors x. Its dual cone is the cone of completely positive matrices, which contains all matrices that can be decomposed as a sum of symmetric outer products of nonnegative vectors.
The related optimization problems are linear programs with matrix variables and cone constraints.
However, some optimization problems can be formulated as combinatorial problems on infinite graphs. For example, the kissing number problem can be formulated as a stable set problem on a circle.
In this thesis we will discuss how the theory of copositive optimization can be lifted to infinite dimensions. For some special cases we will give applications in combinatorial optimization.
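To make the copositivity condition concrete, the following sketch (toy matrices of my own choosing) samples nonnegative vectors and checks x^T A x >= 0. Since deciding copositivity exactly is co-NP-hard, random sampling can refute membership in the cone but never certify it:

```python
# Heuristic numerical check of copositivity (x^T A x >= 0 for all x >= 0)
# by sampling nonnegative vectors. A negative sample is a certificate of
# non-copositivity; the absence of one proves nothing.
import numpy as np

def looks_copositive(A, trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.random(A.shape[0])          # nonnegative sample in [0, 1)^n
        if x @ A @ x < -1e-12:
            return False                    # witness of non-copositivity
    return True                             # no violation found (heuristic)

# Every positive semidefinite matrix is copositive ...
psd = np.array([[2.0, -1.0], [-1.0, 2.0]])
# ... as is every entrywise nonnegative matrix ...
nonneg = np.array([[0.0, 1.0], [1.0, 0.0]])
# ... but this matrix is not: x = (1, 1) gives x^T A x = -4.
bad = np.array([[1.0, -3.0], [-3.0, 1.0]])

print(looks_copositive(psd), looks_copositive(nonneg), looks_copositive(bad))
```

The two positive examples illustrate that the copositive cone strictly contains both the positive semidefinite cone and the nonnegative cone, which is what makes optimization over it so expressive, and so hard.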
The dissertation comparatively examines German and Japanese photographic war propaganda of the Second World War on the basis of the illustrated magazines with the highest circulation at the time: the "Illustrierter Beobachter" on the German side and "Shashin Shūhō" (Photographic Weekly) on the Japanese side. Drawing on the iconographic-iconological method of the art historian Erwin Panofsky, while also referring to Charles Sanders Peirce's understanding of images, it analyses patterns in the visual representation of children and adolescents in order to draw conclusions about similarities and differences in the design of both countries' pictorial propaganda immediately before and during the Second World War (1939-1945), about general tendencies in propaganda design in the same period, and about the organization and function of propaganda in radically nationalist states. At the same time, by including the recipients' perspective, the study raises the question of ambiguity and, with it, of the mode and degree of effectiveness of the propaganda. The analysis focuses on all published issues from the second halves of 1938 and 1943.
This doctoral thesis examines intergenerational knowledge transfer, its antecedents, and how participation in intergenerational knowledge transfer is related to the performance evaluation of employees. To answer these questions, it builds on a literature review and quantitative research methods. A systematic literature study shows that empirical evidence on intergenerational knowledge transfer is limited. Building on prior literature, effects of various antecedents at the interpersonal and organizational level on intergenerational and intragenerational knowledge transfer are postulated. Drawing on a survey of 444 trainees and trainers, this doctoral thesis also demonstrates that interpersonal antecedents affect how trainees participate in intergenerational knowledge transfer with their trainers. The results of this study thus support the relevance of interpersonal antecedents for intergenerational knowledge transfer, yet also emphasize the implications attached to the assigned roles in knowledge transfer (i.e., whether one is a trainee or a trainer). Moreover, the results of an experimental vignette study reveal that participation in intergenerational knowledge transfer is linked to the performance evaluation of employees, yet depends on whether the employee is sharing or seeking knowledge. Overall, this doctoral thesis provides insights into this topic by covering a multitude of antecedents of intergenerational knowledge transfer, as well as how participation in intergenerational knowledge transfer may be associated with the performance evaluation of employees.
Sample surveys are a widely used and cost-effective tool to gain information about a population under consideration. Nowadays, there is an increasing demand for information not only at the population level but also at the level of subpopulations. For some of these subpopulations of interest, however, very small subsample sizes may occur, such that the application of traditional estimation methods is not expedient. In order to provide reliable information also for these so-called small areas, small area estimation (SAE) methods combine auxiliary information and the sample data via a statistical model.
The present thesis deals, among other aspects, with the development of highly flexible and realistic small area models. For this purpose, the penalized spline method is suitably modified, which allows the model parameters to be determined via the solution of an unconstrained optimization problem. Due to this optimization framework, shape constraints can be incorporated into the modeling process as additional linear inequality constraints on the optimization problem. This results in small area estimators that allow for both the utilization of the penalized spline method as a highly flexible modeling technique and the incorporation of arbitrary shape constraints on the underlying P-spline function.
In order to incorporate multiple covariates, a tensor product approach is employed to extend the penalized spline method to multiple input variables. This leads to high-dimensional optimization problems, for which naive solution algorithms yield an unjustifiable complexity in terms of runtime and memory requirements. By exploiting the underlying tensor structure, the present thesis provides computationally efficient solution algorithms for the considered optimization problems, along with the related memory-efficient, i.e. matrix-free, implementations. The crucial point thereby is the (repeated) application of a matrix-free conjugate gradient method, whose runtime is drastically reduced by a matrix-free multigrid preconditioner.
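The matrix-free idea can be illustrated with a minimal textbook conjugate gradient solver (my own sketch on a stand-in operator, not the thesis's implementation or preconditioner): the operator is supplied as a matvec callable, so the matrix is never assembled.

```python
# Minimal matrix-free conjugate gradient: the SPD operator is passed as a
# callable, so only its action on a vector is needed. The 1-D discrete
# Laplacian below stands in for the tensor-structured operators of the text;
# the multigrid preconditioner is omitted for brevity.
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=1000):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def laplacian_1d(v):
    """Apply the SPD tridiagonal matrix diag(-1, 2, -1) without storing it."""
    out = 2.0 * v
    out[1:] -= v[:-1]
    out[:-1] -= v[1:]
    return out

n = 50
b = np.ones(n)
x = cg(laplacian_1d, b)
print(np.max(np.abs(laplacian_1d(x) - b)) < 1e-8)  # residual check: True
```

Because only matvec products are required, the same solver works unchanged when the operator is a sum of Kronecker products applied factor by factor, which is where the memory savings of the matrix-free approach come from.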
This study examines to what extent a banking crisis and the ensuing potential liquidity shortage affect corporate cash holdings. Specifically, how do firms adjust their liquidity management prior to and during a banking crisis when they are restricted in their financing options? These restrictions might not only result from firm-specific characteristics but also incorporate the effects of certain regulatory requirements. I analyse the real effects of indicators of a potential crisis and of the occurrence of a crisis event on corporate cash holdings for both unregulated and regulated firms from 31 different countries. In contrast to existing studies, I perform this analysis on the basis of a long observation period (1997 to 2014 and 2003 to 2014, respectively) using multiple crisis indicators (early warning signals) and multiple crisis events. For regulated firms, this study makes use of a unique sample of country-specific regulatory information, which was collected by hand for 15 countries and converted into an ordinal scale based on the severity of the regulation. Regulated firms are selected from a single industry: Real Estate Investment Trusts. These firms invest in real estate properties and lease these properties to third parties. Real Estate Investment Trusts that comply with the aforementioned regulations are exempt from income taxation and are penalized for a breach, which makes this industry particularly interesting for the analysis of capital structure decisions.
The results for regulated and unregulated firms are mostly inconclusive. I find no convincing evidence that the degree of regulation affects the level of cash holdings for regulated firms before and during a banking crisis. For unregulated firms, I find strong evidence that financially constrained firms have higher cash holdings than unconstrained firms. Further, there is no real evidence that either financially constrained firms or unconstrained firms increase their cash holdings when observing an early warning signal. In case of a banking crisis, the results differ for univariate tests and in panel regressions. In the univariate setting, I find evidence that both types of firms hold higher levels of cash during a banking crisis. In panel regressions, the effect is only evident for financially unconstrained firms from the US, and when controlling for financial stress, it is also apparent for financially constrained US firms. For firms from Europe, the results are predominantly inconclusive. For banking crises that are preceded by an early warning signal, there is only evidence for an increase in cash holdings for unconstrained US firms when controlling for financial stress.
Cross-sectional surveys make it possible to estimate population parameters at a given point in time. Usually, however, it is the change in population parameters that is of particular interest. Evaluating policy targets, for example, requires tracking the change of indicators such as poverty measures over time. Testing whether a measured change differs significantly from zero requires estimating the variance of changes between cross-sections. Two problems typically arise in this context: first, the relevant statistics are usually non-linear, and second, the cross-sectional surveys under consideration are based on samples that were not drawn independently of each other. The aim of this dissertation is to provide a theoretical framework for deriving and estimating the variance of an estimated change in non-linear statistics. To this end, the properties of sampling designs used to coordinate sample draws over time are worked out. In particular, sampling algorithms for coordinating samples are presented and developed, and their properties are described. The problem of cross-sectional variance estimation for non-linear estimators under complex sampling designs is also addressed. Finally, a general approach to estimating changes is presented, and variance estimators for the change of cross-sectional estimators based on coordinated cross-sectional samples are examined. Particular attention is paid to the case of a population that changes over time, since in applications this is the rule.
Die Vermittlung des Grauens
(2018)
In my two-part dissertation "Die Vermittlung des Grauens" ("Conveying the Horror"), I examined the potential of multimedia technologies at museum-curated original sites of victim commemoration in France and Germany. The central question was whether the technical aids available today can usefully complement traditional educational work. Owing to their strongly diverging forms of musealization, Forts Douaumont and Vaux in Verdun and the Alte Synagoge Essen seemed to me particularly suitable for such an analysis. On site, I examined the process of "making the past present at a largely unaltered site of remembrance through multimedia guides", and likewise went "searching for traces in the Alte Synagoge Essen", since the site's redesign has overwritten all reminders of the local events under National Socialism. These circumstances affect the authenticity of each site in different ways, to which the on-site concept must respond by providing an appropriate balance. To strengthen a site's expressive power, technologies such as audio and multimedia guides are particularly suitable today, and their potential was examined at the sites named above. Alongside these by now traditional measures, further presentation options have emerged over time, such as touchscreen, virtual and augmented reality technologies, QR codes, near-field communication (NFC) and so-called museum apps, which were also discussed. This matters all the more because the interplay of formal educational institutions and informal places of learning is becoming increasingly important in museum education. The great challenge here is presenting the content in an age-appropriate way.
Witnesses who did not observe a crime but only perceived it auditorily are referred to as earwitnesses. In criminal proceedings, earwitnesses are given the task of recognizing the perpetrator's voice in a voice line-up. Forensic practice shows that earwitnesses cope with this task with varying degrees of success, without any clear pattern emerging. Research on earwitnesses, however, suggests that musical training improves speaker-recognition ability.
The aim of this study is to test whether the degree of musical perceptual competence allows speaker-recognition performance to be predicted.
To test this, 60 participants took part both in a "musicality test", the Montreal Battery for the Evaluation of Amusia (MBEA), and in a target-present voice line-up. Regression models showed that the probability of correct speaker recognition increases with the MBEA test score, and that this score yields a significant prediction of speaker-recognition performance. The duration of musical training, also collected via questionnaires, does not allow a significant prediction. The experiment also shows that the duration of musical training explains the musicality test score only to a limited extent.
These observations lead to the conclusion that, when assessing earwitnesses, directly testing musical perceptual ability is preferable to inference based on musical-biographical information.
A basic assumption of standard small area models is that the statistic of interest can be modelled through a linear mixed model with common model parameters for all areas in the study. The model can then be used to stabilize estimation. In some applications, however, there may be different subgroups of areas with specific relationships between the response variable and the auxiliary information. In this case, using a distinct model for each subgroup would be more appropriate than employing one model for all observations. If no suitable natural clustering variable exists, finite mixture regression models may represent a solution that "lets the data decide" how to partition areas into subgroups. In this framework, a set of two or more different models is specified, and the estimation of subgroup-specific model parameters is performed simultaneously with the estimation of subgroup identity, or the probability of subgroup identity, for each area. Finite mixture models thus offer a flexible approach to accounting for unobserved heterogeneity. Therefore, in this thesis, finite mixtures of small area models are proposed to account for the existence of latent subgroups of areas in small area estimation. More specifically, it is assumed that the statistic of interest is appropriately modelled by a mixture of K linear mixed models. Both mixtures of standard unit-level and standard area-level models are considered as special cases. The estimation of the mixing proportions, the area-specific probabilities of subgroup identity and the K sets of model parameters via the EM algorithm for mixtures of mixed models is described. Finally, a finite mixture small area estimator is formulated as a weighted mean of the predictions from models 1 to K, with weights given by the area-specific probabilities of subgroup identity.
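A heavily simplified analogue of such an EM scheme can be sketched for a two-component mixture of plain linear regressions (all data, the fixed noise scale, and the deterministic initialization are illustrative assumptions of mine; the thesis estimates mixtures of mixed models with random area effects):

```python
# Toy EM for a two-component mixture of linear regressions with fixed,
# known residual scale: E-step computes responsibilities, M-step solves a
# weighted least-squares problem per component.
import numpy as np

def em_mixture_regression(x, y, iters=50, sigma=0.5):
    X = np.column_stack([np.ones_like(x), x])       # intercept + slope
    betas = np.array([[0.0, 1.0], [0.0, -1.0]])     # deterministic init
    pi = np.array([0.5, 0.5])                       # mixing proportions
    for _ in range(iters):
        # E-step: responsibilities from (unnormalized) Gaussian likelihoods
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((y - X @ betas[k]) / sigma) ** 2)
            for k in range(2)
        ], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, update proportions
        for k in range(2):
            XtW = X.T * resp[:, k]
            betas[k] = np.linalg.solve(XtW @ X, XtW @ y)
        pi = resp.mean(axis=0)
    return betas, pi

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 400)
group = rng.random(400) < 0.5
y = np.where(group, 2.0 * x + 1.0, -1.5 * x - 2.0)
y = y + 0.3 * rng.standard_normal(400)
betas, pi = em_mixture_regression(x, y)
print(np.round(betas, 1))   # two fitted (intercept, slope) rows
```

The area-specific responsibilities `resp` play the role of the probabilities of subgroup identity above, and a mixture predictor would average the K component predictions with exactly these weights.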
Foundation-owned firms (Stiftungsunternehmen) are companies owned, wholly or in part, by a charitable or private foundation. The number of foundation-owned firms in Germany has risen markedly in recent years. Well-known German companies such as Aldi, Bosch, Bertelsmann, LIDL and Würth are owned by foundations. Some of them, for example Fresenius, ZF Friedrichshafen and Zeiss, are even listed on the stock exchange. Most foundation-owned firms come into being when company founders or entrepreneurial families transfer their company to a foundation instead of bequeathing or selling it.
The motives for this are manifold: they may be family-related (e.g. childlessness, avoiding family disputes), company-related (e.g. the possibility of long-term planning thanks to a stable ownership structure) or tax-related (avoiding or reducing inheritance tax), or they may be motivated by the founder personally (the possibility of continuing to shape the company through the foundation even after one's own death). Because foundation-owned firms usually emerge from family firms, research often does not differentiate between family firms and foundation-owned firms. For this reason, this dissertation first uses the three-circle model of the family firm to set out the differences between foundation-owned and family firms. The results show that only a very small number of foundation-owned firms closely resemble classic family firms; most differ from family firms considerably, in some cases very strongly. These findings make clear that foundation-owned firms should be treated as a separate field of research.
Since there is also strong heterogeneity within the group of foundation-owned firms, performance differences within this group are examined next. For this purpose, data on 142 German foundation-owned firms for the years 2006-2016 were collected and analysed by means of linear regression. The results show significant differences between the various types: companies held by a charitable foundation perform significantly worse than companies whose shareholder is a private foundation.
In the next step, the group of listed foundation-owned firms is examined. An event study tests how a foundation as owner of a listed company affects shareholder value. The results show that a reduction of a foundation's stake has a positive effect on shareholder value; the capital market accordingly values foundations negatively. Owing to the diverging goals of foundation and company, the link between the two harbours potential conflicts and challenges for the people involved. Using a qualitative exploratory approach based on interviews, a model is developed that illustrates the potential conflicts in foundation-owned firms using the example of the dual foundation (Doppelstiftung).
In the final step, recommendations for action are developed in the form of a draft corporate governance code intended to help (potential) founders either avoid possible conflicts or resolve problems that already exist.
The results of this dissertation are relevant for theory and practice. From a theoretical perspective, the value of these studies lies in enabling researchers to distinguish better between foundation-owned and family firms in the future; the work also advances the current state of research on foundation-owned firms. Moreover, this dissertation offers potential founders in particular an overview of the various structuring options and of the advantages and disadvantages these constructions entail. The recommendations for action enable founders to recognise potential dangers in advance and to avoid them.
The dissertation 'Konzepte von Geschlecht im Porno-Rap. Eine korpus- und genderlinguistische Frame-Analyse' ('Concepts of Gender in Porno Rap: A Corpus- and Gender-Linguistic Frame Analysis') deals with the linguistic realisations that construct concepts of gender ('man' and 'woman') in German-language rap lyrics of so-called porno rap, a strand of so-called gangsta rap. Fundamental to the study is that the linguistic realisations of concepts, which are accessible to linguistic analysis, are inseparably tied to cognitive concepts. The centrality of particular expressions and the values assigned to them open up analytical approaches that establish a connection, in the sense of a potential, between linguistic and cognitive concepts. Language thus on the one hand depicts concepts, but on the other also fundamentally shapes them, which is what makes it possible to investigate them linguistically.
Frames make it possible to identify and analyse orders of knowledge and to determine their function in processes of understanding and interpretation; linguistic approaches here draw on findings from cognitive science. Capturing and describing the frames for 'woman' and 'man' and their knowledge elements provides numerous detailed and differentiated points of comparison between the conceptions of gender. Given the available language data, the essential question is of what kind and structure the values contained in the discourse are, as components of the linguistically realised concepts of 'woman' and 'man', in order to identify and describe the shape of these concepts and thus make statements about their cognitive potential. The concepts of gender in the rap discourse under investigation are laid open through a qualitative-hermeneutic and quantitative corpus-linguistic frame analysis.
Against this background, 236 texts by female and male porno rappers are analysed; by determining a self-image and an image of the other with respect to 'man' and 'woman' as concepts of gender in the discourse, the focus is directed at the object of investigation. The examination of the concepts of 'woman' and 'man' reveals three central findings: first, there is a binary structure; second, this structure is identified as one of unequal value; and third, it attests to constitutive interdependencies between 'man' and 'woman'. Linguistic violence and its processes, which are an inherent part of the porno-rap discourse and are demonstrated in recurrent form (e.g. the knowledge elements power to injure / openness to injury and drive dominance / drive violence), can potentially act as violence beyond this context and at the same time inscribe themselves into the social life of an individual and/or a collective.
An awareness of the values conveyed in the discourse (instances in the frame-semantic sense) makes it possible to confront certain attitudes, thought processes and actions in a critically reflective manner, particularly in view of the findings of linguistic language criticism, according to which a choice of language constitutively creates (a) 'reality' and is not inconsequential.
The dissertation examines notions of "sick plants" in Graeco-Roman antiquity. Its aim is to present these notions on the basis of the written sources from Homer to Boethius at the beginning of the 6th century. The phytomedicine of antiquity is presented in four thematic complexes: a lexical part sets out the concrete forms of damage and how they were dealt with; subsequently, their significance is examined from a scientific, social and poetological perspective. The core of the work is a systematic account of ancient phytomedical knowledge of the individual plant diseases and other damaging factors, together with an account of the countermeasures recorded in ancient literature.
Germany is in the midst of demographic change, which has profound, long-term effects on both consumer behaviour and the structure of consumption. The present work focuses on the changes in demand potential in tourism that result from demographic change. The aim was to link data on demographic trends with expected changes in selected socio-economic factors, specifically household size, educational attainment and equivalence income, and to explain them endogenously within the system.
A multi-stage procedure was used to show how the relevant socio-economic factors change in the course of an ageing society and what consequences this has for the quantitative, structurally driven demand for tourism services up to the year 2030. The econometric basis for the forecasts is a set of binary logistic regression analyses based on the Scientific Use Files of the Reiseanalyse 2010, from which probabilities can be derived for participation in a trip of at least five days, in a short trip, and in different types of travel. The additive form of the logistic regression model makes it possible to focus on a single socio-economic factor or on the interplay of several variables, and thus to determine the strength of the effect of the individual factors on demand.
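As a rough sketch of how such an additive logit model isolates the contribution of individual socio-economic factors, consider the following toy computation; the coefficients and factor names are hypothetical and do not come from the Reiseanalyse model.

```python
import math

# Illustrative coefficients of an additive logit model for the probability
# of taking a trip of at least five days (all values hypothetical).
COEF = {
    "intercept": -0.2,
    "age_60_plus": -0.4,      # effect of belonging to the 60+ age class
    "small_household": -0.3,  # one- or two-person household
    "high_education": 0.5,    # tertiary education
    "high_income": 0.6,       # upper equivalence-income group
}

def participation_probability(**factors):
    """Additive logit: the linear predictor is the sum of the active effects,
    so the contribution of each factor can be switched on and off."""
    eta = COEF["intercept"] + sum(COEF[k] for k, on in factors.items() if on)
    return 1.0 / (1.0 + math.exp(-eta))

# Purely demographic view vs. inclusion of structural effects
p_demo = participation_probability(age_60_plus=True)
p_full = participation_probability(age_60_plus=True, high_education=True,
                                   high_income=True)
```

In this toy setting the structural effects (education, income) more than offset the negative age effect, which mirrors the qualitative finding of the forecasts described below.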
When the future number of travellers is calculated, endogenising the structural effects leads to (clearly) higher forecasts across all age classes than a purely demographic view does. For participation in a trip lasting at least five days, the model predicts a decline in the number of travellers of 4.16 percent by 2030 if only demographic change is taken into account. If the trends in households and in the income and education structure are included, a completely different picture emerges: the demographically induced decline is not only offset, but the number of travellers increases by 7.49 percent. Forecasts of the number of short-trip travellers and of participation in selected types of travel show similar results. Here, too, the structural effects clearly counteract demographic change in the demand for tourism services.
The present work addresses a complex question: how does the dynamic restructuring of linguistic structures take place under the influence of intralinguistic and extralinguistic parameters? The research focuses on the mechanism by which language structure comes into being, understood here as the sole mode of existence of language. The material of the investigation is the operationalisation of the components of verbal word-formation processes in German. The choice of the verbal part of the vocabulary is motivated by the fact that this part of speech is a central element that consolidates the entire fabric of language. One of the key parameters is the frequency factor, which has so far not been given a unified status in linguistic theory. The search for the origin of the power of this factor inevitably leads beyond the boundaries of the language system. Observations of the behaviour of the frequency factor in processes and structures of the most diverse nature suggest that we are dealing with a highly complex phenomenon that forms part of the general cognitive mechanism by which humans adapt to their environment. As such, it is also an inalienable aspect of semiosis and of the linguistic sign.
Reptiles belong to a taxonomic group characterized by increasing worldwide population declines. Only in comparatively recent years, however, has public interest in these taxa increased, and conservation measures are starting to show results. While many factors contribute to these declines, environmental pollution, especially in the form of pesticides, has increased strongly in recent decades and is nowadays considered a main driver of reptile diversity loss. In light of the above, and given that reptiles are extremely underrepresented in ecotoxicological studies on the effects of plant protection products, this thesis aims to study the impacts of pesticide exposure in reptiles, using the Common wall lizard (Podarcis muralis) as a model species. In a first approach, I evaluated the risk of pesticide exposure for reptile species within the European Union, in order to detect species with above-average exposure probabilities and especially sensitive reptile orders. While helpful for detecting species at risk, a risk evaluation is only the first step towards addressing this problem. It is thus indispensable to identify the effects of pesticide exposure in wildlife. Here, enzymatic biomarkers have become a popular tool for studying sub-individual responses and gaining information on the mode of action of chemicals. Current methodologies, however, are very invasive. In a second step, I therefore explored the use of buccal swabs as a minimally invasive method to detect changes in enzymatic biomarker activity in reptiles, as an indicator of pesticide uptake and effects at the sub-individual level. Finally, the last part of this thesis focuses on field data on pesticide exposure and its effects on wild reptiles. A method to determine pesticide residues in food items of the Common wall lizard was established, as a means to generate data for future dietary risk assessments.
Subsequently, a field study was conducted with the aim of describing actual effects of pesticide exposure on reptile populations at different levels.
The harmonic Faber operator
(2018)
P. K. Suetin points out at the beginning of his monograph "Faber Polynomials and Faber Series" that Faber polynomials play an important role in modern approximation theory of a complex variable, as they are used to represent analytic functions in simply connected domains, and many theorems on the approximation of analytic functions are proved with their help [50]. The Faber polynomials were first introduced by G. Faber in 1903. Faber's aim was to generalise the Taylor series of holomorphic functions on the open unit disc D in the following way. Since any holomorphic function in D has a Taylor series representation f(z)=\sum_{\nu=0}^{\infty}a_{\nu}z^{\nu} (z\in D) converging locally uniformly inside D, Faber wanted to determine, for a simply connected domain G, a system of polynomials (Q_n) such that each function f holomorphic in G can be expanded into a series
f=\sum_{\nu=0}^{\infty}b_{\nu}Q_{\nu} converging locally uniformly inside G. With this goal in mind, Faber considered simply connected domains bounded by an analytic Jordan curve and constructed a system of polynomials (F_n) with this property. These polynomials F_n were later named after him as Faber polynomials. The preface of [50] gives a detailed summary of results concerning Faber polynomials and results obtained with their aid. An important application of Faber polynomials is, for example, the transfer of known assertions on the polynomial approximation of functions in the disc algebra to results on the approximation of functions that are continuous on a compact continuum K, containing at least two points and having a connected complement, and holomorphic in the interior of K. In this field, the Faber operator, denoted by T, turns out to be a powerful tool (for an introduction, see e.g. D. Gaier's monograph). It
maps a polynomial of degree at most n given in the monomial basis, \sum_{\nu=0}^{n}a_{\nu}z^{\nu}, to the polynomial of degree at most n given in the basis of Faber polynomials, \sum_{\nu=0}^{n}a_{\nu}F_{\nu}. If the Faber operator is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the disc algebra onto the space of functions continuous on the whole compact continuum and holomorphic in its interior. For every f in the disc algebra and every polynomial P, the obvious estimate for the uniform norms, ||T(f)-T(P)|| <= ||T|| ||f-P||, shows that the original task of approximating F=T(f) by polynomials is reduced to the polynomial approximation of the function f. Therefore, the question arises under which conditions the Faber operator is continuous and surjective. A fundamental result in this regard was established by J. M. Anderson and J. Clunie, who showed that if the compact continuum is bounded by a rectifiable Jordan curve with bounded boundary rotation and free from cusps, then the Faber operator with respect to the uniform norms is a topological isomorphism. Now, let f be a harmonic function in D. Similarly as above, we find that f has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}a_{\nu}p_{\nu}
converging locally uniformly inside D, where p_{n}(z)=z^{n} for n\in\N_{0} and p_{-n}(z)=\overline{z}^{n} for n\in\N. One may ask whether there is an analogue for harmonic functions on simply connected domains G. Indeed, for a domain G bounded by an analytic Jordan curve, the conjecture holds true that each function f harmonic in G has a uniquely determined representation f=\sum_{\nu=-\infty}^{\infty}b_{\nu}F_{\nu}, where F_{-n}(z)=\overline{F_{n}(z)} for n\in\N, converging locally uniformly inside G. Let now K be a compact continuum containing at least two points and having a connected complement. A main component of this thesis is the examination of the harmonic Faber operator, mapping a harmonic polynomial given in the basis of the harmonic monomials, \sum_{\nu=-n}^{n}a_{\nu}p_{\nu}, to the harmonic polynomial \sum_{\nu=-n}^{n}a_{\nu}F_{\nu}.
If this operator, which is based on an idea of J. Müller, is continuous with respect to the uniform norms, it has a unique continuous extension to an operator mapping the functions continuous on \partial D onto the functions continuous on K and
harmonic in the interior of K. Harmonic Faber polynomials and the harmonic Faber operator will be the objects accompanying us throughout
our whole discussion. After an overview of the notation and of certain tools used in our considerations in the first chapter, we begin our studies with an introduction to the Faber operator and the harmonic Faber operator. We start modestly and consider domains bounded by an analytic Jordan curve. In Section 2, as a first result, we will show that, for such a domain G, the harmonic Faber operator has a unique continuous extension to an operator mapping the space of the harmonic functions in D onto the space
of the harmonic functions in G, and moreover, the harmonic Faber
operator is an isomorphism with respect to the topologies of locally
uniform convergence. In the further sections of this chapter, we illuminate the behaviour of the (harmonic) Faber operator on certain function spaces. In the third chapter, we leave the setting of compact continua bounded by an analytic Jordan curve and instead consider closures of domains bounded by Jordan curves with Dini-continuous curvature. With the aid of the concept of compact operators and the Fredholm alternative, we are able to show that the harmonic Faber operator is a topological isomorphism. Since, in particular, the main result of the third chapter holds true for closures K of domains bounded by analytic Jordan curves, we can use it to obtain new results on the approximation of functions continuous on K and harmonic in the interior of K by harmonic polynomials. To do so, we develop techniques applied by L. Frerick and J. Müller in [11] and adjust them to our setting. In this way, we can transfer results about the classic Faber operator to the harmonic Faber operator. In the last chapter, we will use the theory of harmonic Faber polynomials
to solve certain Dirichlet problems in the complex plane. We pursue
two different approaches: First, with a similar philosophy as in [50],
we develop a procedure to compute the coefficients of a series \sum_{\nu=-\infty}^{\infty}c_{\nu}F_{\nu} converging uniformly to the solution of a given Dirichlet problem. Later, we point out how semi-infinite programming with harmonic Faber polynomials as ansatz functions can be used to obtain an approximate solution of a given Dirichlet problem. We treat both approaches first from a theoretical point of view before focusing on the numerical implementation of concrete examples. As an application of the numerical computations, we obtain visualisations of the Dirichlet problems concerned, rounding out our discussion of the harmonic Faber polynomials and the harmonic Faber operator.
Optimal Control of Partial Integro-Differential Equations and Analysis of the Gaussian Kernel
(2018)
An important field of applied mathematics is the simulation of complex financial, mechanical, chemical, physical or medical processes with mathematical models. In addition to the pure modeling of the processes, the simultaneous optimization of an objective function by changing the model parameters is often the actual goal. Models in fields such as finance, biology or medicine benefit from this optimization step.
While many processes can be modeled using an ordinary differential equation (ODE), partial differential equations (PDEs) are needed to optimize heat conduction and flow characteristics, the spreading of tumor cells in tissue, as well as option prices. A partial integro-differential equation (PIDE) is a partial differential equation involving an integral operator, e.g., the convolution of the unknown function with a given kernel function. PIDEs occur, for example, in models that simulate adhesive forces between cells or option prices with jumps.
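A minimal sketch of the convolution-type integral operator mentioned above, assuming a one-dimensional grid and a simple Riemann-sum quadrature (the discretisation used in the thesis may well differ):

```python
import numpy as np

def gaussian_kernel(z, sigma):
    return np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def convolve_gaussian(u, x, sigma):
    """Discretise the integral term (g * u)(x) = ∫ g(x - y) u(y) dy
    of a PIDE by a Riemann sum on the equidistant grid x."""
    h = x[1] - x[0]
    K = gaussian_kernel(x[:, None] - x[None, :], sigma)  # matrix g(x_i - y_j)
    return h * (K @ u)

x = np.linspace(-10, 10, 801)
u = np.exp(-x**2)                     # placeholder for the unknown function
Ku = convolve_gaussian(u, x, sigma=1.0)
```

Because the Gaussian kernel has unit mass, the discrete convolution preserves the total mass of u while smoothing out its peak, which is exactly the qualitative behaviour the integral term contributes to the equation.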
In each of the two parts of this thesis, a certain PIDE is the main object of interest. In the first part, we study a semilinear PIDE-constrained optimal control problem with the aim to derive necessary optimality conditions. In the second, we analyze a linear PIDE that includes the convolution of the unknown function with the Gaussian kernel.
This dissertation deals with the formation of models of compounds in the English language. In order to construct a well-founded linguistic theory, we formulate seven hypotheses based on extensive English language material. We construct a control cycle that opens up possibilities for further investigations in this field. In our case, this investigation covers a delimited area that can be regarded as an enrichment of Köhler's (2005) control cycle (synergetic-linguistic modelling).
Early life adversity (ELA) poses a high risk for developing major health problems in adulthood including cardiovascular and infectious diseases and mental illness. However, the fact that ELA-associated disorders first become manifest many years after exposure raises questions about the mechanisms underlying their etiology. This thesis focuses on the impact of ELA on startle reflexivity, physiological stress reactivity and immunology in adulthood.
The first experiment investigated the impact of parental divorce on affective processing. A special block design of the affective startle modulation paradigm revealed blunted startle responsiveness during the presentation of aversive stimuli in participants with experience of parental divorce. A nurture context potentiated startle in these participants, suggesting that visual cues with childhood-related content activate protective behavioral responses. The findings provide evidence for the view that parental divorce leads to altered processing of affective context information in early adulthood.
A second investigation was conducted to examine the link between aging of the immune system and long-term consequences of ELA. In a cohort of healthy young adults, who were institutionalized early in life and subsequently adopted, higher levels of T cell senescence were observed compared to parent-reared controls. Furthermore, the results suggest that ELA increases the risk of cytomegalovirus infection in early childhood, thereby mediating the effect of ELA on T cell-specific immunosenescence.
The third study addresses the effect of ELA on stress reactivity. An extended version of the Cold Pressor Test combined with a cognitively challenging task revealed a blunted endocrine response in adults with a history of adoption, while cardiovascular stress reactivity was similar to that of control participants. This pattern of response separation may best be explained by a selective enhancement of central feedback sensitivity to glucocorticoids resulting from ELA, despite preserved cardiovascular/autonomic stress reactivity.
The dissertation deals with methods to improve design-based and model-assisted estimation techniques for surveys in a finite population framework. The focus is on the development of the statistical methodology as well as on its implementation by means of tailor-made numerical optimization strategies. In this regard, the developed methods aim at computing statistics for several potentially conflicting variables of interest at aggregated and disaggregated levels of the population on the basis of one single survey. The work can be divided into two main research questions, which are briefly explained in the following sections.
First, an optimal multivariate allocation method is developed taking into account several stratification levels. This approach results in a multi-objective optimization problem due to the simultaneous consideration of several variables of interest. In preparation for the numerical solution, several scalarization and standardization techniques are presented, which represent the different preferences of potential users. In addition, it is shown that by solving the problem scalarized with a weighted sum for all combinations of weights, the entire Pareto frontier of the original problem can be generated. By exploiting the special structure of the problem, the scalarized problems can be efficiently solved by a semismooth Newton method. In order to apply this numerical method to other scalarization techniques as well, an alternative approach is suggested, which traces the problem back to the weighted sum case. To address regional estimation quality requirements at multiple stratification levels, the potential use of upper bounds for regional variances is integrated into the method. In addition to restrictions on regional estimates, the method enables the consideration of box-constraints for the stratum-specific sample sizes, allowing minimum and maximum stratum-specific sampling fractions to be defined.
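The weighted-sum scalarization can be sketched for a stripped-down version of the problem: two variables of interest, variances of the Neyman form V_j = \sum_h N_h^2 S_{jh}^2 / n_h, and only the total-sample-size constraint, for which the scalarized problem has a closed-form solution. The box-constraints and regional variance bounds of the actual method are deliberately omitted, and all numbers are illustrative.

```python
import numpy as np

def weighted_sum_allocation(N, S1, S2, n, w):
    """Closed-form minimiser of w*V1 + (1-w)*V2 over stratum sample sizes n_h
    with sum(n_h) = n, where V_j = sum_h N_h^2 S_jh^2 / n_h.
    By the Lagrange conditions, n_h is proportional to
    N_h * sqrt(w*S1_h^2 + (1-w)*S2_h^2)."""
    score = N * np.sqrt(w * S1**2 + (1 - w) * S2**2)
    return n * score / score.sum()

def variances(N, S1, S2, nh):
    return (np.sum(N**2 * S1**2 / nh), np.sum(N**2 * S2**2 / nh))

# Three strata, two conflicting variables of interest (illustrative numbers)
N = np.array([5000.0, 3000.0, 2000.0])   # stratum population sizes
S1 = np.array([1.0, 4.0, 2.0])           # std. dev. of variable 1 per stratum
S2 = np.array([3.0, 1.0, 2.0])           # std. dev. of variable 2 per stratum
n = 500

# Sweeping the weight traces (a discretisation of) the Pareto frontier
weights = np.linspace(0.01, 0.99, 50)
frontier = [variances(N, S1, S2, weighted_sum_allocation(N, S1, S2, n, w))
            for w in weights]
```

Sweeping w from 0 to 1 generates the trade-off curve between the two variances, which is the "entire Pareto frontier" property referred to above (for this convex problem).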
In addition to the allocation method, a generalized calibration method is developed, which is supposed to achieve coherent and efficient estimates at different stratification levels. The developed calibration method takes into account a very large number of benchmarks at different stratification levels, which may be obtained from different sources such as registers, paradata or other surveys using different estimation techniques. In order to incorporate the heterogeneous quality and the multitude of benchmarks, a relaxation of selected benchmarks is proposed. In that regard, predefined tolerances are assigned to problematic benchmarks at low aggregation levels in order to avoid an exact fulfillment. In addition, the generalized calibration method allows the use of box-constraints for the correction weights in order to avoid an extremely high variation of the weights. Furthermore, a variance estimation by means of a rescaling bootstrap is presented.
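The core mechanics of calibration, adjusting design weights so that weighted totals reproduce known benchmarks, can be sketched for the simplest linear (chi-square distance) case. The relaxation of benchmarks and the box-constraints on the correction weights developed in the thesis are left out here, and all numbers are illustrative.

```python
import numpy as np

def linear_calibration(d, X, t):
    """Linear calibration: find w = d * (1 + X @ lam) minimising the
    chi-square distance sum((w - d)**2 / d) subject to X.T @ w = t.
    The Lagrange conditions give lam = (X' diag(d) X)^{-1} (t - X' d)."""
    A = X.T @ (d[:, None] * X)
    lam = np.linalg.solve(A, t - X.T @ d)
    return d * (1 + X @ lam)

rng = np.random.default_rng(0)
n = 200
d = np.full(n, 50.0)                     # design weights (population 10,000)
# benchmark variables: overall total and membership in a subgroup
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)]).astype(float)
t = np.array([10000.0, 4200.0])          # known benchmark totals
w = linear_calibration(d, X, t)
```

After calibration, the weighted totals X.T @ w hit the benchmarks exactly; the relaxed method described above would instead allow predefined tolerances around selected components of t.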
Both developed methods are analyzed and compared with existing methods in extensive simulation studies on the basis of a realistic synthetic data set of all households in Germany. Due to the similar requirements and objectives, both methods can be successively applied to a single survey in order to combine their efficiency advantages. In addition, both methods can be solved in a time-efficient manner using very comparable optimization approaches. These are based on transformations of the optimality conditions. The dimension of the resulting system of equations is ultimately independent of the dimension of the original problem, which enables the application even for very large problem instances.
The study combines methods of corpus linguistics and close reading in order to examine, on the basis of a representative single text of medium length, the relationship between the syntactic and the metrical level in the Middle High German rhymed couplet. Regularly recurring patterns are identified that always map the two levels onto each other in the same way. These regularities can be explained by the sound structures of the Middle High German vocabulary, the syntactic templates of phrases and sentences, and finally the requirements of the metrical scheme. The constraint of rhyme, frequently invoked as an explanation, turns out on closer inspection to be a rather secondary influence on syntactic structure. Alongside typical "normal cases", in which statistically frequent word stress patterns in common, simple word-order patterns can be integrated into the rhymed couplet without difficulty and always in the same way, recurring deviation variants can also be explained and described. The regularities identified are deterministic only to a small extent and in few cases; to account for the statistical patterns, however, it can be shown which advantages arise from certain variants and which difficulties arise with others, and how one variant can be substituted for another. What is thus described is the poet's space of design and the solutions he chose. Indirectly, this also yields a negative image of the syntax that is not subject to the constraints of the metrical scheme.
The economic growth theory analyses which factors affect economic growth and tries to explain how growth can last. A popular neoclassical growth model is the Ramsey-Cass-Koopmans model, which aims to determine how much of its income a nation or an economy should save in order to maximize its welfare. In this thesis, we present and analyze an extended capital accumulation equation of a spatial version of the Ramsey model, balancing diffusive and agglomerative effects. We model capital mobility in space via a nonlocal diffusion operator, which allows for jumps of the capital stock from one location to another. Moreover, this operator smooths out heterogeneities in the factor distributions more slowly, which generates a more realistic behavior of capital flows. In addition, we introduce an endogenous productivity-production operator which depends on time and on the capital distribution in space. This operator models the technological progress of the economy. The resulting mathematical model is an optimal control problem under a semilinear parabolic integro-differential equation with initial and volume constraints, which are a nonlocal analog of local boundary conditions, and box-constraints on the state and the control variables. In this thesis, we consider this problem on a bounded and on an unbounded spatial domain, in both cases with a finite time horizon. We derive existence results for weak solutions of the capital accumulation equations in both settings, and we prove the existence of a Ramsey equilibrium in the unbounded case. Moreover, we solve the optimal control problem numerically and discuss the results in the economic context.
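The nonlocal diffusion operator described above can be sketched in discretised form; the grid, the jump kernel and the initial capital distribution are illustrative, not those of the thesis.

```python
import numpy as np

def nonlocal_diffusion(u, x, kernel):
    """Discretisation of the nonlocal diffusion operator
        (Lu)(x) = ∫ J(x - y) (u(y) - u(x)) dy,
    which lets capital jump between locations, in contrast to the purely
    local spreading described by a second derivative."""
    h = x[1] - x[0]
    J = kernel(x[:, None] - x[None, :])      # jump rates J(x_i - y_j)
    return h * (J @ u - J.sum(axis=1) * u)

def gauss_kernel(z):
    return np.exp(-z**2) / np.sqrt(np.pi)    # even, unit-mass jump kernel

x = np.linspace(-5, 5, 401)
u = np.exp(-x**2)                            # initial capital distribution
Lu = nonlocal_diffusion(u, x, gauss_kernel)
```

For a symmetric kernel the operator conserves total capital (inflows and outflows cancel exactly) while redistributing it away from the peak, which is the smoothing behaviour referred to above.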
This dissertation is dedicated to the analysis of the stability of portfolio risk and the impact of European regulation introducing risk-based classifications for investment funds.
The first paper examines the relationship between portfolio size and the stability of mutual fund risk measures, presenting evidence for economies of scale in risk management. In a unique sample of 338 fund portfolios, we find that the volatility of risk numbers decreases for larger funds. This finding holds for dispersion as well as tail-risk measures. Further analyses across asset classes provide evidence for the robustness of the effect for balanced and fixed-income portfolios. However, no size effect emerged for equity funds, suggesting that equity fund managers simply scale their strategy up as they grow. Analyses of the differences in risk stability between tail-risk measures and volatilities reveal that smaller funds show higher discrepancies in that respect. In contrast to the majority of prior studies, which rely on ex-post time-series risk numbers, this study contributes to the literature by using ex-ante risk numbers based on the actual asset holdings and de facto portfolio data.
The second paper examines the influence of European legislation regarding the risk classification of mutual funds. We conduct analyses on a set of worldwide equity indices and find that a strategy based on long-term volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), would lead to substantial variations in exposures, ranging from short phases of very high leverage to long periods of under-investment required to keep the assigned risk classes. In some cases, funds would be forced to migrate to higher risk classes due to limited means of reducing volatility after crisis events. In other cases, they might have to migrate to lower risk classes or increase their leverage to implausible levels. Overall, we find that if the SRRI creates a binding mechanism for fund managers, it will interfere substantially with the core investment strategy and may induce substantial deviations from it. Furthermore, due to the forced migrations, the SRRI degenerates into a passive indicator.
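The classification mechanism the paper studies maps a fund's long-term annualized volatility to one of seven SRRI risk classes. The band edges below follow the CESR guidelines as commonly cited, but they are an assumption of this sketch, not taken from the dissertation itself:

```python
def srri_class(annualized_vol):
    """Map annualized return volatility (as a decimal, e.g. 0.12 = 12%)
    to an SRRI risk class 1-7. Band edges follow the CESR guidelines
    as commonly cited -- treat them as illustrative, not authoritative."""
    bands = [0.005, 0.02, 0.05, 0.10, 0.15, 0.25]   # upper edges of classes 1-6
    for cls, upper in enumerate(bands, start=1):
        if annualized_vol < upper:
            return cls
    return 7                                        # everything above 25%

# a fund whose volatility rises from 9% to 16% after a crisis would be
# forced to migrate from class 4 to class 6 unless it cuts exposure
```

This illustrates the binding-mechanism problem described above: once realized volatility crosses a band edge, the manager must either accept a forced migration or adjust leverage to stay inside the band.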
The third paper examines the impact of this volatility-based fund classification on portfolio performance. Using historical data on equity indices, we initially find that a strategy based on long-term portfolio volatility, as imposed by the Synthetic Risk Reward Indicator (SRRI), yields better Sharpe Ratios (SRs) and Buy-and-Hold Returns (BHRs) for the investment strategies matching the risk classes. Accounting for the Fama-French factors reveals no significant alphas for the vast majority of the strategies. In our simulation study, where volatility is modelled with a GJR(1,1) model, we find no significant difference in mean returns, but significantly lower SRs for the volatility-based strategies. These results are confirmed in robustness checks using alternative models and timeframes. Overall, we present evidence suggesting that neither the higher leverage induced by the SRRI nor the potential protection in downside markets pays off on a risk-adjusted basis.
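The GJR(1,1) recursion used in the simulation study updates conditional variance with an asymmetry (leverage) term, so that negative shocks raise next-period variance more than positive ones. A minimal sketch with illustrative parameters (the dissertation's actual calibration is not given in the abstract):

```python
import numpy as np

def gjr_garch_path(omega, alpha, gamma, beta, n, seed=0):
    """Simulate returns under a GJR(1,1) model:
        sigma2_t = omega + (alpha + gamma * 1[r_{t-1} < 0]) * r_{t-1}^2
                   + beta * sigma2_{t-1}.
    Parameters here are purely illustrative."""
    rng = np.random.default_rng(seed)
    # start at the unconditional variance (Gaussian shocks => E[1[r<0]] = 1/2)
    sigma2 = omega / (1 - alpha - gamma / 2 - beta)
    returns, vols = np.empty(n), np.empty(n)
    for t in range(n):
        vols[t] = np.sqrt(sigma2)
        returns[t] = vols[t] * rng.standard_normal()
        leverage = gamma if returns[t] < 0 else 0.0
        sigma2 = omega + (alpha + leverage) * returns[t] ** 2 + beta * sigma2
    return returns, vols

r, v = gjr_garch_path(omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85, n=10_000)
```

With alpha + gamma/2 + beta < 1 the process is covariance-stationary, which is what makes a simulation comparison of mean returns and Sharpe ratios across strategies meaningful.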
The implicit power motive is one of the most researched motives in motivational psychology, at least in adults. Children have rarely been subject to investigation, and there are virtually no results on behavioral and affective correlates of the implicit power motive in children. As behavior and affect are important components of conceptual validation, the empirical studies in this dissertation focus on identifying three correlates, namely resource control behavior (study 1), power stress (study 2), and persuasive behavior (study 3). In each study, the implicit power motive was measured via the Picture Story Exercise, using a version adapted for children. Children across samples were between 4 and 11 years old.
Results from studies 1 and 2 showed that children's power-related behavior corresponds with evidence from adult samples: children with a high implicit power motive secure attractive resources and show negative reactions to a thwarted attempt to exert influence. Study 3 contradicted existing evidence from adults in that children's persuasive behavior was associated not with nonverbal but with verbal strategies of persuasion. Despite this inconsistency, these results, together with the validation of a child-friendly version of the Picture Story Exercise, are an important step toward further investigating and confirming the concept of the implicit power motive and how to measure it in children.
A matrix A is called completely positive if there exists an entrywise nonnegative matrix B such that A = BB^T. Such matrices can be used to obtain convex reformulations of, for example, nonconvex quadratic or combinatorial problems. One of the main difficulties with completely positive matrices is checking whether a given matrix is completely positive; this is known to be NP-hard in general. For a given completely positive matrix A, it is nontrivial to find a cp-factorization A = BB^T with nonnegative B, since such a factorization provides a certificate that the matrix is completely positive. This factorization is not only important for certifying membership in the completely positive cone; it can also be used to recover the solution of the underlying quadratic or combinatorial problem. In addition, it is not known a priori how many columns are necessary to generate a cp-factorization for a given matrix. The minimal possible number of columns is called the cp-rank of A, and it is still an open question how to derive the cp-rank of a given matrix. Some facts on completely positive matrices and the cp-rank are given in Chapter 2. Moreover, in Chapter 6, we present a factorization algorithm which, for a given completely positive matrix A and a suitable starting point, computes the nonnegative factorization A = BB^T. The algorithm thus returns a certificate that the matrix is completely positive. As introduced in Chapter 3, the fundamental idea of the factorization algorithm is to start from an initial square factorization which is not necessarily entrywise nonnegative, and to extend this factorization to a matrix whose number of columns is greater than or equal to the cp-rank of A. The goal is then to transform this generated factorization into a cp-factorization.
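The asymmetry the abstract describes is worth making concrete: while deciding complete positivity is NP-hard, checking a given certificate B is trivial. A minimal sketch (the example matrix is an assumption for illustration):

```python
import numpy as np

def is_cp_certificate(A, B, tol=1e-9):
    """Check whether B certifies complete positivity of A:
    B must be entrywise nonnegative and satisfy A = B B^T."""
    return bool((B >= -tol).all()) and np.allclose(A, B @ B.T, atol=tol)

# example: any entrywise nonnegative B yields a completely positive A = B B^T
B = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])
A = B @ B.T
# the number of columns of B (here 2) bounds the cp-rank of A from above
```

Note that -B also satisfies A = (-B)(-B)^T but fails the nonnegativity check, so it is not a valid certificate; this is exactly why finding the nonnegative factor, rather than any factor, is the hard part.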
This problem can be formulated as a nonconvex feasibility problem, as shown in Section 4.1, and solved by a method based on alternating projections, as proven in Chapter 6. A survey on alternating projections is given in Chapter 5. There we show how to apply this technique to several types of sets such as subspaces, convex sets, manifolds and semialgebraic sets, and we collect known facts on the convergence rate of alternating projections between these types of sets. Considering more than two sets yields the so-called cyclic projections approach, for which known results on subspaces and convex sets are presented. Moreover, we give a new convergence result for cyclic projections among a sequence of manifolds in Section 5.4. In the context of cp-factorizations, a local convergence result for the introduced algorithm is given; it is based on the known convergence of alternating projections between semialgebraic sets. To obtain cp-factorizations with this first method, it is necessary to solve a second-order cone problem in every projection step, which is very costly. Therefore, in Section 6.2, we present an additional heuristic extension which improves the numerical performance of the algorithm. Extensive numerical tests in Chapter 7 show that the factorization method is very fast on most instances. In addition, we show how to derive a certificate that a matrix is an element of the interior of the completely positive cone. As a further application, the method can be extended to find a symmetric nonnegative matrix factorization under an additional low-rank constraint. Here again, the method for deriving factorizations of completely positive matrices can be used, albeit with some further adjustments, introduced in Section 8.1.
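The alternating projections idea surveyed in Chapter 5 can be illustrated on two simple convex sets, say the nonnegative orthant and a hyperplane. This toy sketch is far simpler than the semialgebraic setting of the factorization algorithm, but shows the mechanism: project onto one set, then the other, and iterate.

```python
import numpy as np

def project_orthant(x):
    """Nearest point in the nonnegative orthant."""
    return np.maximum(x, 0.0)

def project_hyperplane(x, a, b):
    """Nearest point on the hyperplane {y : a.y = b}."""
    return x - (a @ x - b) / (a @ a) * a

def alternating_projections(x, a, b, iters=200):
    """Alternate the two projections; for intersecting convex sets
    the iterates converge to a point in the intersection."""
    for _ in range(iters):
        x = project_hyperplane(project_orthant(x), a, b)
    return x

# toy instance: nonnegative vectors whose entries sum to 1
a = np.array([1.0, 1.0, 1.0])
x = alternating_projections(np.array([-1.0, 3.0, -2.0]), a, b=1.0)
# x ends up (numerically) nonnegative with entries summing to 1
```

For convex sets each projection is cheap and convergence is global; the nonconvex, semialgebraic sets arising from cp-factorizations only admit the local convergence result mentioned above, and each projection there requires solving a second-order cone problem.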
Moreover, we will see that even for the general case of deriving a nonnegative matrix factorization for a given rectangular matrix A, the key aspects of the completely positive factorization approach can be used. To this end, it becomes necessary to extend the idea of finding a completely positive factorization such that it can be used for rectangular matrices. This yields an applicable algorithm for nonnegative matrix factorization in Section 8.2. Numerical results for this approach will suggest that the presented algorithms and techniques to obtain completely positive matrix factorizations can be extended to general nonnegative factorization problems.
The dissertation examines the role the theme of poverty played in the establishment of cinema in Germany. The investigation focuses on the years 1907 to 1913, a decisive period for the institutionalization of cinema as a medium sui generis. Drawing on film analyses, the study aims to identify recurring patterns of media practices in the cinematographic articulation of the Social Question, and to determine their thematic relevance and their contribution to the establishment of cinema in Germany. The focus is on the media products themselves, their motif design, and their staging practices.
We will consider discrete dynamical systems (X,T) which consist of a state space X and a linear operator T acting on X. Given a state x in X at time zero, its state at time n is determined by the n-th iteration T^n(x). We are interested in the long-term behaviour of this system, that means we want to know how the sequence (T^n (x))_(n in N) behaves for increasing n and x in X. In the first chapter, we will sum up the relevant definitions and results of linear dynamics. In particular, in topological dynamics the notions of hypercyclic, frequently hypercyclic and mixing operators will be presented. In the setting of measurable dynamics, the most important definitions will be those of weakly and strongly mixing operators. If U is an open set in the (extended) complex plane containing 0, we can define the Taylor shift operator on the space H(U) of functions f holomorphic in U as Tf(z) = (f(z)- f(0))/z if z is not equal to 0 and otherwise Tf(0) = f'(0). In the second chapter, we will start examining the Taylor shift on H(U) endowed with the topology of locally uniform convergence. Depending on the choice of U, we will study whether or not the Taylor shift is weakly or strongly mixing in the Gaussian sense. Next, we will consider Banach spaces of functions holomorphic on the unit disc D. The first section of this chapter will sum up the basic properties of Bergman and Hardy spaces in order to analyse the dynamical behaviour of the Taylor shift on these Banach spaces in the next part. In the third section, we study the space of Cauchy transforms of complex Borel measures on the unit circle first endowed with the quotient norm of the total variation and then with a weak-* topology. While the Taylor shift is not even hypercyclic in the first case, we show that it is mixing for the latter case. 
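On Taylor coefficients the Taylor shift acts as a backward shift: if f(z) = Σ a_n z^n, then Tf(z) = (f(z) − f(0))/z = Σ a_{n+1} z^n, with Tf(0) = f'(0). A quick sketch on truncated expansions (polynomials), purely to illustrate the operator, not the dynamical results of the thesis:

```python
def taylor_shift(coeffs):
    """Backward shift on Taylor coefficients:
    T: (a_0, a_1, a_2, ...) -> (a_1, a_2, a_3, ...),
    which realizes Tf(z) = (f(z) - f(0))/z with Tf(0) = f'(0)."""
    return coeffs[1:]

def evaluate(coeffs, z):
    """Evaluate the (truncated) power series at z via Horner's scheme."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * z + a
    return result

# sanity check: Tf(z) == (f(z) - f(0)) / z for a sample polynomial
f = [2.0, -1.0, 3.0, 0.5]          # f(z) = 2 - z + 3z^2 + 0.5z^3
z = 0.7
lhs = evaluate(taylor_shift(f), z)
rhs = (evaluate(f, z) - f[0]) / z
```

Iterating T discards one leading coefficient per step, so the long-term behaviour of the orbit (T^n f) is governed by the tail of the Taylor expansion, which is why the dynamics depend so delicately on the underlying function space.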
In Chapter 4, we will first introduce Bergman spaces A^p(U) for general open sets and provide approximation results which will be needed in the next chapter, where we examine the dynamical properties of the Taylor shift on these spaces. In particular, for 1<=p<2 we will find sufficient conditions for the Taylor shift to be weakly mixing or strongly mixing in the Gaussian sense. For p>=2, we consider specific Cauchy transforms in order to determine open sets U such that the Taylor shift is mixing on A^p(U). In both sections, we will illustrate the results with appropriate examples. Finally, we apply our results to universal Taylor series. The results of Chapter 5 about the Taylor shift allow us to consider the behaviour of the partial sums of the Taylor expansion of functions in general Bergman spaces outside the disc of convergence.
Given a compact set K in R^d, the theory of extension operators examines the question under which conditions on K the linear and continuous restriction operators r_n:E^n(R^d)→E^n(K),f↦(∂^α f|_K)_{|α|≤n}, n in N_0, and r:E(R^d)→E(K),f↦(∂^α f|_K)_{α in N_0^d}, have a linear and continuous right inverse. This inverse is called an extension operator, and this problem is known as Whitney's extension problem, named after Hassler Whitney. In this context, E^n(K) and E(K) denote spaces of Whitney jets of order n and of infinite order, respectively. With E^n(R^d) and E(R^d), we denote the spaces of n-times and infinitely often continuously partially differentiable functions on R^d, respectively. Whitney already solved the question for finite order completely: he showed that it is always possible to construct a linear and continuous right inverse E_n for r_n. This work is concerned with the question of how the existence of a linear and continuous right inverse of r, fulfilling certain continuity estimates, can be characterized by properties of K. On E(K), we introduce a full real scale of generalized Whitney seminorms (|·|_{s,K})_{s≥0}, where |·|_{s,K} coincides with the classical Whitney seminorms for s in N_0. We also equip E(R^d) with a family (|·|_{s,L})_{s≥0} of such seminorms, where L is a compact set whose interior contains K. This family of seminorms on E(R^d) suffices to characterize the continuity properties of an extension operator E, since we can assume without loss of generality that E(E(K)) in D^s(L).
In Chapter 2, we introduce basic concepts and summarize the classical results of Whitney and Stein.
In Chapter 3, we modify the classical construction of Whitney's operators E_n and show that |E_n(·)|_{s,L}≤C|·|_{s,K} for s in [n,n+1).
In Chapter 4, we generalize a result of Frerick, Jordá and Wengenroth and show that LMI(1) for K implies the existence of an extension operator E without loss of derivatives, i.e. it fulfils |E(·)|_{s,L}≤C|·|_{s,K} for all s≥0. We show that a large class of self-similar sets, which includes the Cantor set and the Sierpinski triangle, admits an extension operator without loss of derivatives.
In Chapter 5 we generalize a result of Frerick, Jordá and Wengenroth and show that WLMI(r) for r≥1 implies the existence of a tame linear extension operator E having a homogeneous loss of derivatives, such that |E(·)|_{s,L}≤C|·|_{(r+ε)s,K} for all s≥0 and all ε>0.
In the last chapter we characterize the existence of an extension operator having an arbitrary loss of derivatives by the existence of measures on K.