This dissertation deals with a novel class of branch-and-bound algorithms that differ from classical branch-and-bound algorithms in that branching is performed by adding nonnegative penalty terms to the objective function rather than by adding further constraints. The thesis proves the theoretical correctness of the algorithmic principle for several general classes of problems and evaluates the method on various concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear problems, the thesis presents several problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely favourable results, and gives an outlook on further areas of application and open research questions.
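A minimal, self-contained sketch may help illustrate the principle (an illustrative toy, not the thesis's actual algorithm): instead of splitting a node by adding bound constraints, each child node keeps the original feasible set and adds a nonnegative penalty term that makes the excluded region unattractive.

```python
# Toy penalty-based branch-and-bound: minimize f(x) = (x - 2.6)^2 over
# integer x in [0, 5]. Branching adds nonnegative penalty terms to the
# objective instead of extra constraints.

def solve_relaxation(f, lo=0.0, hi=5.0, steps=10001):
    """Crude continuous relaxation solver via grid search."""
    best_x, best_v = lo, f(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

def penalty_branch_and_bound(f, M=1e3, tol=1e-6):
    incumbent_v, incumbent_x = float("inf"), None
    stack = [f]                      # each node is just an objective function
    while stack:
        g = stack.pop()
        x, v = solve_relaxation(g)
        if v >= incumbent_v:         # bound: prune dominated nodes
            continue
        if abs(x - round(x)) < tol:  # integral -> new incumbent
            incumbent_v, incumbent_x = v, round(x)
            continue
        k = int(x)                   # branch on the fractional value:
        # "x <= k" child penalizes x > k; "x >= k+1" child penalizes x < k+1
        stack.append(lambda y, g=g, k=k: g(y) + M * max(0.0, y - k))
        stack.append(lambda y, g=g, k=k: g(y) + M * max(0.0, k + 1 - y))
    return incumbent_x, incumbent_v

x_opt, v_opt = penalty_branch_and_bound(lambda x: (x - 2.6) ** 2)
print(x_opt)  # 3
```

With a sufficiently large penalty weight M, the penalized relaxations play the role that the constrained subproblems play in classical branch-and-bound.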
The following dissertation contains three studies examining academic boredom development in five high-track German secondary schools (AVG-project data; Study 1: N = 1,432; Study 2: N = 1,861; Study 3: N = 1,428). The investigation period spanned 3.5 years, with four waves of measurement from grades 5 to 8 (T1: 5th grade, after transition to secondary school; T2: 5th grade, after mid-term evaluations; T3: 6th grade, after mid-term evaluations; T4: 8th grade, after mid-term evaluations). All three studies featured cross-sectional and longitudinal analyses, separating and comparing the subject domains of mathematics and German.
Study 1 investigated academic boredom’s factorial structure alongside correlational and reciprocal relations between different forms of boredom and academic self-concept. Analyses included reciprocal effects models and latent correlation analyses. Results indicated that boredom intensity, boredom due to underchallenge, and boredom due to overchallenge are separable, correlated factors. Evidence for reciprocal relations between boredom and academic self-concept was limited.
Study 2 examined the effectiveness and efficacy of full-time ability grouping as a boredom intervention directed at the intellectually gifted. Analyses included propensity score matching and latent growth curve modelling. Results pointed to limited effectiveness and efficacy of full-time ability grouping regarding boredom reduction.
Study 3 explored gender differences in academic boredom development, mediated by academic interest, academic self-concept, and previous academic achievement. Analyses included measurement invariance testing and multiple-indicator multiple-cause (MIMIC) models. Results showed one-sided gender differences, with boys reporting less favorable boredom development than girls, even after inclusion of relevant mediators.
Findings from all three studies were embedded into the theoretical framework of control-value theory (Pekrun, 2006; 2019; Pekrun et al., 2023). Limitations, directions for future research, and practical implications were acknowledged and discussed.
Overall, this dissertation yielded important insights into boredom’s conceptual complexity. This concerned factorial structure, developmental trajectories, interrelations to other learning variables, individual differences, and domain specificities.
Keywords: Academic boredom, boredom intensity, boredom due to underchallenge, boredom due to overchallenge, ability grouping, gender differences, longitudinal data analysis, control-value theory
Energy transport networks are among the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
The publication of statistical databases is subject to legal regulations, e.g. national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE used in the German census in 2011 and we propose a novel integer programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: initially partitioning the data set into clusters and subsequently replacing all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic to obtain an integer solution, which is based on the dual information. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives. To this end, we take errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem to a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program, which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
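As a toy illustration of the k-anonymity notion used above (a greedy simplification; the proposed method relies on column generation and mixed-integer programming instead), rows of nominal data can be grouped into clusters of at least k elements and replaced by a per-column representative, so that every published row equals at least k − 1 others.

```python
# Toy microaggregation for nominal data: cluster rows in fixed blocks of
# size k and replace each cluster by its per-column mode. This is NOT the
# thesis's optimization-based method, only an illustration of k-anonymity.
from collections import Counter

def microaggregate(rows, k):
    clusters = [rows[i:i + k] for i in range(0, len(rows) - len(rows) % k, k)]
    if len(rows) % k:                       # fold the remainder into the last cluster
        clusters[-1].extend(rows[-(len(rows) % k):])
    out = []
    for cluster in clusters:
        # representative value: most frequent category in each column
        rep = tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))
        out.extend([rep] * len(cluster))
    return out

def is_k_anonymous(rows, k):
    return all(c >= k for c in Counter(rows).values())

data = [("DE", "m"), ("DE", "f"), ("FR", "f"), ("DE", "m"), ("FR", "m"), ("FR", "f")]
anon = microaggregate(data, k=3)
print(is_k_anonymous(anon, 3))  # True
```

The information loss of such a replacement is exactly what the dissertation's objective functions (row dissimilarities and frequency-table errors) seek to minimize.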
This dissertation addresses the question of whether, and how, intersectionality as an analytical perspective on literary texts can usefully complement ethnically ordered literary fields. This question is examined through the analysis of three contemporary Chinese-Canadian novels.
The introduction discusses the relevance of the topics of intersectionality and Asian-Canadian literature. The following chapter offers a historical overview of Chinese-Canadian immigration and examines its literary production in detail. It shows that, although cultural goods also emerge to articulate relations of inequality based on ascribed ethnicity, a drive towards diversification can be identified within the literary community of Chinese-Canadian authors. The third chapter is devoted to the term "intersectionality" and, after situating the concept historically in its origins in Black feminism, presents intersectionality as a connecting element between postcolonialism, diversity, and empowerment, concepts of particular relevance for the analysis of (Canadian) literature in this dissertation. The role of intersectionality in literary studies is then taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony, and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novels is first contextualized as Chinese-Canadian, alongside existing scholarship that questions this classification. A summary of the plot is followed by an intersectional analysis at the level of content, divided into the familial and the wider social sphere, since, as the analyses show, the mechanisms of hierarchy within these spheres differ or reinforce one another. The formal analysis with an intersectional focus is then examined more closely in a separate subchapter.
A third subchapter is devoted to an aspect specific to each novel that is of particular relevance for an intersectional analysis. The thesis closes with an overarching conclusion that summarizes the most important findings of the analysis and offers further reflections on the implications of this dissertation, above all with regard to so-called Canadian "master narratives", which have far-reaching contextual relevance for working with literary texts and which an intersectional literary approach may profitably complement in the future.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Palatinate Forest Biosphere Reserve in southwestern Germany by means of hydrological modelling with the Soil and Water Assessment Tool (SWAT+). A holistic approach was pursued in which indicators of functional and structural ecological processes are assigned to the ESS. Potential risk factors for the deterioration of the forest's water-related ESS were analysed with regard to their effects on hydrological processes: soil compaction caused by heavy machinery during timber harvesting; damaged areas under regeneration, whether due to silvicultural management practices or to windthrow, pests, and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated base model, which represented the current catchment conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Palatinate Forest. In connection with the high infiltration capacity of the soils derived from weathered Bunter sandstone, as well as the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential in the catchment were simulated. It was also found that increased precipitation exceeding the infiltration capacity of the sandy soils leads to a flashy runoff response with pronounced surface runoff peaks.
The simulations revealed interactions between forest and water cycle as well as the hydrological impact of climate change, degraded soil functions, and age-related stand structures associated with differences in canopy development. Future climate projections, simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs), predicted a higher evaporative demand and an extension of the growing season together with more frequent drought periods within the growing season. This shortened the period available for groundwater recharge and consequently led to a projected decline in groundwater recharge rates by mid-century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and notwithstanding the uncertainties of their prediction, surface runoff generation was projected to increase by the end of the century.
To simulate soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favourable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained weathered Bunter sandstone dominated the hydrological effects at catchment level, so that only moderate deterioration of water-related ESS was indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. The road network increased surface runoff generation across the catchment.
Damaged areas under stand regeneration were simulated with an artificial model within a sub-catchment, assuming three-year-old tree seedlings over a development period of 10 years, and compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation suggested that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the generation of surface runoff and promotes quantitatively slightly higher deep percolation. Hydrological differences between the closed canopy of the mature stands and young stands with near-open-field precipitation conditions were determined by the dominant factors of atmospheric evaporative demand, precipitation amounts, and degree of canopy cover. The less developed the canopy of regenerated stands compared with mature stands, the higher the atmospheric evaporative demand, and the lower the incoming precipitation, the greater the hydrological difference between the stand types.
Improvement measures for decentralized flood protection should therefore take into account critical source areas (CSAs) for runoff generation in forests. The high sensitivity and susceptibility of forests to deteriorating ecosystem conditions suggest that preserving their complex fabric and intact interrelationships, particularly under the challenge of climate change, requires carefully adapted protection measures, efforts to identify CSAs, and the preservation and restoration of hydrological continuity in forest stands.
Building Fortress Europe: Economic realism, China, and Europe’s investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle of this research is how, in a traditional bastion of economic liberalism such as Europe, a protectionist tool such as investment screening could be erected so rapidly. Within a few years, Europe went from being highly welcoming towards foreign investment to increasingly implementing controls on it, with a focus on China. How are we to understand this shift? I posit that Europe’s increasingly protectionist stance on inward investment can be fruitfully understood through an economic realist approach, in which the introduction of investment screening is seen as part of a process of ‘balancing’ China’s economic rise and reasserting European competitiveness. China has moved from being the ‘workshop of the world’ to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A ‘balancing’ process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between itself and China. The introduction of investment screening measures is part of this process.
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown sizes of the individual strata in the target population. In weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly dispersed weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement in estimation quality. For those methods whose quality cannot be measured using standard procedures, a variance estimation procedure based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when the methods are used in practice.
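To make the calibration idea concrete, here is a hedged sketch of one simple member of the calibration family (a post-stratification-style ratio adjustment; the methods developed in the dissertation are more general): design weights are rescaled within groups so that weighted group totals match known population counts, which corrects over- and undercoverage in those groups.

```python
# Simple group-wise calibration (post-stratification): rescale design
# weights so the weighted count per group equals the known population
# total. Illustrative only; real calibration handles multiple margins.

def calibrate(weights, groups, pop_totals):
    wsum = {}
    for w, g in zip(weights, groups):          # weighted sample count per group
        wsum[g] = wsum.get(g, 0.0) + w
    # rescale each weight by the group's population/sample ratio
    return [w * pop_totals[g] / wsum[g] for w, g in zip(weights, groups)]

w = calibrate(weights=[10, 10, 20, 20],
              groups=["A", "A", "B", "B"],
              pop_totals={"A": 30, "B": 30})
print(w)  # [15.0, 15.0, 15.0, 15.0]
```

After calibration, the weighted totals per group reproduce the external benchmarks exactly, which is the defining property of calibration estimators.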
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, contribute significantly to Germany’s economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, for the digital transformation of the business model and the pursuit of growth goals through digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and thus contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it is shown empirically that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
Do Personality Traits, Trust and Fairness Shape the Stock-Investing Decisions of an Individual?
(2023)
This thesis comprises three projects, all of which are fundamentally connected to the choices that individuals make about stock investments. Differences in stock market participation (SMP) across countries are large and difficult to explain. The second chapter focuses on differences between Germany (low SMP) and East Asian countries (mostly high SMP). The study hypothesis is that cultural differences regarding social preferences and attitudes towards inequality lead to different attitudes towards stock markets and subsequently to different SMPs. Using a large-scale survey, it is found that these factors can, indeed, explain a substantial amount of the country differences that other known factors (financial literacy, risk preferences, etc.) could not. This suggests that social preferences should be given a more central role in programs that aim to enhance SMP in countries like Germany. The third chapter documents the importance of trust as well as herding for stock ownership decisions. The findings show that trust as a general concept makes no significant contribution to stock investment intention. A closer examination of the elements of general trust reveals that both in-group and out-group trust have an impact on individual stock market investment. Higher out-group trust directly influences a person's decision to invest in stocks, whereas higher in-group trust increases herding attitudes in stock investment decisions and thus can potentially increase the likelihood of stock investments as well. The last chapter investigates the significance of personality traits in stock investing and of home bias in portfolio selection. Findings show that personality traits do indeed have a significant impact on stock investment and portfolio allocation decisions.
Although the magnitude and significance of these characteristics differ between the two groups of investors, inexperienced and experienced, conscientiousness and neuroticism play an important role in stock investments and preferences. High conscientiousness scores increase the desire to invest in stocks and the portfolio allocation to risky assets such as stocks, discouraging home bias in asset allocation. Higher neuroticism, by contrast, increases home bias in portfolio selection and decreases both the willingness to invest in stocks and the portfolio share allocated to them. Finally, when an investor has no prior experience with portfolio selection, patriotism generates home bias. For experienced investors, a low neuroticism score combined with high conscientiousness and openness scores appears to be a constant factor in the decision to invest in a well-diversified international portfolio.
This thesis comprises four research papers on the economics of education and industrial relations, which contribute to the field of empirical economic research. All of the corresponding papers focus on analysing how much time individuals spend on specific activities. The allocation of available time resources is a decision that individuals make throughout their lifetime. In this thesis, we consider individuals at different stages of their lives: students at school, university students, and dependent employees at the workplace.
Part I includes two research studies on students' behaviour in secondary and tertiary education.
Chapter 2 explores whether students who are relatively younger or older within the school year exhibit differential time allocation. Building on previous findings showing that relatively younger students perform worse in school, the study shows that relatively younger students are aware of their poor performance in school and feel more strain as a result. Nevertheless, there are no clear differences to be found in terms of time spent on homework, while relatively younger students spend more time watching television and less time on sports activities. Thus, the results suggest that the lower learning outcomes are not associated with different time allocations between school-related activities and non-school-related activities.
Chapter 3 analyses how individual ability and labour market prospects affect study behaviour. The theoretical modelling predicts that both determinants increase study effort. The empirical investigation is based on cross-sectional data from the National Educational Panel Study (NEPS) and includes thousands of students in Germany. The analyses show that more gifted students exhibit lower subjective effort levels and invest less time in self-study. In contrast, very good labour market prospects lead to more effort exerted by the student, both qualitatively and quantitatively. The potential endogeneity problem is taken into account by using regional unemployment data as an instrumental variable.
Part II includes two labour economic studies on determinants of overtime. Both studies belong to the field of industrial relations, as they focus on union membership on the one hand and the interplay of works councils and collective bargaining coverage on the other.
Chapter 4 shows that union members work less overtime than non-members do. The econometric approach takes the problem of unobserved heterogeneity into account but provides no evidence that this issue affects the results. Different channels that could lead to this relationship are analysed by examining relevant subgroups separately. For example, the effect of union membership can also be observed in establishments with works councils and for workers who are very likely to be covered by collective bargaining agreements. The study concludes that the observed effect arises because union membership can protect workers from correspondingly increased working time demands by employers.
Chapter 5 builds on previous studies showing a negative effect of works councils on overtime. In addition to co-determination by works councils at the firm level, collective bargaining coverage is an important factor in the German industrial relations system. Corresponding data was not available in the SOEP for quite some time. Therefore, the study uses recent SOEP data, which also contains information on collective bargaining coverage. A cross-sectional analysis is conducted to examine the effects of works councils in establishments with and without collective bargaining coverage. Similar to studies analysing other outcome variables, the results show that the effect of works councils exists only for employees covered by a collective bargaining agreement.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through a shift from mass production to customization. Various approaches to workflow flexibility exist that require either extensive knowledge acquisition and modelling effort or active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation seeks to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without demanding that the process participant adapt the workflow model. However, the implementation of this approach has been little researched so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, which integrates procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For handling deviations, the methodology of case-based reasoning fits well, based on its premise that similar problems have similar solutions. Thus, previously made experiences are transferred to the problem currently at hand, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
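As an illustration of the core idea of such an engine, here is a minimal sketch of a worklist computation over purely sequential precedence constraints. The function name and the task names are hypothetical and not taken from the thesis; real deviation handling strategies would additionally modify the constraint set.

```python
def worklist(tasks, precedences, completed):
    """Compute the set of tasks a participant may execute next.

    precedences: set of (a, b) pairs meaning task a must be
    completed before task b may start (a declarative, purely
    sequential constraint model).
    """
    eligible = set()
    for t in tasks:
        if t in completed:
            continue
        # t is eligible iff all of its predecessors are completed
        if all(a in completed for (a, b) in precedences if b == t):
            eligible.add(t)
    return eligible

tasks = {"record", "assess", "fix", "verify"}
prec = {("record", "assess"), ("assess", "fix"), ("fix", "verify")}
print(worklist(tasks, prec, {"record"}))  # {'assess'}
```

After each executed (or deviating) task, the engine would recompute this set against the possibly modified constraint model and propose it as the new worklist.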
The second major component of the proposed approach constitutes the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed, a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
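One of the sequence similarity measures that such a retrieve phase could combine is, for example, a normalised longest common subsequence over task sequences. This is a generic sketch with hypothetical task names, not the thesis's actual measure:

```python
def lcs_similarity(a, b):
    """Similarity of two task sequences in [0, 1], based on the
    length of their longest common subsequence (LCS), computed
    with standard dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n) if max(m, n) else 1.0

s = lcs_similarity(["record", "assess", "fix"], ["record", "fix", "verify"])
# LCS is ["record", "fix"], so s == 2/3
```

A composite measure would weight several such components and add deviation-specific characteristics on top.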
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential for transferring the approach to other domains and applying it in practice, which is part of future work.
In conclusion, the contributions are summarized and research perspectives are pointed out.
Some of the largest firms in the DACH region (Germany, Austria, Switzerland) are (partially) owned by a foundation and/or a family office, such as Aldi, Bosch, or Rolex. Despite their growing importance, prior research neglected to analyze the impact of these intermediaries on the firms they own. This dissertation closes this research gap by contributing to a deeper understanding of two increasingly used family firm succession vehicles, through four empirical quantitative studies. The first study focuses on the heterogeneity in foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, it seems that this negative effect is stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family-offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private equity-owned firms. 
Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation furnish valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
Allocating scarce resources efficiently is a major task in mechanism design. One of the most fundamental problems in mechanism design theory is the problem of selling a single indivisible item to bidders with private valuations for the item. In this setting, the classic Vickrey auction (Vickrey, 1961) describes a simple mechanism to implement a social welfare maximizing allocation.
The Vickrey auction for a single item asks every buyer to report its valuation and allocates the item to the highest bidder for a price of the second highest bid. This auction features some desirable properties, e.g., buyers cannot benefit from misreporting their true value for the item (incentive compatibility) and the auction can be executed in polynomial time.
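The single-item second-price rule described above is simple enough to sketch directly; bidder names are illustrative:

```python
def vickrey_auction(bids):
    """Single-item second-price (Vickrey) auction.

    bids: dict mapping bidder name -> reported valuation.
    The highest bidder wins and pays the second-highest bid,
    which makes truthful bidding a dominant strategy."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid
    return winner, price

winner, price = vickrey_auction({"a": 10, "b": 7, "c": 3})
# winner == "a", price == 7
```

The winner's payment does not depend on its own bid, which is precisely why misreporting cannot pay off.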
However, when there is more than one item for sale and buyers' valuations for sets of items are not additive, or the set of feasible allocations is constrained, then constructing mechanisms that implement efficient allocations and have polynomial runtime might be very challenging. Consider a single seller selling $n \in \mathbb{N}$ heterogeneous indivisible items to several bidders. The Vickrey-Clarke-Groves auction generalizes the idea of the Vickrey auction to this multi-item setting. Naturally, every bidder has an intrinsic value for every subset of items. As in the Vickrey auction, bidders report their valuations (now for every subset of items!). Then, the auctioneer computes a social welfare maximizing allocation according to the submitted bids and charges each buyer the social cost that its winning imposes on the rest of the buyers. (This is the analogue of charging the second-highest bid to the winning bidder in the single-item Vickrey auction.) The Vickrey-Clarke-Groves auction is also incentive compatible, but it poses some problems: for $n = 40$, say, each bidder would have to submit $2^{40}-1$ values (one value for each nonempty subset of the ground set). Thus, asking every bidder for its valuation might be impossible due to time complexity issues. Therefore, even though the Vickrey-Clarke-Groves auction implements a social welfare maximizing allocation in this multi-item setting, it might be impractical, and there is a need for alternative approaches to implement social welfare maximizing allocations.
This dissertation presents the results of three independent research papers, each tackling the problem of implementing efficient allocations in a different combinatorial setting.
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. Finally, PFR gave rise to the precision mental health research movement that is trying to tailor treatments to individual patients by making data-driven and algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as the selection of treatment strategies and adaptation of treatment. The present work summarizes three studies that aim to generate different prediction models for treatment personalization that can be applied to practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrated the potential of ML to enhance dropout predictions, but also highlighted that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I’s findings to enhance the prediction of dropout rates. Data from the initial two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm. 
The findings demonstrated that the model was significantly more effective at predicting dropout when using observer ratings with a Cohen’s d of up to .65 and more effective than the model in Study I, despite the smaller sample (N = 259). These results indicated that generating models could be improved by employing various data sources, which provide better foundations for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model using data from 794 cases that experienced a SG. A control group of the same size was also used to quantify and relativize the identified predictors by their general influence on therapy outcomes. The results indicated that there are seven key predictors that have varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggested that a directive approach is more likely to lead to better outcomes after an SG, and that alliance ruptures can be effectively compensated for. However, these effects
were reversed in the control group. The results of the three studies are discussed regarding their usefulness to support clinical decision-making and their implications for the implementation of precision mental health.
This dissertation is concerned with research on motor memory. We pursue the question of whether analogies to the contextual and inhibitory effects known from declarative memory can be found there.
The first of three peer-reviewed articles addresses the general relevance of external context features for motor memory retrieval. We varied two different sets of motor sequences along a large number of such features. Significantly different recall performances pointed to a context dependency of motor content. Recall performance varied along the serial output position: after a context change, recall performance remained almost stable over the course of retrieval, whereas when the context was retained, it quickly dropped significantly.
The two further peer-reviewed articles then turn to the inhibition of motor sequences. In the second article, we examined three sets of motor sequences, executed with different hands, for selective directed forgetting. The forgetting group showed this effect only when the same hand was used for sets two and three, so that the interference potential between these lists was high. When this potential was comparatively low, because the two sets were executed with different hands, no selective directed forgetting occurred. This points to cognitive inhibition as the causal process.
In the third article, finally, we investigated the effects of deliberate cognitive suppression of both memory retrieval and execution in a motor adaptation of the think/no-think (TNT) paradigm (Anderson & Green, 2001). When the sequences in Experiment 1 had initially been trained more intensively, deliberately suppressed (no-think) motor representations showed a marked slowing in their accessibility and, as a tendency, also in their execution, compared to baseline sequences. When the sequences in Experiment 2 were only moderately trained, by contrast, they were also recalled more poorly and executed markedly more slowly. Deliberate cognitive suppression can thus influence motor memory representations and their execution.
Our three articles confirm motor analogies of the context and inhibition effects known from declarative memory. We unambiguously attribute selective directed forgetting of motor content to inhibition and, beyond that, confirm effects of the deliberate suppression of motor memory representations.
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention in recent years. Despite numerous indications of its relevance, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how makers discover, evaluate, and exploit entrepreneurial opportunities. Moreover, there is still controversy, both among policy-makers and within the maker community itself, about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings of this dissertation expand research in the field of the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics, and their motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved.
In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
The aim of dynamic microsimulations is to simulate the development of systems through the behaviour of their individual components, enabling comprehensive scenario-based analyses. In economics and the social sciences, the focus usually lies on populations consisting of persons and households. Since political and economic decisions are mostly taken at the local level, small-area information is also required to derive targeted policy recommendations. This, in turn, confronts researchers with major challenges in the process of building regionalised simulation models. This process ranges from generating suitable base datasets, through capturing and implementing the dynamic components, to evaluating the results and quantifying uncertainties. In this thesis, selected components of particular relevance to regionalised microsimulations are described and systematically analysed.
Chapter 2 first presents theoretical and methodological aspects of microsimulations, providing a comprehensive overview of the different types and implementation options of dynamic modelling. The focus lies on the fundamentals of capturing and simulating states and state changes, together with the associated structural aspects of the simulation process.
For simulating state changes as well as for extending the data basis, logistic regression models are primarily used to capture population structures at the micro level and subsequently predict them on a probabilistic basis. Estimation relies in particular on sample data, which, besides a limited sample size, usually allow no or only insufficient regional differentiation. Predicted probabilities can therefore deviate considerably from known totals. To harmonise the predictions with these totals, methods for adjusting probabilities, so-called alignment methods, can be applied. Although various approaches are described in the literature, little is known about the effect of these procedures on model quality. To assess different techniques, Chapter 3 implements them in comprehensive simulation studies under various scenarios. It is shown that incorporating additional information into the modelling process yields clear improvements both in parameter estimation and in probability prediction. Moreover, small-area probabilities can thereby be generated even when regional identifiers are missing from the modelling data. In particular, maximising the likelihood of the underlying regression model subject to the constraint that the known totals are met performs exceptionally well in all simulation studies.
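As a minimal illustration of the alignment idea, one simple scheme shifts all predicted probabilities by a common constant on the logit scale until their sum matches the known total. This sketch is only one such variant, not the constrained maximum likelihood approach favoured in the thesis; the bisection bracket is an assumption:

```python
import math

def align_probabilities(p, total, iters=60):
    """Shift predicted probabilities by a common constant on the
    logit scale so that their sum matches a known total.
    Found by bisection; preserves the ranking of the units."""
    def shifted_sum(delta):
        # apply the same logit shift delta to every probability
        return sum(1.0 / (1.0 + math.exp(-(math.log(q / (1.0 - q)) + delta)))
                   for q in p)
    lo, hi = -20.0, 20.0  # assumed bracket for the shift
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shifted_sum(mid) < total:
            lo = mid
        else:
            hi = mid
    delta = 0.5 * (lo + hi)
    return [1.0 / (1.0 + math.exp(-(math.log(q / (1.0 - q)) + delta)))
            for q in p]

aligned = align_probabilities([0.2, 0.4, 0.6], total=1.8)
# sum(aligned) is approximately 1.8, ordering of units preserved
```

Because the shift is uniform on the logit scale, relative differences between units survive the adjustment, which is the main appeal of this family of methods.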
The implementation of regional mobility proves to be one of the most influential components of regionalised microsimulations. At the same time, migration receives no or only insufficient attention in many microsimulation models. Because of its direct influence on the entire population structure, however, ignoring it leads to severe distortions even over a short simulation horizon. While for global models integrating migration across national borders suffices, regionalised models must also reproduce internal migration as comprehensively as possible. To this end, Chapter 4 develops concepts for migration modules that allow, on the one hand, independent simulation on regional subpopulations and, on the other, a comprehensive reproduction of migration movements within the entire population. To allow household structures to be taken into account and to ensure the plausibility of the data, an algorithm for calibrating household probabilities is proposed that enables benchmarks to be met at the individual level. A retrospective evaluation of the simulated migration movements demonstrates the functionality of the migration concepts. Furthermore, by projecting the population into future periods, divergent developments of population figures under the different migration concepts are analysed.
A particular challenge in dynamic microsimulations is the quantification of uncertainty. Owing to the complexity of the overall structure and the heterogeneity of the components, classical methods for measuring uncertainty are often no longer applicable. To quantify different influencing factors, Chapter 5 proposes variance-based sensitivity analyses, whose enormous flexibility also allows direct comparisons between highly diverse components. Sensitivity analyses prove highly suitable not only for capturing uncertainty but also for the direct analysis of different scenarios, in particular for evaluating joint effects. Simulation studies illustrate their application in the concrete context of dynamic models. They reveal, on the one hand, large differences across target values and simulation periods and, on the other, that the degree of regional differentiation must always be taken into account.
Chapter 6 summarises the findings of this thesis and outlines potential directions for future research.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
No Longer Printing the Legend: The Aporia of Heteronormativity in the American Western (1903-1969)
(2023)
This study critically investigates the U.S.-American Western and its construction of sexuality and gender, revealing that the heteronormative matrix that is upheld and defended in the genre is consistently preceded by the exploration of alternative sexualities and ways to think gender beyond the binary. The endeavor to naturalize heterosexuality seems to be baked into the formula of the U.S.-Western. However, as I show in this study, this endeavor relies on an aporia: the U.S.-Western can only ever attempt to naturalize gender by constructing it first, and hence inevitably and simultaneously constructs evidence that supports the opposite, namely the unnaturalness and contingency of gender and sexuality.
My study relies on the works of Raewyn Connell, Pierre Bourdieu, and Judith Butler, and amalgamates in its methodology established approaches from film and literary studies (i.e., close readings) with a Foucauldian understanding of discourse and discourse analysis, which allows me to relate individual texts to the cultural, socio-political, and economic contexts that invariably informed the production and reception of any filmic text. In an analysis of 14 U.S.-Westerns (excluding three excursions) that appeared between 1903 and 1969, I give ample and minute narrative and film-aesthetic evidence to reveal the complex and contradictory construction of gender and sexuality in the U.S.-Western, aiming to reveal both the normative power of those categories and their structural instability and inconsistency.
This study proves that the Western up until 1969 did not find a stable pattern to represent the gender binary. The U.S.-Western is not necessarily always looking to confirm or stabilize governing constructs of (gendered) power; it does, however, invariably explore and negotiate their legitimacy. Heterosexuality and male hegemony are never natural, self-evident, incontestable, or preordained. Quite the opposite: the U.S.-Western repeatedly, and in a surprisingly diverse and versatile way, reveals the illogical constructedness of the heteronormative matrix.
My study therefore offers a fresh perspective on the genre and shows that the critical exploration and negotiation of the legitimacy of heteronormativity as a way to organize society is constitutive for the U.S.-Western. It is the inquiry – not necessarily the affirmation – of the legitimacy of this model that gives the U.S.-Western its ideological currency and significance as an artifact of U.S.-American popular culture.