This thesis consists of four closely related chapters examining China's rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises in the 1970s was a success and led to its ascent as the world's largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China accounted for almost 60% of global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China most-favored-nation (MFN) tariff status, China-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms' decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent's decision whether to deter entry, the firms' production choices, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: first, free-market Cournot competition; second, a situation in which the government determines technology adoption rates; third, a scenario in which the government controls both technology and production; and finally, a scenario where the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes that there are two exporting firms in two different countries that sell a product to a third country. We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 first investigates a scenario where only one government offers a fixed-cost subsidy, followed by an analysis of the case where both governments simultaneously provide financial support. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China's leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China's remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer to the aluminium industry that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. This analysis identifies and discusses several windows of opportunity that Chinese aluminium producers took advantage of. The first set of opportunities arose during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities opened at about the same time, when China liberalized its economy in 1978. The substantial demand for aluminium in China is driven by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001; both events contributed to a surge in Chinese aluminium consumption. Internally, China's investment-led growth model further boosted its aluminium demand. Additional China-specific factors, such as low labor costs and the abundance of coal as an energy source, offered Chinese firms competitive advantages over international players. A further window of opportunity arose from Chinese government policies, including phasing out old technology, providing subsidies, and gradually opening the economy to enhance domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, that compete in the same market. We assume that the two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent moves in stage zero, where it can decide whether to deter the newcomer's entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to partially or completely adopt the latest technology. Our results suggest the following: First, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Second, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost to the new technology's fixed-cost parameters is sufficiently high. Third, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we derive the optimal new-technology adoption rate for the newcomer.
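The strategic core of such a setting can be pictured with a stylized numerical sketch. All functional forms and parameter values below (linear inverse demand, a quadratic adoption cost) are illustrative assumptions, not the thesis's specification: the entrant's marginal cost falls with its adoption rate r of the new technology, and its equilibrium profit determines both the optimal r and whether deterrence can pay off.

    # Stylized Cournot duopoly: incumbent on old technology vs. an entrant
    # adopting a share r of the new technology. All functional forms and
    # parameters are illustrative assumptions, not the thesis's model.
    a, b = 100.0, 1.0          # inverse demand p = a - b*(q1 + q2)
    c_old = 40.0               # incumbent's (outdated-technology) marginal cost
    c_new, F = 20.0, 500.0     # entrant's best achievable marginal cost, adoption-cost scale

    def entrant_cost(r):
        """Marginal cost falls linearly in the adoption rate r in [0, 1]."""
        return c_old - r * (c_old - c_new)

    def cournot(c1, c2):
        """Equilibrium quantities for linear demand and constant marginal costs."""
        return (a - 2*c1 + c2) / (3*b), (a - 2*c2 + c1) / (3*b)

    best_r, best_profit = 0.0, float("-inf")
    for step in range(101):                 # grid search over adoption rates
        r = step / 100
        q1, q2 = cournot(c_old, entrant_cost(r))
        price = a - b * (q1 + q2)
        profit = (price - entrant_cost(r)) * q2 - F * r**2   # quadratic adoption cost
        if profit > best_profit:
            best_r, best_profit = r, profit

    print(f"entrant's optimal adoption rate r* = {best_r:.2f}, profit = {best_profit:.1f}")
    # Entry is deterred whenever this profit, net of the entry cost, is negative.

With these invented numbers the optimum is interior (r* of about 0.83): fuller adoption lowers marginal cost and raises the Cournot profit, but the quadratic adoption cost eventually dominates.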
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and those preferred by a benevolent government upon the firms' market entry. To address the question of whether the technology choices of firms and government coincide, we analyze several scenarios that differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. In particular, when only a small number of firms are interested in entering the market, greater government influence tends to lead to higher technology adoption rates of firms. Conversely, in scenarios with a larger number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms is highest when the government plays no role.
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter a market may be seen as a favorable policy choice by governments around the world thanks to its ability to enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant-industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter utilizes the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms' production levels, technological innovations, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, followed by a scenario where just one government provides a financial subsidy for its domestic firm, and finally a situation where both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aiming to maximize social welfare by providing fixed-cost subsidies to their respective firms find themselves in a Chicken game. Regarding technological innovation, subsidies lead to a higher technology adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate of a firm occurs when the firm does not receive a fixed-cost subsidy but its competitor does. Furthermore, global welfare benefits most when both exporting countries grant fixed-cost subsidies, and the welfare level with only one subsidizing country is higher than that without any subsidies.
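The reported Chicken structure of the subsidy game can be made concrete with a schematic payoff matrix. The ordinal payoffs (4 best, 1 worst) are chosen purely to exhibit that structure and are not values derived in the chapter; $S$ denotes granting the fixed-cost subsidy and $N$ withholding it:

\[
\begin{array}{c|cc}
\text{home} \backslash \text{foreign} & S & N \\ \hline
S & (1,\,1) & (4,\,2) \\
N & (2,\,4) & (3,\,3)
\end{array}
\]

Each government most prefers to be the sole subsidizer, and mutual subsidization is the worst outcome for both, so the two pure-strategy equilibria are the asymmetric ones, $(S,N)$ and $(N,S)$: precisely the Chicken pattern.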
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged through concurrent programming, which is a challenge for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, debugging concurrency issues and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we designed and developed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
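The flavor of the k-selection experiments can be sketched in a few lines. The feature layout, the use of k-means, and the silhouette score standing in for the four validation indices are all illustrative assumptions, not the thesis's actual metrics or corpus:

    # Sketch: choose the number of thread clusters k via a validation index.
    # k-means plus the silhouette score serve as stand-ins for the thesis's
    # clustering strategy and its four cluster validation indices.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    # Toy thread-aware runtime metrics, one row per thread,
    # e.g. (runtime share, blocked share) -- an assumed feature layout.
    threads = np.vstack([
        rng.normal((0.8, 0.1), 0.05, (8, 2)),   # hot, rarely blocked threads
        rng.normal((0.2, 0.6), 0.05, (12, 2)),  # cool, often blocked threads
    ])

    best_k, best_score = None, -1.0
    for k in range(2, min(6, len(threads))):    # k is bounded by the thread count
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(threads)
        score = silhouette_score(threads, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(f"optimal k = {best_k} (silhouette = {best_score:.2f})")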
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, in a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
Differential equations are based on local interactions and yield solutions that necessarily possess a certain amount of regularity. Various natural phenomena are not well described by such local models. An important class of models that describe long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
While the range of applications is vast, the applicability of nonlocal models faces obstacles such as the high computational and algorithmic complexity of fundamental tasks. One of them is the assembly of finite element discretizations of truncated nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code that allows one to compute finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
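To see why assembly is costly, consider a deliberately crude 1D sketch of a discretized nonlocal diffusion operator. The midpoint-rule quadrature, the constant kernel, and its scaling are simplifying assumptions; the thesis's code performs a proper finite element assembly with quadrature over pairs of elements:

    # Crude 1D sketch: discretize the nonlocal diffusion operator
    #   (L u)(x) = integral over {|x - y| <= delta} of (u(y) - u(x)) * gamma dy
    # on a uniform grid with a midpoint rule (illustration only).
    import numpy as np

    n, delta = 200, 0.1
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h        # cell midpoints in (0, 1)
    gamma = 3.0 / delta**3              # constant kernel, scaled so L mimics the Laplacian

    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[i] - x[j]) <= delta:
                w = gamma * h           # midpoint-rule weight
                A[i, i] += w            # diagonal collects the kernel mass
                A[i, j] -= w            # coupling to every cell within the horizon

    # A approximates -L and is positive semidefinite, like a discrete -Laplacian,
    # but with bandwidth ~ delta/h instead of 3: the nonlocal horizon makes the
    # matrix far denser, which drives the assembly and solver costs.
    u = np.sin(np.pi * x)
    print((A @ u)[:3])                  # action of the discrete nonlocal operator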
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned, which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain-decomposition-type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions, which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
Both nationally and internationally, the increasing digitalization of processes is being demanded. The heterogeneity and complexity of the resulting systems make participation difficult for regular user groups who, for example, have no programming expertise or background in information technology. Smart contracts are a case in point: their programming is complex, and any errors translate directly into monetary loss through the direct link to the underlying cryptocurrency. This thesis presents an alternative protocol for cyber-physical contracts that is particularly well suited to human interaction and can also be understood by regular user groups. The focus lies on the transparency of the agreements, and neither a blockchain nor a digital currency based on one is used. Accordingly, the contract model of this thesis can be understood as a comprehensible link between two parties that securely connects their different systems and thereby promotes self-organization. This connection can either run automatically with computer support or be carried out manually. In contrast to smart contracts, processes can thus be digitalized step by step. The agreements themselves can be used for communication, but also for legally binding contracts. The thesis situates the new concept among related strands such as Ricardian and smart contracts and defines goals for the protocol, which are realized in the form of a reference implementation. Both the protocol and the implementation are described in detail and complemented by an extension of the application that enables users in regions without a direct Internet connection to participate in such contracts. Furthermore, the evaluation considers the legal framework, the transfer of the protocol to smart contracts, and the performance of the implementation.
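A minimal sketch of what such a transparent, blockchain-free agreement record could look like; the field names and the hash-based stand-in for real digital signatures are illustrative assumptions, not the protocol specified in the thesis:

    # Illustrative sketch of a human-readable, blockchain-free agreement.
    # Field names and the signing scheme are assumptions for illustration.
    import hashlib, json
    from dataclasses import dataclass, field

    @dataclass
    class Agreement:
        party_a: str
        party_b: str
        terms: str                          # human-readable contract text
        signatures: dict = field(default_factory=dict)

        def digest(self) -> str:
            """Stable fingerprint of the agreed terms for both parties to sign."""
            payload = json.dumps([self.party_a, self.party_b, self.terms])
            return hashlib.sha256(payload.encode()).hexdigest()

        def sign(self, party: str, secret: str) -> None:
            # Stand-in for a real digital signature over the digest.
            self.signatures[party] = hashlib.sha256(
                (secret + self.digest()).encode()).hexdigest()

        def is_concluded(self) -> bool:
            return {self.party_a, self.party_b} <= set(self.signatures)

    deal = Agreement("alice", "bob", "Bob mows Alice's lawn every Friday in July.")
    deal.sign("alice", "alice-secret")
    deal.sign("bob", "bob-secret")
    print(deal.is_concluded())              # True once both parties have signed

Because the record is plain, self-describing data, it can be exchanged and verified manually or automatically, which is what allows processes to be digitalized step by step.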
Social enterprises pursue at least two goals: fulfilling their social or ecological mission and meeting financial targets. Tensions can arise between these goals. If, within this field of tension, an enterprise repeatedly decides in favor of its financial goals, mission drift occurs: the prioritization of financial goals crowds out the social mission. Although the phenomenon has been observed repeatedly in practice and described in single-case analyses, there has so far been little research on mission drift. The focus of this thesis is on closing this research gap and deriving its own findings on the triggers and drivers of mission drift in social enterprises. Particular attention is paid to behavioral-economic theories and the mixed-gamble logic. According to this logic, decisions always involve simultaneous gains and losses, so decision-makers must weigh the fear of losses against the prospect of gains. The model is used to gain a new theoretical perspective on the trade-off between social and financial goals and on mission drift. A conjoint experiment generates data on the decision behavior of social entrepreneurs, centering on the trade-off between social and financial goals in different scenarios (crisis and growth situations). Using a purpose-built sample of 1,222 social enterprises from Germany, Austria, and Switzerland, 187 participants were recruited for the study. The results of this thesis show that a crisis situation can trigger mission drift in social enterprises, because in this scenario the greatest importance is attached to financial goals. For a growth situation, by contrast, no such evidence was found. In addition, there are further influencing factors that can reinforce the financial orientation, namely the founder identities of the social entrepreneurs, a high level of innovativeness of the enterprise, and certain stakeholders. The thesis concludes with an extensive discussion of the results. Recommendations are given on how social enterprises can best remain true to their goals, and the limitations of the study as well as avenues for future research on mission drift are pointed out.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
(2024)
Visualizing brain simulation data is in many respects a challenging task. First, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating all their different kinds. Second, the analysis process changes rapidly while hypotheses about the results are being formed. Third, the scale of data entities in these heterogeneous datasets is manifold, reaching from single neurons to brain areas interconnecting millions of them. Fourth, the heterogeneous data consist of a variety of modalities, e.g., time series data and connectivity data; single parameters and sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions; geometrical data, hierarchies, and textual descriptions, all on mostly different scales. Fifth, visualization includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating, and interacting with brain simulation data. The scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views leveraging this architecture, extending its ecosystem, and resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
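The coordination core of any CMV system can be sketched compactly. The service and callback names below are invented for illustration; the thesis's service-based architecture and its semantic reasoning are far richer:

    # Minimal sketch of view coordination in a CMV system: views register
    # with a shared selection service and are notified on every change,
    # so all views stay in sync. Names are illustrative assumptions.
    from typing import Callable

    class SelectionService:
        def __init__(self):
            self._subscribers: list[Callable[[set], None]] = []

        def subscribe(self, callback: Callable[[set], None]) -> None:
            self._subscribers.append(callback)

        def select(self, neuron_ids: set) -> None:
            for notify in self._subscribers:   # coordinate all registered views
                notify(neuron_ids)

    service = SelectionService()
    service.subscribe(lambda ids: print("time-series view shows", sorted(ids)))
    service.subscribe(lambda ids: print("connectivity view highlights", sorted(ids)))
    service.select({7, 42})                    # both views update together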
Some of the largest firms in the DACH region (Germany, Austria, Switzerland) are (partially) owned by a foundation and/or a family office, such as Aldi, Bosch, or Rolex. Despite their growing importance, prior research has neglected to analyze the impact of these intermediaries on the firms they own. This dissertation closes this research gap and contributes to a deeper understanding of two increasingly used family firm succession vehicles through four empirical quantitative studies. The first study focuses on the heterogeneity among foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, this negative effect appears to be stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private-equity-owned firms. Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation provide valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
This thesis deals with REITs, their capital structure, and the effects that regulatory requirements might have on leverage. The data result from a combination of Thomson Reuters data with hand-collected data on REIT status, regulatory information, and law variables. Overall, leverage is analysed across 20 countries over the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reportings, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Statistically significant differences in means between non-REITs and REITs motivate further investigation. My results show that variables beyond traditional capital structure determinants impact the leverage of REITs. I find that explicit restrictions on leverage and the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from the EPRA reportings are mandatory. I test various combinations of regulatory variables, which show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation specifying a maximum leverage ratio, in addition to mandatory high dividend distributions, have on average lower leverage ratios. Furthermore, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that the regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This relationship highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analysis: results based on an event study show that REITs have statistically lower leverage ratios than non-REITs, and a structural break model reveals that REITs increase their leverage ratios in the years prior to obtaining REIT status. Consequently, the ex ante time frame is characterised by a bunkering and adaptation process, followed by the transformation at the event. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability - social, economic, and environmental - in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a niche-market-leading subgroup of Mittelstand firms. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined. A higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed. A distinction is made between symbolic (compensation of CO₂ emissions) and substantive (reduction of CO₂ emissions) decarbonization strategies. Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive decarbonization strategies, and with the relationship moderated by family ownership.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, contribute significantly to Germany's economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially family influence, for the digital transformation of the business model and the pursuit of growth goals through digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it shows empirically that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
This thesis comprises four research papers on the economics of education and industrial relations that contribute to the field of empirical economic research. All of the papers focus on analysing how much time individuals spend on specific activities. The allocation of available time resources is a decision that individuals make throughout their lifetime. In this thesis, we consider individuals at different stages of their lives - students at school, university students, and dependent employees at the workplace.
Part I includes two research studies on students' behaviour in secondary and tertiary education.
Chapter 2 explores whether students who are relatively younger or older within the school year exhibit differential time allocation. Building on previous findings showing that relatively younger students perform worse in school, the study shows that relatively younger students are aware of their poor performance in school and feel more strain as a result. Nevertheless, there are no clear differences to be found in terms of time spent on homework, while relatively younger students spend more time watching television and less time on sports activities. Thus, the results suggest that the lower learning outcomes are not associated with different time allocations between school-related activities and non-school-related activities.
Chapter 3 analyses how individual ability and labour market prospects affect study behaviour. The theoretical modelling predicts that both determinants increase study effort. The empirical investigation is based on cross-sectional data from the National Educational Panel Study (NEPS) and includes thousands of students in Germany. The analyses show that more gifted students exhibit lower subjective effort levels and invest less time in self-study. In contrast, very good labour market prospects lead to more effort exerted by the student, both qualitatively and quantitatively. The potential endogeneity problem is taken into account by using regional unemployment data as an instrumental variable.
Part II includes two labour economic studies on determinants of overtime. Both studies belong to the field of industrial relations, as they focus on union membership on the one hand and the interplay of works councils and collective bargaining coverage on the other.
Chapter 4 shows that union members work less overtime than non-members do. The econometric approach takes the problem of unobserved heterogeneity into account but provides no evidence that this issue affects the results. Different channels that could lead to this relationship are analysed by examining relevant subgroups separately. For example, the effect of union membership can also be observed in establishments with works councils and for workers who are very likely to be covered by collective bargaining agreements. The study concludes that the observed effect arises because union membership can protect workers from increased working-time demands by employers.
Chapter 5 builds on previous studies showing a negative effect of works councils on overtime. In addition to co-determination by works councils at the firm level, collective bargaining coverage is an important factor in the German industrial relations system. Corresponding data were not available in the SOEP for quite some time. Therefore, the study uses recent SOEP data, which also contain information on collective bargaining coverage. A cross-sectional analysis is conducted to examine the effects of works councils in establishments with and without collective bargaining coverage. Similar to studies analysing other outcome variables, the results show that the effect of works councils exists only for employees covered by a collective bargaining agreement.
Computer simulation has become established in a two-fold way: as a tool for planning, analyzing, and optimizing complex systems, but also as a method for the scientific investigation of theories and thus for the generation of knowledge. Generated results often serve as a basis for investment decisions, e.g., road construction and factory planning, or provide evidence for scientific theory-building processes. To ensure the generation of credible and reproducible results, it is indispensable to conduct systematic and methodologically sound simulation studies. A variety of procedure models exist that structure and predetermine the process of a study. As a result, experimenters are often required to carry out a large number of experiments repetitively but thoroughly. Moreover, the process is not sufficiently specified, and many important design decisions are still left to the experimenter, which might result in an unintentional bias of the results.
To facilitate the conduct of simulation studies and to improve both the replicability and reproducibility of the generated results, this thesis proposes a procedure model for carrying out Hypothesis-Driven Simulation Studies, an approach that assists the experimenter during the design, execution, and analysis of simulation experiments. In contrast to existing approaches, a formally specified hypothesis becomes the key element of the study, so that each step of the study can be adapted and executed to contribute directly to the verification of the hypothesis. To this end, the FITS language is presented, which enables the specification of hypotheses as assumptions regarding the influence specific input values have on the observable behavior of the model. The proposed procedure model systematically designs the relevant simulation experiments, runs, and iterations that must be executed to provide evidence for the verification of the hypothesis. Generated outputs are then aggregated for each defined performance measure to allow for the application of statistical hypothesis testing approaches. Hence, the proposed assistance only requires the experimenter to provide an executable simulation model and a corresponding hypothesis in order to conduct a sound simulation study. With respect to the implementation of the proposed assistance system, this thesis presents an abstract architecture and provides formal specifications of all required services.
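The core loop of such an assistance system can be sketched as follows. The toy model, the hypothesis encoded in plain Python, and the choice of Welch's t-test are illustrative assumptions; the thesis specifies hypotheses in the FITS language and derives the experiment design from them:

    # Sketch of a hypothesis-driven simulation study: a hypothesis about the
    # effect of an input factor drives the replications and the final test.
    import random
    from scipy import stats

    def simulate(order_delay: int, seed: int) -> float:
        """Toy stochastic stand-in for an executable simulation model."""
        rng = random.Random(seed)
        return order_delay * 1.5 + rng.gauss(0, 1)   # performance measure

    # Hypothesis: raising order_delay from 1 to 3 increases the measure.
    baseline = [simulate(order_delay=1, seed=s) for s in range(30)]
    treatment = [simulate(order_delay=3, seed=1000 + s) for s in range(30)]

    t, p = stats.ttest_ind(treatment, baseline, equal_var=False,
                           alternative="greater")
    print(f"one-sided Welch t-test: p = {p:.4f}",
          "-> evidence for the hypothesis" if p < 0.05 else "-> no evidence")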
To evaluate the concept of Hypothesis-Driven Simulation Studies, two case studies are presented from the manufacturing domain. The introduced approach is applied to a NetLogo simulation model of a four-tiered supply chain. Two scenarios as well as corresponding assumptions about the model behavior are presented to investigate conditions for the occurrence of the bullwhip effect. Starting from the formal specification of the hypothesis, each step of a Hypothesis-Driven Simulation Study is presented in detail, with specific design decisions outlined, and generated intermediate data as well as final results illustrated. With respect to the comparability of the results, a conventional simulation study is conducted that serves as reference data. The approach proposed in this thesis is beneficial for both practitioners and scientists: the presented assistance system allows for a more effortless and simplified execution of simulation experiments while ensuring the efficient generation of credible results.
Even though substantial research on Cauchy transforms has been done, there are still many open questions. For example, in the case of representation theorems, i.e. the question when a function can be represented as a Cauchy transform, there is 'still no completely satisfactory answer' ([9], p. 84). There are characterizations for measures on the circle, as presented in the monograph [7], and for general compactly supported measures on the complex plane, as presented in [27]. However, there seems to exist no systematic treatment of the Cauchy transform as an operator on $L_p$ spaces and weighted $L_p$ spaces on the real axis.
This is where this thesis comes in: we are interested in developing several characterizations for the representability of a function by Cauchy transforms of $L_p$ functions. Moreover, we attack the issue of integrability of Cauchy transforms of functions and measures, a topic which is only partly explored (see [43]). We develop different approaches involving Fourier transforms and potential theory and investigate sufficient conditions and characterizations.
For our purposes, we shall need some notation and the concept of Hardy spaces, which are part of the preliminary Chapter 1. Moreover, we introduce Fourier transforms and their complex analogue, namely Fourier-Laplace transforms. These will be of great use due to the close connection between Cauchy and Fourier(-Laplace) transforms.
In the second chapter we begin our research with a discussion of the Cauchy transformation on the classical (unweighted) $L_p$ spaces. We start with the boundary behavior of Cauchy transforms, including an adapted version of the Sokhotski-Plemelj formula. This result turns out to be helpful for determining the image of $L_p(\R)$ under the Cauchy transformation for $p\in(1,\infty).$ The cases $p=1$ and $p=\infty$ play special roles here, which justifies their treatment in separate sections. For $p=1$ we involve the real Hardy space $H_{1}(\R),$ whereas the case $p=\infty$ is attacked by an approach incorporating intersections of Hardy spaces and certain subspaces of $L_{\infty}(\R).$
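For orientation, the central objects in their classical form (normalizations vary across the cited literature): the Cauchy transform of $f \in L_p(\R)$ and the Sokhotski-Plemelj jump relation for its boundary values,

\[
(Cf)(z) \;=\; \frac{1}{2\pi i} \int_{\R} \frac{f(t)}{t-z}\, dt, \qquad z \notin \R,
\]
\[
\lim_{\varepsilon \to 0^+} (Cf)(x \pm i\varepsilon) \;=\; \pm\tfrac{1}{2} f(x) \;+\; \frac{1}{2\pi i}\, \mathrm{p.v.}\!\int_{\R} \frac{f(t)}{t-x}\, dt \qquad \text{for a.e. } x \in \R.
\]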
The third chapter prepares us for the study of the Cauchy transformation on subspaces of $L_{p}(\R).$ We give a short overview of the basic facts about Cauchy transforms of measures and then proceed to Cauchy transforms of functions with support in a closed set $X\subset\R.$ Our goal is to build up the main theory on which we can fall back in the subsequent chapters.
The fourth chapter deals with Cauchy transforms of functions and measures supported by an unbounded interval which is not the entire real axis. For convenience we restrict ourselves to the interval $[0,\infty).$ Bringing the Fourier-Laplace transform into play once again, we deduce complex characterizations of the Cauchy transforms of functions in $L_{2}(0,\infty).$ Moreover, we analyze the behavior of Cauchy transforms on several half-planes and use these results for a fairly general geometric characterization. In the second section of this chapter, we focus on Cauchy transforms of measures with support in $[0,\infty).$ In this context, we derive a reconstruction formula for these Cauchy transforms, holding under fairly general conditions, as well as results on the behavior on the left half-plane. We close this chapter with rather technical real-type conditions and characterizations for Cauchy transforms of functions in $L_p(0,\infty),$ based on an approach in [82].
The most common case of Cauchy transforms, those of compactly supported functions or measures, is the subject of Chapter 5. After complex and geometric characterizations originating from ideas similar to those of the fourth chapter, we adapt a functional-analytic approach from [27] to special measures, namely those with densities with respect to a given complex measure $\mu.$ The chapter closes with a study of the Cauchy transformation on weighted $L_p$ spaces. Here, we choose an ansatz via the finite Hilbert transform on $(-1,1).$
The sixth chapter is devoted to the issue of integrability of Cauchy transforms. Since this topic has no comprehensive treatment in the literature yet, we start with an introduction to weighted Bergman spaces and general results on the action of the Cauchy transformation in these spaces. Afterwards, we combine the theory of Zen spaces with Cauchy transforms, using once again their connection with Fourier transforms. Here, we encounter general Paley-Wiener theorems of the recent past. Lastly, we attack the issue of integrability of Cauchy transforms by means of potential theory. To this end, we derive a Fourier integral formula for the logarithmic energy in one and multiple dimensions and give applications to Fourier and hence Cauchy transforms.
Two appendices are annexed to this thesis. The first one covers important definitions and results from measure theory with a special focus on complex measures. The second appendix contains Cauchy transforms of frequently used measures and functions with detailed calculations.
This dissertation deals with a novel type of branch-and-bound algorithm that differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function instead of adding further constraints. The thesis proves the theoretical correctness of the algorithmic principle for several general classes of problems and evaluates the method on various concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear programs, the thesis presents several problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely good results, and gives an outlook on further fields of application and open research questions.
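The contrast between the two branching styles can be rendered schematically; this sketch with a single penalty weight $\mu$ is an illustrative simplification, not the thesis's exact formulation. For a fractional value $\bar{x}_i$ of an integer variable in the relaxation, classical branch-and-bound splits the feasible set, while penalty-based branching keeps it and modifies the objective:

\[
\text{classical:}\quad x_i \le \lfloor \bar{x}_i \rfloor \quad\text{or}\quad x_i \ge \lceil \bar{x}_i \rceil,
\]
\[
\text{penalty-based:}\quad \min_{x}\; c^\top x + \mu\,\max\{0,\; x_i - \lfloor \bar{x}_i \rfloor\} \quad\text{or}\quad \min_{x}\; c^\top x + \mu\,\max\{0,\; \lceil \bar{x}_i \rceil - x_i\}, \qquad \mu \ge 0.
\]

Both penalty terms are non-negative, as required above; increasing $\mu$ pushes the solution toward the respective integer region without adding constraints.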
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained much attention in recent decades through the shift from mass production to customization. Various approaches to workflow flexibility exist that either require extensive knowledge acquisition and modelling effort or an active intervention during execution and a re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims at supporting process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in the case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence - constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in case of deviations. Furthermore, the combined model, which integrates procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning fits perfectly through its premise that similar problems have similar solutions. Thus, experiences made previously are transferred to the problem currently regarded, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies of tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models to declarative constraints.
One main component of the approach is the constraint-based workflow engine, which uses this declarative model as input for a constraint-solving algorithm. This algorithm computes the worklist that is proposed to the process participant during workflow execution. With predefined deviation-handling strategies that determine how the constraint model is modified in order to restore consistency, the support continues even in the case of deviations.
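A toy version of this worklist computation illustrates the mechanics; the task names, the restriction to simple precedence constraints, and the constraint-dropping strategy are assumptions for illustration only:

    # Toy constraint-based workflow engine: sequential dependencies are kept
    # as declarative precedence constraints, and the worklist is recomputed
    # from whatever has actually been executed -- including deviations.
    constraints = [("excavate", "pour_foundation"),
                   ("pour_foundation", "build_walls"),
                   ("build_walls", "roof")]           # (before, after) pairs
    tasks = {t for pair in constraints for t in pair}

    def worklist(executed: set) -> set:
        """Tasks whose predecessors are all satisfied by the execution trace."""
        return {t for t in tasks - executed
                if all(b in executed for b, a in constraints if a == t)}

    print(worklist(set()))                  # {'excavate'}

    # A deviation: 'build_walls' was started although 'pour_foundation' is open.
    # One simple consistency-restoring strategy drops the violated constraint:
    executed = {"excavate", "build_walls"}
    violated = [(b, a) for b, a in constraints if a in executed and b not in executed]
    constraints = [c for c in constraints if c not in violated]
    print(worklist(executed))               # support continues despite the deviation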
The second major component of the proposed approach is the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods for the reuse phase were developed: a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and a promising potential for investigating the transfer to other domains and the applicability in practice, which is part of future work.
In conclusion, the contributions are summarized and research perspectives are pointed out.
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined based on the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown size of the individual strata in the target population. In the context of weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly scattering weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement of the estimation quality. For those methods whose quality cannot be measured using standard procedures, a procedure for estimating the variance based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when using the methods in practice.
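The flavor of a coverage correction by calibration can be shown in a few lines; all numbers are invented, and the thesis's estimators additionally cover case-number-based reweighting and weight smoothing in full generality:

    # Sketch: correcting design weights for frame errors by calibrating to an
    # external population count. Sampled units flagged as frame errors (e.g.,
    # inactive businesses still on the register) get weight zero; the weights
    # of the remaining units are scaled to reproduce the externally known
    # number of active businesses in the stratum. All numbers are invented.
    N_frame, n = 10_000, 200             # frame size and sample size in the stratum
    design_w = N_frame / n               # original design weight: 50.0
    flags = [True] * 170 + [False] * 30  # True = unit really belongs to the population

    N_active = 8_800                     # external (register-based) count of active units
    valid = sum(flags)
    calibrated_w = N_active / valid      # corrected weight for the valid units
    print(design_w, round(calibrated_w, 1))   # 50.0 -> 51.8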
In the context of two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in the environment of an experimental simulation, it is assessed which approaches are particularly suitable with regard to different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are first the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and second extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
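A generic textbook form of such a composite estimator makes the idea concrete (the specific weighting studied for the German Microcensus may differ):

\[
\hat{\theta}_t^{\,C} \;=\; \alpha\, \hat{\theta}_t \;+\; (1-\alpha)\bigl( \hat{\theta}_{t-1}^{\,C} + \hat{\Delta}_t \bigr), \qquad 0 < \alpha \le 1,
\]

where $\hat{\theta}_t$ is the direct estimate from the current wave and $\hat{\Delta}_t$ estimates the period-to-period change from the overlapping rotation groups. Respondents without a previous interview contribute only to $\hat{\theta}_t$, which is why the data are partially missing from the perspective of the composite estimator.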
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
Coastal erosion describes the displacement of land caused by destructive sea waves, currents, or tides. Due to global climate change and associated phenomena such as melting polar ice caps and changing current patterns of the oceans, which result in rising sea levels or increased current velocities, the need for countermeasures is continuously increasing. Today, major efforts have been made to mitigate these effects using groins, breakwaters, and various other structures.
This thesis develops a novel approach to this problem by applying shape optimization to the obstacles. For this reason, the results of this thesis always involve the following three distinct aspects:
- The selected wave propagation model, i.e. the modeling of wave propagation towards the coastline, using various wave formulations ranging from steady to unsteady descriptions, described from the Lagrangian or Eulerian viewpoint with all its specialties. More precisely, in the Eulerian setting we first investigate a steady Helmholtz equation in the form of a scattering problem, subsequently followed by the shallow water equations in classical form, equipped with porosity, sediment portability, and further subtleties. Second, in a Lagrangian framework, the Lagrangian shallow water equations form the center of interest.
- The chosen discretization, i.e. depending on the nature and peculiarity of the constraining partial differential equation, we choose between finite elements in conjunction with continuous and discontinuous Galerkin methods for investigations in the Eulerian description. In addition, the Lagrangian viewpoint lends itself to mesh-free, particle-based discretizations, where smoothed particle hydrodynamics is used.
- The method for shape optimization with respect to the obstacle's shape over an appropriate cost function, constrained by the solution of the selected wave propagation model. In this sense, we rely on a differentiate-then-discretize approach for free-form shape optimization in the Eulerian set-up and reverse the order in the Lagrangian computations.
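For orientation, the two Eulerian model problems named above in their standard textbook forms (boundary data and the porosity and sediment terms of the thesis are omitted): the Helmholtz scattering problem for the wave field $u$ with wave number $k$,

\[
\Delta u + k^2 u = 0 \quad \text{in } \Omega, \qquad \lim_{r\to\infty} r^{\frac{d-1}{2}} \left( \partial_r u - i k u \right) = 0,
\]

and the one-dimensional shallow water equations for water height $h$ and velocity $v$ over the bathymetry $b$ with gravitational acceleration $g$,

\[
\partial_t h + \partial_x (h v) = 0, \qquad \partial_t (h v) + \partial_x\!\left( h v^2 + \tfrac{1}{2} g h^2 \right) = -\,g h\, \partial_x b.
\]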
Issues in Price Measurement
(2022)
This thesis focuses on the issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are adoptable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
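For reference, the Laspeyres price index between base period $0$ and period $t$ keeps the quantity weights fixed at the base period:

\[
P_L^{0,t} \;=\; \frac{\sum_i p_i^t\, q_i^0}{\sum_i p_i^0\, q_i^0}.
\]

Because the $q_i^0$ are not updated, substitution away from products with above-average price increases is ignored, which is the source of the upward bias that the revision approaches target.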
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates. These indices are usually also compiled as some Laspeyres-type index, so substitution bias is an issue here as well, and the terms of trade (the ratio of the export and import price indices) are therefore also likely to be distorted. The underlying substitution bias accumulates over time. The chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic valuation, in monetary terms, of digital products that have zero market prices. This chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and incorporate an alternative measure of the value of free digital products. An empirical application is also presented to show how the theoretical model works. Some conclusions on applicability are drawn at the end of the chapter.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Therefore, Chapters 2 to 4 of this thesis contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.