This thesis examines how Europe sustains its leadership and competitiveness as a global center for foreign direct investment (FDI) and trade between 1991 and 2023. While EU membership historically functioned as the dominant determinant of inward FDI and trade integration, its relative influence has declined as new structural factors, based on trade dynamics and export-platform strategies, have emerged, together with the growing presence of Asian, especially Chinese, investors establishing production hubs in Central and Eastern Europe to serve the wider EU market. Lower trade costs within Europe have reinforced this shift, leading EU investors to focus on vertical FDI and non-EU investors to adopt export-platform FDI patterns. Chinese investment has moved from infrastructure-focused projects to strategic-sector FDI, highlighting Europe’s exposure to evolving global industrial and geopolitical dynamics.
Chapter 2 examines how traditional determinants of FDI, including EU membership, interact with emerging drivers, such as trade interdependence, export-platform strategies, and Asian influence, to shape investment patterns in Europe. It employs a gravity-based empirical framework augmented with newly developed indicators, comprising the Bilateral Trade Interdependence Index, the Export-Platform Indicator, and Belt and Road Initiative (BRI) participation, together with a functional integration approach, covering over 95% of European countries and their global partners from 2010 to 2023. The findings indicate that trade dependency with non-EU partners grew most rapidly, increasing by 55% between 2011 and 2023. Stronger bilateral trade interdependence is found to significantly predict higher FDI inflows. The BRI analysis and functional classification indicate a shift from infrastructure-focused Chinese investment to strategic sectors, including electric vehicles and semiconductors. Since 2018, export-platform strategies have expanded from Europe’s core economies into Central and Eastern Europe, forming emerging production hubs, and have subsequently moved toward the Western Balkans and Turkey, likely reflecting evolving EU regulations and broader supply-chain realignments.
Chapter 3 expands the FDI analysis to cover a longer timeframe, from 1991 to 2017, focusing on the period when EU membership exerted a strong influence on FDI in Europe, transforming member countries from primarily cost-attractive destinations into global investment centers. Using an augmented gravity model covering 39 host and origin countries, the analysis finds that EU membership increased FDI inflows by 23%, with investments from core EU members expanding into new EU member states, while FDI from non-EU countries decreased. At the same time, EU membership may also be driven by trade, and EEA participation reflects non-FDI motivations. The chapter also highlights that EU accession strengthens both market-seeking (horizontal) and efficiency-seeking (vertical) FDI motives and applies methods to address issues arising from negative and zero FDI values, ensuring robust estimation. The inclusion of lagged and lead variables shows that the EU integration process is phased over time, affecting FDI inflows with lags of up to 10–15 years after accession.
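The augmented gravity specification is not reproduced in this summary; a generic log-linear form, in which all symbols and controls are illustrative assumptions rather than the thesis' exact equation, would be:

```latex
\ln \mathrm{FDI}_{ijt} = \beta_0
  + \beta_1 \ln \mathrm{GDP}_{it}
  + \beta_2 \ln \mathrm{GDP}_{jt}
  + \beta_3 \ln \mathrm{dist}_{ij}
  + \beta_4 \, \mathrm{EU}_{ijt}
  + \mathbf{X}_{ijt}^{\top}\boldsymbol{\gamma}
  + \varepsilon_{ijt}
```

where \(i\) indexes the origin country, \(j\) the host, \(\mathrm{EU}_{ijt}\) indicates joint EU membership, and \(\mathbf{X}_{ijt}\) collects further bilateral controls. Under such a log-linear form, the reported 23% effect of EU membership would correspond to \(\beta_4 = \ln(1.23) \approx 0.21\).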
Chapter 4 expands the range of FDI determinants by deriving trade cost indices as a proxy for connectivity and extending the geographic scope of the analysis. In addition to EU members, the sample includes the Western Balkans, Turkey, and new EU candidates and applicants (Moldova, Ukraine, and Georgia) over the period 2000 to 2020, covering approximately 80% of European FDI flows. Trade costs are calculated for each country in the sample with its trade partners, not only within and between European subregions but also with non-EU partners such as China, and are combined with measures of FDI restrictiveness. The results show that China remains among the EU’s top three trading partners in goods and that trade costs significantly influence FDI inflows in Europe. The analysis also highlights that declining trade costs between European countries have reduced market-seeking (horizontal) FDI, while non-European investors, especially China, increasingly pursue export-platform FDI to serve third-country markets. A sharp reduction in trade costs between the Western Balkans and the EU (-45%) and a smaller decline with China (-35%) illustrate how regional integration reduces the need for local horizontal FDI while reinforcing Europe’s role as a hub for global production.
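The summary does not define the trade cost indices; a standard choice in this literature, stated here as an assumption about the likely construction, is the Head-Ries index, which infers bilateral trade costs from the ratio of domestic to bilateral trade flows:

```latex
\tau_{ij} = \left( \frac{X_{ii}\, X_{jj}}{X_{ij}\, X_{ji}} \right)^{\frac{1}{2(\sigma - 1)}} - 1
```

where \(X_{ij}\) denotes trade flows from country \(i\) to country \(j\), \(X_{ii}\) domestic sales, and \(\sigma\) the elasticity of substitution. The index \(\tau_{ij}\) is a tariff-equivalent trade cost, so a decline from, say, 0.80 to 0.44 would correspond to the kind of 45% reduction reported above.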
Chapter 5 shows that despite concerns about increasing outside influence, developed European countries remain the dominant source of FDI in the region. The chapter focuses on China’s role, examining FDI patterns across advanced EU members, new member states, and Western Balkan economies between 2000 and 2019, while distinguishing the effects of EU integration and BRI participation on FDI. Chinese influence has expanded primarily through the Belt and Road Initiative, particularly in accession and neighboring countries. Although BRI participation does not significantly increase FDI on its own, reflecting the dominance of loan-financed infrastructure over private investment, it has strengthened physical and digital connectivity, laying the groundwork for future, longer-term FDI. The analysis also shows that intra-EU trade costs declined significantly after the 2004 and 2007 enlargements, while trade costs between the Western Balkans and China have fallen steadily since the launch of the BRI in 2013. As a result, Chinese influence is more pronounced in new EU member states and Western Balkan economies than in Western Europe. Over time, enhanced connectivity and supply-chain integration may support more diversified FDI inflows.
Towards Seamless Integration: Exploring Cross-Reality for Extending Physical Office Workspaces
(2026)
Immersive systems, like Augmented and Virtual Reality, offer new paradigms for digital interaction, but confining users to a single reality often presents drawbacks for complex tasks. Cross-Reality systems, which integrate multiple realities into a single experience, have significant potential to enhance existing professional workflows by combining the unique strengths of physical and virtual environments. This dissertation investigates how Cross-Reality can enhance professional workflows by using the traditional office as a primary use case, focusing on the central question: How can CR enhance existing workflows in physical settings by extending the physical environment with virtual content and environments? To address this, the dissertation presents a body of empirical work structured around isolating and investigating one core design challenge for each of the three primary types of Cross-Reality systems. The work first addresses transitional Cross-Reality systems, which allow users to switch between different realities, by examining how to design effective transitions. It demonstrates that in task-driven scenarios, users prioritize efficient transitions that minimize cognitive disruption over more elaborate or interactive ones. Next, the dissertation tackles the fundamental problem of unwanted occlusion in Augmented Virtuality, a form of substitutional Cross-Reality systems, which integrate objects from one reality into another. It introduces and evaluates technical strategies to ensure physical tools remain accessible within virtual spaces, revealing a critical trade-off between the efficacy of these solutions and user experience factors like cybersickness.
Finally, the research explores multi-user Cross-Reality systems that enable collaboration between multiple users who may be experiencing different degrees of virtuality simultaneously, and the complexities of enabling collaboration across multiple stages, underscoring the unique challenges of supporting shared awareness and managing asymmetric roles. These findings are grounded by a detailed analysis of the underlying hardware, which highlights how technical and perceptual issues inherent to Video See-Through and Optical See-Through Head-Mounted Displays directly impact the feasibility and design of Cross-Reality systems. The overarching contribution of this dissertation is to provide a set of empirically-grounded design principles for applying Cross-Reality in productivity-focused environments. By shifting the design focus from entertainment to pragmatic qualities, this work offers valuable insights into creating Cross-Reality systems that genuinely enhance workflows, prioritizing efficiency, usability, and seamless interaction while navigating technical constraints.
This thesis presents four contributions in the domains of schema/ontology alignment and query processing. First, we present a novel alignment approach, denoted as FiLiPo (Finding Linkage Points), to align the schema of RDF knowledge bases with the response schema of RESTful Web APIs. FiLiPo only requires knowledge about a knowledge base (e.g., class names) but no prior knowledge about the
Web APIs’ data structure. It uses fifteen different string similarity metrics to find an alignment between the schema of a knowledge base and that of a Web API.
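The fifteen metrics themselves are not listed in this summary; the following sketch shows, with two standard-library metrics as stand-ins, how an alignment system might combine several string similarities into a single score. The function names and the combination rule are illustrative assumptions, not FiLiPo's actual implementation.

```python
from difflib import SequenceMatcher

def ratio_similarity(a: str, b: str) -> float:
    """Normalized edit-based similarity via difflib (0.0 .. 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap similarity (0.0 .. 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def combined_similarity(kb_label: str, api_field: str) -> float:
    """Average several metrics, as an alignment system might do."""
    metrics = [ratio_similarity, jaccard_similarity]
    return sum(m(kb_label, api_field) for m in metrics) / len(metrics)

# Hypothetical comparison of a knowledge-base label with an API field name.
score = combined_similarity("author name", "authorName")
```

Averaging is only one possible aggregation; a real system could weight metrics or learn thresholds per linkage point.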
Next, a benchmark system named ETARA (Evaluation Toolkit for API and RDF Alignment) is introduced, created with the goal of simulating RESTful Web APIs. It covers all important characteristics of Web APIs, i.e., latency, timeouts, and rate limits, and provides configurable response structures (e.g., JSON or XML). Additionally, it was designed to support researchers during the development of alignment systems.
Afterward, the alignments determined by FiLiPo are used to create a hybrid and federated query processor named TunA (Tunable Query Optimizer for Web APIs and User Preferences), which supports SPARQL queries that combine knowledge bases and RESTful Web APIs and is tunable towards user preferences, i.e., coverage, reliability, and execution time. The primary goal of TunA is to return a query result that satisfies the user’s preferences in terms of data quality, even when using unreliable data sources, by performing a majority vote over multiple sources.
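The majority-vote idea can be sketched in a few lines; the function and the sample responses below are hypothetical illustrations, not TunA's actual code.

```python
from collections import Counter

def majority_vote(values):
    """Return the value reported by the most sources (ties broken arbitrarily)."""
    if not values:
        return None
    return Counter(values).most_common(1)[0][0]

# Hypothetical responses from three Web APIs for the same queried attribute:
responses = ["1954", "1954", "1955"]
winner = majority_vote(responses)
```

In a federated setting, such a vote trades extra source requests (lower execution speed) for higher confidence in the returned value, which is exactly the preference trade-off TunA exposes to the user.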
Lastly, we present a federated query processor, denoted as ORAQL (Overlap and Reliability Aware Query Processing Layer), which uses overlap information to reduce the number of sources selected from those available in a federation. The goal is to reduce redundant data and, hence, improve query execution speed. To this end, ORAQL uses a profile feature that provides information about the overlap between all data sources of a federation. Furthermore, we extend the quality estimation of TunA to cover Triple Pattern Fragment interfaces in order to ensure that a user-provided reliability goal is met.
This thesis serves as a proof of concept for simulation-based nonwoven material design with respect to tensile strength. The objective is to adjust the parameters of an underlying production process with regard to a desired tensile strength behavior (optimization). As an example, we focus on nonwoven airlay production and consider a thermobonding procedure for the consolidation of the nonwoven fabrics.
To be able to map production parameters to the associated tensile strength behavior, we present a model-simulation framework composed of a model for the nonwoven fiber structure generation and a model for the nonwovens’ mechanical behavior under vertical load. The model for the fiber structure generation replicates the stochastic fiber lay-down of the airlay production and results in a random three-dimensional fiber web. This web is consolidated using a virtual bonding procedure that mimics the thermobonding of the nonwoven material. The topology of the resulting adhered fiber structure can be described by a graph, which serves as basis for the subsequent tensile strength simulation. The model used for this purpose describes the mechanical behavior of the material at fiber network level. To this end, the considered fiber structure sample is interpreted as a truss, and the fiber connections are equipped with a nonlinear material law, which makes it possible to describe the elastic phase of the nonwovens’ tensile strength behavior. The existence and uniqueness of a solution to the model as well as its numerical treatment are discussed. Moreover, we present data reduction strategies that enable more efficient simulations by removing fiber structure parts that do not contribute to the tensile strength behavior.
As the numerical experiments make evident, a single tensile strength simulation for a production-like virtual sample is already computationally demanding. Costs accumulate further, since Monte Carlo simulations are required to account for the randomness in the fiber structure generation. Thus, direct simulations provide an infeasible basis for the nonwoven material design. This motivates the use of a predictive surrogate for optimization. To this end, we consider regression-based approaches at different levels of information within the simulation framework. It turns out that the coupling of a polynomial model, for the fiber structure feature inference, with a linear one, for the stress-strain curve inference, yields accurate predictions. Once trained, the regression models allow for efficient evaluations and thus represent a suitable surrogate for the nonwoven material design. In this context, we discuss two exemplary problems of interest for the application: First, a tracking-type problem that aims to find the production parameters that result in a desired tensile strength behavior, expressed in terms of stress-strain curves. Second, an in-corridor maximization problem, which aims to identify the production parameters that maximize the probability of ending up in a specified stress-strain corridor.
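The coupled surrogate (polynomial model for the structure features, linear model for the stress-strain curves) can be sketched on synthetic data; all variable names, dimensions, and the toy feature map below are illustrative assumptions, not the thesis' actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (assumed): polynomial regression from a production parameter p
# to a scalar fiber-structure feature f (e.g., a mean bond density).
p = np.linspace(0.0, 1.0, 50)                      # production parameter samples
f = 2.0 + 0.5 * p - 0.3 * p**2 + rng.normal(0, 0.01, p.size)
poly = np.polynomial.Polynomial.fit(p, f, deg=2)   # fitted feature model

# Stage 2 (assumed): linear regression from the feature to a discretized
# stress-strain curve (here: 3 sampled stress values per curve).
F = np.column_stack([np.ones_like(f), f])          # design matrix [1, f]
S = np.column_stack([1.0 * f, 2.0 * f, 3.0 * f])   # synthetic "curves"
W, *_ = np.linalg.lstsq(F, S, rcond=None)

def surrogate(p_new: float) -> np.ndarray:
    """Predict the sampled stress values for a production parameter."""
    f_new = poly(p_new)
    return np.array([1.0, f_new]) @ W

curve = surrogate(0.5)
```

Once such a chain is trained, each evaluation is a polynomial evaluation plus a matrix product, which is what makes surrogate-based tracking and in-corridor optimization tractable where direct Monte Carlo simulation is not.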
Price indices play a vital role in economic measurement as they reflect price levels
and measure price fluctuations. Price level measures are used with macroeconomic
indicators to express them in real terms. These measures are also used to index wages,
rents, and pensions. Furthermore, they are used as a reference for monetary policy
conducted by central banks. Therefore, the provision of accurate price indices is one
of the most important goals of National Statistical Institutes (NSIs), and numerous
studies have been devoted to this goal.
This cumulative dissertation also contributes to this goal. It contains four chapters,
each of which represents a separate research study. The first two studies are devoted
to the treatment of seasonal products by using different price index methods. The first
study is co-authored with Ken van Loon. The third study is dedicated to finding
the most accurate method to make price predictions for missing products. The fourth
study is focused on the treatment of products by using different price index methods
when products’ quality characteristics are available.
Measuring the economic activity of a country requires high-quality data on businesses. In the case of Germany, such data are required not only at the national level, but also at the federal state level and for different economic sectors. Important sources of high-quality business data are the business register and, among others, 14 business surveys conducted by the Federal Statistical Office of Germany. However, the quality requirements of the Federal Statistical Office stand in contrast to the interests of the businesses themselves. For them, answering a survey questionnaire is an additional cost factor, also known as response burden. A high response burden should be avoided, since it can have a negative impact on the quality of the businesses' responses to the surveys. Therefore, sample coordination can be used as a method to control the distribution of response burden while securing high-quality data.
When applying already existing business survey coordination systems, developed by different statistical institutes, the legal and administrative standards of German official statistics have to be taken into account. These standards concern the different sampling fractions, rotation fractions, periodicities, and stratifications of the aforementioned 14 business surveys. Therefore, the aim of this doctoral thesis is to check the existing business survey coordination systems for their applicability in the context of German official statistics and, if necessary, to modify them accordingly. These modifications include the introduction of individual burden indicators, which aim to take the individual perception of response burden into account.
For this purpose, several synthetic data sets have been created to test the application of the modified versions of the different business survey coordination systems through Monte Carlo simulation studies. These data sets include a large panel data set reflecting the landscape of businesses in Rhineland-Palatinate, and three smaller synthetic data sets. The latter have been created with the help of the R package BuSuCo, which has been developed within the scope of this thesis. The above-mentioned simulation studies are evaluated based on different measures of estimation quality as well as of the concentration and distribution of response burden.
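Coordination systems of the kind referred to above commonly rely on permanent random numbers (PRNs): each business keeps a fixed uniform random number across surveys, a survey selects the units whose PRNs follow a survey-specific start point, and shifting the start point between surveys rotates the burden. The sketch below is a generic illustration under that assumption, not one of the thesis' modified systems.

```python
import random

# Assign each business a permanent random number (PRN), fixed across surveys.
random.seed(42)
businesses = [f"B{i:03d}" for i in range(100)]
prn = {b: random.random() for b in businesses}

def select(start: float, n: int) -> list[str]:
    """Select the n units following `start` on the PRN circle [0, 1)."""
    def dist(b):
        return (prn[b] - start) % 1.0
    return sorted(businesses, key=dist)[:n]

survey_a = select(0.00, 10)   # first survey
survey_b = select(0.10, 10)   # shifted start -> partly fresh units
overlap = set(survey_a) & set(survey_b)
```

Because the PRNs are permanent, overlap between any two surveys is fully controlled by the start points, which is what allows a coordination system to spread burden deliberately rather than by chance.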
Bilevel problems are optimization problems for which parts of the variables
are constrained to be an optimal solution to another nested optimization
problem. This structure renders bilevel problems particularly well-suited for
modeling hierarchical decision-making processes. They are widely applicable
in areas such as energy markets, transportation systems, security planning,
and pricing. However, the hierarchical nature of these problems also makes
them inherently challenging to solve, both in theory and in practice.
In this thesis, we study different nonlinear problem settings for the
nested optimization problem. First, we focus on nonlinear but convex bilevel
problems with purely integer variables. We propose a solution algorithm that
uses a branch-and-cut framework with tailored cutting planes. We prove
correctness and finite termination of the method under suitable assumptions
and place it in the context of the existing literature. Moreover, we provide an
extensive numerical study to showcase the applicability of our method and
we compare it to the state-of-the-art approach for a less general setting on
suitable instances from the literature. Furthermore, we discuss challenges that
arise when we try to generalize our approach to the mixed-integer setting.
Next, we study mixed-integer bilevel problems for which the nested
problem has a nonconvex and quadratic objective function, linear constraints,
and continuous variables. We state and prove a complexity-theoretical hardness result for this
problem class and develop a lower and upper bounding scheme to solve
these problems. We prove correctness and finite termination of the proposed
method under suitable assumptions and test its applicability in a numerical
study.
Finally, we consider bilevel problems with continuous variables, where
the nested problem has a convex-quadratic objective function and linear
constraints. We reformulate them as single-level optimization problems using
necessary and sufficient optimality conditions for the nested problem. Then,
we explore the family of so-called P-split reformulations for this single-level
problem and test their applicability in a preliminary numerical study.
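For this continuous setting, the single-level reformulation via necessary and sufficient optimality conditions is the familiar KKT-based one; the notation below is generic and not taken from the thesis. If the nested problem is \(\min_y \tfrac{1}{2} y^\top Q y + c^\top y\) subject to \(Ay \le b\) with \(Q\) positive semidefinite, then, since the constraints are linear, \(y\) is optimal if and only if there exists a multiplier \(\lambda\) with

```latex
Qy + c + A^{\top}\lambda = 0, \qquad
Ay \le b, \qquad
\lambda \ge 0, \qquad
\lambda^{\top}(Ay - b) = 0.
```

Replacing the nested problem by these conditions yields a single-level problem whose only nonconvexity is the complementarity condition \(\lambda^{\top}(Ay - b) = 0\), which is the part that disjunctive techniques such as the P-split reformulations are designed to handle.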
Entrepreneurship is recognized as an important discipline to achieve sustainable development and to address sustainability goals without losing sight of economic aspects. However, entrepreneurship rates are rather low in many industrialized countries with high income levels. Research clearly shows that there is a gap in the entrepreneurial process between intentions and subsequent actions. This means that not everyone with entrepreneurial ambitions also follows through and implements actions. This gap also exists for aspects of sustainability. As a result, there is a need to better understand the traditional and sustainability-focused entrepreneurial process in order to increase corresponding actions. This dissertation offers such a comprehensive perspective and sheds light on individual and contextual predictors for traditional and sustainability-focused behavior of entrepreneurs and self-employed across four studies.
The first three studies focus on individual predictors. By providing a systematic literature review with 107 articles, Chapter 2 highlights the ambivalent role of religion in the entrepreneurial process. Viewed through the theory of planned behavior (TPB), religion can have positive effects on entrepreneurial attitudes and behavioral control, but also negative consequences for other aspects of behavioral control and for subjective norms due to religious restrictions.
The quantitative empirical study in Chapter 3 similarly relies on the TPB and sheds light on individual perceptual factors influencing the sustainability-related intention-action gap in entrepreneurship. Using data from the 2021 Global Entrepreneurship Monitor (GEM) Adult Population Survey (APS) including 22,008 early-stage entrepreneurs from 44 countries worldwide, the results support our theoretical reasoning that sustainability-focused intentions are positively related to social entrepreneurial actions. In addition, it is demonstrated that positive perceptual moderators such as self-efficacy and knowing other entrepreneurs as role models strengthen this relationship while a negative perception such as fear of failure restricts social actions in early-stage entrepreneurship.
The next quantitative empirical study in Chapter 4 examines the behavioral consequences of well-being in a sample of 6,955 German self-employed during COVID-19. This chapter builds on two complementary behavioral perspectives to predict how reductions in financial and non-financial well-being relate to investments in venture development. In this regard, reductions in financial well-being are positively related to time investments, supporting the performance feedback perspective in terms of higher search efforts under negative performance. In contrast, reductions in non-financial well-being are negatively related to time and monetary investments, yielding support for the broaden-and-build perspective, which indicates that negative psychological experiences narrow the thought-action repertoire and hinder resource deployment. The insights across these first three studies about individual predictors indicate that many different subjective beliefs, perceptions, and emotional states can influence the entrepreneurial process, making entrepreneurship and self-employment highly individualized disciplines.
The last quantitative empirical study provides an explorative view on a large number of contextual predictors for social and ecological considerations in entrepreneurial actions. Combining GEM data from 2021 on country level with further information from the World Bank and the OECD, a machine learning approach is employed on a sample of 84 countries worldwide. The results suggest that governmental and regulatory as well as cultural factors are relevant to predict social and ecological considerations. Moreover, market-related aspects are shown to be relevant predictors, especially socio-economic factors for social considerations and economic factors for ecological considerations. Overall, the four studies in this dissertation highlight the complexity of the entrepreneurial process being determined by many different individual and contextual factors. Due to the multitude of potential predictors, this dissertation can only give an initial overview of a selection of factors with many more aspects and interdependencies still to be examined by future research.
Circularity and circular business models in the wood industry: an empirical study
(2025)
The ecological state of the Earth is critical as a result of pollution, waste generation, and CO₂-driven climate change. The building and construction sector contributes substantially to global greenhouse gas emissions, accounting for around 40%. Wood is considered a climate-friendly alternative to concrete and steel, but it, too, must be used sustainably. With reuse, the circular economy offers a forward-looking concept: around 45% of the wood recovered from building demolition is potentially usable as a raw material. This opens up alternative sources of raw materials and reduces waste generation.
Despite this potential, the circularity rate of the global economy currently stands at only 7.2%. Against this background, the dissertation investigates which competitive strategies and which organizational capabilities foster the development of circular business models. The focus is on the wood industry of the DACH region, which has historically been shaped by sustainable forestry but has so far largely followed linear structures.
The work combines a theoretical foundation, a four-year literature review, and expert interviews with, at its core, a quantitative company survey (n = 200). From this, an activity-oriented scale for assessing the circularity of a business model was developed. Three perspectives were analyzed: capabilities, strategies, and stakeholders.
With regard to the capability perspective, it was found that dynamic capabilities have positive implications for the implementation of circularity. Within the strategy perspective, innovation leadership was shown to have positive effects on the implementation of the circular economy. In addition, both innovation leadership and quality leadership have a positive indirect effect, via dynamic capabilities, on the development of circular business models. Within the stakeholder perspective, it emerged that stakeholder pressure, in combination with a green corporate image, acts as a catalyst: the influence of stakeholder groups leads companies to translate a green image into a substantive implementation phase. Moreover, stakeholder pressure proved to be a central driver of change: while the direct effects of dynamic capabilities decline under pressure, their indirect effects on achieving circularity increase. Finally, recommendations for companies as well as scientific implications and avenues for future research are derived.
Case-Based Reasoning (CBR) is a symbolic Artificial Intelligence (AI) approach that has been successfully applied across various domains, including medical diagnosis, product configuration, and customer support, to solve problems based on experiential knowledge and analogy. A key aspect of CBR is its problem-solving procedure, where new solutions are created by referencing similar experiences, which makes CBR explainable and effective even with small amounts of data. However, one of the most significant challenges in CBR lies in defining and computing meaningful similarities between new and past problems, which heavily relies on domain-specific knowledge. This knowledge, typically only available through human experts, must be manually acquired, leading to what is commonly known as the knowledge-acquisition bottleneck.
One way to mitigate the knowledge-acquisition bottleneck is through a hybrid approach that combines the symbolic reasoning strengths of CBR with the learning capabilities of Deep Learning (DL), a sub-symbolic AI method. DL, which utilizes deep neural networks, has gained immense popularity due to its ability to automatically learn from raw data to solve complex AI problems such as object detection, question answering, and machine translation. While DL minimizes manual knowledge acquisition by automatically training models from data, it comes with its own limitations, such as requiring large datasets, and being difficult to explain, often functioning as a "black box". By bringing together the symbolic nature of CBR and the data-driven learning abilities of DL, a neuro-symbolic, hybrid AI approach can potentially overcome the limitations of both methods, resulting in systems that are both explainable and capable of learning from data.
The focus of this thesis is on integrating DL into the core task of similarity assessment within CBR, specifically in the domain of process management. Processes are fundamental to numerous industries and sectors, with process management techniques, particularly Business Process Management (BPM), being widely applied to optimize organizational workflows. Process-Oriented Case-Based Reasoning (POCBR) extends traditional CBR to handle procedural data, enabling applications such as adaptive manufacturing, where past processes are analyzed to find alternative solutions when problems arise. However, applying CBR to process management introduces additional complexity, as procedural cases are typically represented as semantically annotated graphs, increasing the knowledge-acquisition effort for both case modeling and similarity assessment.
The key contributions of this thesis are as follows: It presents a method for preparing procedural cases, represented as semantic graphs, to be used as input for neural networks. Handling such complex, structured data represents a significant challenge, particularly given the scarcity of available process data in most organizations. To overcome the issue of data scarcity, the thesis proposes data augmentation techniques to artificially expand the process datasets, enabling more effective training of DL models. Moreover, it explores several deep learning architectures and training setups for learning similarity measures between procedural cases in POCBR applications. This includes the use of experience-based Hyperparameter Optimization (HPO) methods to fine-tune the deep learning models.
Additionally, the thesis addresses the computational challenges posed by graph-based similarity assessments in CBR. The traditional method of determining similarity through subgraph isomorphism checks, which compare nodes and edges across graphs, is computationally expensive. To alleviate this issue, the hybrid approach seeks to use DL models to approximate these similarity calculations more efficiently, thus reducing the computational complexity involved in graph matching.
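As a stand-in for the learned models (the thesis' actual architectures are not detailed in this summary), the core idea of replacing expensive graph matching with cheap vector comparisons can be illustrated with a hand-built embedding: pool the semantic annotations of a graph's nodes into one vector and compare graphs in that vector space.

```python
import numpy as np

def graph_embedding(node_features: np.ndarray) -> np.ndarray:
    """Mean-pool per-node feature vectors into one graph-level vector."""
    return node_features.mean(axis=0)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two toy "process graphs" with 4-dimensional node annotations (illustrative).
g1 = np.array([[1.0, 0.0, 0.5, 0.0],
               [0.0, 1.0, 0.5, 0.0]])
g2 = np.array([[1.0, 0.0, 0.4, 0.1],
               [0.0, 1.0, 0.6, 0.0]])

sim = cosine_similarity(graph_embedding(g1), graph_embedding(g2))
```

Subgraph-isomorphism-based similarity grows combinatorially with graph size, whereas the embedding comparison above is linear in the number of nodes; a trained neural model plays the role of the hand-built pooling here, learning embeddings whose distances approximate the expensive measure.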
The experimental evaluations of the corresponding contributions provide consistent results that indicate the benefits of using DL-based similarity measures and case retrieval methods in POCBR applications. The comparison with existing methods, e.g., based on subgraph isomorphism, shows several advantages but also some disadvantages of the compared methods. In summary, the methods and contributions outlined in this work enable more efficient and robust applications of hybrid CBR and DL in process management applications.
The gender wage gap in labor market outcomes has been intensively investigated for decades, yet it remains a relevant and innovative research topic in labor economics. Chapter 2 of this dissertation explores the pressing issue of gender wage disparity in Ethiopia. By applying various empirical methodologies and measures of occupational segregation, this chapter aims to analyze the role of female occupational segregation in explaining the gender wage gap across the pay distribution. The findings reveal a significant difference in monthly wages, with women consistently earning lower wages across the wage distribution.
Importantly, the results indicate a negative association between female occupational segregation and the average earnings of both men and women. Furthermore, the estimation results show that female occupational segregation partially explains the gender wage gap at the bottom of the wage distribution. I also find that the gender wage gap is larger in the private sector than in the public sector.
In Chapter 3, the Ethiopian Demographic and Health Survey data are leveraged to explore the causal relationship between female labor force participation and domestic violence. Domestic violence against women is a pervasive public health concern, particularly in Africa, including Ethiopia, where a significant proportion of women endure various forms of domestic violence perpetrated by intimate partners. Economic empowerment of women through increased participation in the labor market can be one of the mechanisms for mitigating the risk of domestic violence.
This study seeks to provide empirical evidence supporting this hypothesis. Using the employment rate of women at the community level as an instrumental variable, the findings suggest that employment significantly reduces the risk of domestic violence against women. More precisely, the results show that women’s employment status significantly reduces domestic violence by about 15 percentage points. This finding is robust across different dimensions of domestic violence, such as physical, sexual, and emotional violence.
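The instrumental-variable logic described above can be illustrated with a stylized, stdlib-only sketch of the single-instrument IV (Wald) estimator. Everything here is invented for illustration: the data-generating process, the coefficient values, and the instrument strength are assumptions, not the chapter's actual specification. The point is only that when employment is confounded by an unobservable, naive OLS is badly biased while the instrument recovers the true effect.

```python
import random
import statistics

random.seed(0)
n = 20000
beta = -0.15   # assumed true effect of employment on the outcome (illustrative)

Z, D, Y = [], [], []
for _ in range(n):
    z = random.random()                          # instrument: community employment rate
    v = random.gauss(0, 1)                       # unobserved confounder
    d = 0.8 * z + 0.5 * v + random.gauss(0, 0.3)   # endogenous "employment" variable
    y = beta * d + 0.7 * v + random.gauss(0, 0.3)  # outcome, confounded through v
    Z.append(z); D.append(d); Y.append(y)

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

iv = cov(Z, Y) / cov(Z, D)   # IV (Wald) estimator: consistent for beta
ols = cov(D, Y) / cov(D, D)  # naive OLS slope: biased upward by the confounder v
```

Here `iv` lands near the assumed effect of -0.15, whereas `ols` is pushed far above it by the confounder, mimicking why an instrument is needed in the chapter's setting.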
By examining the employment outcomes of immigrants in the labor market, Chapter 4 extends the dissertation's inquiry to the dynamics of immigrant economic integration into the destination country. Drawing on data from the German Socio-Economic Panel, the chapter scrutinizes the employment gap between native-born individuals and two distinct groups of first-generation immigrants: refugees and other migrants. Through rigorous analysis, Chapter 4 aims to identify the factors contributing to disparities in employment outcomes among these groups. In this chapter, I aim to disentangle the heterogeneous characteristics of refugees and other immigrants in the labor market, thereby contributing to a deeper understanding of immigrant labor market integration in Germany.
The results show that refugees and other migrants are less likely to find employment than comparable natives, and the refugee-native employment gap is much wider than the corresponding gap for other migrants. Moreover, the findings vary by gender and migration category. While other migrant men do not differ from native men in the probability of being employed, refugee women are the most disadvantaged group, facing a lower employment probability than both other migrant women and native women. The study suggests that German language proficiency and permanent residence permits partially explain the lower employment probability of refugees in the German labor market.
Chapter 5 (co-authored with Uwe Jirjahn) utilizes the same dataset to explore the immigrant-native trade union membership gap, focusing on the role of integration in the workplace and into society. The integration of immigrants into society and the workplace is vital not only to improve migrants' performance in the labor market but also to enable active participation in institutions such as trade unions. In this study, we argue that the incomplete integration of immigrants into the workplace and society implies that immigrants are less likely to be union members than natives. Our findings show that first-generation immigrants are less likely to be trade union members than natives. Notably, the analysis shows that the immigrant-native gap in union membership depends on immigrants’ integration into the workplace and society. The gap is smaller for immigrants working in firms with a works council and having social contacts with Germans. Moreover, the results reveal that the immigrant-native union membership gap narrows with years since arrival in Germany.
This Master's thesis examines the relationship between libertarianism and right-wing extremism, focusing on the development of the libertarian scene in Germany. An extensive theoretical part first shows that a radically market-liberal and a right-wing extremist worldview partially share common elements. In particular, a specific anti-egalitarianism, a naturalization of social conditions, and a shared construction of enemy images are identified as connecting features that characterize both ideologies, which rest on notions of human inequality. This is followed by an empirical analysis of the libertarian magazine eigentümlich frei, which plays a central role in the German-speaking libertarian movement. Sociological neo-institutionalism serves as the theoretical perspective for capturing and analyzing institutional change within the libertarian scene. The empirical study confirms the theoretical assumptions and shows that the libertarian discourse has increasingly converged with right-wing extremist ideologies. Five phases of institutional change are identified, accompanied by a growing interconnection between the libertarian movement and the right-wing extremist spectrum and by shifting discourses. The thesis concludes that the libertarian scene around eigentümlich frei is to be assigned to the right-wing extremist spectrum. In light of this development, the study proposes to label this strand of libertarianism "paleolibertarianism", pointing to an ideological affinity with the Alt-Right movement. Central features of this ideology include, in addition to a radically market-liberal orientation, the demand to privatize societal institutions and to establish social authorities such as the family and the church to protect the individual from state influence.
Convex Duality in Consumption-Portfolio Choice Problems with Epstein-Zin Recursive Preferences
(2025)
This thesis deals with consumption-investment allocation problems with Epstein-Zin recursive utility, building upon the dualization procedure introduced by Matoussi and Xing [2018]. While their work focuses exclusively on truly recursive utility, we extend their procedure to include time-additive utility, using results from general convex analysis. The dual problem is expressed in terms of a backward stochastic differential equation (BSDE), for which existence and uniqueness results are established. In this regard, we close a gap left open in previous works by extending results restricted to specific subsets of parameters to cover all parameter constellations within our duality setting.
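For orientation, Epstein-Zin recursive utility is most familiar in its discrete-time form, which separates relative risk aversion $\gamma$ from the elasticity of intertemporal substitution $\psi$ (the continuous-time Duffie-Epstein formulation used in this line of work arises as its limit). A standard normalization of the recursion reads:

```latex
V_t = \Big[(1-\delta)\, c_t^{\,1-1/\psi}
      + \delta\,\Big(\mathbb{E}_t\big[V_{t+1}^{\,1-\gamma}\big]\Big)^{\frac{1-1/\psi}{1-\gamma}}
      \Big]^{\frac{1}{1-1/\psi}}
```

For $\gamma = 1/\psi$ the recursion collapses to time-additive CRRA utility, which is precisely the boundary case the extended dualization procedure is designed to cover.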
Using duality theory, we analyze the utility loss of an investor with recursive preferences, that is, the difference between the utility attained by acting suboptimally in a given market and that of her best possible (optimal) consumption-investment behaviour. In particular, we derive universal power utility bounds, providing a novel and tractable approximation of the investor's optimal utility and of the welfare loss associated with specific investment-consumption choices. To address quantitative shortcomings of these power utility bounds, we additionally introduce one-sided variational bounds that offer a more effective approximation for recursive utilities. The theoretical value of our power utility bounds is demonstrated through their application in a new existence and uniqueness result for the BSDE characterizing the dual problem.
Moreover, we propose two approximation approaches for consumption-investment optimization problems with Epstein-Zin recursive preferences. The first approach directly formalizes the classical concept of least favorable completion, providing an analytic approximation fully characterized by a system of ordinary differential equations. In the special case of power utility, this approach can be interpreted as a variation of the well-known Campbell-Shiller approximation, improving some of its qualitative shortcomings with respect to state dependence of the resulting approximate strategies. The second approach introduces a PDE-iteration scheme, reinterpreting artificial completion as a dynamic game in which the investor and a dual opponent interact until reaching an equilibrium that corresponds to an approximate solution of the investor's optimization problem. Despite the need for additional approximations within each iteration, this scheme is shown to be quantitatively and qualitatively accurate. Moreover, it is capable of approximating high-dimensional optimization problems, essentially avoiding the curse of dimensionality and providing analytical results.
This dissertation examines the relevance of regimes for stock markets. In three research articles, we cover the identification and predictability of regimes and their relationships to macroeconomic and financial variables in the United States.
The initial two chapters contribute to the debate on the predictability of stock markets. While various approaches can demonstrate in-sample predictability, their predictive power diminishes substantially in out-of-sample studies. Parameter instability and model uncertainty are the primary challenges. However, certain methods have demonstrated efficacy in addressing these issues. In Chapters 1 and 2, we present frameworks that combine these methods meaningfully. Chapter 3 focuses on the role of regimes in explaining macro-financial relationships and examines the state-dependent effects of macroeconomic expectations on cross-sectional stock returns. Although it is common to capture the variation in stock returns using factor models, their macroeconomic risk sources are unclear. According to macro-financial asset pricing, expectations about state variables may be viable candidates to explain these sources. We examine their usefulness in explaining factor premia and assess their suitability for pricing stock portfolios.
In summary, this dissertation improves our understanding of stock market regimes in three ways. First, we show that it is worthwhile to exploit the regime dependence of stock markets. Markov-switching models and their extensions are valuable tools for filtering the stock market dynamics and identifying and predicting regimes in real-time. Moreover, accounting for regime-dependent relationships helps to examine the dynamic impact of macroeconomic shocks on stock returns. Second, we emphasize the usefulness of macro-financial variables for the stock market. Regime identification and forecasting benefit from their inclusion. This is particularly true in periods of high uncertainty when information processing in financial markets is less efficient. Finally, we recommend addressing parameter instability, estimation risk, and model uncertainty in empirical models. Because it is difficult to find a single approach that meets all of these challenges simultaneously, it is advisable to combine appropriate methods in a meaningful way. The framework should be as complex as necessary but as parsimonious as possible to mitigate additional estimation risk. This is especially recommended when working with financial market data with a typically low signal-to-noise ratio.
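The regime-filtering idea behind Markov-switching models can be sketched with a minimal Hamilton-style filter for a two-state switching mean. All parameters and "returns" below are made up for illustration; a real application would estimate them (e.g., by maximum likelihood) rather than fix them.

```python
import math

def normal_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def hamilton_filter(ys, mu, sigma, P, p0):
    """Filtered regime probabilities for a 2-state Markov-switching mean model.
    P[i][j] is the transition probability from state i to state j."""
    probs = []
    prior = p0[:]
    for y in ys:
        # Predict: propagate yesterday's filtered probabilities through P.
        pred = [sum(prior[i] * P[i][j] for i in range(2)) for j in range(2)]
        # Update: weight by each regime's likelihood of today's observation.
        joint = [pred[j] * normal_pdf(y, mu[j], sigma[j]) for j in range(2)]
        total = sum(joint)
        prior = [j / total for j in joint]
        probs.append(prior[:])
    return probs

# Hypothetical returns: a calm regime (mean 0.05) followed by a crisis regime (mean -0.10).
returns = [0.06, 0.04, 0.05, -0.12, -0.09, -0.11, 0.05]
probs = hamilton_filter(returns,
                        mu=[0.05, -0.10], sigma=[0.02, 0.04],
                        P=[[0.95, 0.05], [0.10, 0.90]],
                        p0=[0.5, 0.5])
```

The filter assigns near-certain probability to the calm state during the first three observations and switches to the crisis state as soon as the negative observations arrive, which is the real-time regime identification the dissertation exploits.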
Mixed-Integer Optimization Techniques for Robust Bilevel Problems with Here-and-Now Followers
(2025)
In bilevel optimization, some of the variables of an optimization problem have to be an optimal solution to another nested optimization problem. This specific structure renders bilevel optimization a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. Due to their nested structure, however, bilevel problems are also inherently hard to solve—both in theory and in practice. Further challenges arise if, e.g., bilevel problems under uncertainty are considered.
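In generic form, the nested structure described above can be written as an (optimistic) bilevel problem:

```latex
\min_{x \in X}\; F(x, y)
\quad \text{s.t.} \quad
y \in \operatorname*{arg\,min}_{\bar{y} \in Y} \;
\big\{\, f(x, \bar{y}) \;:\; g(x, \bar{y}) \le 0 \,\big\},
```

where the leader chooses $x$ anticipating the follower's optimal response $y$. If the lower-level solution set is not a singleton, one must additionally specify whether the leader assumes a cooperative (optimistic) or an adversarial (pessimistic) follower.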
In this dissertation, we address different types of uncertainties in bilevel optimization using techniques from robust optimization. We study mixed-integer linear bilevel problems with lower-level objective uncertainty, which we tackle using the notion of Gamma-robustness. We present two exact branch-and-cut approaches to solve these Gamma-robust bilevel problems, along with cuts tailored to the important class of monotone interdiction problems. Given the overall hardness of the considered problems, we additionally propose heuristic approaches for mixed-integer, linear, and Gamma-robust bilevel problems. The latter rely on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required. We assess the performance of both the exact and the heuristic approaches through extensive computational studies.
In addition, we study the problem of determining optimal tolls in a traffic network in which the network users hedge against uncertain travel costs in a robust way. The overall toll-setting problem can be seen as a single-leader multi-follower problem with multiple robustified followers. We model this setting as a mathematical problem with equilibrium constraints, for which we present a mixed-integer, nonlinear, and nonconvex reformulation that can be tackled using state-of-the-art general-purpose solvers. We further illustrate the impact of considering robustified followers on the toll-setting policies through a case study.
Finally, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to single-level optimization. To this end, we study two aspects related to so-called decision uncertainty. First, we propose a strictly robust approach in which the follower hedges against erroneous observations of the leader's decision. Second, we consider an exemplary bilevel problem with a continuous but nonconvex lower level in which algorithmic necessities prevent the follower from making a globally optimal decision in an exact sense. The example illustrates that even very small deviations in the follower's decision may lead to arbitrarily large discrepancies between exact and computationally obtained bilevel solutions.
Partial differential equations are not always suited to model all physical phenomena, especially if long-range interactions are involved or if the actual solution might not satisfy the regularity requirements associated with the partial differential equation. One remedy to this problem is offered by nonlocal operators, which typically consist of integrals incorporating interactions between two separated points in space; the corresponding solutions to nonlocal equations have to satisfy weaker regularity conditions.
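A typical nonlocal diffusion operator replaces derivatives by an interaction integral; one common form, with a nonnegative interaction kernel $\gamma$, is:

```latex
(\mathcal{L}u)(x) = \int_{\mathbb{R}^n} \big(u(x) - u(y)\big)\,\gamma(x, y)\,\mathrm{d}y
```

For the singular kernel $\gamma(x,y) = c_{n,s}\,|x-y|^{-n-2s}$ this reduces (up to normalization) to the fractional Laplacian $(-\Delta)^s$, while kernels with a finite interaction horizon, i.e., $\gamma(x,y)=0$ for $|x-y|$ beyond some radius, are common in applied nonlocal models.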
In PDE-constrained shape optimization, the goal is to minimize or maximize an objective functional that depends on the shape of a certain domain and on the solution to a partial differential equation, which is usually also influenced by the shape of this domain. Moreover, parameters associated with the nonlocal model are often domain-dependent, and thus it is a natural next step to consider shape optimization problems that are governed by nonlocal equations.
Therefore, an interface identification problem constrained by nonlocal equations is thoroughly investigated in this thesis. Here, we focus on rigorously developing the first and second shape derivative of the associated reduced functional. In addition, we study first- and second-order shape optimization algorithms in multiple numerical experiments.
Moreover, we also propose Schwarz methods for nonlocal Dirichlet problems as well as regularized nonlocal Neumann problems. In particular, we investigate the convergence of the multiplicative Schwarz approach and conduct a number of numerical experiments that illustrate various aspects of the Schwarz method applied to nonlocal equations.
Since applying the finite element method to solve nonlocal problems numerically can be quite costly, Local-to-Nonlocal couplings emerged, which combine the accuracy of nonlocal models on one part of the domain with the fast computation of partial differential equations on the remaining area. Therefore, we also examine the interface identification problem governed by an energy-based Local-to-Nonlocal coupling, which can be numerically computed by making use of the Schwarz method. Here, we again present a formula for the shape derivative of the associated reduced functional and investigate a gradient-based shape optimization method.
In machine learning, classification is the task of predicting a label for each point in a data set. In supervised learning, the class of each point in a labeled subset is already known, and this information is used to recognize patterns and make predictions about the points in the remainder of the set, referred to as the unlabeled set.
However, the number of labeled points may be limited because, for example, labels are expensive to obtain. Moreover, this subset may be biased, as in the case of self-selection in a survey. Consequently, the classification performance for unlabeled points may suffer. To improve the reliability of the results, semi-supervised learning tackles settings with both labeled and unlabeled data. Moreover, in many cases, additional information about the size of each class may be available from undisclosed sources.
This cumulative thesis presents different studies that incorporate this external cardinality-constraint information into three important algorithms for binary classification in the supervised context: support vector machines (SVM), classification trees, and random forests. From a mathematical point of view, we focus on mixed-integer programming (MIP) models for semi-supervised approaches that add a cardinality constraint for each class to each algorithm.
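To illustrate how such cardinality information can enter a model (a generic sketch, not necessarily the exact formulations of the thesis), consider a semi-supervised SVM with binary assignment variables $y_i \in \{0,1\}$ for the unlabeled set $U$ and a known positive-class count $k_+$:

```latex
\min_{w,\, b,\, \xi,\, y} \;\; \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i
\quad \text{s.t.} \quad
(2y_i - 1)\big(w^\top x_i + b\big) \ge 1 - \xi_i, \;\; \xi_i \ge 0 \;\; \forall i,
\qquad
\sum_{i \in U} y_i = k_+,
```

where $y_i$ is fixed to the observed label for points in the labeled set. The products between the binary $y_i$ and the continuous $(w, b)$ make this a mixed-integer program (linearizable, e.g., via big-$M$ constraints), which is why solver-oriented techniques such as bound tightening and preprocessing matter.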
Furthermore, since the proposed MIP models are computationally challenging, we also present techniques that simplify the process of solving these problems. In the SVM setting, we introduce a re-clustering method and further computational techniques to reduce the computational cost. In the context of classification trees, we provide correct values for certain bounds that play a crucial role for the solver performance. For the random forest model, we develop preprocessing techniques and an intuitive branching rule to reduce the solution time. For all three methods, our numerical results show that our approaches achieve better statistical performance for biased samples than the standard approach.
Optimal Error Bounds in Normal and Edgeworth Approximation of Symmetric Binomial and Related Laws
(2024)
This thesis explores local and global normal and Edgeworth approximations for symmetric binomial distributions. Further, it examines the normal approximation of convolution powers of continuous and discrete uniform distributions.
We obtain the optimal constant in the local central limit theorem for symmetric binomial distributions and its analogs in higher-order Edgeworth approximation. Further, we offer a novel proof, via Fourier inversion, of the known optimal constant in the global central limit theorem for symmetric binomial distributions. We also consider the effect of a simple continuity correction in the global central limit theorem for symmetric binomial distributions; here, and in higher-order Edgeworth approximation, we find optimal constants and asymptotically sharp bounds on the approximation error. Furthermore, we prove asymptotically sharp bounds on the error in the local case of a relative normal approximation to symmetric binomial distributions. Additionally, we provide asymptotically sharp bounds on the approximation error in the local central limit theorem for convolution powers of continuous and discrete uniform distributions. Our methods include Fourier inversion formulae, explicit inequalities, and Edgeworth expansions, some of which may be of independent interest.
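The flavor of these local approximations is easy to check numerically (a toy illustration only; the thesis's optimal constants and sharp bounds are derived analytically). For a symmetric binomial with n = 100, the pointwise error of the local normal approximation is already below 10^-3:

```python
import math

def binom_pmf(n, k):
    """Exact pmf of the symmetric binomial distribution (p = 1/2)."""
    return math.comb(n, k) / 2 ** n

def local_normal(n, k):
    """Local normal approximation with matching mean n/2 and variance n/4."""
    mu, var = n / 2, n / 4
    return math.exp(-((k - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

n = 100
max_err = max(abs(binom_pmf(n, k) - local_normal(n, k)) for k in range(n + 1))
center_err = abs(binom_pmf(n, 50) - local_normal(n, 50))
```

Because the third cumulant of the symmetric binomial vanishes, the first-order Edgeworth correction is zero and the plain normal approximation is unusually accurate; quantifying exactly how accurate, with optimal constants, is the subject of the thesis.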
This thesis consists of four highly related chapters examining China’s rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises in the 1970s was a success and led to its ascent as the world’s largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China represented almost 60% of the global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China most-favored-nation (MFN) tariff status, Chinese-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms’ decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent’s decision whether to deter entry, the production choice of firms, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: firstly, a free market Cournot competition; secondly, a situation in which the government determines technology adoption rates; thirdly, a scenario in which the government controls both technology and production; and finally, a scenario where the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes that there are two exporting firms in two different countries that sell a product to a third country.
We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 initially investigates a scenario where only one government offers a fixed-cost subsidy, followed by an analysis of the case when both governments simultaneously provide financial help. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China’s leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China’s remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer to the aluminium industry that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. This analysis identifies and discusses several opportunities that Chinese aluminium producers took advantage of. The first set of opportunities happened during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities started at about the same time when China opened its economy in 1978. The substantial demand for aluminium in China is influenced by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001. Both events contributed to a surge in Chinese aluminium consumption. Internally, China’s investment-led growth model further boosted its aluminium demand. Additional factors specific to China, such as low labor costs and the abundance of coal as an energy source, offer Chinese firms competitive advantages against international players. Furthermore, another window of opportunity is due to Chinese governmental policies, including phasing out old technology, providing subsidies, and gradually opening the economy to enhance domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, competing in the same market. We assume that the two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent acts in stage zero, where it can decide whether to deter the newcomer’s entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to partially or completely adopt the latest technology. Our results suggest the following: Firstly, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Secondly, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost parameter to the new technology’s fixed-cost parameter is sufficiently high. Thirdly, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we can obtain the optimal new-technology adoption rate for the newcomer.
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and those preferred by a benevolent government upon firms’ market entry. To address the question of whether the technology choices of firms and government coincide, we analyze several scenarios that differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. Especially in situations with a small number of firms interested in entering the market, greater government influence tends to lead to higher technology adoption rates. Conversely, in scenarios with a higher number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms is highest when the government plays no role.
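The Cournot equilibrium underlying these comparisons can be computed with a short best-response iteration. The sketch below uses a textbook symmetric setting with hypothetical parameters (linear inverse demand P = a - bQ and constant marginal cost c, none of which are taken from the chapter, whose model additionally features technology adoption choices):

```python
def best_response(q_others, a, b, c):
    """Firm i's profit (a - b*(q_i + q_others) - c) * q_i is maximized here."""
    return max(0.0, (a - c - b * q_others) / (2.0 * b))

def cournot_equilibrium(n, a, b, c, sweeps=200):
    """Sequential best-response dynamics for an n-firm symmetric Cournot market;
    for linear demand this converges to the unique Nash equilibrium."""
    q = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            q[i] = best_response(sum(q) - q[i], a, b, c)
    return q

# Hypothetical parameters: inverse demand P = a - b*Q, marginal cost c, n firms.
a, b, c, n = 100.0, 1.0, 10.0, 3
q = cournot_equilibrium(n, a, b, c)
analytic = (a - c) / (b * (n + 1))   # closed-form symmetric Cournot-Nash quantity
```

The iteration reproduces the closed-form quantity (a - c)/(b(n + 1)) for every firm; comparing such market outcomes under different degrees of government control over adoption and output is the chapter's exercise.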
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter the market may be seen as a favorable policy choice by governments around the world, owing to its potential to enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant-industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter utilizes the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms’ production levels, technological innovations, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, followed by a scenario where just one government provides a financial subsidy for its domestic firm, and finally a situation where both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aiming to maximize social welfare by providing fixed-cost subsidies to their respective firms find themselves in a Chicken game scenario. Regarding technological innovation, subsidies lead to an increased technology adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate occurs when a firm does not receive a fixed-cost subsidy but its competitor does.
Furthermore, global welfare benefits the most when both exporting countries grant fixed-cost subsidies, and this welfare level is higher when only one country subsidizes than when no country provides subsidies.
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged by concurrent programming, which challenges software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, the debugging of concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
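The question of how many thread clusters k to compute can be illustrated with one common cluster validation index, the silhouette coefficient. The sketch below is a minimal, stdlib-only version on invented 1-D "per-thread runtime share" values (not data from the thesis's corpus, and only one of the four indices the study actually used); it shows how the index can favor k = 3 over k = 2 when the metric values form three natural groups:

```python
def mean_silhouette(values, labels):
    """Mean silhouette coefficient for a 1-D clustering under |.|-distance."""
    n = len(values)
    scores = []
    for i in range(n):
        same = [abs(values[i] - values[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        if not same:          # singleton cluster: silhouette conventionally 0
            scores.append(0.0)
            continue
        a = sum(same) / len(same)                       # mean intra-cluster distance
        b = min(                                        # nearest other cluster
            sum(abs(values[i] - values[j]) for j in range(n) if labels[j] == lab)
            / labels.count(lab)
            for lab in set(labels) if lab != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / n

# Hypothetical per-thread runtime shares from a profiled concurrent program.
metric = [0.02, 0.03, 0.04, 0.45, 0.47, 0.49, 0.90, 0.92]
k2 = [0, 0, 0, 0, 0, 0, 1, 1]   # candidate clustering with k = 2
k3 = [0, 0, 0, 1, 1, 1, 2, 2]   # candidate clustering with k = 3
s2 = mean_silhouette(metric, k2)
s3 = mean_silhouette(metric, k3)
```

Scanning such scores over candidate values of k is the standard way validation indices inform the choice of k; the study's empirical finding was that the optimal k rarely exceeds three for real-world concurrent Java programs.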
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.