The Kunstschutz (art protection) unit of the German Wehrmacht in occupied Greece (1941-1944) consisted of conscripted German archaeologists. They had initially been fellows or staff members of the Archaeological Institute of the German Reich (AIDR) under the conditions of National Socialism before returning in Wehrmacht uniform during the Second World War. One subject of investigation is their biographies in the context of the Athens department, whose director until 1936 was Georg Karo, and of the Institute's headquarters under Theodor Wiegand, who served as president from 1932 to 1936. The dissertation shows the mutual dependence between the foreign-policy legitimation of the Nazi regime through the Olympic Games and the Institute's most important success in science policy, the resumption of the Olympia excavation, which Wiegand and Karo had pursued since 1933 and achieved in 1936 through their political networks. These acts of accommodation to the Nazi regime shaped the Institute's own young archaeologists as well as Greek society.
Protective measures were only a small part of the Kunstschutz officers' activities, but an important element of Wehrmacht propaganda. The Institute's president Martin Schede (1937 to 1945) requested staff above all for two AIDR projects: aerial photography of as much of Greece as possible and excavations on Crete. These interim findings alone justify the title "Kunstschutz as an Alibi" ("Kunstschutz als Alibi").
The dissertation attempts to answer the question of why archaeological Kunstschutz could not be more than an alibi, above all by taking into account the political as well as the military lines of tradition of German archaeology in Greece and Germany.
Case-Based Reasoning (CBR) is a symbolic Artificial Intelligence (AI) approach that has been successfully applied across various domains, including medical diagnosis, product configuration, and customer support, to solve problems based on experiential knowledge and analogy. A key aspect of CBR is its problem-solving procedure, where new solutions are created by referencing similar experiences, which makes CBR explainable and effective even with small amounts of data. However, one of the most significant challenges in CBR lies in defining and computing meaningful similarities between new and past problems, which heavily relies on domain-specific knowledge. This knowledge, typically only available through human experts, must be manually acquired, leading to what is commonly known as the knowledge-acquisition bottleneck.
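To illustrate what such knowledge-intensive similarity measures look like, CBR systems commonly amalgamate attribute-level similarities by a weighted sum (the so-called local-global principle); this is only the simplest attribute-value instance of the idea and not the graph-based measures developed later in this thesis:

$$\mathrm{sim}(q, c) = \sum_{i=1}^{n} w_i \cdot \mathrm{sim}_i(q_i, c_i), \qquad \sum_{i=1}^{n} w_i = 1,$$

where the local measures $\mathrm{sim}_i$ and the weights $w_i$ encode exactly the domain knowledge that otherwise has to be acquired from experts.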
One way to mitigate the knowledge-acquisition bottleneck is through a hybrid approach that combines the symbolic reasoning strengths of CBR with the learning capabilities of Deep Learning (DL), a sub-symbolic AI method. DL, which utilizes deep neural networks, has gained immense popularity due to its ability to automatically learn from raw data to solve complex AI problems such as object detection, question answering, and machine translation. While DL minimizes manual knowledge acquisition by automatically training models from data, it comes with its own limitations, such as requiring large datasets, and being difficult to explain, often functioning as a "black box". By bringing together the symbolic nature of CBR and the data-driven learning abilities of DL, a neuro-symbolic, hybrid AI approach can potentially overcome the limitations of both methods, resulting in systems that are both explainable and capable of learning from data.
The focus of this thesis is on integrating DL into the core task of similarity assessment within CBR, specifically in the domain of process management. Processes are fundamental to numerous industries and sectors, with process management techniques, particularly Business Process Management (BPM), being widely applied to optimize organizational workflows. Process-Oriented Case-Based Reasoning (POCBR) extends traditional CBR to handle procedural data, enabling applications such as adaptive manufacturing, where past processes are analyzed to find alternative solutions when problems arise. However, applying CBR to process management introduces additional complexity, as procedural cases are typically represented as semantically annotated graphs, increasing the knowledge-acquisition effort for both case modeling and similarity assessment.
The key contributions of this thesis are as follows: It presents a method for preparing procedural cases, represented as semantic graphs, to be used as input for neural networks. Handling such complex, structured data represents a significant challenge, particularly given the scarcity of available process data in most organizations. To overcome the issue of data scarcity, the thesis proposes data augmentation techniques to artificially expand the process datasets, enabling more effective training of DL models. Moreover, it explores several deep learning architectures and training setups for learning similarity measures between procedural cases in POCBR applications. This includes the use of experience-based Hyperparameter Optimization (HPO) methods to fine-tune the deep learning models.
Additionally, the thesis addresses the computational challenges posed by graph-based similarity assessments in CBR. The traditional method of determining similarity through subgraph isomorphism checks, which compare nodes and edges across graphs, is computationally expensive. To alleviate this issue, the hybrid approach seeks to use DL models to approximate these similarity calculations more efficiently, thus reducing the computational complexity involved in graph matching.
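To make the approximation idea concrete, the following minimal sketch (an assumption of this summary, not the architecture used in the thesis) trains a small siamese-style network to predict case similarities from precomputed feature vectors; in the thesis, the input preparation and encoders operate on semantically annotated graphs instead of the stand-in features used here.

```python
import torch
import torch.nn as nn

class GraphSimilarityApproximator(nn.Module):
    """Siamese-style regressor: embed two (already vectorized) procedural
    cases and predict their similarity in [0, 1], replacing an expensive
    subgraph-isomorphism-based computation at retrieval time."""
    def __init__(self, in_dim: int, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, g1: torch.Tensor, g2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.encoder(g1), self.encoder(g2)
        # cosine similarity rescaled from [-1, 1] to [0, 1]
        return 0.5 * (torch.cosine_similarity(z1, z2, dim=-1) + 1.0)

# Train against ground-truth similarities produced by the exact (slow) measure.
model = GraphSimilarityApproximator(in_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
g1, g2 = torch.randn(8, 32), torch.randn(8, 32)   # stand-ins for graph features
target = torch.rand(8)                            # exact similarities (labels)
loss = nn.functional.mse_loss(model(g1, g2), target)
loss.backward()
optimizer.step()
```

Once trained, such a regressor can rank the case base quickly, with the exact measure applied only to the top candidates if needed.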
The experimental evaluations of the corresponding contributions provide consistent results that indicate the benefits of using DL-based similarity measures and case retrieval methods in POCBR applications. The comparison with existing methods, e.g., based on subgraph isomorphism, shows several advantages but also some disadvantages of the compared methods. In summary, the methods and contributions outlined in this work enable more efficient and robust applications of hybrid CBR and DL in process management applications.
There is a wide range of methodologies for policy evaluation and socio-economic impact assessment. A fundamental distinction can be made between micro and macro approaches. In contrast to micro models, which focus on the micro-unit, macro models are used to analyze aggregate variables. The ability of microsimulation models to capture interactions occurring at the micro-level makes them particularly suitable for modeling complex real-world phenomena. The inclusion of a behavioral component into microsimulation models provides a framework for assessing the behavioral effects of policy changes.
The labor market is a primary area of interest for both economists and policy makers. The projection of labor-related variables is particularly important for assessing economic and social development needs, as it provides insight into the potential trajectory of these variables and can be used to design effective policy responses. As a result, the analysis of labor market behavior is a primary area of application for behavioral microsimulation models. Behavioral microsimulation models allow for the study of second-round effects, including changes in hours worked and participation rates resulting from policy reforms. It is important to note, however, that most microsimulation models do not consider the demand side of the labor market.
The combination of micro and macro models offers a possible solution, as it constitutes a promising way to integrate the strengths of both model types. Of particular relevance is the combination of microsimulation models with general equilibrium models, especially computable general equilibrium (CGE) models. CGE models are classified as structural macroeconomic models, which are defined by their basis in economic theory. Another important category of macroeconomic models are time series models. This thesis examines the potential for linking micro and macro models. The different types of microsimulation models are presented, with special emphasis on discrete-time dynamic microsimulation models. The concept of behavioral microsimulation is introduced to demonstrate the integration of a behavioral element into microsimulation models. To this end, the concept of utility is introduced and the random utility approach is described in detail. In addition, a brief overview of macro models is given with a focus on general equilibrium models and time series models. Various approaches for linking micro and macro models, which can be categorized as either sequential or integrated approaches, are presented. Furthermore, the concept of link variables, which play a central role in combining both models, is introduced. The focus is on the most complex sequential approach, i.e., the bi-directional linking of behavioral microsimulation models with general equilibrium macro models.
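For reference, a standard formulation of the random utility approach mentioned above (the concrete specification used in the thesis may differ): individual $i$ chooses the alternative $j$ that maximizes

$$U_{ij} = V_{ij} + \varepsilon_{ij},$$

and if the error terms $\varepsilon_{ij}$ are i.i.d. type-I extreme value, the choice probabilities take the familiar conditional logit form

$$P_{ij} = \Pr\big(U_{ij} \ge U_{ik} \ \forall k\big) = \frac{\exp(V_{ij})}{\sum_{k} \exp(V_{ik})}.$$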
The goal of this work is to compare operators that are defined on possibly varying Hilbert spaces. Distance concepts for such operators as well as convergence concepts are explained and examined. For the distance concepts we present three main notions. All have in common that they use space-linking operators that connect the spaces. At first, we look at unitary maps and compare the unitary orbits of the operators. Then, we consider isometric embeddings, an approach based on a concept of Joachim Weidmann. Finally, we look at contractions, but with additional norm conditions entering the comparison. The latter idea is based on a concept of Olaf Post called quasi-unitary equivalence. Our main result is that the unitary and isometric distances are equal provided the operators are both self-adjoint and have 0 in their essential spectra. In the third chapter, we focus specifically on the investigation of these distance notions for compact operators or operators in p-Schatten classes. In this case, the interpretation of the spectra as null sequences allows further distance investigations. Chapter four deals mainly with convergence notions for operators on varying Hilbert spaces. The analyses in this work deal exclusively with concepts of norm resolvent convergence. The main conclusion of the chapter is that the generalisation of norm resolvent convergence due to Joachim Weidmann and the generalisation of Olaf Post, called quasi-unitary equivalence, are equivalent to each other. In addition, we specify error bounds and deal with the convergence speed of both concepts. Two important implications of these convergence notions are that the approximation is spectrally exact, i.e., the spectra converge suitably, and that the convergence carries over to the functional calculus of the bounded functions vanishing at infinity.
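One natural way to formalize the comparison of unitary orbits across two Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$ is the following distance, stated here for bounded operators as an illustration only (the precise definitions in the thesis, in particular for unbounded operators, may differ, e.g., by passing to resolvents):

$$d_{\mathrm{uni}}(A_1, A_2) = \inf\big\{\, \|U A_1 U^{*} - A_2\| \;:\; U \colon \mathcal{H}_1 \to \mathcal{H}_2 \ \text{unitary} \,\big\},$$

with the isometric and quasi-unitary variants obtained by relaxing the requirements on the space-linking operator $U$.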
In this dissertation, I analyze how large players in financial markets exert influence on smaller players and how this affects the decisions of the large ones. I focus on how the large players process information in an uncertain environment, form expectations and communicate these to smaller players through their actions. I examine these relationships empirically in the foreign exchange market and in the context of a game-theoretic model of an investment project.
In Chapter 2, I investigate the relationship between the foreign exchange trading activity of large US-based market participants and the volatility of the nominal spot exchange rate. Using a novel dataset, I utilize the weekly growth rate of aggregate foreign currency positions of major market participants to proxy trading activity in the foreign exchange market. By estimating the heterogeneous autoregressive model of realized volatility (HAR-RV), I find evidence of a positive relationship between trading activity and volatility, which is mainly driven by unexpected changes in trading activity and is asymmetric for some of the currencies considered. My results contribute to the understanding of the drivers of exchange rate volatility and the role of large players in the flow of information in financial markets.
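For orientation, the baseline HAR-RV specification regresses realized volatility on its daily, weekly, and monthly components; the chapter augments a model of this type with (expected and unexpected) trading-activity growth as additional regressors:

$$RV_t = \beta_0 + \beta_d\, RV_{t-1}^{(d)} + \beta_w\, RV_{t-1}^{(w)} + \beta_m\, RV_{t-1}^{(m)} + \varepsilon_t,$$

where $RV_{t-1}^{(w)}$ and $RV_{t-1}^{(m)}$ denote averages of daily realized volatility over the past week and month, respectively.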
In Chapters 3 and 4, I consider a sequential global game of an investment project to examine how a large creditor influences the decisions of small creditors with her lending decision. I pay particular attention to the timing of the large player’s decision, i.e. whether she makes her decision to roll over a credit before or after the small players. I show that she faces a trade-off between signaling to and learning from small creditors. By being a focal point for coordination, her actions have a substantial impact on the probability of coordination failure and the failure of the investment project. I investigate the sensitivity of the equilibrium by comparing settings with perfect and imperfect learning. The results highlight the importance of signaling and provide a new perspective on the idea of catalytic finance and the influence of a lender-of-last-resort in self-fulfilling debt crises.
Globalization significantly transforms labor markets. Advances in production technologies, transportation, and political integration reshape how and where goods and services are produced. Local economic conditions and diverse policy responses create varying speeds of change, affecting regions' attractiveness for living and working -- and promoting mobility.
Competition for talent necessitates a deep understanding of why individuals choose specific destinations, how to ensure their effective labor market integration, and what workplace factors affect workers' well-being.
This thesis focuses on two crucial aspects of labor market change -- migration and workplace technological change. It contributes to our understanding of the determinants of labor mobility, the factors facilitating migrant integration, and the role of workplace automation for worker well-being.
Chapter 2 investigates the relationship between minimum wages (MWs) and regional worker mobility in the EU. EU citizens are free to work anywhere in the common market, which allows them to take advantage of the significant variation in MWs across the EU. Although MWs are set at the national level, their local relevance varies substantially -- depending on factors such as the share of affected workers or the extent to which they shift local compensation levels. These variations may attract workers from elsewhere, whether from within a country or from abroad.
Analyzing regional variations in the Kaitz index, a measure of local MW impact, reveals that higher MWs can significantly increase inflows of low-skilled EU workers, particularly in central Europe.
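The Kaitz index referred to above is commonly computed as the ratio of the minimum wage to a measure of the local wage level (the exact regional wage measure used in the chapter is an assumption here):

$$\mathrm{Kaitz}_{r} = \frac{MW}{\tilde{w}_{r}},$$

where $MW$ is the national minimum wage and $\tilde{w}_{r}$ denotes the median (or mean) wage in region $r$; higher values indicate a stronger local bite of the minimum wage.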
Chapter 3 examines the inequality in returns to skills experienced by immigrants, focusing on the role of linguistic proximity between migrants' origin and destination countries. Harmonized individual-level data from nine linguistically diverse migrant-hosting economies allows for an analysis of the wage gaps faced by immigrants from various origins, implicitly indicating how well they and their skills are integrated into the local labor markets. The analysis reveals that greater linguistic distance is associated with a higher wage penalty for highly skilled immigrants and a lower position in the wage distribution for those without tertiary education.
Chapter 4 investigates an institutional factor potentially relevant for the integration of immigrants -- the labor market impact of Confucius Institutes (CIs), Chinese government-sponsored institutions that promote Chinese language and culture abroad. CIs have been found to foster trade and cultural exchange, indicating their potential relevance in shaping natives' attitudes towards and trust in China and Chinese individuals. Examining the relationship between local CI presence and the wages of Chinese immigrants in local labor markets of the United States, the analysis reveals that CIs are associated with significantly reduced wages for Chinese immigrants residing nearby. An event study demonstrates that the mere announcement of a new CI negatively impacts local wages for Chinese immigrants, independent of the CI's actual opening.
Chapter 5 explores how working in automatable jobs affects life satisfaction in Germany. Following earlier literature, we classify occupations by their potential for automation and define the top third of occupations in this metric as automatable jobs. We find that workers in highly automatable jobs report lower life satisfaction. Moreover, we detect a non-linearity: workers in moderately automatable jobs (the second third of the distribution) show a positive association with life satisfaction. Overall, the negative relationship between automation and life satisfaction is most pronounced among younger and blue-collar workers, irrespective of the non-linearity.
Ensuring fairness in machine learning models is crucial for ethical and unbiased automated decision-making. Classifications from fair machine learning models should not discriminate against sensitive variables such as sexual orientation and ethnicity. However, achieving fairness is complicated by biases inherent in the training data, particularly when the data are collected through group sampling, such as stratified or cluster sampling, as often occurs in social surveys. Unlike the standard assumption of independent observations in machine learning, clustered data introduces correlations that can amplify biases, especially when cluster assignment is linked to the target variable.
To address these challenges, this cumulative thesis focuses on developing methods to mitigate unfairness in machine learning models. We propose a fair mixed effects support vector machine algorithm, a Cluster-Regularized Logistic Regression, and a fair Generalized Linear Mixed Model based on boosting, all of which are capable of handling grouped data and fairness constraints simultaneously. Additionally, we introduce a Julia package, FairML.jl, which provides a comprehensive framework for addressing fairness issues. This package offers a preprocessing technique based on resampling methods to mitigate biases in the data, as well as a post-processing method that seeks an optimal cut-off selection. To improve fairness in classifications, both procedures can be combined with any classification method available in the MLJ.jl package. Furthermore, FairML.jl incorporates in-processing approaches, such as optimization-based techniques for logistic regression and support vector machines, to directly address fairness during model training in regular and mixed models.
By accounting for these data complexities and implementing various fairness-enhancing strategies, our work aims to contribute to the development of more equitable and reliable machine learning models.
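A minimal, illustrative sketch of the post-processing idea (group-specific cut-off selection aiming at equal positive prediction rates); it is not the FairML.jl implementation, and all function names are hypothetical.

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Choose one cut-off per sensitive group so that every group ends up
    with (roughly) the same positive prediction rate (demographic parity)."""
    return {g: np.quantile(scores[groups == g], 1.0 - target_rate)
            for g in np.unique(groups)}

def predict_with_group_cutoffs(scores, groups, thresholds):
    """Apply the group-specific cut-offs to the classifier scores."""
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])

# Toy usage with synthetic scores and a binary sensitive attribute.
rng = np.random.default_rng(0)
scores = rng.uniform(size=200)          # e.g., predicted probabilities
groups = rng.integers(0, 2, size=200)   # sensitive group membership
cutoffs = group_thresholds(scores, groups, target_rate=0.3)
y_hat = predict_with_group_cutoffs(scores, groups, cutoffs)
print({g: y_hat[groups == g].mean() for g in (0, 1)})  # similar rates per group
```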
This dissertation addresses the measurement and evaluation of the energy and resource efficiency of software systems. Studies show that the environmental impact of Information and Communications Technologies (ICT) is steadily increasing and is already estimated to be responsible for 3 % of total greenhouse gas (GHG) emissions. Although it is the hardware that consumes natural resources and energy through its production, use, and disposal, software controls the hardware and therefore has a considerable influence on the capacities used. Accordingly, it should also be attributed a share of the environmental impact. To address this software-induced impact, the focus is on the continued development of a measurement and assessment model for energy- and resource-efficient software. Furthermore, measurement and assessment methods from international research and practitioner communities were compared in order to develop a generic reference model for software resource and energy measurements. The next step was to derive a methodology and to define and operationalize criteria for evaluating and improving the environmental impact of software products. In addition, a key objective is to transfer the developed methodology and models to software systems that cause high consumption or offer optimization potential through economies of scale. These include, e.g., Cyber-Physical Systems (CPS) and mobile apps, as well as applications with high demands on computing power or data volumes, such as distributed systems and especially Artificial Intelligence (AI) systems.
In particular, factors influencing the consumption of software along its life cycle are considered. These factors include the location (cloud, edge, embedded) where the computing and storage services are provided, the role of the stakeholders, application scenarios, the configuration of the systems, the used data, its representation and transmission, and the design of the software architecture. Based on existing literature and previous experiments, distinct use cases were selected that address these factors. Comparative use cases include the implementation of a scenario in different programming languages, using varying algorithms, libraries, data structures, protocols, model topologies, hardware and software setups, etc. From this selection, experimental scenarios were devised for the use cases to compare the methods to be analyzed. During their execution, the energy and resource consumption was measured, and the results were assessed. Subtracting baseline measurements of the hardware setup without the software running from the scenario measurements makes the software-induced consumption measurable and thus transparent. Comparing the scenario measurements with each other allows the identification of the more energy-efficient setup for the use case and, in turn, the improvement and optimization of the system as a whole. The calculated metrics were then also structured as indicators in a criteria catalog. These indicators represent empirically determinable variables that provide information about a matter that cannot be measured directly, such as the environmental impact of the software. Together with verification criteria that must be complied with and confirmed by the producers of the software, this creates a model with which the comparability of software systems can be established.
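A minimal sketch of the baseline-subtraction step described above; the sampling setup, names, and numbers are purely illustrative and not taken from the thesis.

```python
# Integrate sampled power draw over time for the idle hardware setup and for the
# running scenario, then take the difference to estimate the software-induced energy.

def energy_joules(power_samples_w, interval_s):
    """Approximate energy as the sum of power samples (W) times the sampling interval (s)."""
    return sum(power_samples_w) * interval_s

def software_induced_energy(scenario_samples_w, baseline_samples_w, interval_s):
    """Scenario energy minus baseline energy of the idle setup."""
    return (energy_joules(scenario_samples_w, interval_s)
            - energy_joules(baseline_samples_w, interval_s))

baseline = [28.0, 28.3, 27.9, 28.1]   # W, hardware idle
scenario = [35.2, 36.1, 37.0, 36.4]   # W, while the software runs
print(software_induced_energy(scenario, baseline, interval_s=1.0), "J")
```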
The knowledge gained from the experiments and assessments can then be used to forecast and optimize the energy and resource efficiency of software products. This enables developers, but also students, scientists, and all other stakeholders involved in the life cycle of software, to continuously monitor and optimize the impact of their software on energy and resource consumption. The developed models, methods, and criteria were evaluated and validated by the scientific community at conferences and workshops. The central outcomes of this thesis, including a measurement reference model and the criteria catalog, were disseminated in academic journals. Furthermore, the transfer to society has been driven forward, e.g., through the publication of two book chapters, the development and presentation of exemplary best practices at developer conferences, collaboration with industry, and the establishment of the eco-label “Blue Angel” for resource- and energy-efficient software products. In the long term, the objective is to effect a change in societal attitudes and ultimately to achieve significant resource savings through economies of scale by applying the methods in the development of software in general and AI systems in particular.
The gender wage gap in labor market outcomes has been intensively investigated for decades, yet it remains a relevant and innovative research topic in labor economics. Chapter 2 of this dissertation explores the pressing issue of gender wage disparity in Ethiopia. By applying various empirical methodologies and measures of occupational segregation, this chapter aims to analyze the role of female occupational segregation in explaining the gender wage gap across the pay distribution. The findings reveal a significant difference in monthly wages, with women consistently earning lower wages across the wage distribution.
Importantly, the result indicates a negative association between female occupational segregation and the average earnings of both men and women. Furthermore, the estimation result shows that female occupational segregation partially explains the gender wage gap at the bottom of the wage distribution. I find that the magnitude of the gender wage gap in the private sector is higher than in the public sector.
In Chapter 3, the Ethiopian Demography and Health Survey data are leveraged to explore the causal relationship between female labor force participation and domestic violence. Domestic violence against women is a pervasive public health concern, particularly in Africa, including Ethiopia, where a significant proportion of women endure various forms of domestic violence perpetrated by intimate partners. Economic empowerment of women through increased participation in the labor market can be one of the mechanisms for mitigating the risk of domestic violence.
This study seeks to provide empirical evidence supporting this hypothesis. Using the employment rate of women at the community level as an instrumental variable, the finding suggests that employment significantly reduces the risk of domestic violence against women. More precisely, the result shows that women’s employment status significantly reduces domestic violence by about 15 percentage points. This finding is robust for different dimensions of domestic violence, such as physical, sexual, and emotional violence.
By examining the employment outcomes of immigrants in the labor market, Chapter 4 extends the dissertation's inquiry to the dynamics of immigrant economic integration into the destination country. Drawing on data from the German Socio-Economic Panel, the chapter scrutinizes the employment gap between native-born individuals and two distinct groups of first-generation immigrants: refugees and other migrants. Through rigorous analysis, Chapter 4 aims to identify the factors contributing to disparities in employment outcomes among these groups. In this chapter, I aim to disentangle the heterogeneity characteristic of refugees and other immigrants in the labor market, thereby contributing to a deeper understanding of immigrant labor market integration in Germany.
The results show that refugees and other migrants are less likely to find employment than comparable natives. The refugee-native employment gap is much wider than the employment gap between other migrants and natives. Moreover, the findings vary by gender and migration category. While other migrant men do not differ from native men in the probability of being employed, refugee women are the most disadvantaged group compared to other migrant women and native women in the probability of being employed. The study suggests that German language proficiency and permanent residence permits partially explain the lower employment probability of refugees in the German labor market.
Chapter 5 (co-authored with Uwe Jirjahn) utilizes the same dataset to explore the immigrant-native trade union membership gap, focusing on the role of integration in the workplace and into society. The integration of immigrants into society and the workplace is vital not only to improve migrants' performance in the labor market but also to enable their active participation in institutions such as trade unions. In this study, we argue that the incomplete integration of immigrants into the workplace and society implies that immigrants are less likely to be union members than natives. Our findings show that first-generation immigrants are less likely to be trade union members than natives. Notably, the analysis shows that the immigrant-native gap in union membership depends on immigrants' integration into the workplace and society. The gap is smaller for immigrants working in firms with a works council and for those having social contacts with Germans. Moreover, the results reveal that the immigrant-native union membership gap decreases with the years since arrival in Germany.
Although universality has fascinated mathematicians over the last decades, there are still numerous open questions in this field that require further investigation. In this work, we will mainly focus on classes of functions whose Fourier series are universal in the sense that they allow us to uniformly approximate any continuous function defined on a suitable subset of the unit circle.
The structure of this thesis is as follows. In the first chapter, we will initially introduce the most important notation which is needed for our following discussion. Subsequently, after recalling the notion of universality in a general context, we will revisit significant results concerning universality of Taylor series. The focus here is particularly on universality with respect to uniform convergence and convergence in measure. By a result of Menshov, we will transition to universality of Fourier series which is the central object of study in this work.
In the second chapter, we recall spaces of holomorphic functions which are characterized by the growth of their coefficients. In this context, we will derive a relationship to functions on the unit circle via an application of the Fourier transform.
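As an illustration of such coefficient-growth characterizations, Dirichlet-type spaces $D_\alpha$ on the unit disk are frequently defined as follows (one common convention; the normalization used in the thesis may differ):

$$D_\alpha = \Big\{\, f(z) = \sum_{n=0}^{\infty} a_n z^n \ \text{holomorphic on } \mathbb{D} \;:\; \sum_{n=0}^{\infty} (n+1)^{\alpha}\, |a_n|^2 < \infty \,\Big\},$$

so that $\alpha = 0$ recovers the Hardy space $H^2$ and $\alpha = 1$ the classical Dirichlet space.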
In the second part of the chapter, our attention is devoted to the $\mathcal{D}_{\textup{harm}}^p$ spaces which can be viewed as the set of harmonic functions contained in the $W^{1,p}(\D)$ Sobolev spaces. In this context, we will also recall the Bergman projection. Thanks to the intensive study of the latter in relation to Sobolev spaces, we can derive a decomposition of $\mathcal{D}_{\textup{harm}}^p$ spaces which may be seen as analogous to the Riesz projection for $L^p$ spaces. Owing to this result, we are able to provide a link between $\mathcal{D}_{\textup{harm}}^p$ spaces and spaces of holomorphic functions on $\mathbb{C}_\infty \setminus \s$ which turns out to be a crucial step in determining the dual of $\mathcal{D}_{\textup{harm}}^p$ spaces.
The last section of this chapter deals with the Cauchy dual which has a close connection to the Fantappié transform. As an application, we will determine the Cauchy dual of the spaces $D_\alpha$ and $D_{\textup{harm}}^p$, two results that will prove to be very helpful later on. Finally, we will provide a useful criterion that establishes a connection between the density of a set in the direct sum $X \oplus Y$ and the Cauchy dual of the intersection of the respective spaces.
The subsequent chapter will delve into the theory of capacities and, consequently, potential theory, which will prove to be essential in formulating our universality results. In addition to introducing further necessary terminology, we will define capacities in the first section following [16], however in the framework of separable metric spaces, and revisit the most important results about them.
Simultaneously, we make preparations that allow us to define the $\mathrm{Li}_\alpha$-capacity, which will turn out to be equivalent to the classical Riesz $\alpha$-capacity. The $\mathrm{Li}_\alpha$-capacity proves to be better adapted to the $D_\alpha$ spaces. It becomes apparent in the course of our discussion that the $\mathrm{Li}_\alpha$-capacity is essential for proving uniqueness results for the class $D_\alpha$. This leads to the centerpiece of this chapter, the energy formula for the $\mathrm{Li}_\alpha$-capacity on the unit circle. More precisely, this identity establishes a connection between the energy of a measure and its corresponding Fourier coefficients. We will briefly deal with the complement-equivalence of capacities before we revisit the concept of Bessel and Riesz capacities, this time, however, in a much more general context, where we will mainly rely on [1]. Since we defined capacities on separable metric spaces in the first section, we can draw a connection between Bessel capacities and $\mathrm{Li}_\alpha$-capacities. To conclude this chapter, we take a closer look at the geometric meaning of capacities. Here, we will point out a connection between the Hausdorff dimension and the polarity of a set, and transfer it to the $\mathrm{Li}_\alpha$-capacity. Another aspect will be the comparison of Bessel capacities across different dimensions, in which the theory of Wolff potentials crystallizes as a crucial auxiliary tool.
In the fourth chapter of this thesis, we will turn our focus to the theory of sets of uniqueness, a subject within the broader field of harmonic analysis. This theory has a close relationship with sets of universality, a connection that will be further elucidated in the upcoming chapter.
The initial section of this chapter will be dedicated to the notion of sets of uniqueness that is specifically adapted to our current context. Building on this concept, we will recall some of the fundamental results of this theory.
In the subsequent section, we will primarily rely on techniques from previous chapters to determine the closed sets of uniqueness for the class $\mathcal{D}_{\alpha}$. The proofs we will discuss are largely influenced by [16, p.\ 178] and [9, pp.\ 82].
Once more, it will become evident that the introduction of the $\mathrm{Li}_\alpha$-capacity in the third chapter and the closely associated energy formula on the unit circle were the pivotal factors that enabled us to carry out these proofs.
In the final chapter of our discourse, we will present our results on universality. To begin, we will recall a version of the universality criterion which traces back to the work of Grosse-Erdmann (see [26]). Coupled with an outcome from the second chapter, we will prove a result that allows us to obtain the universality of a class using the technique of simultaneous approximation. This tool will play a key role in the proof of our universality results which will follow hereafter.
Our attention will first be directed toward the class $D_\alpha$ with $\alpha$ in the interval $(0,1]$. Here, we summarize that universality with respect to uniform convergence occurs on closed and $\alpha$-polar sets $E \subset \s$. Thanks to results of Carleson and further considerations, which particularly rely on the favorable behavior of the $\mathrm{Li}_\alpha$-kernel, we also find that this result is sharp. In particular, it may be seen as a generalization of the universality result for the harmonic Dirichlet space.
Following this, we will investigate the same class, however, this time for $\alpha \in [-1,0)$. In this case, it turns out that universality with respect to uniform convergence occurs on closed and $(-\alpha)$-complement-polar sets $E \subset \s$. In particular, these sets of universality can have positive arc measure. In the final section, we will focus on the class $D_{\textup{harm}}^p$. Here, we manage to prove that universality occurs on closed and $(1,p)$-polar sets $E \subset \s$. Through results of Twomey [68] combined with an observation by Girela and Pélaez [23], as well as the decomposition of $D_{\textup{harm}}^p$, we can deduce that the closed sets of universality with respect to uniform convergence of the class $D_{\textup{harm}}^p$ are characterized by $(1,p)$-polarity. We conclude our work with an application of the latter result to the class $D^p$. We will show that the closed sets of divergence for the class $D^p$ are given by the $(1,p)$-polar sets.
This dissertation describes the workflow of creating an Augmented Reality app for the project "ARmob" on Android devices. The app positions 3D objects, created with Structure-from-Motion (SfM) techniques and reconstructed according to the current state of research, at their original locations in the real world. The virtual objects are blended into the real world according to the observer's position and viewing angle, creating the impression that the objects are part of reality. The positional accuracy of the display depends on the GNSS satellite availability and the accuracy of the other sensors. The app is intended to serve as a basis and framework for further apps for researching spatial perception in the field of cartography.
Convex Duality in Consumption-Portfolio Choice Problems with Epstein-Zin Recursive Preferences
(2025)
This thesis deals with consumption-investment allocation problems with Epstein-Zin recursive utility, building upon the dualization procedure introduced by [Matoussi and Xing, 2018]. While their work exclusively focuses on truly recursive utility, we extend their procedure to include time-additive utility using results from general convex analysis. The dual problem is expressed in terms of a backward stochastic differential equation (BSDE), for which existence and uniqueness results are established. In this regard, we close a gap left open in previous works, by extending results restricted to specific subsets of parameters to cover all parameter constellations within our duality setting.
Using duality theory, we analyze the utility loss of an investor with recursive preferences, that is, the difference in utility between acting suboptimally in a given market and following her best possible (optimal) consumption-investment behaviour. In particular, we derive universal power utility bounds, presenting a novel and tractable approximation of the investor's optimal utility and of the welfare loss associated with specific investment-consumption choices. To address quantitative shortcomings of these power utility bounds, we additionally introduce one-sided variational bounds that offer a more effective approximation for recursive utilities. The theoretical value of our power utility bounds is demonstrated through their application in a new existence and uniqueness result for the BSDE characterizing the dual problem.
Moreover, we propose two approximation approaches for consumption-investment optimization problems with Epstein-Zin recursive preferences. The first approach directly formalizes the classical concept of least favorable completion, providing an analytic approximation fully characterized by a system of ordinary differential equations. In the special case of power utility, this approach can be interpreted as a variation of the well-known Campbell-Shiller approximation, improving some of its qualitative shortcomings with respect to the state dependence of the resulting approximate strategies. The second approach introduces a PDE-iteration scheme, reinterpreting artificial completion as a dynamic game in which the investor and a dual opponent interact until they reach an equilibrium that corresponds to an approximate solution of the investor's optimization problem. Despite the need for additional approximations within each iteration, this scheme is shown to be quantitatively and qualitatively accurate. Moreover, it is capable of approximating high-dimensional optimization problems, essentially avoiding the curse of dimensionality while still providing analytical results.
This dissertation examines the relevance of regimes for stock markets. In three research articles, we cover the identification and predictability of regimes and their relationships to macroeconomic and financial variables in the United States.
The initial two chapters contribute to the debate on the predictability of stock markets. While various approaches can demonstrate in-sample predictability, their predictive power diminishes substantially in out-of-sample studies. Parameter instability and model uncertainty are the primary challenges. However, certain methods have demonstrated efficacy in addressing these issues. In Chapters 1 and 2, we present frameworks that combine these methods meaningfully. Chapter 3 focuses on the role of regimes in explaining macro-financial relationships and examines the state-dependent effects of macroeconomic expectations on cross-sectional stock returns. Although it is common to capture the variation in stock returns using factor models, their macroeconomic risk sources are unclear. According to macro-financial asset pricing, expectations about state variables may be viable candidates to explain these sources. We examine their usefulness in explaining factor premia and assess their suitability for pricing stock portfolios.
In summary, this dissertation improves our understanding of stock market regimes in three ways. First, we show that it is worthwhile to exploit the regime dependence of stock markets. Markov-switching models and their extensions are valuable tools for filtering stock market dynamics and identifying and predicting regimes in real time. Moreover, accounting for regime-dependent relationships helps to examine the dynamic impact of macroeconomic shocks on stock returns. Second, we emphasize the usefulness of macro-financial variables for the stock market. Regime identification and forecasting benefit from their inclusion. This is particularly true in periods of high uncertainty, when information processing in financial markets is less efficient. Finally, we recommend addressing parameter instability, estimation risk, and model uncertainty in empirical models. Because it is difficult to find a single approach that meets all of these challenges simultaneously, it is advisable to combine appropriate methods in a meaningful way. The framework should be as complex as necessary but as parsimonious as possible to mitigate additional estimation risk. This is especially recommended when working with financial market data, which typically have a low signal-to-noise ratio.
Mixed-Integer Optimization Techniques for Robust Bilevel Problems with Here-and-Now Followers
(2025)
In bilevel optimization, some of the variables of an optimization problem have to be an optimal solution to another nested optimization problem. This specific structure renders bilevel optimization a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. Due to their nested structure, however, bilevel problems are also inherently hard to solve—both in theory and in practice. Further challenges arise if, e.g., bilevel problems under uncertainty are considered.
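In its generic (deterministic) form, a linear bilevel problem can be written as follows; the Gamma-robust variants studied in this dissertation additionally protect against uncertainty in the lower-level objective:

$$\min_{x \in X,\, y} \ c^\top x + d^\top y \quad \text{s.t.} \quad y \in \arg\min_{y' \in Y} \big\{\, f^\top y' \;:\; A x + B y' \le b \,\big\},$$

where the leader chooses $x$ anticipating an optimal response $y$ of the follower.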
In this dissertation, we address different types of uncertainties in bilevel optimization using techniques from robust optimization. We study mixed-integer linear bilevel problems with lower-level objective uncertainty, which we tackle using the notion of Gamma-robustness. We present two exact branch-and-cut approaches to solve these Gamma-robust bilevel problems, along with cuts tailored to the important class of monotone interdiction problems. Given the overall hardness of the considered problems, we additionally propose heuristic approaches for mixed-integer, linear, and Gamma-robust bilevel problems. The latter rely on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required. We assess the performance of both the exact and the heuristic approaches through extensive computational studies.
In addition, we study the problem of determining optimal tolls in a traffic network in which the network users hedge against uncertain travel costs in a robust way. The overall toll-setting problem can be seen as a single-leader multi-follower problem with multiple robustified followers. We model this setting as a mathematical problem with equilibrium constraints, for which we present a mixed-integer, nonlinear, and nonconvex reformulation that can be tackled using state-of-the-art general-purpose solvers. We further illustrate the impact of considering robustified followers on the toll-setting policies through a case study.
Finally, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to single-level optimization. To this end, we study two aspects related to so-called decision uncertainty. First, we propose a strictly robust approach in which the follower hedges against erroneous observations of the leader's decision. Second, we consider an exemplary bilevel problem with a continuous but nonconvex lower level in which algorithmic necessities prevent the follower from making a globally optimal decision in an exact sense. The example illustrates that even very small deviations in the follower's decision may lead to arbitrarily large discrepancies between exact and computationally obtained bilevel solutions.
Partial differential equations are not always suited to model all physical phenomena, especially if long-range interactions are involved or if the actual solution might not satisfy the regularity requirements associated with the partial differential equation. One remedy to this problem is provided by nonlocal operators, which typically consist of integrals that incorporate interactions between two separated points in space; the corresponding solutions to nonlocal equations have to satisfy weaker regularity conditions.
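A typical example of such an operator is a nonlocal diffusion operator with a finite interaction horizon $\delta$ (one common form; the kernel and scaling conventions in the thesis may differ):

$$(-\mathcal{L}_\delta u)(x) = \int_{B_\delta(x)} \big(u(x) - u(y)\big)\, \gamma(x, y)\, \mathrm{d}y,$$

where $\gamma \ge 0$ is an interaction kernel; solutions of the associated nonlocal equations need not possess the (weak) derivatives required by a comparable PDE.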
In PDE-constrained shape optimization the goal is to minimize or maximize an objective functional that is dependent on the shape of a certain domain and on the solution to a partial differential equation, which is usually also influenced by the shape of this domain. Moreover, parameters associated with the nonlocal model are oftentimes domain dependent and thus it is a natural next step to now consider shape optimization problems that are governed by nonlocal equations.
Therefore, an interface identification problem constrained by nonlocal equations is thoroughly investigated in this thesis. Here, we focus on rigorously developing the first and second shape derivative of the associated reduced functional. In addition, we study first- and second-order shape optimization algorithms in multiple numerical experiments.
Moreover, we also propose Schwarz methods for nonlocal Dirichlet problems as well as regularized nonlocal Neumann problems. Particularly, we investigate the convergence of the multiplicative Schwarz approach and we conduct a number of numerical experiments, which illustrate various aspects of the Schwarz method applied to nonlocal equations.
Since applying the finite element method to solve nonlocal problems numerically can be quite costly, Local-to-Nonlocal couplings emerged, which combine the accuracy of nonlocal models on one part of the domain with the fast computation of partial differential equations on the remaining area. Therefore, we also examine the interface identification problem governed by an energy-based Local-to-Nonlocal coupling, which can be numerically computed by making use of the Schwarz method. Here, we again present a formula for the shape derivative of the associated reduced functional and investigate a gradient based shape optimization method.
Based on data collected from two surveys conducted in Germany and Taiwan, my first paper (Chapter 2) examines the impact of culture through language priming (Chinese vs. German or English) on individuals' price fairness perception and attitudes towards government intervention and economic policy involving inequality. We document large cross-language differences: in both surveys, subjects who were asked and answered in Chinese perceived prices set by a free market mechanism as significantly fairer than their counterparts who completed the survey in German or English. They were also more inclined to accept a Pareto improvement policy which increases social and economic inequality. In the second survey, the Chinese language also induced a lower readiness to accept government intervention in markets with price limits compared to the English language. Since language functions as a cultural mindset prime, our findings imply that culture plays an important role in fairness perception and preferences regarding social and economic inequality.
Chapter 3 of this work deals with patriotism priming. In two online experimental studies conducted in Germany and China, we tested three different priming methods for constructive and blind patriotism, respectively. Subjects were randomly assigned to one of three treatments motivated by previous studies in different countries: a constructive patriotism priming treatment, a blind patriotism priming treatment, and a non-priming baseline. While the first experiment had a between-subject design, the second one enabled both a between-subject and a within-subject comparison, since the level of patriotism of individuals was measured before and after priming. The design of the second survey also enabled a comparison among the three priming methods for constructive and blind patriotism. The results showed that the tested methods, especially national achievements as a priming mechanism, functioned well overall for constructive patriotism.
Surprisingly, the priming for blind patriotism did not work in either Germany or China and the opposite results were observed. Discussion and implications for future studies are provided at the end of the chapter.
Using data from the same studies as in Chapter 3, Chapter 4 examines the impact of patriotism on individuals’ fairness perception and preferences regarding inequality and on their attitudes toward economic policy involving inequality. Across surveys and countries, a positive and significant effect of blind patriotism on economic individualism was found. For China, we also found a significant relationship between blind patriotism and the agreement to unequal economic policy. In contrast to blind patriotism, we did not find an association of constructive patriotism to economic individualism and to attitudes toward economic policy involving inequality. Political and economic implications based on the results are discussed.
The last chapter (Chapter 5) studies the self-serving bias (when an individual's perception of fairness is biased by self-interest) in the context of price setting and profit distribution. By analyzing data from four surveys conducted in six countries, we found that the stated appropriate product price and the fair allocation of profit were significantly higher when the outcome was favorable to oneself. This self-serving bias in price fairness perception, however, differed significantly across countries and was significantly higher in Germany, Taiwan, and China than in Vietnam, Estonia, and Japan.
Although economic individualism and masculinity were found to have a significant negative effect on self-interest bias in price fairness judgment, they did not sufficiently explain the differences in self-interest bias between countries. Furthermore, we also observed an increase of self-interest bias in profit allocation over time in time-series data for one country (Germany) with data from 2011 to 2023.
The four papers are all co-authored with Prof. Marc Oliver Rieger, and the first paper has been accepted for publication in the Review of Behavioral Economics.
Veterinary antibiotics are used worldwide on a large scale to treat animal diseases. Because the substances are poorly absorbed in the animals' intestines, most of them reach agricultural land unchanged via excretions. There they can be taken up by non-target organisms, such as vascular plants, and threaten their early development. In this context, research has so far focused primarily on crop plants, whereas wild plant species of ecologically important cultivated grassland, which come into contact with antibiotic substances mainly through manure application, have received far less attention. This thesis therefore examined the influence of realistic concentrations (0.1-20 mg/L) of two frequently used veterinary antibiotics, tetracycline and sulfamethazine, on the germination and early growth of typical species of temperate cultivated grassland. Since several stressors often act on an organism simultaneously in nature, two multi-stress scenarios were also investigated, namely mixtures of pharmaceuticals and the interplay of a pharmaceutical agent with abiotic conditions (drought stress). In four thematic blocks, both standardized laboratory experiments and more natural pot and field experiments were carried out.
The results showed that both germination and early growth were impaired by both substances, but more frequently by tetracycline. While the direction of the effects on germination was inconsistent, root length was strongly reduced in an antibiotic- and concentration-dependent manner, above all by tetracycline, in the Petri dish experiments (at 20 mg/L by up to 96 % for Dactylis glomerata). Above-ground growth (leaf length, plant height, biomass) was affected less, and often in a growth-promoting direction. Hormesis effects appeared repeatedly throughout the work, i.e., low concentrations had a stimulating effect while higher concentrations were toxic. Contrary to expectations, the combinations of different factors considered did not consistently lead to stronger or exclusive effects. Such patterns appeared in individual cases, but losses of single effects in the combinations were also observed, as well as single effects that reappeared there.
Significant, if inconsistent, effects on the early developmental stages of typical wild plant species, which are already in decline due to other factors, were observed. Particularly in view of the repeated application of manure and the potential accumulation of these highly persistent substances, veterinary antibiotics represent a further important factor endangering biodiversity and species composition, which is why an environmentally conscious handling of them is advised.
This thesis contains three parts that are all connected by their contribution to research about the effects of trading apps on investment behavior. The primary motivation for this study is to investigate the previously undetermined consequences and effects of trading apps, which are a new phenomenon in the broker market, on the investment and risk behavior of Neobroker users.
Chapter 2 addresses the characteristics of typical and former Neobroker users and the impact of trading apps on the investment and risk behavior of their users. The results show that Neobroker users are significantly more risk tolerant than the general German population and are influenced by trading apps with regard to their investment and risk behavior. Low trading fees and the low minimum investment amount are the main reasons for the use of trading apps. Investors who stop using trading apps mostly stop investing altogether. Another worrying result is that financial literacy is surprisingly low across all groups considered in this chapter and that most Neobroker users have wrong conceptions about how trading apps earn money.
The third chapter investigates the effects of trading apps on investment behavior over time and compares the investment and risk behavior of Neobroker users and general investors. Using representative data on German Neobroker users, who were surveyed repeatedly over an 8-month interval, it becomes possible to determine causal effects of the use of trading apps over time. Overall, the financial literacy of Neobroker users increases with longer use of a trading app. A worrying result is that the risk tolerance of Neobroker users rises significantly over time. Male Neobroker users earn a higher annual return (non-risk-adjusted) than female Neobroker users. In comparison to general investors, Neobroker users are significantly younger, more risk tolerant, more likely to buy derivatives, and earn a higher annual return (non-risk-adjusted).
The fourth chapter analyses the impact of personality traits on the investment and risk behavior of Neobroker users. The results show that the BIG-5 personality traits have an impact on the investment behavior of Neobroker users. Two personality traits, openness and conscientiousness, stand out the most, as these two have explanatory power for various aspects of Neobroker users' behavior: in particular, whether they buy different financial products than planned, the time they spend informing themselves about financial markets, the variety of financial products owned, and the reasons for using a Neobroker. Surprisingly, the risk tolerance of Neobroker users and the reasons to invest are not connected to any personality dimension. Whether a participant uses a trading app or a traditional broker to invest is, in each case, influenced by different personality traits.
The main objective of the present work is to develop options for optimizing the management of the Riveristalsperre (Riveris reservoir). To this end, all relevant influencing factors and hazard potentials of the system comprising the catchment area and the reservoir are first analyzed and evaluated. Finally, the design of an integrated management plan for the Riveristalsperre, based on a new pilot plant at the SWT waterworks in Trier-Irsch, is presented, discussed, and tested for functionality.
With forest covering about 90 % of the catchment area, the main reservoir of the Riveristalsperre is on average clearly classified as oligotrophic, and its raw water is of excellent quality with only a few, manageable hazard potentials.
Taking the piloting results into account, the in/out PES ultrafiltration (UF) membrane proved more suitable than the out/in PVDF membrane. Arranging the UF unit on the raw-water side after flocculation for the separation of particulate matter, followed by water hardening, pH adjustment, and manganese removal in a CaCO3 filter stage and a final disinfection by UV irradiation, turned out to be ideal for treating the raw water of the Riveristalsperre.
The results of the pilot plant have been implemented in a full-scale drinking water treatment plant at the waterworks in Trier-Irsch, which has been officially in operation since 2013.
Finally, measures against possible water shortfalls, e.g., during prolonged dry-weather periods (climate change!), and for generally increasing the security of supply are discussed; in Trier and the surrounding region, heavy investments have long been made in interconnected supply network systems.
In machine learning, classification is the task of predicting a label for each point within a data set. When the class of each point in the labeled subset is already known, this information is used to recognize patterns and make predictions about the points in the remainder of the set, referred to as the unlabeled set. This scenario falls in the field of supervised learning.
However, the number of labeled points can be restricted, because, e.g., it is expensive to obtain this information. Besides, this subset may be biased, such as in the case of self-selection in a survey. Consequently, the classification performance for unlabeled points may be limited. To improve the reliability of the results, semi-supervised learning tackles the setting of labeled and unlabeled data. Moreover, in many cases, additional information about the size of each class can be available from undisclosed sources.
This cumulative thesis presents different studies to combine this external cardinality constraint information within three important algorithms for binary classification in the supervised context: support vector machines (SVM), classification trees, and random forests. From a mathematical point of view, we focus on mixed-integer programming (MIP) models for semi-supervised approaches that consider a cardinality constraint for each class for each algorithm.
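As a schematic illustration of such a model (a sketch only; the formulations in the thesis differ in detail), a cardinality-constrained semi-supervised SVM can be written with binary class-assignment variables $z_j$ for the unlabeled points $j \in U$ and a prescribed size $k$ of the positive class:

$$
\begin{aligned}
\min_{w, b, \xi, z} \quad & \tfrac{1}{2}\|w\|_2^2 + C \sum_{i \in L} \xi_i + C' \sum_{j \in U} \xi_j \\
\text{s.t.} \quad & y_i \big(w^\top x_i + b\big) \ge 1 - \xi_i, && i \in L, \\
& (2 z_j - 1)\big(w^\top x_j + b\big) \ge 1 - \xi_j, && j \in U, \\
& \textstyle\sum_{j \in U} z_j = k, \qquad z_j \in \{0, 1\}, \qquad \xi \ge 0,
\end{aligned}
$$

where the bilinear products $(2 z_j - 1)(w^\top x_j + b)$ are typically linearized with big-M constraints to obtain a mixed-integer program.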
Furthermore, since the proposed MIP models are computationally challenging, we also present techniques that simplify the process of solving these problems. In the SVM setting, we introduce a re-clustering method and further computational techniques to reduce the computational cost. In the context of classification trees, we provide correct values for certain bounds that play a crucial role in solver performance. For the random forest model, we develop preprocessing techniques and an intuitive branching rule to reduce the solution time. For all three methods, our numerical results show that our approaches achieve better statistical performance for biased samples than the standard approaches.