Convex Duality in Consumption-Portfolio Choice Problems with Epstein-Zin Recursive Preferences
(2025)
This thesis deals with consumption-investment allocation problems with Epstein-Zin recursive utility, building upon the dualization procedure introduced by [Matoussi and Xing, 2018]. While their work focuses exclusively on truly recursive utility, we extend their procedure to include time-additive utility using results from general convex analysis. The dual problem is expressed in terms of a backward stochastic differential equation (BSDE), for which existence and uniqueness results are established. In this regard, we close a gap left open in previous works by extending results that were restricted to specific subsets of parameters to cover all parameter constellations within our duality setting.
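For orientation, one common normalized (Duffie-Epstein) parametrization of continuous-time Epstein-Zin utility is recalled below; the notation ($\delta$ for time preference, $\gamma$ for relative risk aversion, $\psi$ for the elasticity of intertemporal substitution) and the exact normalization are assumptions made here for illustration and may differ from the thesis:
\[
V_t = \mathbb{E}_t\!\left[\int_t^T f(c_s, V_s)\,ds\right],
\qquad
f(c,v) = \frac{\delta\,(1-\gamma)\,v}{1-\tfrac{1}{\psi}}
\left[\left(\frac{c}{\bigl((1-\gamma)v\bigr)^{\frac{1}{1-\gamma}}}\right)^{1-\frac{1}{\psi}} - 1\right],
\]
and the time-additive power utility case is recovered when $\gamma = 1/\psi$.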
Using duality theory, we analyze the utility loss of an investor with recursive preferences, that is, the difference in utility between acting suboptimally in a given market and following her best possible (optimal) consumption-investment behaviour. In particular, we derive universal power utility bounds, presenting a novel and tractable approximation of the investor's optimal utility and of her welfare loss associated with specific investment-consumption choices. To address quantitative shortcomings of those power utility bounds, we additionally introduce one-sided variational bounds that offer a more effective approximation for recursive utilities. The theoretical value of our power utility bounds is demonstrated through their application in a new existence and uniqueness result for the BSDE characterizing the dual problem.
Moreover, we propose two approximation approaches for consumption-investment optimization problems with Epstein-Zin recursive preferences. The first approach directly formalizes the classical concept of least favorable completion, providing an analytic approximation fully characterized by a system of ordinary differential equations. In the special case of power utility, this approach can be interpreted as a variation of the well-known Campbell-Shiller approximation, improving some of its qualitative shortcomings with respect to the state dependence of the resulting approximate strategies. The second approach introduces a PDE-iteration scheme by reinterpreting artificial completion as a dynamic game, in which the investor and a dual opponent interact until they reach an equilibrium that corresponds to an approximate solution of the investor's optimization problem. Despite the need for additional approximations within each iteration, this scheme is shown to be quantitatively and qualitatively accurate. Moreover, it is capable of approximating high-dimensional optimization problems, essentially avoiding the curse of dimensionality while providing analytical results.
Globalization significantly transforms labor markets. Advances in production technologies, transportation, and political integration reshape how and where goods and services are produced. Local economic conditions and diverse policy responses create varying speeds of change, affecting regions' attractiveness for living and working -- and promoting mobility.
Competition for talent necessitates a deep understanding of why individuals choose specific destinations, how to ensure their effective labor market integration, and what workplace factors affect workers' well-being.
This thesis focuses on two crucial aspects of labor market change -- migration and workplace technological change. It contributes to our understanding of the determinants of labor mobility, the factors facilitating migrant integration, and the role of workplace automation in worker well-being.
Chapter 2 investigates the relationship between minimum wages (MWs) and regional worker mobility in the EU. EU citizens are free to work anywhere in the common market, which allows them to take advantage of the significant variation in MWs across the EU. However, although MWs are set at the national level, their local relevance varies substantially -- depending on factors such as the share of affected workers or the extent to which they shift local compensation levels. These variations may attract workers from elsewhere, whether from within the country or from abroad.
Analyzing regional variations in the Kaitz index, a measure of local MW impact, reveals that higher MWs can significantly increase inflows of low-skilled EU workers, particularly in central Europe.
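The Kaitz index is commonly defined as the ratio of the minimum wage to the median (or mean) wage of a region. A minimal sketch of such a regional computation is given below; the file name, column names, and wage value are purely hypothetical.

```python
import pandas as pd

# Hypothetical regional wage micro-data with columns: region, hourly_wage.
wages = pd.read_csv("regional_wages.csv")
national_minimum_wage = 9.50  # assumed national MW, same currency and hours basis

# Kaitz index per region: minimum wage relative to the regional median wage.
kaitz = (
    wages.groupby("region")["hourly_wage"]
    .median()
    .rdiv(national_minimum_wage)  # MW / regional median wage
    .rename("kaitz_index")
)

# Share of workers directly affected (earning at or below the MW) per region.
affected_share = (
    (wages["hourly_wage"] <= national_minimum_wage)
    .groupby(wages["region"])
    .mean()
    .rename("affected_share")
)

print(pd.concat([kaitz, affected_share], axis=1))
```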
Chapter 3 examines the inequality in returns to skills experienced by immigrants, focusing on the role of linguistic proximity between migrants' origin and destination countries. Harmonized individual-level data from nine linguistically diverse migrant-hosting economies allows for an analysis of the wage gaps faced by immigrants from various origins, implicitly indicating how well they and their skills are integrated into the local labor markets. The analysis reveals that greater linguistic distance is associated with a higher wage penalty for highly skilled immigrants and a lower position in the wage distribution for those without tertiary education.
Chapter 4 investigates an institutional factor potentially relevant for the integration of immigrants -- the labor market impact of Confucius Institutes (CIs), Chinese government-sponsored institutions that promote Chinese language and culture abroad. CIs have been found to foster trade and cultural exchange, indicating their potential relevance in shaping natives' attitudes and trust towards China and Chinese individuals. Examining the relationship between local CI presence and the wages of Chinese immigrants in local labor markets of the United States, the analysis reveals that CIs are associated with significantly reduced wages for Chinese immigrants residing nearby. An event study demonstrates that the mere announcement of a new CI negatively impacts local wages for Chinese immigrants, independent of the CI's actual opening.
Chapter 5 explores how working in automatable jobs affects life satisfaction in Germany. Following earlier literature, we classify occupations by their potential for automation and define the top third of occupations in this metric as "automatable jobs". We find that workers in highly automatable jobs report lower life satisfaction. Moreover, we detect a non-linearity: for workers in moderately automatable jobs (the second third of the distribution), automatability is positively associated with life satisfaction. Overall, the negative relationship between automation and life satisfaction is most pronounced among younger and blue-collar workers, irrespective of the non-linearity.
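The tercile split described above can be expressed in a few lines; the sketch below assumes a hypothetical table of occupations with an automation-potential score.

```python
import pandas as pd

# Hypothetical data: one row per occupation with an automation-potential score.
occupations = pd.read_csv("automation_scores.csv")  # columns: occupation, automation_score

# Split occupations into terciles of automation potential; the top tercile
# corresponds to "automatable jobs", the middle one to moderately automatable jobs.
occupations["automation_group"] = pd.qcut(
    occupations["automation_score"],
    q=3,
    labels=["low", "moderate", "automatable"],
)
print(occupations["automation_group"].value_counts())
```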
Some research findings indicate that emotional states influence, or are related to, cognitive domains. Building on these findings, two studies were designed. Study 1 examined the relationship between the valence of dispositional emotional states and the global self-assessment of memory (metamemory) among teacher-training students (N = 218). Dispositional affect was measured with the German version of the Positive and Negative Affect Schedule (PANAS) (Krohne, Egloff, Kohlmann & Tausch, 1996), and the global self-assessment of memory with the German version of the Squire Subjective Memory Questionnaire (SSMQ) (Wolf, 2017). It was hypothesized that positive valence, in contrast to negative valence, would be positively related to higher memory self-assessment. The results confirm these hypotheses. In Study 2, current valence was induced using the Open Affective Standardized Image Set (OASIS) (Kurdi, Lozano & Banaji, 2017) in order to examine changes in metamemory and in actual memory performance among teacher-training students (N = 44). It was hypothesized that positive valence would affect metamemory and memory performance positively, negative valence negatively, and neutral valence not at all. Further relationships between metamemory and memory performance, as well as between induced valence and memory performance, were assumed. The measurement instruments from Study 1 remained the same; memory performance was operationalized with a nonsense-syllable test following Ebbinghaus (1885). The results do not confirm the hypotheses. The emotion induction was not successful, so the results cannot be attributed to a change in valence. As in Study 1, a relationship between dispositional affect and metamemory emerged. Further exploratory results, particularly with respect to gender, are reported. The findings are relevant for the professionalization of teacher-training students.
The Kunstschutz (art protection) unit of the German Wehrmacht in occupied Greece (1941-1944) consisted of conscripted German archaeologists. They had initially been fellows or staff members of the Archaeological Institute of the German Reich (Archäologisches Institut des Deutschen Reiches, AIDR) under the conditions of National Socialism before returning in Wehrmacht uniform during the Second World War. One subject of the study is their biographies in the context of the Athens department, whose director until 1936 was Georg Karo, and of the Institute's head office under Theodor Wiegand, who served as president from 1932 to 1936. The dissertation shows the mutual dependence between the foreign-policy legitimation of the Nazi regime through the Olympic Games and the Institute's most important success in the politics of scholarship, the resumption of the Olympia excavation, which Wiegand and Karo had pursued since 1933 and achieved in 1936 through their political networks. These acts of accommodation to the Nazi regime shaped the Institute's own young archaeologists as well as Greek society.
Protective measures were only a small part of the Kunstschutz officers' activities, but an important element of Wehrmacht propaganda. The Institute's president Martin Schede (1937 to 1945) requested staff above all for two AIDR projects: the production of aerial photographs covering as much of Greece as possible, and excavations on Crete. These interim findings alone justify the title "Kunstschutz als Alibi" (art protection as an alibi).
The dissertation seeks to answer the question of why archaeological Kunstschutz could not be more than an alibi. It does so above all by considering the political as well as the military lines of tradition of German archaeology in Greece and Germany.
The goal of this work is to compare operators that are defined on possibly varying Hilbert spaces. Distance notions for such operators as well as convergence notions are explained and examined. For the distance concepts we present three main notions. All have in common that they use space-linking operators that connect the spaces. At first, we look at unitary maps and compare the unitary orbits of the operators. Then we consider isometric embeddings, an approach based on a concept of Joachim Weidmann. Finally, we look at contractions, but with additional norm equations used in the comparison; this idea is based on a concept of Olaf Post called quasi-unitary equivalence. Our main result is that the unitary and isometric distances are equal provided the operators are both self-adjoint and have 0 in their essential spectra. In the third chapter, we focus specifically on the investigation of these distance notions for compact operators or operators in p-Schatten classes. In this case, the interpretation of the spectra as null sequences allows further distance investigations. Chapter four deals mainly with convergence notions for operators on varying Hilbert spaces. The analyses in this work deal exclusively with concepts of norm resolvent convergence. The main conclusion of the chapter is that the generalisation of norm resolvent convergence due to Joachim Weidmann and the generalisation of Olaf Post, called quasi-unitary equivalence, are equivalent to each other. In addition, we specify error bounds and deal with the convergence speed of both concepts. Two important implications of these convergence notions are that the approximation is spectrally exact, i.e., the spectra converge suitably, and that the convergence carries over to the functional calculus of the bounded functions vanishing at infinity.
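For orientation, generalized norm resolvent convergence for self-adjoint operators $A_n$ on varying spaces $H_n$ is often phrased with the help of identification operators $J_n : H \to H_n$; the condition sketched below is a standard formulation from the literature and not necessarily the exact one used in this work:
\[
\bigl\| J_n (A - z)^{-1} - (A_n - z)^{-1} J_n \bigr\|_{H \to H_n} \longrightarrow 0 \qquad (n \to \infty)
\]
for suitable $z$ in the common resolvent set. Under appropriate additional conditions on the $J_n$, such convergence is spectrally exact and carries over to $f(A_n) \to f(A)$ for all $f \in C_0(\mathbb{R})$.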
The new millennium has been characterized by rising digitalization, the proliferation of shadow banking, and significant advancements in machine learning and natural language processing. These trends present both challenges and opportunities, which my dissertation addresses. This cumulative dissertation investigates critical aspects of financial stability, monetary policy, and the transition towards cashless economies through three distinct but interrelated studies.
The first paper examines the risk-taking channel of monetary policy transmission within the euro area, focusing on shadow banks. Through vector autoregressive models, it assesses the impact of conventional and unconventional monetary policy shocks on shadow banks' asset growth and risk asset ratios. The results indicate that lower interest rates lead to a portfolio reallocation towards riskier assets and a general expansion of assets in shadow banks. In the case of conventional monetary policy shocks, both effects last three times as long as in the case of unconventional monetary policy shocks. Country-specific as well as sector-specific estimations confirm these findings. This study bridges gaps in the existing literature, especially in the eurozone, by highlighting the significant role shadow banks play in monetary policy transmission, suggesting implications for financial regulation and stability.
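As a rough illustration of this type of analysis, the sketch below estimates a small VAR and computes impulse responses with statsmodels; the variable names, data file, lag order, and recursive identification are assumptions for illustration and are not taken from the paper.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly euro-area data: a policy (or shadow) rate, shadow-bank
# total assets (in logs), and the shadow-bank risk asset ratio.
data = pd.read_csv("shadow_banks.csv", index_col="date", parse_dates=True)
data = data[["policy_rate", "log_assets", "risk_asset_ratio"]]

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")  # lag order selected by AIC

# Impulse responses over 12 quarters to an interest-rate shock; the variable
# ordering implies a simple recursive (Cholesky) identification.
irf = results.irf(12)
irf.plot(impulse="policy_rate", orth=True)
```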
The second paper explores the influence of financial stability considerations on US monetary policy, particularly during the Great Recession. Utilizing natural language processing and machine learning techniques on congressional hearings, this study constructs indicators for financial stability sentiment expressed by the Federal Reserve Chairs. Empirical analysis is conducted using Taylor-rule models, revealing that negative financial stability sentiment is associated with a more accommodative monetary policy stance, even before the Great Recession. This work provides new insights into the integration of financial stability concerns into monetary policy frameworks, demonstrating the need for a balanced approach to economic stability. The article suggests that under a dual mandate, such as that of the Federal Reserve, financial stability can, to some extent, already be factored into monetary policy deliberations.
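Schematically, a Taylor-rule regression augmented with a financial stability sentiment indicator could take the following form; the notation and specification are assumed here for illustration:
\[
i_t = \rho\, i_{t-1} + (1-\rho)\bigl(\alpha + \beta_\pi \pi_t + \beta_y y_t + \beta_f \mathrm{FSS}_t\bigr) + \varepsilon_t,
\]
where $i_t$ is the policy rate, $\pi_t$ inflation, $y_t$ the output gap, and $\mathrm{FSS}_t$ the sentiment indicator. A positive $\beta_f$ would then mean that more negative sentiment goes along with a lower, i.e., more accommodative, policy rate.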
The third paper sheds new light on the ``cash paradox'' by uncovering factors of the cashless transition that have not been fully understood so far. Using a comprehensive dataset covering 65 countries, the study employs panel data models to explain the paradox (increasing demand for central bank money despite soaring digitalization), especially among technologically advanced countries, e.g., Japan. Empirical evidence suggests that digitalization is not significantly associated with higher reliance on physical cash. It uncovers a unique non-linear relationship between trust and cash usage (the ``Arch of Trust''), which holds after addressing potential endogeneity issues using 2SLS estimation. In contrast to widespread misinterpretations of Keynes' (1937) reasons for holding cash, the findings highlight that distrust is the key factor unlocking two distinct puzzles in economics, linking cash hoarding with ``missing'' funds on capital markets and a slower shift toward digital payments in low-trust societies. A key insight is the role of trust as a (social) insurance, cushion, or safety net, dampening the perception of risk and reducing the precautionary and transactionary demand for physical cash, while encouraging a shift towards riskier alternatives. This, in turn, is connected to the third puzzle, the ``paradox of prudence'': a shift from riskier investments to safer assets such as cash may be prudent at the individual level but risky for the overall economy, a concern for macroprudential policymakers. Additionally, the research highlights the critical role of culture in driving the global movement towards cashless economies: cultures that are more self-expression-oriented (the main cultural dimension considered) and culturally closer to Sweden are associated with less cash-intensive economies. These insights are vital for macroprudential regulators as well as for policymakers designing payment systems and CBDCs in culturally diverse regions like the Eurozone.
Collectively, these papers contribute to a deeper understanding of monetary policy, financial stability, and the transition from cash-based to (nearly) cashless societies, offering significant theoretical and practical implications for academics, regulators and central bankers.
Although universality has fascinated mathematicians over the last decades, there are still numerous open questions in this field that require further investigation. In this work, we will mainly focus on classes of functions whose Fourier series are universal in the sense that they allow us to approximate uniformly any continuous function defined on a suitable subset of the unit circle.
The structure of this thesis is as follows. In the first chapter, we will initially introduce the most important notation which is needed for our following discussion. Subsequently, after recalling the notion of universality in a general context, we will revisit significant results concerning universality of Taylor series. The focus here is particularly on universality with respect to uniform convergence and convergence in measure. By a result of Menshov, we will transition to universality of Fourier series which is the central object of study in this work.
In the second chapter, we recall spaces of holomorphic functions which are characterized by the growth of their coefficients. In this context, we will derive a relationship to functions on the unit circle via an application of the Fourier transform.
In the second part of the chapter, our attention is devoted to the $\mathcal{D}_{\textup{harm}}^p$ spaces which can be viewed as the set of harmonic functions contained in the $W^{1,p}(\mathbb{D})$ Sobolev spaces. In this context, we will also recall the Bergman projection. Thanks to the intensive study of the latter in relation to Sobolev spaces, we can derive a decomposition of $\mathcal{D}_{\textup{harm}}^p$ spaces which may be seen as analogous to the Riesz projection for $L^p$ spaces. Owing to this result, we are able to provide a link between $\mathcal{D}_{\textup{harm}}^p$ spaces and spaces of holomorphic functions on $\mathbb{C}_\infty \setminus \mathbb{T}$, which turns out to be a crucial step in determining the dual of $\mathcal{D}_{\textup{harm}}^p$ spaces.
The last section of this chapter deals with the Cauchy dual which has a close connection to the Fantappié transform. As an application, we will determine the Cauchy dual of the spaces $D_\alpha$ and $D_{\textup{harm}}^p$, two results that will prove to be very helpful later on. Finally, we will provide a useful criterion that establishes a connection between the density of a set in the direct sum $X \oplus Y$ and the Cauchy dual of the intersection of the respective spaces.
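For readers unfamiliar with these spaces, a common coefficient-growth definition of the weighted Dirichlet-type spaces is recalled below; the normalization is an assumption and may differ from the one used in this thesis:
\[
D_\alpha = \Bigl\{\, f(z) = \sum_{n \ge 0} a_n z^n \ \text{holomorphic on } \mathbb{D} \; : \; \sum_{n \ge 0} (n+1)^{\alpha} |a_n|^2 < \infty \,\Bigr\},
\]
so that $\alpha = 0$ gives the Hardy space $H^2$, $\alpha = 1$ the classical Dirichlet space, and $\alpha = -1$ the Bergman space (up to equivalent norms).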
The subsequent chapter will delve into the theory of capacities and, consequently, potential theory, which will prove essential in formulating our universality results. In addition to introducing further necessary terminology, we will define capacities in the first section following [16], albeit in the framework of separable metric spaces, and revisit the most important results about them.
Simultaneously, we make preparations that allow us to define the $\mathrm{Li}_\alpha$-capacity, which will turn out to be equivalent to the classical Riesz $\alpha$-capacity. The $\mathrm{Li}_\alpha$-capacity proves to be better adapted to the $D_\alpha$ spaces. It becomes apparent in the course of our discussion that the $\mathrm{Li}_\alpha$-capacity is essential to prove uniqueness results for the class $D_\alpha$. This leads to the centerpiece of this chapter: the energy formula for the $\mathrm{Li}_\alpha$-capacity on the unit circle. More precisely, this identity establishes a connection between the energy of a measure and its corresponding Fourier coefficients. We will briefly deal with the complement-equivalence of capacities before we revisit the concept of Bessel and Riesz capacities, this time, however, in a much more general context, where we will mainly rely on [1]. Since we defined capacities on separable metric spaces in the first section, we can draw a connection between Bessel capacities and $\mathrm{Li}_\alpha$-capacities. To conclude this chapter, we take a closer look at the geometric meaning of capacities. Here, we will point out a connection between the Hausdorff dimension and the polarity of a set, and transfer it to the $\mathrm{Li}_\alpha$-capacity. Another aspect will be the comparison of Bessel capacities across different dimensions, in which the theory of Wolff potentials emerges as a crucial auxiliary tool.
In the fourth chapter of this thesis, we will turn our focus to the theory of sets of uniqueness, a subject within the broader field of harmonic analysis. This theory has a close relationship with sets of universality, a connection that will be further elucidated in the upcoming chapter.
The initial section of this chapter will be dedicated to the notion of sets of uniqueness that is specifically adapted to our current context. Building on this concept, we will recall some of the fundamental results of this theory.
In the subsequent section, we will primarily rely on techniques from previous chapters to determine the closed sets of uniqueness for the class $\mathcal{D}_{\alpha}$. The proofs we will discuss are largely influenced by [16, p.\ 178] and [9, pp.\ 82].
One more time, it will become evident that the introduction of the $\mathrm{Li}_\alpha$-capacity in the third chapter and the closely associated energy formula on the unit circle, were the pivotal factors that enabled us to carry out these proofs.
In the final chapter of our discourse, we will present our results on universality. To begin, we will recall a version of the universality criterion which traces back to the work of Grosse-Erdmann (see [26]). Coupled with an outcome from the second chapter, we will prove a result that allows us to obtain the universality of a class using the technique of simultaneous approximation. This tool will play a key role in the proof of our universality results which will follow hereafter.
Our attention will first be directed toward the class $D_\alpha$ with $\alpha$ in the interval $(0,1]$. In summary, universality with respect to uniform convergence occurs on closed and $\alpha$-polar sets $E \subset \mathbb{T}$. Thanks to results of Carleson and further considerations, which particularly rely on the favorable behavior of the $\mathrm{Li}_\alpha$-kernel, we also find that this result is sharp. In particular, it may be seen as a generalization of the universality result for the harmonic Dirichlet space.
Following this, we will investigate the same class, however, this time for $\alpha \in [-1,0)$. In this case, it turns out that universality with respect to uniform convergence occurs on closed and $(-\alpha)$-complement-polar sets $E \subset \mathbb{T}$. In particular, these sets of universality can have positive arc measure. In the final section, we will focus on the class $D_{\textup{harm}}^p$. Here, we manage to prove that universality occurs on closed and $(1,p)$-polar sets $E \subset \mathbb{T}$. Through results of Twomey [68] combined with an observation by Girela and Pélaez [23], as well as the decomposition of $D_{\textup{harm}}^p$, we can deduce that the closed sets of universality with respect to uniform convergence of the class $D_{\textup{harm}}^p$ are characterized by $(1,p)$-polarity. We conclude our work with an application of the latter result to the class $D^p$. We will show that the closed sets of divergence for the class $D^p$ are given by the $(1,p)$-polar sets.
Ensuring fairness in machine learning models is crucial for ethical and unbiased automated decision-making. Classifications from fair machine learning models should not discriminate on the basis of sensitive variables such as sexual orientation or ethnicity. However, achieving fairness is complicated by biases inherent in training data, particularly when data is collected through group sampling, such as the stratified or cluster sampling that often occurs in social surveys. Unlike the standard assumption of independent observations in machine learning, clustered data introduces correlations that can amplify biases, especially when cluster assignment is linked to the target variable.
To address these challenges, this cumulative thesis focuses on developing methods to mitigate unfairness in machine learning models. We propose a fair mixed-effects support vector machine algorithm, a Cluster-Regularized Logistic Regression, and a fair Generalized Linear Mixed Model based on boosting, all of which are capable of handling grouped data and fairness constraints simultaneously. Additionally, we introduce a Julia package, FairML.jl, which provides a comprehensive framework for addressing fairness issues. This package offers a preprocessing technique, based on resampling methods, to mitigate biases in the data, as well as a post-processing method that selects an optimal cut-off.
To improve fairness in classification, both procedures can be combined with any classification method available in the MLJ.jl package. Furthermore, FairML.jl incorporates in-processing approaches, such as optimization-based techniques for logistic regression and support vector machines, to directly address fairness during model training in regular and mixed models.
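To illustrate the idea behind the post-processing step, the sketch below performs a generic demographic-parity-constrained search for group-specific cut-offs in Python; it is only an illustration of the technique, not the FairML.jl API, and all names and data are hypothetical.

```python
import numpy as np

def fair_cutoffs(scores, y, group, dp_tolerance=0.05):
    """Grid-search group-specific cut-offs that maximize accuracy while keeping
    the gap in positive prediction rates (demographic parity) below a tolerance."""
    thresholds = np.linspace(0.01, 0.99, 99)
    best = None
    for t0 in thresholds:
        for t1 in thresholds:
            pred = np.where(group == 0, scores >= t0, scores >= t1)
            # Demographic parity gap: difference in positive rates across groups.
            gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
            if gap > dp_tolerance:
                continue
            acc = (pred == y).mean()
            if best is None or acc > best[0]:
                best = (acc, t0, t1)
    return best  # (accuracy, cut-off for group 0, cut-off for group 1)

# Purely illustrative example with synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=500)
group = rng.integers(0, 2, size=500)
y = (scores + 0.1 * group + rng.normal(0, 0.2, 500) > 0.5).astype(int)
print(fair_cutoffs(scores, y, group))
```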
By accounting for data complexities and implementing various fairness-enhancing strategies, our work aims to contribute to the development of more equitable and reliable machine learning models.
This dissertation addresses the measurement and evaluation of the energy and resource efficiency of software systems. Studies show that the environmental impact of Information and Communications Technologies (ICT) is steadily increasing and is already estimated to be responsible for 3 % of total greenhouse gas (GHG) emissions. Although it is the hardware that consumes natural resources and energy through its production, use, and disposal, software controls the hardware and therefore has a considerable influence on the capacities used. Accordingly, it should also be attributed a share of the environmental impact. To address this software-induced impact, the focus is on the continued development of a measurement and assessment model for energy- and resource-efficient software. Furthermore, measurement and assessment methods from international research and practitioner communities were compared in order to develop a generic reference model for software resource and energy measurements. The next step was to derive a methodology and to define and operationalize criteria for evaluating and improving the environmental impact of software products. In addition, a key objective is to transfer the developed methodology and models to software systems that cause high consumption or offer optimization potential through economies of scale. These include, e.g., Cyber-Physical Systems (CPS) and mobile apps, as well as applications with high demands on computing power or data volumes, such as distributed systems and especially Artificial Intelligence (AI) systems.
In particular, factors influencing the consumption of software along its life cycle are considered. These factors include the location (cloud, edge, embedded) where the computing and storage services are provided, the role of the stakeholders, application scenarios, the configuration of the systems, the data used, its representation and transmission, and the design of the software architecture. Based on existing literature and previous experiments, distinct use cases were selected that address these factors. Comparative use cases include the implementation of a scenario in different programming languages, using varying algorithms, libraries, data structures, protocols, model topologies, hardware and software setups, etc. From this selection, experimental scenarios were devised for the use cases to compare the methods to be analyzed. During their execution, the energy and resource consumption was measured, and the results were assessed. Subtracting baseline measurements of the hardware setup without the software running from the scenario measurements makes the software-induced consumption measurable and thus transparent. Comparing the scenario measurements with each other allows the identification of the more energy-efficient setup for the use case and, in turn, the improvement/optimization of the system as a whole. The calculated metrics were then also structured as indicators in a criteria catalog. These indicators represent empirically determinable variables that provide information about a matter that cannot be measured directly, such as the environmental impact of the software. Together with verification criteria that must be complied with and confirmed by the producers of the software, this creates a model with which the comparability of software systems can be established.
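The baseline-subtraction step can be written down in a few lines; the sketch below assumes hypothetical power samples (in watts) recorded at a fixed interval for an idle baseline run and for a scenario run of known duration.

```python
def mean_power(samples_watts):
    """Average power over a measurement run (samples taken at a fixed interval)."""
    return sum(samples_watts) / len(samples_watts)

def software_induced_energy(scenario_watts, baseline_watts, duration_s):
    """Energy attributable to the software under test, in joules:
    (mean scenario power - mean idle/baseline power) * scenario duration."""
    delta_power = mean_power(scenario_watts) - mean_power(baseline_watts)
    return delta_power * duration_s

# Illustrative values: idle system vs. system running the workload for 600 s.
baseline = [34.8, 35.1, 35.0, 34.9]   # watts, idle baseline
scenario = [52.3, 55.7, 54.1, 53.9]   # watts, while executing the workload
print(f"{software_induced_energy(scenario, baseline, 600):.1f} J")
```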
The knowledge gained from the experiments and assessments can then be used to forecast and optimize the energy and resource efficiency of software products. This enables developers, but also students, scientists, and all other stakeholders involved in the life cycle of software, to continuously monitor and optimize the impact of their software on energy and resource consumption. The developed models, methods, and criteria were evaluated and validated by the scientific community at conferences and workshops. The central outcomes of this thesis, including a measurement reference model and the criteria catalog, were disseminated in academic journals. Furthermore, the transfer to society has been driven forward, e.g., through the publication of two book chapters, the development and presentation of exemplary best practices at developer conferences, collaboration with industry, and the establishment of the eco-label “Blue Angel” for resource- and energy-efficient software products. In the long term, the objective is to effect a change in societal attitudes and ultimately to achieve significant resource savings through economies of scale by applying the methods in the development of software in general and AI systems in particular.
In most textbooks, optimal sample allocation is tailored to rather theoretical examples. In practice, however, we often face large-scale surveys with conflicting objectives and many restrictions on quality and cost at the population and subpopulation levels. This multi-objective nature results in a multitude of efficient sample allocations, each giving different weight to a single survey purpose. Additionally, since the input data to the allocation problem often relies on supplementary information derived from estimation, historical data, or expert knowledge, allocations might be inefficient when specified for sampling.
This doctoral thesis presents a framework for optimal allocation under standard sampling schemes that allows for specifying the trade-off between different objectives and analyzing their sensitivity to other problem components, aiming to support a decision-maker in identifying a most preferred sample allocation. It dedicates a full chapter to each of the following core questions: 1) How to efficiently incorporate quality and cost constraints for large-scale surveys, say, for thousands of strata with hundreds of precision and cost constraints? 2) How to handle vector-valued objectives whose components address different, possibly conflicting survey purposes? 3) How to consider uncertainty in the input data?
The techniques presented can be used separately or in combination as a general problem-solving framework for constrained multivariate and multidomain, possibly uncertain, sample allocation. The main problem is formulated in a way that highlights the different components of optimal sample allocation and can serve as a gateway to developing solution strategies for each of the questions above, while shifting the focus between different problem aspects. The first question is addressed through a conic quadratic reformulation, which can be solved efficiently for large problem instances using interior-point methods. Building on this, the second question is tackled using a weighted Chebyshev minimization, which provides insight into the sensitivity of the problem and enables a stepwise procedure for considering nonlinear decision functionals. Lastly, uncertainty in the input data is addressed through regularization, chance constraints, and robust problem formulations.
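For orientation, a standard single-objective instance of such an allocation problem, stated here with assumed notation rather than the thesis's full multivariate formulation, is the cost-minimal stratified allocation under a precision constraint:
\begin{align*}
\min_{n_1,\dots,n_H} \quad & \sum_{h=1}^{H} c_h\, n_h \\
\text{s.t.} \quad & \sum_{h=1}^{H} N_h^2 S_h^2 \Bigl(\frac{1}{n_h} - \frac{1}{N_h}\Bigr) \le V, \\
& m_h \le n_h \le N_h, \qquad h = 1,\dots,H,
\end{align*}
where $N_h$, $S_h^2$, and $c_h$ denote stratum size, stratum variance, and unit cost, and $V$ bounds the variance of the estimated total. The terms $N_h^2 S_h^2 / n_h$ are convex in $n_h$ and can be rewritten as (rotated) second-order cone constraints, which is the basis of conic quadratic reformulations of this kind.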