Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. However, if this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were determined from the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown sizes of the individual strata in the target population. In weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables; this avoids inefficiencies in the estimation caused by highly dispersed weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame exhibits over- and/or undercoverage, the inclusion of external auxiliary information can significantly improve the estimation quality. For those methods whose quality cannot be measured using standard procedures, a variance estimator based on a rescaling bootstrap is proposed, enabling an assessment of the estimation quality when the methods are used in practice.
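As a minimal single-stratum illustration of a rescaling bootstrap for variance estimation (the Rao-Wu variant, with hypothetical data and weights; the dissertation's actual procedure may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-stratum sample with (already corrected) weights.
n = 200
y = rng.lognormal(mean=3.0, sigma=0.8, size=n)   # target variable
w = rng.uniform(4.0, 8.0, size=n)                # design weights

def rescaled_total(y, w, rng, m=None):
    """One Rao-Wu rescaling-bootstrap replicate of the weighted total."""
    n = len(y)
    m = n - 1 if m is None else m
    idx = rng.integers(0, n, size=m)             # resample m units with replacement
    r = np.bincount(idx, minlength=n)            # selection multiplicities
    c = np.sqrt(m / (n - 1))
    factor = 1.0 - c + c * (n / m) * r           # Rao-Wu weight rescaling
    return float(np.sum(w * factor * y))

B = 1000
reps = np.array([rescaled_total(y, w, rng) for _ in range(B)])
total = float(np.sum(w * y))
se_hat = reps.std(ddof=1)                        # bootstrap standard error
print(f"estimated total: {total:.1f}, bootstrap SE: {se_hat:.1f}")
```

The rescaling factor has expectation one, so the replicates scatter around the point estimate and their standard deviation estimates its standard error.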
In two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, an experimental simulation assesses which approaches are particularly suitable for different data situations. A second simulation study, based on the structural survey in the services sector, evaluates the applicability of the methods in practice under realistic conditions.
Modern decision making in the digital age is driven by the massive amounts of data collected from different technologies and thus affects individuals as well as businesses. Benefiting from these data and turning them into knowledge requires appropriate statistical models that describe the underlying observations well. Imposing a parametric statistical model entails finding optimal parameters such that the model describes the data best. This often results in challenging mathematical optimization problems with respect to the model's parameters, which potentially involve covariance matrices. Many advanced statistical models require positive definite covariance matrices, and imposing these constraints in standard Euclidean nonlinear optimization methods often incurs a high computational effort. As Riemannian optimization techniques have proved efficient at handling difficult matrix-valued geometric constraints, we consider optimization over the manifold of positive definite matrices to estimate the parameters of statistical models. The statistical models treated in this thesis assume that the underlying data sets used for parameter fitting have a clustering structure, which results in complex optimization problems and motivates the use of the intrinsic geometric structure of the parameter space. In this thesis, we analyze the appropriateness of Riemannian optimization over the manifold of positive definite matrices for two advanced statistical models. We establish important problem-specific Riemannian characteristics of the two problems and demonstrate, in numerical studies, the importance of exploiting the Riemannian geometry of covariance matrices.
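As a generic, minimal sketch of what optimization over the positive definite manifold looks like (a toy Gaussian covariance fit under the affine-invariant metric; the step size, starting point, and problem are illustrative, not the thesis's two models):

```python
import numpy as np

def sym_apply(A, f):
    """Apply a scalar function to a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

rng = np.random.default_rng(1)

# Toy data: Gaussian samples whose covariance we estimate. The maximum
# likelihood estimator is the sample covariance S, so we can check against it.
d, n = 4, 500
A = rng.normal(size=(d, d))
true_cov = A @ A.T + d * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), true_cov, size=n)
S = X.T @ X / n

# Riemannian gradient descent on the positive definite manifold for the
# Gaussian negative log-likelihood f(Sigma) = logdet(Sigma) + tr(Sigma^{-1} S).
# Euclidean gradient:  Sigma^{-1} - Sigma^{-1} S Sigma^{-1}
# Riemannian gradient (affine-invariant metric): Sigma (egrad) Sigma = Sigma - S
Sigma = (np.trace(S) / d) * np.eye(d)            # positive definite start
step = 0.5
for _ in range(60):
    grad = Sigma - S
    half = sym_apply(Sigma, np.sqrt)             # Sigma^{1/2}
    half_inv = sym_apply(Sigma, lambda w: 1.0 / np.sqrt(w))
    inner = half_inv @ grad @ half_inv
    # Exponential-map update: every iterate stays positive definite.
    Sigma = half @ sym_apply(-step * (inner + inner.T) / 2, np.exp) @ half
    Sigma = (Sigma + Sigma.T) / 2                # guard against round-off asymmetry

print(np.linalg.norm(Sigma - S) / np.linalg.norm(S))  # relative error, near zero
```

The exponential-map update is what makes the positive definiteness constraint implicit: no projection or barrier is needed, which is the computational advantage the abstract alludes to.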
Behavioural traces from interactions with digital technologies are diverse and abundant. Yet, their capacity for theory-driven research is still to be constituted. In the present cumulative dissertation project, I deliberate the caveats and potentials of digital behavioural trace data in behavioural and social science research. One use case is online radicalisation research. The three studies included, set out to discern the state-of-the-art of methods and constructs employed in radicalization research, at the intersection of traditional methods and digital behavioural trace data. Firstly, I display, based on a systematic literature review of empirical work, the prevalence of digital behavioural trace data across different research strands and discern determinants and outcomes of radicalisation constructs. Secondly, I extract, based on this literature review, hypotheses and constructs and integrate them to a framework from network theory. This graph of hypotheses, in turn, makes the relative importance of theoretical considerations explicit. One implication of visualising the assumptions in the field is to systematise bottlenecks for the analysis of digital behavioural trace data and to provide the grounds for the genesis of new hypotheses. Thirdly, I provide a proof-of-concept for incorporating a theoretical framework from conspiracy theory research (as a specific form of radicalisation) and digital behavioural traces. I argue for marrying theoretical assumptions derived from temporal signals of posting behaviour and semantic meaning from textual content that rests on a framework from evolutionary psychology. In the light of these findings, I conclude by discussing important potential biases at different stages in the research cycle and practical implications.
Water-related regulating and provisioning ecosystem services (ESS) were investigated with respect to the flow regime and groundwater recharge in the Pfälzerwald Biosphere Reserve in southwestern Germany by means of hydrological modelling using the Soil and Water Assessment Tool (SWAT+). A holistic approach was followed in which the ESS are assigned indicators of functional and structural ecological processes. Potential risk factors for the deterioration of water-related forest ESS, namely soil compaction caused by heavy machinery during timber harvesting; damaged areas undergoing regeneration, whether through silvicultural management practices or through windthrow, pests, and calamities in the course of climate change; and climate change itself as a major stressor for forest ecosystems, were analysed with regard to their effects on hydrological processes. For each of these factors, separate SWAT+ model scenarios were created and compared with the calibrated baseline model, which represented current catchment conditions based on field data. The simulations confirmed favourable conditions for groundwater recharge in the Pfälzerwald. In connection with the high infiltration capacity of the soil substrates derived from weathered Buntsandstein, as well as the delaying and buffering influence of the tree canopy on precipitation, a significant mitigating effect on surface runoff generation and a pronounced spatial and temporal retention potential in the catchment were simulated. It was found that increased precipitation amounts exceeding the infiltration capacity of the sandy soils lead to a short-circuited runoff response with pronounced surface-runoff peaks.
The simulations showed interactions between forest and water cycle as well as the hydrological impact of climate change, degraded soil functions, and age-related stand structures associated with differences in canopy development. Future climate projections simulated with bias-corrected REKLIES and EURO-CORDEX regional climate models (RCMs) predicted a higher evaporative demand and an extension of the growing season combined with more frequent drought periods within the growing season, which induced a shortening of the groundwater recharge period and consequently led to a projected decline in the groundwater recharge rate by the middle of the century. Owing to the strong correlation with precipitation intensities and the duration of precipitation events, and despite all uncertainties in their prediction, an increase in surface runoff generation was projected by the end of the century.
For the simulation of soil compaction, the soil bulk density and the SCS curve number in SWAT+ were adjusted according to data from machine-traffic experiments in the area. The favourable infiltration conditions and the relatively low susceptibility to compaction of the coarse-grained weathered Buntsandstein dominated the hydrological effects at catchment level, so that only moderate deterioration of water-related ESS was indicated. The simulations further showed a clear influence of soil texture on the hydrological response after soil compaction on skid trails, supporting the assumption that the susceptibility of soils to compaction increases with the proportion of silt and clay particles. Increased surface runoff generation resulted from the road network across the whole area.
Damaged areas with stand regeneration were simulated within a sub-catchment using an artificial model assuming three-year-old tree seedlings over a development period of ten years, and were compared with mature stands (30 to 80 years) with respect to specific water balance components. The simulation indicated that, in the absence of canopy cover, the hydrologically delaying effect of the stands is impaired, which favours the generation of surface runoff and promotes a quantitatively slightly higher deep percolation. Hydrological differences between the closed canopy of the mature stands and young stands with near-open-field precipitation conditions were determined by the dominant factors of atmospheric evaporative demand, precipitation amounts, and degree of canopy cover. The less developed the canopy of regenerated stands compared with mature stands, the higher the atmospheric evaporative demand, and the lower the incoming precipitation, the greater the hydrological difference between the stand types.
Improvement measures for decentralised flood protection should therefore take into account critical source areas (CSA) for runoff generation in forests. The high sensitivity and vulnerability of forests to deteriorating ecosystem conditions suggest that preserving their complex fabric and intact interrelationships, particularly under the challenge of climate change, requires carefully adapted protection measures, efforts to identify CSA, and the preservation and restoration of hydrological continuity in forest stands.
Traditional workflow management systems support process participants in fulfilling business tasks through guidance along a predefined workflow model.
Flexibility has gained a lot of attention in recent decades through the shift from mass production to customization. Various approaches to workflow flexibility exist that require either extensive knowledge acquisition and modelling effort or active intervention during execution and re-modelling of deviating behaviour. Flexibility by deviation aims to compensate for both of these disadvantages by allowing alternative, unforeseen execution paths at run time without requiring the process participant to adapt the workflow model. However, the implementation of this approach has received little research attention so far.
This work proposes a novel approach to flexibility by deviation. The approach aims to support process participants during the execution of a workflow by suggesting work items based on predefined strategies or experiential knowledge, even in the case of deviations. The developed concepts combine two renowned methods from the field of artificial intelligence: constraint satisfaction problem solving and process-oriented case-based reasoning. The approach mainly consists of a constraint-based workflow engine in combination with a case-based deviation management. The declarative representation of workflows through constraints allows for implicit flexibility and a simple way to restore consistency in the case of deviations. Furthermore, the combined model, which integrates procedural with declarative structures through a transformation function, increases the capabilities for flexibility. For an adequate handling of deviations, the methodology of case-based reasoning fits well, owing to its core assumption that similar problems have similar solutions. Thus, previously made experiences are transferred to the problem at hand, under the assumption that a similar deviation has been handled successfully in the past.
Necessary foundations from the field of workflow management with a focus on flexibility are presented first.
As a formal foundation, a constraint-based workflow model was developed that allows for a declarative specification of primarily sequential dependencies between tasks. Procedural and declarative models can be combined in the approach, as a transformation function was specified that converts procedural workflow models into declarative constraints.
One main component of the approach is the constraint-based workflow engine that utilizes this declarative model as input for a constraint solving algorithm. This algorithm computes the worklist, which is proposed to the process participant during workflow execution. With predefined deviation handling strategies that determine how the constraint model is modified in order to restore consistency, the support is continuous even in case of deviations.
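A minimal sketch of these two ideas, with hypothetical task names from a deficiency-management setting: precedence constraints declaratively define the workflow, a solver-like routine computes the worklist, and one simple deviation-handling strategy restores consistency by dropping constraints the observed trace already violates. (The thesis's actual engine and strategies are richer; this only illustrates the mechanism.)

```python
def worklist(tasks, constraints, executed):
    """Tasks that may be proposed next under the precedence constraints."""
    done = set(executed)
    return [t for t in tasks
            if t not in done
            and all(a in done for a, b in constraints if b == t)]

def restore_consistency(constraints, executed):
    """Deviation handling: drop every constraint the trace already violates."""
    done = set(executed)
    pos = {t: i for i, t in enumerate(executed)}
    def violated(a, b):
        return b in done and (a not in done or pos[a] > pos[b])
    return [(a, b) for a, b in constraints if not violated(a, b)]

# Hypothetical tasks; each pair (a, b) reads "a must precede b".
tasks = ["record", "assess", "approve", "fix", "verify"]
constraints = [("record", "assess"), ("assess", "approve"),
               ("approve", "fix"), ("fix", "verify")]

print(worklist(tasks, constraints, ["record"]))      # -> ['assess']

# Deviation: "fix" was executed before "approve" was.
trace = ["record", "assess", "fix"]
relaxed = restore_consistency(constraints, trace)
print(worklist(tasks, relaxed, trace))               # -> ['approve', 'verify']
```

After the deviation, the relaxed constraint set still yields a non-empty worklist, so support for the participant continues instead of halting.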
The second major component of the proposed approach is the case-based deviation management, which aims at improving the support of process participants on the basis of experiential knowledge. For the retrieve phase, a sophisticated similarity measure was developed that integrates specific characteristics of deviating workflows and combines several sequence similarity measures. Two alternative methods were developed for the reuse phase: a null adaptation and a generative adaptation. The null adaptation simply proposes tasks from the most similar workflow as work items, whereas the generative adaptation modifies the constraint-based workflow model based on the most similar workflow in order to re-enable the constraint-based workflow engine to suggest work items.
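A toy retrieve-phase measure in the spirit described above, combining two sequence similarities (normalized longest common subsequence and positional prefix agreement) into one score. The task names, weights, and component measures are illustrative assumptions, not the thesis's actual measure.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two task sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[len(a)][len(b)]

def similarity(query, case, w_lcs=0.7, w_prefix=0.3):
    """Weighted combination of two sequence similarity measures in [0, 1]."""
    longest = max(len(query), len(case))
    lcs = lcs_len(query, case) / longest
    prefix = sum(x == y for x, y in zip(query, case)) / longest
    return w_lcs * lcs + w_prefix * prefix

# Hypothetical case base of previously handled (deviating) workflow traces.
casebase = {
    "case1": ["record", "assess", "fix", "approve", "verify"],
    "case2": ["record", "approve", "verify"],
}
query = ["record", "assess", "fix"]
best = max(casebase, key=lambda c: similarity(query, casebase[c]))
print(best)   # -> case1
```

The retrieved case would then feed either the null adaptation (propose its next tasks directly) or the generative adaptation (modify the constraint model accordingly).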
The experimental evaluation of the approach consisted of a simulation of several types of process participants in the exemplary domain of deficiency management in construction. The results showed high utility values and promising potential; investigating the transfer to other domains and the applicability in practice is part of future work.
Finally, the contributions are summarized and research perspectives are pointed out.
Energy transport networks are among the most important infrastructures for the planned energy transition. They form the interface between energy producers and consumers, and their features make them good candidates for the tools that mathematical optimization can offer. Nevertheless, the operation of energy networks comes with two major challenges. First, the nonconvexity of the equations that model the physics in the network renders the resulting problems extremely hard to solve for large-scale networks. Second, the uncertainty associated with the behavior of the different agents involved, the production of energy, and the consumption of energy makes the resulting problems hard to solve if a representative description of uncertainty is to be considered.
In this cumulative dissertation we study adaptive refinement algorithms designed to cope with the nonconvexity and stochasticity of equations arising in energy networks. Adaptive refinement algorithms approximate the original problem by sequentially refining the model of a simpler optimization problem. More specifically, in this thesis, the focus of the adaptive algorithm is on adapting the discretization and description of a set of constraints.
In the first part of this thesis, we propose a generalization of the different adaptive refinement ideas that we study. We sequentially describe model catalogs, error measures, marking strategies, and switching strategies that are used to set up the adaptive refinement algorithm. Afterward, the effect of the adaptive refinement algorithm on two energy network applications is studied. The first application treats the stationary operation of district heating networks. Here, the strength of adaptive refinement algorithms for approximating the ordinary differential equation that describes the transport of energy is highlighted. We introduce the resulting nonlinear problem, consider network expansion, and obtain realistic controls by applying the adaptive refinement algorithm. The second application concerns quantile-constrained optimization problems and highlights the ability of the adaptive refinement algorithm to cope with large scenario sets via clustering. We introduce the resulting mixed-integer linear problem, discuss generic solution techniques, make the link with the generalized framework, and measure the impact of the proposed solution techniques.
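The generic loop of model refinement, error measurement, marking, and switching can be sketched on a toy problem (approximating a nonlinear function by a piecewise-linear model; the tolerance, marking rule, and toy function are illustrative, not taken from the thesis):

```python
# Toy adaptive refinement: approximate f(x) = x**3 on [0, 1] by a
# piecewise-linear model, refining only the intervals whose error is largest.

f = lambda x: x ** 3

def interval_error(a, b):
    """Error measure: max deviation of the chord from f on [a, b], sampled coarsely."""
    chord = lambda x: f(a) + (f(b) - f(a)) * (x - a) / (b - a)
    xs = [a + (b - a) * k / 20 for k in range(21)]
    return max(abs(f(x) - chord(x)) for x in xs)

grid = [0.0, 1.0]                                # coarsest model in the catalog
tol = 1e-4
while True:
    errs = [interval_error(a, b) for a, b in zip(grid, grid[1:])]
    worst = max(errs)
    if worst < tol:                              # switching strategy: accept the model
        break
    # Marking strategy: bisect every interval within 50% of the worst error.
    marked = [i for i, e in enumerate(errs) if e >= 0.5 * worst]
    for i in reversed(marked):
        grid.insert(i + 1, 0.5 * (grid[i] + grid[i + 1]))

print(len(grid), worst)
```

The point of the adaptivity is visible in the final grid: it is denser where the function is more curved, so the refined model meets the tolerance with far fewer pieces than a uniform grid of the same accuracy would need.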
The second part of this thesis assembles the papers that inspired the contents of the first part of this thesis. Hence, they describe in detail the topics that are covered and will be referenced throughout the first part.
THE NONLOCAL NEUMANN PROBLEM
(2023)
Instead of presuming only local interaction, we assume nonlocal interactions. Mass at a point in space then interacts not only with an arbitrarily small neighborhood surrounding it, but possibly also with mass far away; in particular, mass jumping from one point to another becomes a possibility we can consider in our models. So, while a region in space interacts in a local model at most with its closure, in a nonlocal model it may interact with the whole space. Therefore, in the formulation of nonlocal boundary value problems, enforcing boundary conditions on the topological boundary may not suffice. Furthermore, choosing the complement as the nonlocal boundary may work for Dirichlet boundary conditions, but in the case of Neumann boundary conditions it may lead to an overfitted model.
In this thesis, we introduce a nonlocal boundary and study the well-posedness of a nonlocal Neumann problem. We present sufficient assumptions which guarantee the existence of a weak solution. As in the local case, our weak formulation is derived from an integration by parts formula. However, we also study a different weak formulation in which the nonlocal boundary conditions are incorporated into the nonlocal diffusion-convection operator.
After studying the well-posedness of our nonlocal Neumann problem, we consider some applications of this problem. For example, we look at a system of coupled Neumann problems and analyze the difference between a locally coupled and a nonlocally coupled system. Furthermore, we let our Neumann problem be the state equation of an optimal control problem, which we then study. We also add a time component to our Neumann problem and analyze the resulting nonlocal parabolic evolution equation.
As mentioned before, in a local model mass at a point in space interacts only with an arbitrarily small neighborhood surrounding it. We analyze what happens for a family of nonlocal models in which the interaction horizon shrinks, so that, in the limit, mass at a point in space again interacts only with an arbitrarily small neighborhood surrounding it.
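For context, the integration by parts formula mentioned above has a well-known shape in the nonlocal vector calculus literature; the following convention is assumed purely for illustration (the thesis introduces its own nonlocal boundary, so its operators may differ). For a symmetric kernel $\gamma \geq 0$ and $D := \Omega \cup \Gamma$, define

```latex
\mathcal{L}u(x) := 2\int_{D}\bigl(u(y)-u(x)\bigr)\,\gamma(x,y)\,\mathrm{d}y, \quad x\in\Omega,
\qquad
\mathcal{N}u(x) := -2\int_{D}\bigl(u(y)-u(x)\bigr)\,\gamma(x,y)\,\mathrm{d}y, \quad x\in\Gamma.
```

Expanding the symmetric double integral and using the symmetry of $\gamma$ then gives the nonlocal Green-type identity

```latex
\int_{D}\!\int_{D}\bigl(u(y)-u(x)\bigr)\bigl(v(y)-v(x)\bigr)\,\gamma(x,y)\,\mathrm{d}y\,\mathrm{d}x
\;=\; -\int_{\Omega} v\,\mathcal{L}u\,\mathrm{d}x \;+\; \int_{\Gamma} v\,\mathcal{N}u\,\mathrm{d}x,
```

so a weak solution of $-\mathcal{L}u = f$ in $\Omega$ with $\mathcal{N}u = g$ on $\Gamma$ is characterized, as in the local case, by equating the bilinear form on the left with $\int_{\Omega} f\,v\,\mathrm{d}x + \int_{\Gamma} g\,v\,\mathrm{d}x$ for all admissible test functions $v$.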
Intensiv diskutierte Aspekte der Politikwissenschaft heben zunehmend die Bedeutung von Strategiefähigkeit zur erfolgreichen Durchführung von Wahlkämpfen für Parteien hervor. Der Widerspruch der mit den Implikationen der modernen Mediengesellschaft eingehergehenden unterstellten Akteursfähigkeit der Parteien und ihrer kollektiven heterogenen Interessens- und Organisationsvielfalt bleibt dabei bestehen. Die Fokussierung der Parteien auf das Ziel der Stimmenmaximierung bringt unter den sich wandelnden Rahmenbedingungen Veränderungen der Binnenstrukturen mit sich. So diskutieren Parteienforscher seit Längerem die Notwendigkeit eines vierten Parteitypus als Nachfolger von Kirchheimers Volkspartei (1965). Verschiedene dieser Ansätze berücksichtigen primär die Wahlkampffokussierung der Parteien, während andere vor allem auf den gesteigerten Strategiebedarf abzielen. Auch die Wechselwirkungen mit den Erfordernissen der Mediengesellschaft sowie Auswirkungen des gesellschaftlichen Wandels stehen im Vordergrund zahlreicher Untersuchungen. Die Arbeit von Uwe Jun (2004), der mit dem Modell der professionalisierten Medienkommunikationspartei auch die organisatorischen und programmatischen Transformationsaspekte des Parteiwandels beleuchtet, liefert einen bemerkenswerten Beitrag zur Party-Change-Debatte und bietet durch die angeschlossene vergleichende exemplarische Fallstudie eine praxisnahe Einordnung. Die geringe empirische Relevanz, die Jun seinem Parteityp anhand der Untersuchung von SPD und New Labor zwischen 1995 und 2005 bestätigt, soll in dieser Arbeit versucht werden zu relativieren, in dem der Parteiwandel der deutschen Großparteien seit der Wiedervereinigung durch die Untersuchung ihrer Wahlkampffähigkeit aufgezeigt wird. Anhand eines längsschnittlichen Vergleiches der Bundestagswahlkämpfe von SPD und CDU zwischen 1990 und 2013 soll die Plausibilität dieses vierten Parteitypus überprüft werden. 
Hierdurch soll die Entwicklung der Strategie- und Wahlkampffähigkeit beider Großparteien in den Bundestagswahlkämpfen seit 1990 untersucht und die Ergebnisse miteinander verglichen und in Bezug auf den Parteiwandel eingeordnet werden.
Dass sich Parteien genau wie ihre gesellschaftliche und politische Umwelt im Wandel befinden, ist nicht zu bestreiten und seit Langem viel diskutierter Gegenstand der Parteienforschung. „Niedergangsdiskussion“, Mitgliederschwund, Nicht- und Wechselwähler, Politik- und Parteienverdrossenheit, Kartellisierung und Institutionalisierung von Parteien sind nur einige der in diesem Kontext geläufigen Schlagwörter. Prozesse der Individualisierung, Globalisierung und Mediatisierung führen zu veränderten Rahmenbedingungen, unter denen Parteien sich behaupten müssen. Diese Veränderungen in der äußeren Umwelt wirken sich nachhaltig auf das parteipolitische Binnenleben, auf Organisationsstrukturen und Programmatik aus. Die Parteienforschung hat daher schon vor zwanzig Jahren begonnen, ein typologisches Nachfolgemodell der Volkspartei zu diskutieren, das diesen Wandel berücksichtigt. Verschiedene typologische Konstruktionen von z. B. Panebianco (1988), Katz und Mair (1995) oder von Beyme erfassen (2000) wichtige Facetten des Strukturwandels politischer Parteien und stellen mehrheitlich plausible typologische Konzepte vor, die die Parteien in ihrem Streben nach Wählerstimmen und Regierungsmacht zutreffend charakterisieren. Die Parteienforschung stimmt bezüglich des Endes der Volksparteiära mehrheitlich überein. Bezüglich der Nachfolge konnte sich unter den neueren vorgeschlagenen Typen jedoch kein vierter Typ als verbindliches Leitmodell etablieren. Bei genauerer Betrachtung weichen die in den verschiedenen Ansätzen für einen vierten Parteitypen hervorgehobenen Merkmale (namentlich Professionalisierung des Parteiapparates, die Berufspolitikerdominanz, Verstaatlichung und Kartellbildung sowie die Fixierung auf die Medien) wenig von jüngeren Modellvorschlägen ab und bedürfen daher mehr einer Ergänzung. 
Die in der Regel mehrdimensionalen entwicklungstypologischen Verlaufstypen setzten seit den 1980er Jahren unterschiedliche Schwerpunkte und warten mit vielen Vorschlägen der Einordnung auf. Einer der jüngsten Ansätze von Uwe Jun aus dem Jahr 2004, der das typologische Konzept der professionalisierten Medienkommunikationspartei einführt, macht deutlich, dass die Diskussion um Gestalt und Ausprägungen des vierten Parteityps noch in vollem Gang und für weitere Vorschläge offen ist – der „richtige“ Typ also noch nicht gefunden wurde. Jun bleibt in seiner Untersuchung den zentralen Transformationsleitfragen nach der Ausgestaltung der Parteiorganisation, der ideologisch-programmatischen Orientierung und der strategisch-elektoralen Wählerorientierung verhaftet und setzt diese Elemente in den Fokus sich wandelnder Kommunikationsstrategien. Die bisher in parteitypologischen Arbeiten mitunter vernachlässigte Komponente der strukturellen Strategiefähigkeit als Grundlage zur Entwicklung ebensolcher Reaktionsstrategien wird bei Jun angestoßen und soll in dieser Arbeit aufgegriffen und vertieft werden.
Der aktuellen Partychange-Diskussion zum Trotz scheint die Annahme, dass Parteien, die sich verstärkt der Handlungslogik der Massenmedien unterwerfen, deren strategischen Anforderungen durch interne Adaptionsverfahren auch dauerhaft gerecht zu werden vermögen, nicht immer zutreffend. Die Veränderungen der Kommunikationsstrategien als Reaktion auf gesamtgesellschaftliche Wandlungsprozesse stehen zwar im Zentrum der Professionalisierungsbemühungen der politischen Akteure, bleiben aber in ihrer Wirkung eingeschränkt. Wenngleich das Wissen in den Parteien um die Notwendigkeiten (medialer) Strategiefähigkeit besteht und die Parteien hierauf mit Professionalisierung, organisatorischen und programmatischen Anpassungsleistungen und der Herausbildung strategischer Zentren reagieren, so ist mediengerechtes strategisches Agieren noch lange keine natürliche Kernkompetenz der Parteien. Vor allem in Wahlkampfzeiten, die aufgrund abnehmender Parteibindungen und zunehmender Wählervolatilität für die Parteien zum eigentlich zentralen Moment der Parteiendemokratie werden, wird mediengerechtes Handeln zum wesentlichen Erfolgsfaktor. Strategiefähigkeit wird hierbei zur entscheidenden Voraussetzung und scheint zudem in diesen Phasen von den Parteien erfolgreicher umgesetzt zu werden als im normalen politischen Alltag. Die wahlstrategische Komponente findet in Juns typologischer Konstruktion wenig Beachtung und soll in dieser Arbeit daher als ergänzendes Element hinzugefügt werden. Arbeitshypothese Die beiden deutschen Großparteien berufen sich auf unterschiedliche Entstehungsgeschichten, die sich bis in die Gegenwart auf die Mitglieder-, Issue- und Organisationsstrukturen von SPD und CDU auswirken und die Parteien in ihren Anpassungsleistungen an die sich wandelnde Gesellschaft beeinflussen. 
Beide Parteien versuchen, auf die veränderten sozialen und politischen Rahmenbedingungen und den daraus resultierenden Bedeutungszuwachs von politischer Kommunikationsplanung mit einem erhöhten Maß an Strategiefähigkeit und kommunikativer Kompetenz zu reagieren. Diese Entwicklung tritt seit der deutschen Wiedervereinigung umso stärker in Augenschein, als dass nach 1990 die Bindekraft der Volksparteien nochmals nachließ, sodass die Parteien sich zunehmend gezwungen sehen, die „lose verkoppelten Anarchien“ in wahlstrategische Medienkommunikationsparteien zu transformieren. Diesen vierten Parteityp kennzeichnet vor allem die zunehmende Bemühung um Strategiefähigkeit, die mittels Organisationsstrukturen und programmatischer Anpassungsleistungen die Effizienz der elektoralen Ausrichtung verbessern soll. Insgesamt geht die Party-Change-Forschung davon aus, dass die Parteien sich zunehmend angleichen. Dies gilt es in dieser Studie zu überprüfen. Unter Berücksichtigung unterschiedlicher Entwicklungspfade kann vermutet werden, dass auch die Transformationsprozesse bei SPD und CDU in unterschiedlicher Weise verlaufen. Wenngleich die SPD über einen höheren Strategiebedarf und die größere Innovationsbereitschaft zu verfügen scheint, werden auf Seiten der Union potentiell strategiefähigere Strukturen vermutet, die die erfolgreiche Umsetzung von Wahlkampfstrategien erleichtern. Die historische Entwicklung und der Aspekt der Historizität spielen in diesem Kontext eine Rolle.
Zusätzlich spielen individuelle Führungspersönlichkeiten eine zentrale Rolle in innerparteilichen Transformationsprozessen, welche für die Ausprägung strategiefähiger Strukturen oftmals von größerer Bedeutung sind als institutionalisierte Strukturen. Im Vordergrund steht die Untersuchung des Parteiwandels anhand der Veränderung der Kommunikationsstrategien der Parteien im Allgemeinen sowie der Strategiefähigkeit in Wahlkämpfen im Besonderen, da diese als zentrale Merkmale für den vierten Parteityp in Anlehnung an die Professionelle Medienkommunikationspartei (Jun 2004) gewertet werden sollen. Strategiefähigkeit soll dabei anhand der Kriterien des Umgangs der Parteien mit Programmatik, Organisation und externen Einflussfaktoren in Wahlkämpfen operationalisiert werden. Die Analyse untersucht sowohl das Handeln einzelner Personen wie auch die Rolle der Partei als Gesamtorganisation. Die Arbeit besteht aus zehn Kapiteln und gliedert sich in zwei Blöcke: einen theoretisch konzeptionellen Teil, der die in der Perspektive dieser Arbeit zentralen Grundlagen und Rahmenbedingungen zusammenführt sowie die sich daran anschließende Untersuchung der Konzeption und Implementation von Kommunikationskampagnen im Wahlkampf seit 1990. Das aktuell in die politikwissenschaftliche Diskussion eingebrachte Feld der politischen Strategiefähigkeit (Raschke/Tils 2007) wird in ausführlicher theoretischer Grundlegung bisher zwar mit den Implikationen der Medienkommunikation und damit einhergehend auch den organisatorischen und programmatischen Strukturmerkmalen der Parteien verknüpft, diese erfolgte allerdings oft ohne vertiefte Berücksichtigung des Parteiwandels. Dies soll in diesem Beitrag daher versucht werden. Der Diskursanalyse des Strategiebegriffes in Wahlkampfsituationen folgt die detaillierte Darstellung der drei Operationalisierungsparameter, die in die Festlegung des Parteityps münden. 
The discussion of ideal-typical campaign models as a theoretical frame of reference for assessing the campaigns completes the theoretical-conceptual framework. The accounts of ideal-typical political strategy, which in the literature are often normative in character, are tested in the final part of the thesis for their practicability in everyday party politics, not merely on the basis of isolated, unrelated events, but on the basis of election campaigns that recur periodically under comparable conditions. To this end, the initial situation and framework conditions of each campaign, as well as the previously outlined elements of professionalized campaigning, are presented for the SPD and CDU campaigns since 1990. From these juxtapositions, a longitudinal comparison of the strategic capability and communicative competence of the SPD and CDU is then derived.
Note: This is the first edition of the dissertation. For the second, revised edition, see:
"https://ubt.opus.hbz-nrw.de/frontdoor/index/index/docId/2166".
The starting point of this politico-iconographic study, which centres on two state portraits of King Maximilian II of Bavaria, is the observation that the two portraits choose fundamentally different modes of staging. The first, painted by Max Hailer, shows Maximilian II in the full Bavarian coronation regalia and takes up an established mode of representation in the state portrait. It was created in 1850, two years after Maximilian II's accession to the throne and thus after the revolutionary unrest of 1848/49. The second was painted by Joseph Bernhardt in 1857-1858 and first presented in 1858 on the occasion of the monarch's tenth jubilee on the throne. The staging changes in the second portrait: the Bavarian coronation regalia has given way to a general's uniform, as have further details still found in the first depiction: drapery and coat of arms are absent, and the customary Bavarian royal throne has been replaced by a different one. Pushed into the background is the constitution, the legal foundation of the Bavarian kingdom since 1818. The two state portraits of Maximilian II evidently mark a transition from the ruler portraits in full Bavarian coronation regalia of his grandfather Maximilian I and his father Ludwig I to portraits in uniform with coronation mantle, as found with Napoleon III and Friedrich Wilhelm IV and as continued by his son Ludwig II. This raises the question of which factors led to this striking change in the staging of Maximilian II as King of Bavaria. The thesis pursues the argument that both depictions are fundamentally designed around a reactionary policy directed against the revolution of 1848/49, with this reactionary character intensified further in the portrait of 1858 compared with that of 1850.
Moreover, the domestically oriented, historical thrust of the first portrait shifts in the second depiction of the Bavarian monarch towards a foreign-policy-oriented, progressive one. Maximilian II's legitimation is no longer grounded, as in the first portrait, in the history and rule of the Wittelsbach dynasty, but in his own achievements and his own reign. This change in the political message of the image rests both on the political changes and developments inside and outside Bavaria and on the development of the state portrait in the mid-nineteenth century. After only ten years, a changed message about Maximilian II's position and claim to power is thus conveyed.
This dissertation is concerned with a novel type of branch-and-bound algorithm, which differs from classical branch-and-bound algorithms in that branching is performed by adding non-negative penalty terms to the objective function rather than by adding further constraints. The thesis proves the theoretical correctness of the algorithmic principle for several general classes of problems and evaluates the method on various concrete problem classes. For these problem classes, namely monotone and non-monotone mixed-integer linear complementarity problems and mixed-integer linear programs, the thesis presents several problem-specific improvements and evaluates them numerically. Furthermore, the thesis compares the new method with several benchmark methods, with largely favourable results, and gives an outlook on further fields of application and open research questions.
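The core idea, branching via penalty terms instead of bound constraints, can be illustrated with a deliberately tiny, self-contained sketch on a one-dimensional toy problem. The brute-force grid search stands in for the relaxation solver, and the fixed weight `rho` for the penalty parameter; none of this reflects the thesis's actual implementation.

```python
import math

def argmin_on_grid(f, lo=0.0, hi=3.0, step=1e-3):
    """Brute-force continuous minimizer; a stand-in for the LP/NLP
    relaxation solver a real implementation would call."""
    best_x, best_v = lo, f(lo)
    x = lo
    while x <= hi:
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
        x += step
    return best_x, best_v

def penalty_branch_and_bound(f, rho=10.0, tol=1e-6):
    """Toy 1-D penalty-based B&B: branching adds non-negative penalty
    terms to the objective instead of constraints x <= floor(xc),
    x >= ceil(xc)."""
    incumbent_x, incumbent_v = None, math.inf
    stack = [[]]                                   # a node = list of penalties
    while stack:
        pens = stack.pop()
        g = lambda x, pens=pens: f(x) + sum(p(x) for p in pens)
        xc, vc = argmin_on_grid(g)                 # solve penalized relaxation
        if vc >= incumbent_v - tol:                # prune by bound
            continue
        if abs(xc - round(xc)) < 1e-3:             # relaxation (near-)integral
            incumbent_x, incumbent_v = round(xc), f(round(xc))
            continue
        lo_b, hi_b = math.floor(xc), math.ceil(xc)
        # two children: penalize x > floor(xc), or penalize x < ceil(xc)
        stack.append(pens + [lambda x, b=lo_b: rho * max(0.0, x - b)])
        stack.append(pens + [lambda x, b=hi_b: rho * max(0.0, b - x)])
    return incumbent_x, incumbent_v

# integer minimizer of (x - 1.4)^2 over {0, 1, 2, 3} is x = 1
x_star, v_star = penalty_branch_and_bound(lambda x: (x - 1.4) ** 2)
```

Since the penalties are non-negative, each penalized relaxation still bounds its subtree from below, which is what makes pruning valid in this scheme.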
Debates do not always lead to consensus. Even the presentation of evidence does not always convince the opposing side. This is apparent not only in the history of science (cf. Ludwik Fleck, Bruno Latour), but also in the contemporary debate conducted across several disciplines under the label of the 'science wars' between realism and constructivism or relativism. Differences in their modes of legitimation systematically reveal different understandings of reality and truth, which are constituted by basic assumptions dependent on the social position ('Seinsstandort') of the respective perspective. A sociology-of-knowledge approach makes it possible to analyse the (socio-)structural constitution of perspectivity, revealing an epistemologically pre-structured circling of mutually incommensurable contributions in the debate; this can serve as an explanation for unresolved debates in science, politics, and everyday life in general.
The present study takes as its point of departure Paul Boghossian's 'Angst vor der Wahrheit' (Fear of Knowledge), as a contemporary representative of a New Realism. On the one hand, Boghossian's direct references are confronted with the statements of the perspectives he criticizes (above all Latour and Goodman); on the other hand, further varieties of constructivism (the cognition-theoretical constructivism of Maturana and Varela, the sociological constructivism of Berger and Luckmann, the sociology of science as exemplified by Bloor and Latour, Luhmann's systems theory, and post-constructivist positions) are presented along the dimensions 'understanding of knowledge', 'relevance of the subject', and 'stance towards a naturalistic foundation'. A systematic, mutual misinterpretation in the debate between realism and constructivism becomes visible. This is traced back to the existential boundedness ('Seinsgebundenheit') of perspectives in the sense of Mannheim's sociology of knowledge. Through a reconstruction of the early Mannheim's epistemology (1922: 'Strukturanalyse der Erkenntnistheorie'), the (socio-)structural constitution of epistemological elements of foundational sciences is worked out, allowing thought-style-specific objectivations (and hence understandings of truth) to be distinguished. These differences not only explain the incommensurability of heterogeneous perspectives in debates, but also show that the encounters of the debaters are pre-structured: the course of a debate is socio-structurally determined. Finally, the study discusses how the deadlock of such a debate can be counteracted and in what way a sociology-of-knowledge analysis can contribute to mutual understanding between debating parties.
This dissertation addresses the question of whether, and how, intersectionality as an analytical perspective on literary texts offers a useful complement to ethnically ordered literary fields. The question is examined through the analysis of three contemporary Chinese Canadian novels.
The introduction discusses the relevance of the two thematic fields, intersectionality and Asian Canadian literature. The following chapter offers a historical overview of Chinese Canadian immigration and treats its literary production in detail. It shows that, although cultural goods also emerge to articulate relations of inequality based on ascribed ethnicity, a drive towards diversification can be identified within the literary community of Chinese Canadian authors. The third chapter is devoted to the concept of 'intersectionality' and, after situating the concept historically in its origins in Black feminism, presents intersectionality as a connecting element between postcolonialism, diversity, and empowerment, concepts that are of particular relevance for the analysis of (Canadian) literature in this dissertation. The role of intersectionality in literary studies is then taken up. The subsequent exemplary analyses of Kim Fu's For Today I Am a Boy, Wayson Choy's The Jade Peony, and Yan Li's Lily in the Snow illustrate the preceding methodological considerations. Each of the three novels is first contextualized as Chinese Canadian, alongside existing considerations that call this classification into question. After a summary of the plot, an intersectional analysis at the level of content follows, divided into the familial and the wider social sphere, since the mechanisms of hierarchy within these spheres differ or mutually reinforce one another, as the analyses show. A formal analysis with an intersectional focus is then examined more closely in a separate subchapter.
A third subchapter is devoted to an aspect specific to each novel that is of particular relevance to an intersectional analysis. The thesis closes with an overarching conclusion that summarizes the most important findings of the analysis, together with further reflections on the implications of this dissertation, above all with regard to so-called Canadian 'master narratives', which have far-reaching contextual relevance for work with literary texts and which an intersectional literary approach may in future profitably complement.
In recent years, the establishment of new makerspaces in Germany has increased significantly. The underlying phenomenon of the Maker Movement is a cultural and technological movement focused on making physical and digital products using open-source principles, collaborative production, and individual empowerment. Because of its potential to democratize the innovation and production process, empower individuals and communities, and enable innovators to solve problems at the local level, the Maker Movement has received considerable attention. Despite numerous indications of its relevance, little is known about the phenomenon and its individual members, especially in Germany. Initial research suggests that the Maker Movement holds great potential for innovation and entrepreneurship. However, there is still a gap in understanding how makers discover, evaluate, and exploit entrepreneurial opportunities. Moreover, there is still controversy, both among policy-makers and within the maker community itself, about the impact the Maker Movement has and can have on innovation and entrepreneurship in the future. This dissertation uses a mixed-methods approach to explore these questions. In addition to a quantitative analysis of maker characteristics, the results show that social impact, market size, and property rights have significant effects on the evaluation of entrepreneurial opportunities. The findings expand research on the Maker Movement and offer multiple implications for practice. This dissertation provides the first quantitative data on makers in makerspaces in Germany, their characteristics, and their motivations. In particular, the relationship between the Maker Movement and entrepreneurship is explored in depth for the first time. This is complemented by the presentation of different identity profiles of the individuals involved.
In this way, policy-makers can develop a better understanding of the movement, its personalities and values, and consider them in initiatives and formats.
This thesis deals with REITs, their capital structure, and the effects that regulatory requirements may have on leverage. The data combine Thomson Reuters data with hand-collected data on REIT status, regulatory information, and law variables. Overall, leverage is analysed across 20 countries for the years 2007 to 2018. Country-specific data, manually extracted from yearly EPRA reports, are merged with company data in order to analyse the influence of different REIT restrictions on a firm's leverage.
Statistically significant differences in means between REITs and non-REITs motivate further investigation. My results show that variables beyond the traditional capital structure determinants affect the leverage of REITs. I find that explicit restrictions on leverage and on the distribution of profits have a significant effect on leverage decisions. This supports the notion that the restrictions from the EPRA reports are mandatory. I test various combinations of regulatory variables, which show significant effects on leverage both in isolation and in combination.
My main result is the following: firms that operate under regulation specifying a maximum leverage ratio, in addition to mandatory high dividend distributions, have lower leverage ratios on average. Furthermore, the existence of sanctions has a negative effect on REITs' leverage ratios, indicating that the regulation is binding. The analysis clearly shows that traditional capital structure determinants are of second-order relevance. This highlights the impact of regulation on leverage and financing decisions. These effects are supported by further analyses: results based on an event study show that REITs have statistically lower leverage ratios than non-REITs, and a structural break model reveals that REITs increase their leverage ratios in the years before obtaining REIT status. The ex ante period is thus characterised by a hoarding and adaptation process, followed by the transformation at the event itself. Using an event study and a structural break model, the analysis highlights the dominance of country-specific regulation.
Striving for sustainable development by combating climate change and creating a more social world is one of the most pressing issues of our time. Growing legal requirements and customer expectations also require Mittelstand firms to address sustainability issues such as climate change. This dissertation contributes to a better understanding of sustainability in the Mittelstand context by examining different Mittelstand actors and the three dimensions of sustainability (social, economic, and environmental) in four quantitative studies. The first two studies focus on the social relevance and economic performance of hidden champions, a niche-market-leading subgroup of Mittelstand firms. At the regional level, the impact of 1,645 hidden champions located in Germany on various dimensions of regional development is examined. A higher concentration of hidden champions has a positive effect on regional employment, median income, and patents. At the firm level, analyses of a panel dataset of 4,677 German manufacturing firms, including 617 hidden champions, show that the latter have a higher return on assets than other Mittelstand firms. The following two chapters deal with environmental strategies and thus contribute to the exploration of the environmental dimension of sustainability. First, the consideration of climate aspects in investment decisions is compared using survey data from 468 European venture capital and private equity investors. While private equity firms respond to external stakeholders and portfolio performance and pursue an active ownership strategy, venture capital firms are motivated by product differentiation and make impact investments. Finally, based on survey data from 443 medium-sized manufacturing firms in Germany, 54% of which are family-owned, the impact of stakeholder pressures on their decarbonization strategies is analyzed.
A distinction is made between symbolic decarbonization strategies (offsetting CO₂ emissions) and substantive ones (reducing CO₂ emissions). Stakeholder pressures lead to a proactive pursuit of decarbonization strategies, with internal and external stakeholders varying in their influence on symbolic and substantive decarbonization strategies, and with the relationship influenced by family ownership.
The German Mittelstand is closely linked to the success of the German economy. Mittelstand firms, among them numerous Hidden Champions, contribute significantly to Germany's economic performance, innovation, and export strength. However, advancing digitalization poses complex challenges for Mittelstand firms. To benefit from the manifold opportunities offered by digital technologies and to defend or even expand existing market positions, Mittelstand firms must transform themselves and their business models. This dissertation uses quantitative methods and contributes to a deeper understanding of the distinct needs and influencing factors of the digital transformation of Mittelstand firms. The results of the empirical analyses of a unique database of 525 mid-sized German manufacturing firms, comprising both firm-related information and survey data, show that organizational capabilities and characteristics significantly influence the digital transformation of Mittelstand firms. The results support the assumption that dynamic capabilities promote the digital transformation of such firms and underline the important role of ownership structure, especially regarding family influence, for the digital transformation of the business model and the pursuit of growth goals with digitalization. In addition to the digital transformation of German Mittelstand firms, this dissertation examines the economic success and regional impact of Hidden Champions and hence contributes to a better understanding of the Hidden Champion phenomenon. Using quantitative methods, it shows empirically that Hidden Champions outperform other mid-sized firms in financial terms and promote regional development. Consequently, the results of this dissertation provide valuable research contributions and offer various practical implications for firm managers and owners as well as policy makers.
This dissertation investigates motor memory. We pursue the question of whether analogies to the contextual and inhibitory effects known from declarative memory can be found there.
The first of three peer-reviewed articles addresses the general importance of external context features for motor memory retrieval. We varied two different sets of motor sequences along a large number of such features. Significantly different recall performance indicated a context dependence of motor content. Recall performance varied along the serial output position: after a context change, recall remained nearly stable over the course of retrieval, whereas when the context was retained it quickly declined significantly.
The two further peer-reviewed articles then turn to the inhibition of motor sequences. In the second article, we examined three sets of motor sequences, executed with different hands, for selective directed forgetting. The forgetting group showed this effect only when the same hand was used for sets two and three, so that the potential for interference between these lists was high. When this potential was comparatively low, because the two sets were executed with different hands, no selective directed forgetting occurred. This points to cognitive inhibition as the causal process.
In the third article, finally, we investigate the effects of voluntary cognitive suppression of both memory retrieval and execution in a motor adaptation of the think/no-think (TNT) paradigm (Anderson & Green, 2001). When the sequences in Experiment 1 had initially been trained more intensively, voluntarily suppressed (no-think) motor representations showed a marked slowing in their accessibility and, as a tendency, also in their execution, compared with baseline sequences. When, in contrast, the sequences in Experiment 2 had been only moderately trained, they were also remembered less well and executed at markedly slower speed. Voluntary cognitive suppression can thus affect motor memory representations and their execution.
Our three articles confirm motor analogies of the contextual and inhibitory effects known from declarative memory. We attribute selective directed forgetting of motor content unambiguously to inhibition and, in addition, confirm effects of the voluntary suppression of motor memory representations.
While humans find it easy to process visual information from the real world, machines struggle with this task due to the unstructured and complex nature of the information. Computer vision (CV) is the approach of artificial intelligence that attempts to automatically analyze, interpret, and extract such information. Recent CV approaches mainly use deep learning (DL) due to its very high accuracy. DL extracts useful features from unstructured images in a training dataset to use them for specific real-world tasks. However, DL requires a large number of parameters, considerable computational power, and meaningful training data, which can be noisy, sparse, and incomplete for specific domains. Furthermore, DL tends to learn correlations from the training data that do not occur in reality, making deep neural networks (DNNs) poorly generalizable and error-prone.
Therefore, the field of visual transfer learning is seeking methods that are less dependent on training data and are thus more applicable in the constantly changing world. One idea is to enrich DL with prior knowledge. Knowledge graphs (KGs) serve as a powerful tool for this purpose because they can formalize and organize prior knowledge based on an underlying ontological schema. They support symbolic operations such as logic, rules, and reasoning, and can be created, adapted, and interpreted by domain experts. Due to the abstraction potential of symbols, KGs provide good prerequisites for generalizing their knowledge. To take advantage of the generalization properties of KGs and the ability of DL to learn from large-scale unstructured data, attempts have long been made to combine explicit graph representations with implicit vector representations. However, with the recent development of knowledge graph embedding methods, in which a graph is transferred into a vector space, new perspectives for a combination in vector space are opening up.
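The idea of transferring a graph into a vector space can be illustrated with a minimal translational (TransE-style) scoring sketch. The 2-D vectors and triples below are hand-set and purely illustrative; a real embedding model would learn them from the graph.

```python
# Entities and relations share one vector space; a triple (h, r, t) is
# plausible if the translation h + r lands near t (lower score = better).
emb = {
    "cat": (1.0, 0.0), "animal": (1.0, 1.0), "stone": (0.0, 0.0),
    "is_a": (0.0, 1.0),   # relation vector
}

def transe_score(h, r, t):
    """L1 distance || h + r - t ||_1 of a translational embedding."""
    return sum(abs(emb[h][i] + emb[r][i] - emb[t][i]) for i in range(2))

good = transe_score("cat", "is_a", "animal")   # translation lands on target
bad = transe_score("stone", "is_a", "animal")  # translation misses target
```

A DNN can then be trained against such vectors, which is the "knowledge graph as a trainer" setting discussed below.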
In this work, we attempt to combine prior knowledge from a KG with DL to improve visual transfer learning using the following steps: First, we explore the potential benefits of using prior knowledge encoded in a KG for DL-based visual transfer learning. Second, we investigate approaches that already combine KG and DL and create a categorization based on their general idea of knowledge integration. Third, we propose a novel method for the specific category of using the knowledge graph as a trainer, where a DNN is trained to adapt to a representation given by prior knowledge of a KG. Fourth, we extend the proposed method by extracting relevant context in the form of a subgraph of the KG to investigate the relationship between prior knowledge and performance on a specific CV task. In summary, this work provides deep insights into the combination of KG and DL, with the goal of making DL approaches more generalizable, more efficient, and more interpretable through prior knowledge.
There is no longer any doubt about the general effectiveness of psychotherapy. However, up to 40% of patients do not respond to treatment. Despite efforts to develop new treatments, overall effectiveness has not improved. Consequently, practice-oriented research has emerged to make research results more relevant to practitioners. Within this context, patient-focused research (PFR) focuses on the question of whether a particular treatment works for a specific patient. PFR gave rise to the precision mental health research movement, which seeks to tailor treatments to individual patients by making data-driven, algorithm-based predictions. These predictions are intended to support therapists in their clinical decisions, such as selecting treatment strategies and adapting treatment. The present work summarizes three studies that aim to generate prediction models for treatment personalization that can be applied in practice. The goal of Study I was to develop a model for dropout prediction using data assessed prior to the first session (N = 2543). The usefulness of various machine learning (ML) algorithms and ensembles was assessed. The best model was an ensemble utilizing random forest and nearest-neighbor modeling. It significantly outperformed generalized linear modeling, correctly identifying 63.4% of all cases and uncovering seven key predictors. The findings illustrated the potential of ML to enhance dropout prediction, but also highlighted that not all ML algorithms are equally suitable for this purpose. Study II utilized Study I's findings to enhance the prediction of dropout. Data from the first two sessions and observer ratings of therapist interventions and skills were employed to develop a model using an elastic net (EN) algorithm.
The findings demonstrated that the model was significantly more effective at predicting dropout when observer ratings were used, with a Cohen's d of up to .65, and more effective than the model in Study I despite the smaller sample (N = 259). These results indicated that model generation can be improved by employing multiple data sources, which provide a better foundation for model development. Finally, Study III generated a model to predict therapy outcome after a sudden gain (SG) in order to identify crucial predictors of the upward spiral. EN was used to generate the model from data on 794 cases that experienced an SG. A control group of the same size was used to quantify the identified predictors and put them into perspective via their general influence on therapy outcome. The results indicated seven key predictors with varying effect sizes on therapy outcome, with Cohen's d ranging from 1.08 to 12.48. The findings suggested that a directive approach is more likely to lead to better outcomes after an SG and that alliance ruptures can be effectively compensated for. However, these effects were reversed in the control group. The results of the three studies are discussed with regard to their usefulness for supporting clinical decision-making and their implications for the implementation of precision mental health.
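The ensembling idea behind Study I, combining heterogeneous learners such as random forest and nearest-neighbor models, can be sketched in miniature as soft voting over class probabilities. The two toy learners, the features, and the data below are invented for illustration and bear no relation to the study's actual variables.

```python
def knn_proba(train, x, k=3):
    """P(dropout) as the fraction of dropouts among the k nearest
    neighbours (squared Euclidean distance on two features)."""
    dists = sorted(((xi[0] - x[0]) ** 2 + (xi[1] - x[1]) ** 2, y)
                   for xi, y in train)
    return sum(y for _, y in dists[:k]) / k

def stump_proba(train, x, feature=0):
    """P(dropout) from a one-feature threshold (class-mean midpoint),
    a crude stand-in for a tree-based learner."""
    pos = [xi[feature] for xi, y in train if y == 1]
    neg = [xi[feature] for xi, y in train if y == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return 0.8 if x[feature] > thr else 0.2

def ensemble_predict(train, x):
    """Soft voting: average the two probability estimates."""
    p = (knn_proba(train, x) + stump_proba(train, x)) / 2
    return (1 if p >= 0.5 else 0), p

# hypothetical pre-treatment features: (symptom severity, motivation score)
train = [((8, 2), 1), ((7, 3), 1), ((9, 1), 1),
         ((2, 8), 0), ((3, 7), 0), ((1, 9), 0)]
label, prob = ensemble_predict(train, (8, 2))
```

Averaging calibrated probabilities from dissimilar learners is one simple way such an ensemble can outperform any single model.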
Some of the largest firms in the DACH region (Germany, Austria, Switzerland), such as Aldi, Bosch, or Rolex, are (partially) owned by a foundation and/or a family office. Despite the growing importance of these intermediaries, prior research has neglected to analyze their impact on the firms they own. This dissertation closes this research gap by contributing to a deeper understanding of two increasingly used family firm succession vehicles, through four empirical quantitative studies. The first study focuses on the heterogeneity in foundation-owned firms (FOFs) by applying a descriptive analysis to a sample of 169 German FOFs. The results indicate that the family as a central stakeholder in a family foundation fosters governance that promotes performance and growth. The second study examines the firm growth of 204 FOFs compared to matched non-FOFs from the DACH region. The findings suggest that FOFs grow significantly less in terms of sales but not with regard to employees. In addition, it seems that this negative effect is stronger for the upper than for the middle or lower quantiles of the growth distribution. Study three adopts an agency perspective and investigates the acquisition behavior within the group of 164 FOFs. The results reveal that firms with charitable foundations as owners are more likely to undertake acquisitions and acquire targets that are geographically and culturally more distant than firms with a family foundation as owner. At the same time, they favor target companies from the same or related industries. Finally, the fourth study scrutinizes the capital structure of firms owned by single family offices (SFOs). Drawing on a hand-collected sample of 173 SFO-owned firms in the DACH region, the results show that SFO-owned firms display a higher long-term debt ratio than family-owned firms, indicating that SFO-owned firms follow trade-off theory, similar to private equity-owned firms.
Additional analyses show that this effect is stronger for SFOs that sold their original family firm. In conclusion, the outcomes of this dissertation furnish valuable research contributions and offer practical insights for families navigating such intermediaries or succession vehicles in the long term.
Family firms play a crucial role in the DACH region (Germany, Austria, Switzerland). They are characterized by a long tradition, a strong connection to the region, and a well-established network. However, family firms also face challenges, especially in finding a suitable successor. Wealthy entrepreneurial families are increasingly opting to establish Single Family Offices (SFOs) as a solution to this challenge. An SFO takes on the management and protection of family wealth. Its goal is to secure and grow the wealth over generations. In Germany alone, there are an estimated 350 to 450 SFOs, 70% of which were established after the year 2000. However, research on SFOs is still in its early stages, particularly regarding the role of SFOs as firm owners. This dissertation explores SFOs through four quantitative empirical studies. The first study provides a descriptive overview of 216 SFOs from the DACH region. Findings reveal that SFOs exhibit a preference for investing in established companies and real estate. Notably, only about a third of SFOs engage in investments in start-ups. Moreover, SFOs as a group are heterogeneous. Categorizing them into three groups based on their relationship with the entrepreneurial family and the original family firm reveals significant differences in their asset allocation strategies. Subsequent studies in this dissertation leverage a hand-collected sample of 173 SFO-owned firms from the DACH region, carefully matched with 684 family-owned firms from the same region. The second study, focusing on financial performance, indicates that SFO-owned firms tend to exhibit comparatively poorer financial performance than family-owned firms. However, when members of the SFO-owning family hold positions on the supervisory or executive board of the firm, there is a notable improvement.
The third study, concerning cash holdings, reveals that SFO-owned firms maintain a higher cash holding ratio than family-owned firms. Notably, this effect is magnified when the SFO has divested its initial family firm. Lastly, the fourth study, regarding capital structure, highlights that SFO-owned firms tend to display a higher long-term debt ratio than family-owned firms. This suggests that SFO-owned firms operate within a trade-off theory framework, like private equity-owned firms. Furthermore, this effect is stronger for SFOs that sold their original family firm. The outcomes of this research are poised to provide entrepreneurial families with a practical guide for effectively managing and leveraging SFOs as a strategic long-term instrument for succession and investment planning.
The publication of statistical databases is subject to legal regulations; for example, national statistical offices are only allowed to publish data that cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE, which was used in the 2011 German census, and we propose a novel integer-programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard. This justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method belonging to the class of microaggregation methods, specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in the resulting frequency tables, which are of high interest for nominal data in practice. The method employs a typical two-step structure: initially partitioning the data set into clusters and subsequently replacing all cluster elements with representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic, based on the dual information, to obtain an integer solution. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives. To this end, we take errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem as a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program, which combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in frequency tables.
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.
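The k-anonymity notion at the core of the proposed method can be illustrated with a deliberately small sketch. This is a toy greedy heuristic, not the column-generation or mixed-integer approach developed in the thesis: rows of a nominal microdata set are partitioned into clusters of at least k members, and each cluster is replaced by a representative (here simply the per-column mode), so that every published row coincides with at least k − 1 others.

```python
from collections import Counter

def hamming(a, b):
    # dissimilarity of two nominal rows: number of differing columns
    return sum(x != y for x, y in zip(a, b))

def microaggregate(rows, k):
    """Toy microaggregation: greedily build clusters of size >= k around a
    seed row, then replace each cluster by its per-column mode."""
    unassigned = list(range(len(rows)))
    clusters = []
    while len(unassigned) >= 2 * k:
        seed = unassigned[0]
        rest = sorted(unassigned[1:], key=lambda i: hamming(rows[i], rows[seed]))
        clusters.append([seed] + rest[:k - 1])
        unassigned = rest[k - 1:]
    clusters.append(unassigned)  # remainder holds between k and 2k - 1 rows
    anonymized = list(rows)
    for cluster in clusters:
        # representative value: per-column mode over the cluster
        rep = tuple(
            Counter(rows[i][j] for i in cluster).most_common(1)[0][0]
            for j in range(len(rows[0]))
        )
        for i in cluster:
            anonymized[i] = rep
    return anonymized

rows = [("m", "de", "a"), ("m", "de", "b"), ("f", "fr", "a"),
        ("f", "fr", "a"), ("m", "de", "a")]
anon = microaggregate(rows, k=2)  # every output row now occurs at least twice
```

Real microaggregation methods additionally minimize the information loss of this replacement; the method proposed in the thesis does so while also controlling errors in frequency tables.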
Chinese and Western research on the relationship between Chinese culture and the Catholic Church usually focuses on the Catholic Church in China before the ban on Christianity. The distinctive perspective of this thesis is to examine how the relationship between the two changed from the end of the Ming dynasty to the first half of the 20th century. Before the ban, Catholic missionaries approached the Confucian scholars and linked Catholic doctrine with Confucianism in order to exert influence on the upper stratum of Chinese society. After the ban, Catholic missionaries paid less attention to their relationship with Chinese culture than their predecessors in the 17th and 18th centuries had. Some missionaries, together with Chinese Catholics, wanted to change this situation and jointly promoted the founding of Fu Jen University, which placed great value on Chinese culture and thus reflected the relationship between the Catholic Church and Chinese culture at the beginning of the 20th century. The professors of the Department of Chinese and History made the greatest contribution to the study of Chinese culture at the university. Compared with other major universities in Beijing, where Chinese literature held a central position in the Chinese departments, Fu Jen University placed more emphasis on the Chinese language and script. At the beginning of the 20th century, under the influence of the global feminist movement, women gained the right to higher education. By 1920, however, the Catholic universities had fallen decades behind the Protestant and secular universities with regard to women's higher education.
Fu Jen University improved this situation by not only admitting a large number of female students but also offering them a wide range of subjects, including Chinese and History. Overall, the university can be seen as a link between Catholicism and Chinese culture in the first half of the 20th century. It played an important role not only in the study and dissemination of Chinese culture but also in extending the influence of the Catholic Church at that time.
Building Fortress Europe Economic realism, China, and Europe’s investment screening mechanisms
(2023)
This thesis deals with the construction of investment screening mechanisms across the major economic powers in Europe and at the supranational level during the post-2015 period. The core puzzle at the heart of this research is how a protectionist tool such as investment screening could be erected so rapidly in a traditional bastion of economic liberalism such as Europe. Within a few years, Europe went from being highly welcoming towards foreign investment to increasingly implementing controls on it, with a focus on China. How are we to understand this shift? I posit that Europe’s increasingly protectionist shift on inward investment can be fruitfully understood using an economic realist approach, in which the introduction of investment screening is part of a process of ‘balancing’ China’s economic rise and reasserting European competitiveness. China has moved from being the ‘workshop of the world’ to becoming an innovation-driven economy at the global technological frontier. As China has become more competitive, Europe, still a global economic leader broadly situated at the technological frontier, has begun to sense a threat to its position, especially in the context of the fourth industrial revolution. A ‘balancing’ process has been set in motion, in which Europe seeks to halt and even reverse the narrowing competitiveness gap between it and China. The introduction of investment screening measures is part of this process.
The main objective of this thesis is to develop options for optimising the management of the Riveris dam (Riveristalsperre). To this end, all relevant influencing factors and hazard potentials of the system, comprising the catchment area and the reservoir, are first analysed and assessed. Finally, the conception of an integrated management plan for the Riveris dam, based on a new pilot plant at the SWT waterworks in Trier-Irsch, is presented, discussed and tested for operability.
With forest covering about 90% of its catchment area, the main reservoir of the Riveris dam is on average classified as clearly oligotrophic, and the raw water of the Riveris dam is of excellent quality, with only a few, manageable hazard potentials.
Based on the piloting results, the in/out PES ultrafiltration membrane proved more suitable than the out/in PVDF membrane. The ideal treatment train for the raw water of the Riveris dam turned out to be an ultrafiltration unit placed on the raw-water side after flocculation to remove particulate matter, followed by re-hardening of the water, pH adjustment and manganese removal in a CaCO3 filter stage, and final disinfection by UV irradiation.
The results of the pilot plant were implemented in a full-scale drinking water treatment plant at the Trier-Irsch waterworks, which has officially been in operation since 2013.
Finally, measures against possible water shortages, e.g. during prolonged dry periods (climate change!), and for generally increasing the security of supply are discussed; in Trier and the surrounding region, substantial investments in interconnected network systems have been made for a long time.
This thesis consists of three closely related parts, all contributing to research on the effects of trading apps on investment behavior. The primary motivation of this study is to investigate the previously undetermined consequences of trading apps, a new phenomenon in the broker market, on the investment and risk behavior of Neobroker users.
Chapter 2 addresses the characteristics of typical current and former Neobroker users and the impact of trading apps on the investment and risk behavior of their users. The results show that Neobroker users are significantly more risk tolerant than the general German population and are influenced by trading apps in their investment and risk behavior. Low trading fees and the low minimum investment amount are the main reasons for the use of trading apps. Investors who stop using trading apps mostly stop investing altogether. Another worrying result is that most Neobroker users have misconceptions about how trading apps earn money, and that the financial literacy of all groups considered in this chapter is surprisingly low.
The third chapter investigates the effects of trading apps on investment behavior over time and compares the investment and risk behavior of Neobroker users and general investors. Using representative data on German Neobroker users, who were surveyed repeatedly over an 8-month interval, it becomes possible to determine causal effects of the use of trading apps over time. Overall, the financial literacy of Neobroker users increases the longer they use a trading app. A worrying result is that the risk tolerance of Neobroker users rises significantly over time. Male Neobroker users earn a higher annual return (non-risk-adjusted) than female Neobroker users. Compared to general investors, Neobroker users are significantly younger, more risk tolerant, more likely to buy derivatives, and earn a higher annual return (non-risk-adjusted).
The fourth chapter analyses the impact of personality traits on the investment and risk behavior of Neobroker users. The results show that the Big Five personality traits affect the investment behavior of Neobroker users. Two traits, openness and conscientiousness, stand out, as they have explanatory power for various aspects of Neobroker users' behavior: whether they buy different financial products than planned, the time they spend informing themselves about financial markets, the variety of financial products owned, and the reasons for using a Neobroker. Surprisingly, the risk tolerance of Neobroker users and the reasons to invest are not connected to any personality dimension. Whether a participant uses a trading app or a traditional broker to invest is, in turn, influenced by different personality traits.
Note: This is the second, revised edition of the dissertation.
For the first edition, see:
"https://ubt.opus.hbz-nrw.de/frontdoor/index/index/docId/2083".
The starting point of this politico-iconographic study, which centres on two state portraits of King Maximilian II of Bavaria, is the observation that the two portraits choose fundamentally different forms of staging. The first work, painted by Max Hailer, shows Maximilian II in the full Bavarian coronation regalia and takes up a traditional mode of representation in the state portrait. It was created in 1850, two years after Maximilian II's accession to the throne and thus after the revolutionary unrest of 1848/49. The second was painted by Joseph Bernhardt between 1857 and 1858 and first presented in 1858 on the occasion of the monarch's tenth jubilee on the throne. The staging changes in the second portrait: the Bavarian coronation regalia has given way to a general's uniform, as have further details still found in the first depiction: drapery and coat of arms are missing, and the customary Bavarian royal throne chair has been replaced by a different one. Pushed into the background is the constitution, which had after all been the legal foundation of the Bavarian kingdom since 1818. The two state portraits of Maximilian II evidently mark the transition from the ruler portraits in full Bavarian coronation regalia of his grandfather Maximilian I and his father Ludwig I to a depiction in uniform with coronation mantle, as found with Napoleon III and Friedrich Wilhelm IV and as continued by his son Ludwig II. This raises the question of which factors led to this striking change in the staging of Maximilian II as King of Bavaria. The thesis pursues the argument that both depictions are fundamentally designed around a reactionary policy directed against the revolution of 1848/49, with this reactionary character being intensified in Maximilian II's portrait of 1858 compared with that of 1850. Moreover, the domestically and historically oriented message of the first portrait shifts, in the second depiction of the Bavarian monarch, to one oriented towards foreign policy and progress.
Maximilian II's legitimation is no longer grounded, as in the first portrait, in the history and rule of the Wittelsbach dynasty, but in his own achievements and his own reign. This change in the political message of the images rests both on the political changes and developments inside and outside Bavaria and on the development of the state portrait in the mid-19th century. After only ten years, a changed message about Maximilian II's position and claim to power is thus conveyed.
The positive consequences of performance pay for wages and productivity have been well documented in recent decades. Yet the increased pressure and work commitment associated with performance pay suggest that it may have unintended negative consequences for workers' health and well-being. As firms worldwide increasingly use performance pay, it becomes crucial to evaluate both its positive and negative consequences. Chapters 2 – 4 of this doctoral thesis therefore investigate the unintended adverse consequences of performance pay on stress, alcohol consumption, and loneliness, respectively. Chapter 5 investigates the positive role of performance pay in mitigating the overeducation wage penalty and enhancing the labor market position of overeducated workers.
In Chapter 2, together with John S. Heywood and Uwe Jirjahn, I examine the hypothesis that performance pay is positively associated with employee stress. Using unique survey data from the German Socio-Economic Panel, I find that performance pay is consistently and substantially associated with greater stress, even when controlling for a long list of economic, social, and personality characteristics. The finding also holds in instrumental variable estimations accounting for the potential endogeneity of performance pay. Moreover, I show that risk tolerance and locus of control moderate the relationship between performance pay and stress. Among workers receiving performance pay, the risk tolerant and those believing they can control their environment suffer to a lesser degree from stress.
Chapter 3 examines the relationship between performance pay and alcohol use. Together with John S. Heywood and Uwe Jirjahn, I examine the hypothesis that alcohol use as “self-medication” is a natural response to the stress and uncertainty associated with performance pay. Using data from the German Socio-Economic Panel, I find that the likelihood of consuming each of four types of alcohol (beer, wine, spirits, and mixed drinks) is higher for those receiving performance pay, even when controlling for a long list of economic, social, and personality characteristics and in sensible instrumental variable estimates. I also show that the number of types of alcohol consumed is larger for those receiving performance pay and that the intensity of consumption increases. Moreover, I find that risk tolerance and gender moderate the relationship between performance pay and alcohol use.
In Chapter 4, I examine the hypothesis that performance pay increases the risk of employee loneliness due to increased stress, job commitment, and uncooperativeness associated with performance pay. Using the German Socio-Economic Panel, I find that performance pay is positively associated with both the incidence and intensity of loneliness. Correspondingly, performance pay decreases the social life satisfaction of workers. The findings also hold in instrumental variable estimations addressing the potential endogeneity of performance pay and in various robustness checks. Interestingly, investigating the potential role of moderating factors reveals that the association between performance pay and loneliness is particularly large for private sector employees.
Finally, in Chapter 5, I study the association between overeducation, performance pay, and wages. Overeducated workers are more productive and earn higher wages than their adequately educated coworkers in the same jobs. However, they face a series of challenges in the labor market, including lower wages than their similarly educated peers who are in correctly matched jobs. Yet there is less consensus on the adjustment mechanisms for overcoming the negative consequences of overeducation. In this study, I examine the hypotheses that overeducated workers sort into performance pay jobs as an adjustment mechanism and that performance pay enhances their wages. Using the German Socio-Economic Panel, I show that overeducation is associated with a higher likelihood of sorting into performance pay jobs and that performance pay positively moderates the wages of overeducated workers. This result also holds in endogenous switching regressions accounting for the potential endogeneity of performance pay. Importantly, I show that the positive role of performance pay is particularly large for the wages of overeducated women.
Strategies of Comedy in Internet Memes - Ambivalent Functions of an International Popular Culture
(2024)
Internet memes are a global, popular medium, often received largely without question and usually analysed only fragmentarily with respect to their comedy, with a focus on particular aspects. This thesis strives for as comprehensive an account of comedy in memes as possible, based on classical and modern categories of the comic. On the basis of a comprehensive critical synthesis of the existing literature and a precise model of analysis, a well-founded discussion of memetic comedy, its functions and its positive as well as problematic aspects becomes possible.
Information in the pre-contractual phase, that is, information duties as well as the legal consequences of providing or failing to provide information, with regard to the sales contract and the choice of the optional instrument is regulated in manifold ways in the European Commission's proposal for a Common European Sales Law (CESL; COM(2011) 635). This thesis examines these rules, also in their relation to the textual stages of European private law (model rules and consumer-protecting EU directives), and measures them against economic framework conditions, which demand efficient transactions and reveal the limits of the usefulness of (mandatory) information.
Starting from the principle of freedom of contract, each party bears the risk of being insufficiently informed, while the other side is obliged to provide information only selectively. Between traders this remains the case under the CESL, but between trader and consumer this relationship is reversed. There, differentiated by the situation in which the contract is concluded, comprehensive catalogues of information duties concerning the sales contract apply. As a concept, this makes sense in principle; the duties serve consumer protection, in particular informedness and transparency before the decision on concluding the contract. In part, however, the duties go too far. The impairment of the trader's freedom of contract by the duties and the consequences of their breach cannot be fully justified by the goal of consumer protection. Owing to the excess of information, the prescribed duties promote consumer protection only to a limited extent; they do not satisfy behavioural-economics standards. It is therefore advisable, between traders and consumers, to delete certain mandatory information items altogether, to dispense with information not required in the specific case, to postpone information that becomes relevant only after the conclusion of the contract until that time, and to present the remaining pre-contractual mandatory information in a way that the consumer can process better. Information to be given to a consumer should always be required to be clear and comprehensible, and the burden of proving its proper provision should generally rest on the trader.
Alongside the expressly prescribed information duties, and irrespective of consumer or trader status and of the role as buyer or seller, there are strongly case-dependent information duties based on good faith, which are laid down in the law on defects of consent. Here the principle is realised that a lack of information is initially each party's own risk; legitimate expectations and the free formation of intent are protected. These duties also take the goal of efficiency into account and respect freedom of contract. Reliance on any information actually provided is furthermore protected in that such information can co-determine the content of the contract (although not comprehensively enough in consumer contracts) and in that its inaccuracy is sanctioned.
The breach of any type of information duty can in particular give rise to a claim for damages and, via the law on defects of consent, to the possibility of withdrawing from the contract. The interplay of the different mechanisms, however, leads to frictions as well as to gaps in the legal consequences of breaches of information duties. It is therefore advisable to create a claim for damages for every failure to provide information contrary to good faith; this would upgrade the requirement of good faith into a genuine case-dependent information duty also outside the law on defects of consent.
Today, almost every modern computing device is equipped with multicore processors capable of efficient concurrent and parallel execution of threads. This processor feature can be leveraged by concurrent programming, which is a challenge for software developers for two reasons: first, it introduces a paradigm shift that requires a new way of thinking. Second, it can lead to issues that are unique to concurrent programs due to the non-deterministic, interleaved execution of threads. Consequently, the debugging of concurrency and related performance issues is a rather difficult and often tedious task. Developers still lack thread-aware programming tools that facilitate the understanding of concurrent programs. Ideally, these tools should be part of their daily working environment, which typically includes an Integrated Development Environment (IDE). In particular, the way source code is visually presented in traditional source-code editors does not convey much information on whether the source code is executed concurrently or in parallel in the first place.
With this dissertation, we pursue the main goal of facilitating and supporting the understanding and debugging of concurrent programs. To this end, we formulate and utilize a visualization paradigm that particularly includes the display of interactive glyph-based visualizations embedded in the source-code editor close to their corresponding artifacts (in-situ).
To facilitate the implementation of visualizations that comply with our paradigm as plugins for IDEs, we designed, implemented and evaluated a programming framework called CodeSparks. After presenting the design goals and the architecture of the framework, we demonstrate its versatility with a total of fourteen plugins realized by different developers using the CodeSparks framework (CodeSparks plugins). With focus group interviews, we empirically investigated how developers of the CodeSparks plugins experienced working with the framework. Based on the plugins, deliberate design decisions and the interview results, we discuss to what extent we achieved our design goals. We found that the framework is largely target programming-language independent and that it supports the development of plugins for a wide range of source-code-related tasks while hiding most of the details of the underlying plugin development API.
In addition, we applied our visualization paradigm to thread-related runtime data from concurrent programs to foster the awareness of source code being executed concurrently or in parallel. As a result, we developed and designed two in-situ thread visualizations, namely ThreadRadar and ThreadFork, with the latter building on the former. Both thread visualizations are based on a debugging approach, which combines statistical profiling, thread-aware runtime metrics, clustering of threads on the basis of these metrics, and finally interactive glyph-based in-situ visualizations. To address scalability issues of the ThreadRadar in terms of space required and the number of displayable thread clusters, we designed a revised thread visualization. This revision also involved the question of how many thread clusters k should be computed in the first place. To this end, we conducted experiments with the clustering of threads for artifacts from a corpus of concurrent Java programs that include real-world Java applications and concurrency bugs. We found that the maximum k on the one hand and the optimal k according to four cluster validation indices on the other hand rarely exceed three. However, occasionally thread clusterings with k > 3 are available and also optimal. Consequently, we revised both the clustering strategy and the visualization as parts of our debugging approach, which resulted in the ThreadFork visualization. Both in-situ thread visualizations, including their additional features that support the exploration of the thread data, are implemented in a tool called CodeSparks-JPT, i.e., as a CodeSparks plugin for IntelliJ IDEA.
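The clustering step of this debugging approach can be sketched compactly. The following is a toy stand-in, not the CodeSparks-JPT implementation: threads are clustered on a single hypothetical runtime metric (their share of CPU time), and the number of clusters k is chosen by a cluster validation index, here the silhouette coefficient, in line with the observation that small k is usually sufficient.

```python
def kmeans_1d(points, k, iters=50):
    """Plain 1-D k-means (k >= 2) with deterministic initialization: centers
    are spread over the sorted values. Returns one cluster label per point."""
    spts = sorted(points)
    centers = [spts[round(i * (len(spts) - 1) / (k - 1))] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(p - centers[c])) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where a is
    the mean distance within the own cluster and b to the nearest other one."""
    total = 0.0
    for i, p in enumerate(points):
        own = [abs(p - q) for j, q in enumerate(points)
               if labels[j] == labels[i] and j != i]
        a = sum(own) / len(own) if own else 0.0
        b = min(sum(abs(p - q) for j, q in enumerate(points) if labels[j] == c)
                / labels.count(c)
                for c in set(labels) if c != labels[i])
        total += (b - a) / max(a, b)
    return total / len(points)

# hypothetical per-thread CPU-time shares from one profiled artifact
shares = [0.02, 0.03, 0.04, 0.45, 0.47, 0.50, 0.95, 0.97]
best_k = max(range(2, 5), key=lambda k: silhouette(shares, kmeans_1d(shares, k)))
```

For these made-up metrics the index selects k = 3, matching the three visible groups of threads.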
With various empirical studies, including anecdotal usage scenarios, a usability test, web surveys, hands-on sessions, questionnaires and interviews, we investigated quality aspects of the in-situ thread visualizations and their corresponding tools. First, by a demonstration study, we illustrated the usefulness of the ThreadRadar visualization in investigating and fixing concurrency bugs and a performance bug. This was confirmed by a subsequent usability test and interview, which also provided formative feedback. Second, we investigated the interpretability and readability of the ThreadFork glyphs as well as the effectiveness of the ThreadFork visualization through anonymous web surveys. While we have found that the ThreadFork glyphs are correctly interpreted and readable, it remains unproven that the ThreadFork visualization effectively facilitates understanding the dynamic behavior of threads that concurrently executed portions of source code. Moreover, the overall usability of CodeSparks-JPT is perceived as "OK, but not acceptable" as the tool has issues with its learnability and memorability. However, all other usability aspects of CodeSparks-JPT that were examined are perceived as "above average" or "good".
Our work supports software-engineering researchers and practitioners in flexibly and swiftly developing novel glyph-based visualizations that are embedded in the source-code editor. Moreover, we provide in-situ thread visualizations that foster the awareness of source code being executed concurrently or in parallel. These in-situ thread visualizations can, for instance, be adapted, extended and used to analyze other use cases or to replicate the results. Through empirical studies, we have gradually shaped the design of the in-situ thread visualizations through data-driven decisions, and evaluated several quality aspects of the in-situ thread visualizations and the corresponding tools for their utility in understanding and debugging concurrent programs.
Income is one of the key indicators to measure regional differences, individual opportunities, and inequalities in society. In Germany, the regional distribution of income is a central concern, especially regarding persistent East-West, North-South, or urban-rural inequalities.
Effective local policies and institutions require reliable data and indicators on regional inequality. However, its measurement faces severe data limitations: inconsistencies in the existing microdata sources yield an inconclusive picture of regional inequality. While survey data provide a wide range of individual and household information but lack top incomes, tax data contain the most reliable income records but offer a limited range of socio-demographic variables essential for income analysis. In addition, information on the long-term evolution of the income distribution at the small-scale level is scarce.

In this context, this thesis evaluates regional income inequality in Germany from various perspectives and embeds three self-contained studies in Chapters 3, 4, and 5, which present different data integration approaches. The first chapter motivates this thesis, while the second chapter provides a brief overview of the theoretical and empirical concepts as well as the datasets, highlighting the need to combine data from different sources.
Chapter 3 tackles the issue of poor coverage of top incomes in surveys, also referred to as the ’missing rich’ problem, which leads to severe underestimation of income inequality. At the regional level this shortcoming is even more pronounced due to small regional sample sizes. Based on reconciled tax and survey data, this chapter therefore proposes a new multiple imputation top income correction approach that, unlike previous research, focuses on the regional rather than the national level. The findings indicate that inequality between and within the regions is much larger than previously understood, with the magnitude of the adjustment depending on the federal states’ level of inequality in the tail.

To increase the potential of the tax data for income analysis and to overcome the lack of socio-demographic characteristics, Chapter 4 enriches the tax data with information on education and working time from survey data. For that purpose, a simulation study evaluates missing data methods and performant prediction models, finding that Multinomial Regression and Random Forest are the most suitable methods for the specific data fusion scenario. The results indicate that data fusion approaches broaden the scope for regional inequality analysis based on cross-sectionally enhanced tax data.
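The data-fusion step can be illustrated with a minimal sketch, assuming entirely hypothetical records: an education category is imputed into tax-like recipient records from survey-like donor records by nearest-neighbour matching on shared covariates. This hot-deck-style matching is a simpler stand-in for the Multinomial Regression and Random Forest models evaluated in the chapter.

```python
from collections import Counter

def fuse(donors, recipients, shared, target, k=3):
    """Impute `target` for each recipient by majority vote among the k donor
    records nearest on the shared covariates (squared distance; covariates
    are assumed to be on comparable scales)."""
    def dist(a, b):
        return sum((a[v] - b[v]) ** 2 for v in shared)
    fused = []
    for rec in recipients:
        nearest = sorted(donors, key=lambda d: dist(d, rec))[:k]
        vote = Counter(d[target] for d in nearest).most_common(1)[0][0]
        fused.append({**rec, target: vote})
    return fused

# hypothetical survey donors (income in thousands) and tax recipients
donors = [
    {"age": 25, "income": 28, "education": "medium"},
    {"age": 27, "income": 30, "education": "medium"},
    {"age": 29, "income": 32, "education": "medium"},
    {"age": 50, "income": 80, "education": "high"},
    {"age": 52, "income": 85, "education": "high"},
    {"age": 55, "income": 90, "education": "high"},
]
recipients = [{"age": 26, "income": 29}, {"age": 53, "income": 84}]
fused = fuse(donors, recipients, shared=("age", "income"), target="education")
```

In practice the shared covariates would be harmonized and scaled first, and model-based predictions (as in the chapter) replace the simple majority vote.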
Shifting from a cross-sectional to a longitudinal perspective on regional income inequality, Chapter 5 contributes to the currently relatively small body of literature dealing with the potential development of regional income disparities over time. Regionalized dynamic microsimulations provide a powerful tool for the study of long-term income developments. Therefore, this chapter extends the microsimulation model MikroSim with an income module that accounts for the individual, household, and regional context. On this basis, the potential dynamics in gender and migrant income gaps across the districts in Germany are simulated under scenarios of increased full-time employment rates and higher levels of tertiary education. The results show that the scenarios have regionally differing effects on inequality dynamics, highlighting the considerable potential of dynamic microsimulations for regional evidence-based policies. For the German case, the MikroSim model is well suited to analyze future regional developments and can be flexibly adapted for further specific research questions.
Optimal Error Bounds in Normal and Edgeworth Approximation of Symmetric Binomial and Related Laws
(2024)
This thesis explores local and global normal and Edgeworth approximations for symmetric
binomial distributions. Further, it examines the normal approximation of convolution powers
of continuous and discrete uniform distributions.
We obtain the optimal constant in the local central limit theorem for symmetric binomial
distributions and its analogs in higher-order Edgeworth approximation. Further, we offer a
novel proof for the known optimal constant in the global central limit theorem for symmetric
binomial distributions using Fourier inversion. We also consider the effect of simple continuity
correction in the global central limit theorem for symmetric binomial distributions. Here, and in
higher-order Edgeworth approximation, we find optimal constants and asymptotically sharp
bounds on the approximation error. Furthermore, we prove asymptotically sharp bounds on the
error in the local case of a relative normal approximation to symmetric binomial distributions.
Additionally, we provide asymptotically sharp bounds on the approximation error in the local
central limit theorem for convolution powers of continuous and discrete uniform distributions.
Our methods include Fourier inversion formulae, explicit inequalities, and Edgeworth expansions, some of which may be of independent interest.
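As orientation, the local normal approximation in question can be sketched in standard notation (assumed here rather than quoted from the thesis):

```latex
% For the symmetric binomial law, mean n/2 and variance n/4 give
\[
  \mathbb{P}(S_n = k) \;\approx\; \sqrt{\frac{2}{\pi n}}
  \exp\!\Big(-\frac{2\,(k - n/2)^2}{n}\Big),
  \qquad S_n \sim \mathrm{Bin}\big(n, \tfrac12\big).
\]
```

By symmetry the skewness term of the Edgeworth expansion vanishes, so the leading error of this approximation is of smaller order than in the general case; the optimal constants studied above quantify such leading error terms precisely.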
This thesis consists of four closely related chapters examining China’s rise in the aluminium industry. The first chapter addresses the conditions that allowed China, which first entered the market in the 1950s, to rise to world leadership in aluminium production. Although China was a latecomer, its re-entry into the market after the oil crises in the 1970s was a success and led to its ascent as the world’s largest aluminium producer by 2001. With an estimated production of 40.4 million tonnes in 2022, China represented almost 60% of the global output. Chapter 1 examines the factors underlying this success, such as the decline of international aluminium cartels, the introduction of innovative technology, the US granting China most-favoured-nation (MFN) tariff status, China-specific factors, and supportive government policies. Chapter 2 develops a mathematical model to analyze firms’ decisions in the short term. It examines how an incumbent with outdated technology and a new entrant with access to a new type of technology make strategic decisions, including the incumbent’s decision whether to deter entry, the production choice of firms, the optimal technology adoption rate of the newcomer, and cartel formation. Chapter 3 focuses on the adoption of new technology by firms upon market entry in four scenarios: firstly, a free market Cournot competition; secondly, a situation in which the government determines technology adoption rates; thirdly, a scenario in which the government controls both technology and production; and finally, a scenario where the government dictates technology adoption rates, production levels, and also the number of market participants. Chapter 4 applies the Spencer and Brander (1983) framework to examine strategic industrial policy. The model assumes that there are two exporting firms in two different countries that sell a product to a third country.
We examine how the domestic firm is influenced by government intervention, such as the provision of a fixed-cost subsidy to improve its competitiveness relative to the foreign company. Chapter 4 initially investigates a scenario where only one government offers a fixed-cost subsidy, followed by an analysis of the case when both governments simultaneously provide financial help. Taken together, these chapters provide a comprehensive analysis of the strategic, technological, and political factors contributing to China’s leadership in the global aluminium industry.
Chapter 1: The Rise of China as a Latecomer in the Global Aluminium Industry
This chapter examines China’s remarkable transformation into a global leader in the aluminium industry, a sector in which the country accounted for approximately 58.9% of worldwide production in 2022. We examine how China, a latecomer to the aluminium industry that started off with labor-intensive technology in 1953, grew into the largest aluminium producer with some of the most advanced smelters in the world. This analysis identifies and discusses several opportunities that Chinese aluminium producers took advantage of. The first set of opportunities arose during the 1970s oil crises, which softened international competition and allowed China to acquire innovative smelting technology from Japan. The second set of opportunities started at about the same time, when China opened its economy in 1978. The substantial demand for aluminium in China is influenced by both external and internal factors. Externally, the US granted China MFN tariff status in 1980 and China entered the World Trade Organization (WTO) in 2001. Both events contributed to a surge in Chinese aluminium consumption. Internally, China’s investment-led growth model further boosted its aluminium demand. Additional factors specific to China, such as low labor costs and the abundance of coal as an energy source, offer Chinese firms competitive advantages against international players. Furthermore, another window of opportunity is due to Chinese governmental policies, including phasing out old technology, providing subsidies, and gradually opening the economy to enhance domestic competition before expanding globally. By describing these elements, the study provides insights into the dynamic interplay of external circumstances and internal strategies that contributed to the success of the Chinese aluminium industry.
Chapter 2: Technological Change and Strategic Choices for Incumbent and New Entrant
This chapter introduces an oligopoly model with two actors, an incumbent and a potential entrant, competing in the same market. We assume that the two participants are located in different parts of the market: the incumbent is situated in area 1, whereas the potential entrant may venture into the other region, area 2. The incumbent acts in stage zero, where it can decide whether to deter the newcomer’s entry. A new type of technology becomes available in period one, when the newcomer may enter the market. In the short term, the incumbent is locked into the outdated technology, while the new entrant may choose to partially or completely adopt the latest technology. Our results suggest the following: Firstly, the incumbent only tries to deter the new entrant if a condition on the entry cost is met. Secondly, the new entrant is only interested in forming a cartel with the incumbent if a function of the ratio of the variable-cost parameter to the new technology’s fixed-cost parameter is sufficiently high. Thirdly, if the newcomer asks to form a cartel, the incumbent will always accept this request. Finally, we can derive the optimal new-technology adoption rate for the newcomer.
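The short-term setting can be illustrated numerically with a textbook Cournot duopoly in which the incumbent carries a higher marginal cost (outdated technology) than the entrant. This is a hedged sketch under standard linear-demand assumptions; the functional form and all parameter values are illustrative, not taken from the thesis.

```python
# Cournot duopoly sketch: linear inverse demand P = a - b*(q1 + q2),
# incumbent marginal cost c1 (old technology), entrant marginal cost c2
# (new technology). All parameter values are illustrative assumptions.

def cournot_duopoly(a, b, c1, c2):
    """Return Nash equilibrium quantities, market price, and profits."""
    q1 = (a - 2 * c1 + c2) / (3 * b)  # incumbent's equilibrium output
    q2 = (a - 2 * c2 + c1) / (3 * b)  # entrant's equilibrium output
    p = a - b * (q1 + q2)             # resulting market price
    return q1, q2, p, (p - c1) * q1, (p - c2) * q2

q1, q2, p, pi1, pi2 = cournot_duopoly(a=100, b=1, c1=40, c2=10)
# The entrant's cost advantage shifts output and profit toward it
# (q2 > q1 and pi2 > pi1), illustrating the pull toward adopting
# the new technology.
```

The thesis's model additionally endogenizes the adoption rate, entry deterrence, and cartel formation, which this minimal sketch omits.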
Chapter 3: Technological Adoption and Welfare in Cournot Oligopoly
This study examines the difference between the optimal technology adoption rates chosen by firms in a homogeneous Cournot oligopoly and that preferred by a benevolent government upon firms’ market entry. To address the question of whether the technology choices of firms and government are similar, we analyze several different scenarios, which differ in the extent of government intervention in the market. Our results suggest a relationship between the number of firms in the market and the impact of government intervention on technology adoption rates. Especially in situations with a low number of firms that are interested in entering the market, greater government influence tends to lead to higher technology adoption rates of firms. Conversely, in scenarios with a higher number of firms and a government that lacks control over the number of market players, the technology adoption rate of firms will be highest when the government plays no role.
Chapter 4: International Technological Innovation and Industrial Strategies
Supporting domestic firms when they first enter the market may be seen as a favorable policy choice by governments around the world, because such support can enhance the competitive advantage of domestic firms in non-cooperative competition against foreign enterprises (the infant industry protection argument). This advantage may allow domestic firms to increase their market share and generate higher profits, thereby improving domestic welfare. This chapter utilizes the Spencer and Brander (1983) framework as a theoretical foundation to elucidate the effects of fixed-cost subsidies on firms’ production levels, technological innovations, and social welfare. The analysis examines two firms in different countries, each producing a homogeneous product that is sold in a third, separate country. We first examine the Cournot-Nash equilibrium in the absence of government intervention, followed by analyzing a scenario where just one government provides a financial subsidy for its domestic firm, and finally, we consider a situation where both governments simultaneously provide financial assistance for their respective firms. Our results suggest that governments aim to maximize social welfare by providing fixed-cost subsidies to their respective firms, finding themselves in a Chicken game scenario. Regarding technology innovation, subsidies lead to an increased technological adoption rate for recipient firms, regardless of whether one or both firms in a market receive support, compared to the situation without subsidies. The technology adoption rate of the recipient firm is higher than that of its rival when only the recipient firm benefits from the fixed-cost subsidy. The lowest technology adoption rate of a firm occurs when the firm does not receive a fixed-cost subsidy, but its competitor does.
Furthermore, global welfare benefits most when both exporting countries grant fixed-cost subsidies, and the welfare level when only one country subsidizes is still higher than when no country provides subsidies.
The central subject of this study is the legal concept of the Indigenat in the context of the Württemberg and Prussian state landscape. The Indigenat can be defined as a right that determines its potential holders primarily through the principle of descent and that expresses a relationship between the rights holder and a superordinate legal entity, whether of feudal, estate-based, state, federal, or imperial legal nature. The temporal focus of the study lies on the 19th century. However, it also looks back into the early modern period, because change and continuity in the development of the Indigenat emerge with particular clarity over such a long perspective. The central thesis of this work is that there is a close connection between, on the one hand, the form of assigning people to the state that emerged in the 19th century and remains familiar today, together with the rights arising from this relationship, and, on the other hand, the early modern Indigenat. It can be shown that societies shielded their positions of political power against persons of “foreign descent”, such as immigrants, by invoking ethnic provisions of the law of Indigenat.
Increasing digitalisation of processes is being called for both nationally and internationally. The heterogeneity and complexity of the resulting systems make participation difficult for regular user groups who lack, for example, programming expertise or a background in information technology. Smart contracts are a case in point: their programming is complex, and any error translates directly into monetary loss through the direct link to the underlying cryptocurrency. This thesis presents an alternative protocol for cyber-physical contracts that is particularly well suited to human interaction and can also be understood by regular user groups. The focus lies on the transparency of the agreements, and neither a blockchain nor a digital currency based on one is used. The thesis's contract model can accordingly be understood as a traceable link between two parties that securely connects the different systems and thereby fosters self-organisation. This link can either run automatically with computer support or be carried out manually. In contrast to smart contracts, processes can thus be digitalised step by step. The agreements themselves can be used for communication, but also for legally binding contracts. The thesis situates the new concept within related strands such as Ricardian and smart contracts and defines goals for the protocol, which are realised in the form of a reference implementation. Both the protocol and the implementation are described in detail and complemented by an extension of the application that enables users in regions without a direct internet connection to participate in these very contracts.
Furthermore, the evaluation considers the legal framework, the transfer of the protocol to smart contracts, and the performance of the implementation.
In psychological science communication, Plain Language Summaries (PLS, Kerwer et al., 2021) are becoming increasingly important. These are accessible, overview-style summaries that can potentially support laypeople's understanding and foster their trust in scientific research. This appears relevant especially against the background of the replication crisis (Wingen et al., 2019) and misinformation in online contexts (Swire-Thompson & Lazer, 2020). Two effects with positive impacts on trust, and their possible interaction, have so far received little attention in the context of PLS: on the one hand, the simple presentation of information (easiness effect, Scharrer et al., 2012); on the other hand, a maximally scientific style (scientificness effect, Thomm & Bromme, 2012). This dissertation aims to identify more precise components of both effects in the context of psychological PLS and to shed light on the influence of easiness and scientificness on trust. To this end, three articles on preregistered online studies with German-speaking samples are presented.
In the first article, various text elements of psychological PLS were systematically varied in two studies. A significant influence of technical terms, information on operationalisation, statistics, and the degree of structuring on the easiness of the PLS reported by laypeople was observed. Building on this, in the second article four PLS derived from peer-reviewed papers were varied in their easiness and scientificness, and laypeople were asked about their trust in the texts and authors. Initially, only a positive influence of scientificness on trust emerged, while, contrary to the hypotheses, the easiness effect failed to appear. Exploratory analyses, however, suggested a positive influence of the easiness subjectively perceived by laypeople on their trust, as well as a significant interaction with perceived scientificness. These findings point to a mediating role of laypeople's subjective perception for both effects. In the final article, this hypothesis was tested via mediation analyses. Again, two PLS were presented, and both the scientificness of the text and that of the author were manipulated. The influence of higher scientificness on trust was mediated by the scientificness subjectively perceived by laypeople. In addition, cross-dimensional mediation effects were observed.
This work thus goes beyond existing research in clarifying the boundary conditions of the easiness and scientificness effects. Theoretical implications for the future definition of easiness and scientificness, as well as practical consequences regarding different target groups of science communication and the influence of PLS on laypeople's decision-making, are discussed.
Physically-based distributed rainfall-runoff models, as the standard analysis tools for hydrological processes, have been used to simulate the water system in detail, including spatial patterns and temporal dynamics of hydrological variables and processes (Davison et al., 2015; Ek and Holtslag, 2004). In general, catchment models are parameterized with spatial information on soil, vegetation and topography. However, traditional approaches for evaluation of hydrological model performance are usually motivated with respect to discharge data alone. This may cloud model realism and hamper understanding of the catchment behavior. It is necessary to evaluate model performance with respect to internal hydrological processes within the catchment area as well as other components of the water balance, rather than runoff discharge at the catchment outlet only. In particular, a considerable amount of the dynamics in a catchment occurs in processes related to interactions of water, soil and vegetation. Evapotranspiration, for instance, is one of those key interactive elements, and the parameterization of soil and vegetation in water balance modeling strongly influences the simulation of evapotranspiration. Specifically, to parameterize the water flow in the unsaturated soil zone, the functional relationships that describe the soil water retention and hydraulic conductivity characteristics are important. To define these functional relationships, Pedo-Transfer Functions (PTFs) are commonly used in hydrological modeling. Choosing the appropriate PTFs for the region under investigation is a crucial task in estimating the soil hydraulic parameters, but this choice in a hydrological model is often made arbitrarily and without evaluating the spatial and temporal patterns of evapotranspiration, soil moisture, and the distribution and intensity of runoff processes.
This may ultimately lead to implausible modeling results and possibly to incorrect decisions in regional water management. Therefore, reliable evaluation approaches are continually required to analyze the dynamics of the interacting hydrological processes and to predict future changes in the water cycle, which eventually contributes to sustainable environmental planning and decisions in water management.
Remarkable endeavors have been made in the development of modelling tools that provide insights into current and future hydrological patterns at different scales and their impacts on water resources and climate change (Doell et al., 2014; Wood et al., 2011). However, there is a need to consider a proper balance between parameter identifiability and the model's ability to realistically represent the response of the natural system. Tackling this issue entails investigating additional information, which usually has to be elaborately assembled, for instance by mapping the dominant runoff generation processes in the area of interest, or by retrieving the spatial patterns of soil moisture and evapotranspiration using remote sensing methods, and evaluating at a scale commensurate with the hydrological model (Koch et al., 2022; Zink et al., 2018). The present work therefore aims to give insights into modeling approaches for simulating the water balance and to improve the soil and vegetation parameterization scheme in the hydrological model, with the goal of producing more reliable spatial and temporal patterns of evapotranspiration and runoff processes in the catchment.
An important contribution to the overall body of work is a book chapter included among the publications. The book chapter provides a comprehensive overview of the topic and valuable insights into understanding the water balance and its estimation methods.
Moreover, the first paper aimed to evaluate the hydrological model behavior with respect to the contribution of various sources of information. To do so, a multi-criteria evaluation metric including soft and hard data was used to define constraints on outputs of the 1-D hydrological model WaSiM-ETH. Applying this evaluation metric, we could identify the optimal soil and vegetation parameter sets that resulted in a “behavioral” forest stand water balance model. It was found that even if simulations of transpiration and soil water content are consistent with measured data, the dominant runoff generation processes or the total water balance might still be wrongly calculated. Therefore, only an evaluation scheme that draws on different sources of data and embraces an understanding of the local controls of water loss through soil and plants allowed us to exclude unrealistic modeling outputs. The results suggested that we may need to question the generally accepted soil parameterization procedures that apply default parameter sets.
The second paper addresses this model evaluation hindrance by moving down to the small catchment scale (in Bavaria). Here, a methodology was introduced to analyze the sensitivity of the catchment water balance model to the choice of Pedo-Transfer Function (PTF). By varying the underlying PTFs in a calibrated and validated model, we could determine the resulting effects on the spatial distribution of soil hydraulic properties, the total water balance at the catchment outlet, and the spatial and temporal variation of the runoff components. Results revealed that the water distribution in the hydrologic system differs significantly among the various PTFs. Moreover, the simulations of water balance components showed high sensitivity to the spatial distribution of soil hydraulic properties. It was therefore suggested that the choice of PTFs in hydrological modeling should be carefully tested by examining whether the spatio-temporal distributions of simulated evapotranspiration and runoff generation processes are reasonably represented.
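To make concrete what a PTF ultimately parameterizes, here is a hedged sketch of one widely used functional relationship for soil water retention, the van Genuchten curve. The parameter values below are textbook-style illustrations for two soil textures, not values taken from the thesis.

```python
# Van Genuchten soil water retention curve: volumetric water content as a
# function of pressure head h. Parameter values are illustrative assumptions.

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content at pressure head h (h in cm; h >= 0 means saturated)."""
    if h >= 0:
        return theta_s  # saturated soil holds theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

# Two parameter sets of the kind a PTF would deliver. At the same pressure
# head they imply very different water contents, which is exactly the
# sensitivity the chapter examines.
theta_loam = van_genuchten_theta(-100, theta_r=0.078, theta_s=0.43,
                                 alpha=0.036, n=1.56)
theta_sand = van_genuchten_theta(-100, theta_r=0.045, theta_s=0.43,
                                 alpha=0.145, n=2.68)
```

Because different PTFs map the same soil information to different (theta_r, theta_s, alpha, n) sets, the simulated water distribution inherits this spread.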
Following up on the previous studies' suggestions, the third paper focuses on evaluating the hydrological model by improving the spatial representation of dominant runoff processes. It was implemented in a mesoscale catchment in southwestern Germany using the hydrological model WaSiM-ETH. To deal with the issue of inadequate spatial observations for rigorous spatial model evaluation, we made use of a reference soil hydrologic map available for the study area to discern the expected dominant runoff processes across a wide range of hydrological conditions. The model was parameterized by applying 11 PTFs and run with multiple synthetic rainfall events. To compare the simulated spatial patterns to the patterns derived from the digital soil map, a multiple-component spatial performance metric (SPAEF) was applied. The simulated dominant runoff processes (DRPs) showed large variability with regard to land use, topography, applied rainfall rates, and the different PTFs, which strongly influence rapid runoff generation under wet conditions.
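SPAEF is commonly defined (following Koch et al.) as one minus the Euclidean distance of three pattern components from their ideal value of 1: the Pearson correlation, the ratio of coefficients of variation, and the overlap of z-scored histograms. The sketch below follows that common reading and is not necessarily the thesis's exact implementation.

```python
import math

# Hedged pure-Python sketch of SPAEF:
# SPAEF = 1 - sqrt((alpha-1)^2 + (beta-1)^2 + (gamma-1)^2)

def _mean(x):
    return sum(x) / len(x)

def _std(x):
    m = _mean(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

def _pearson(x, y):
    mx, my = _mean(x), _mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / (len(x) * _std(x) * _std(y))

def _hist_overlap(x, y, bins=10):
    """Histogram intersection of the z-scored fields."""
    zx = [(v - _mean(x)) / _std(x) for v in x]
    zy = [(v - _mean(y)) / _std(y) for v in y]
    lo, hi = min(zx + zy), max(zx + zy)
    w = (hi - lo) / bins or 1.0
    hx, hy = [0] * bins, [0] * bins
    for v in zx:
        hx[min(int((v - lo) / w), bins - 1)] += 1
    for v in zy:
        hy[min(int((v - lo) / w), bins - 1)] += 1
    return sum(min(a, b) for a, b in zip(hx, hy)) / sum(hx)

def spaef(sim, obs):
    alpha = _pearson(sim, obs)                               # correlation
    beta = (_std(sim) / _mean(sim)) / (_std(obs) / _mean(obs))  # CV ratio
    gamma = _hist_overlap(sim, obs)                          # histogram match
    return 1 - math.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

A perfect spatial match gives SPAEF = 1; disagreement in correlation, variability, or distribution shape each pulls the score down.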
The three published manuscripts worked toward model evaluation viewpoints that ultimately attain behavioral model outputs. This was done by obtaining information about the internal hydrological processes that lead to certain model behaviors, and about the function and sensitivity of some of the soil and vegetation parameters that primarily influence those internal processes in a catchment. Using this understanding of model reactions, and by setting multiple evaluation criteria, it was possible to identify which parameterization could lead to a behavioral model realization. This work contributes to solving some of the issues (e.g., spatial variability and modeling methods) identified among the 23 unsolved problems in hydrology in the 21st century (Blöschl et al., 2019). The results obtained in the present work encourage further investigations toward a comprehensive model calibration procedure that considers multiple data sources simultaneously. This will enable new perspectives on current parameter estimation methods, which in essence focus on reproducing plausible spatio-temporal dynamics of the other hydrological processes within the watershed.
This meta-scientific dissertation comprises three research articles that investigated the reproducibility of psychological research. Specifically, they focused on the reproducibility of eye-tracking research on the one hand, and on preregistration (i.e., the practice of publishing a study protocol before data collection or analysis) as one method to increase reproducibility on the other.
In Article I, it was demonstrated that eye-tracking data quality is influenced by both the utilized eye-tracker and the specific task it is measuring. That is, distinct strengths and weaknesses were identified in three devices (Tobii Pro X3-120, GP3 HD, EyeLink 1000+) in an extensive test battery. Consequently, both the device and specific task should be considered when designing new studies. Meanwhile, Article II focused on the current perception of preregistration in the psychological research community and future directions for improving this practice. The survey showed that many researchers intended to preregister their research in the future and had overall positive attitudes toward preregistration. However, various obstacles were identified currently hindering preregistration, which should be addressed to increase its adoption. These findings were supplemented by Article III, which took a closer look at one preregistration-specific tool: the PRP-QUANT Template. In a simulation trial and a survey, the template demonstrated high usability and emerged as a valuable resource to support researchers in using preregistration. Future revisions of the template could help to further facilitate this open science practice.
In this dissertation, the findings of the three articles are summarized and discussed regarding their implications and potential future steps that could be implemented to improve the reproducibility of psychological research.
This dissertation focuses on research into the personality construct of action vs. state orientation. Derived from Personality Systems Interaction theory (PSI theory), state orientation is defined as a low ability to self-regulate emotions and is associated with many adverse consequences, especially under stress. Because of the high prevalence of state orientation, it is important to investigate factors that help state-oriented people buffer these adverse consequences. Action orientation, in contrast, is defined as a high ability to self-regulate one's own emotions in a very specific way: through accessing the self. The present dissertation examines this theme in five studies, using a total of N = 1251 participants with a wide age range, encompassing different populations (students as well as non-students from the coaching and therapy sector) and applying different operationalisations to investigate self-access as a mediator or an outcome variable. Furthermore, it is tested whether the popular technique of mindfulness, which is advertised as a potent remedy for bringing people closer to the self, really works for everybody. The findings show that the presumed remedy is rather harmful for state-oriented individuals. Finally, in an attempt to ameliorate these alienating effects, the present dissertation seeks a theory-driven and easy-to-apply solution for how mindfulness exercises can be adapted.
Differential equations yield solutions that necessarily contain a certain amount of regularity and are based on local interactions. There are various natural phenomena that are not well described by local models. An important class of models that describe long-range interactions are the so-called nonlocal models, which are the subject of this work.
The nonlocal operators considered here are integral operators with a finite range of interaction and the resulting models can be applied to anomalous diffusion, mechanics and multiscale problems.
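A hedged sketch of the type of operator meant here, in generic notation that is not necessarily the thesis's exact definition: for an interaction horizon $\delta > 0$ and a kernel $\gamma$,

```latex
\[
  \mathcal{L}_\delta u(x) \;=\; \int_{B_\delta(x)}
  \big(u(y) - u(x)\big)\,\gamma(x, y)\,\mathrm{d}y .
\]
```

The value of $\mathcal{L}_\delta u$ at $x$ thus depends on all points within distance $\delta$ rather than only on derivatives at $x$; the truncation of the integral to the ball $B_\delta(x)$ is what makes the interaction range finite.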
While the range of applications is vast, the applicability of nonlocal models can face problems such as the high computational and algorithmic complexity of fundamental tasks. One of them is the assembly of finite element discretizations of truncated, nonlocal operators.
The first contribution of this thesis is therefore an openly accessible, documented Python code which allows the computation of finite element approximations for nonlocal convection-diffusion problems with a truncated interaction horizon.
Another difficulty in the solution of nonlocal problems is that the discrete systems may be ill-conditioned which complicates the application of iterative solvers. Thus, the second contribution of this work is the construction and study of a domain decomposition type solver that is inspired by substructuring methods for differential equations. The numerical results are based on the abstract framework of nonlocal subdivisions which is introduced here and which can serve as a guideline for general nonlocal domain decomposition methods.
Representation Learning techniques play a crucial role in a wide variety of Deep Learning applications. From Language Generation to Link Prediction on Graphs, learned numerical vector representations often build the foundation for numerous downstream tasks.
In Natural Language Processing, word embeddings are contextualized and depend on their current context. This useful property reflects how words can have different meanings based on their neighboring words.
In Knowledge Graph Embedding (KGE) approaches, static vector representations are still the dominant approach. While this is sufficient for applications where the underlying Knowledge Graph (KG) mainly stores static information, it becomes a disadvantage when dynamic entity behavior needs to be modelled.
To address this issue, KGE approaches would need to model dynamic entities by incorporating situational and sequential context into the vector representations of entities. Analogous to contextualised word embeddings, this would allow entity embeddings to change depending on their history and current situational factors.
Therefore, this thesis provides a description of how to transform static KGE approaches into contextualised dynamic approaches and of how the specific characteristics of different dynamic scenarios need to be taken into consideration.
As a starting point, we conduct empirical studies that attempt to integrate sequential and situational context into static KG embeddings and investigate the limitations of the different approaches. In a second step, the identified limitations serve as guidance for developing a framework that enables KG embeddings to become truly dynamic, taking into account both the current situation and the past interactions of an entity. The two main contributions in this step are the introduction of the temporally contextualized Knowledge Graph formalism and the corresponding RETRA framework which realizes the contextualisation of entity embeddings.
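The shift from static triples to contextualized facts can be pictured with a minimal data-structure sketch. The field names and example values are illustrative assumptions, not the thesis's exact temporally contextualized KG formalism.

```python
from typing import NamedTuple

# A classic static KG fact: (head, relation, tail).
class StaticTriple(NamedTuple):
    head: str
    relation: str
    tail: str

# A contextualized fact additionally records situational factors and the
# position in the entity's interaction history, so the same triple can be
# embedded differently in different situations.
class ContextualizedFact(NamedTuple):
    head: str
    relation: str
    tail: str
    context: tuple   # situational factors, e.g. location or device
    timestamp: int   # index in the entity's interaction sequence

static = StaticTriple("user_42", "rated", "movie_7")
dynamic = ContextualizedFact("user_42", "rated", "movie_7",
                             context=("evening", "mobile"), timestamp=3)
```

An embedding model consuming the second form can condition the vector for "user_42" on both its history (timestamp) and the current situation (context), which is the contextualisation a static KGE model cannot express.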
Finally, we demonstrate how situational contextualisation can be realized even in static environments, where all object entities are passive at all times.
For this, we introduce a novel task that requires the combination of multiple context modalities and their integration with a KG based view on entity behavior.
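As a toy illustration of the difference between static and contextualised entity embeddings, the following sketch fuses a static vector with a summary of the entity's interaction history. The fusion function, dimensions and weight matrices are invented for illustration; this is not the RETRA architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4

# Static entity embeddings, as in classical KGE models (toy values).
static_emb = {"alice": rng.normal(size=DIM), "bob": rng.normal(size=DIM)}

# Invented fusion weights for this sketch.
W_self = rng.normal(size=(DIM, DIM))
W_hist = rng.normal(size=(DIM, DIM))

def contextualise(entity, history):
    """Return a contextualised embedding: the static vector is fused
    with a summary of the entity's past interactions."""
    base = static_emb[entity]
    if not history:
        return base
    hist = np.mean([static_emb[e] for e in history], axis=0)
    return np.tanh(W_self @ base + W_hist @ hist)

e_static = contextualise("alice", [])
e_dynamic = contextualise("alice", ["bob"])
# The same entity now has different vectors depending on its history.
```

In a real model the fusion would be learned jointly with the embeddings; here it only demonstrates that one entity can carry situation-dependent representations.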
Left ventricular assist devices (LVADs) have become a valuable treatment for patients with advanced heart failure. Women appear to be disadvantaged in the usage of LVADs and in clinical outcomes such as death and adverse events after LVAD implantation. In contrast to typical clinical characteristics (e.g., disease severity), device-related factors such as the intended device strategy (bridge to heart transplantation or destination therapy) are often not considered in research on gender differences. In addition, the relevance of pre-implant psychosocial risk factors, such as substance abuse and limited social support, for LVAD outcomes is currently unclear. Thus, the aim of this dissertation is to explore the role of pre-implant psychosocial risk factors in gender differences in clinical outcomes, accounting for clinical and device-related risk factors.
In the first article, gender differences in the pre-implant characteristics of patients registered in the European Registry for Patients with Mechanical Circulatory Support (EUROMACS) were investigated. Women and men were found to differ in multiple pre-implant characteristics depending on device strategy. In the second article, gender differences in major clinical outcomes (i.e., death, heart transplant, device explant due to cardiac recovery, device replacement due to complications) were evaluated for patients with the device strategy destination therapy in the Interagency Registry for Mechanically Assisted Circulation (INTERMACS). Additionally, the association of gender and psychosocial risk factors with the major outcomes was analyzed. Compared with men in the destination therapy subgroup, women had similar probabilities of dying on LVAD support and even higher probabilities of device explant due to cardiac recovery. Pre-implant psychosocial risk factors were not associated with major outcomes. The third article focused on gender differences in 10 adverse events (e.g., device malfunction, bleeding) after LVAD implant in INTERMACS. The association of a psychosocial risk indicator with gender and adverse events after LVAD implantation was evaluated. Women were less likely to have psychosocial risk pre-implant but more likely to experience seven out of 10 adverse events compared with men. Pre-implant psychosocial risk was associated with adverse events, even suggesting a dose-response relationship. These associations appeared to be more pronounced in women.
In conclusion, women appear to have similar survival to men when device strategy is accounted for. They have higher probabilities of cardiac recovery, but also higher probabilities of device replacement and adverse events, compared with men. Regarding these adverse events, women may be more susceptible to psychosocial risk factors than men. The results of this dissertation illustrate the importance of gender-sensitive research and suggest considering device strategy when studying gender differences in LVAD recipients. Further research is warranted to elucidate the role of specific psychosocial risk factors that lead to higher probabilities of adverse events, in order to intervene early and improve patient care in both women and men.
Knowledge acquisition comprises various processes, each with its own dedicated research domain. Two examples are the relations between knowledge types and the influence of person-related variables. Furthermore, the transfer of knowledge is another crucial domain in educational research. In this dissertation, I investigated these three processes through secondary analyses, which accommodate the breadth of each field and allow more general interpretations. The dissertation includes three meta-analyses: The first reports findings on the predictive relations between conceptual and procedural knowledge in mathematics in a cross-lagged panel model. The second focuses on the mediating effects of motivational constructs on the relationship between prior knowledge and knowledge after learning. The third deals with the effect of instructional methods in transfer interventions on knowledge transfer in school students. These three studies provide insights into the determinants and processes of knowledge acquisition and transfer: knowledge types are interrelated, motivation mediates the relation between prior and later knowledge, and interventions influence knowledge transfer. The results are discussed by examining six key insights that build upon the three studies. Additionally, practical implications, as well as methodological and content-related ideas for further research, are provided.
Social enterprises pursue at least two kinds of goals: fulfilling their social or ecological mission and meeting financial targets. Tensions can arise between these goals. If, within this field of tension, a social enterprise repeatedly decides in favour of the financial goals, mission drift occurs: the prioritisation of financial goals crowds out the social mission. Although the phenomenon has been observed repeatedly in practice and described in individual case studies, there has so far been little research on mission drift. The focus of this thesis is on closing this research gap and producing its own findings on the triggers and drivers of mission drift in social enterprises. Particular attention is paid to behavioural-economic theories and the mixed-gamble logic. According to this logic, every decision involves simultaneous gains and losses, so that decision-makers must weigh the fear of losses against the prospect of gains. The model is used to obtain a new theoretical perspective on the trade-off between social and financial goals and thus on mission drift. A conjoint experiment generates data on the decision-making behaviour of social entrepreneurs, centred on the trade-off between social and financial goals in different scenarios (crisis and growth situations). Using a purpose-built sample of 1,222 social enterprises from Germany, Austria and Switzerland, 187 participants were recruited for the study. The results show that a crisis situation can trigger mission drift in social enterprises, because in this scenario the greatest importance is attached to the financial goals. For a growth situation, by contrast, no such evidence was found.
In addition, there are further factors that can reinforce a financial orientation, namely the founder identities of the social entrepreneurs, a high degree of innovativeness of the enterprise, and certain stakeholders. The thesis closes with an extensive discussion of the results. Recommendations are given on how social enterprises can best remain true to their goals, and the limitations of the study as well as avenues for future research on mission drift are outlined.
Social entrepreneurship is a successful means of addressing social problems and economic challenges. It uses for-profit industry techniques and tools to build financially sound businesses that provide nonprofit services. Social entrepreneurial activities also contribute to achieving the sustainable development goals. However, due to the complex, hybrid nature of the business, social entrepreneurial activities are typically supported by macro-level determinants. To expand our knowledge of how beneficial macro-level determinants can be, this work examines empirical evidence on their impact on social entrepreneurship. Another aim of this dissertation is to examine effects at the micro level, as the growth ambitions of social and commercial entrepreneurs differ. Chapter 1 introduces the work, containing the motivation for the research, the research question, and the structure of the thesis.
There is an ongoing debate about the origin and definition of social entrepreneurship, and its numerous phenomena have been examined theoretically in the previous literature. To determine the common consensus on the topic, Chapter 2 presents
the theoretical foundations and definition of social entrepreneurship. The literature shows that a variety of determinants at the micro and macro levels are essential for the emergence of social entrepreneurship as a distinctive business model (Hartog & Hoogendoorn, 2011; Stephan et al., 2015; Hoogendoorn, 2016). It is impossible to build a venture around a social mission without the support of micro- and macro-level determinants. This work examines the determinants and consequences of social entrepreneurship from different methodological perspectives. The theoretical foundations of the micro- and macro-level determinants influencing social entrepreneurial activities are discussed in Chapter 3. The purpose of reproducibility in research is to confirm previously published results (Hubbard et al., 1998; Aguinis & Solarino, 2019). However, due to a lack of data, a lack of methodological transparency, reluctance to publish, and limited interest among researchers, replications of existing research studies are rarely promoted (Baker, 2016; Hedges & Schauer, 2019a). Promoting replication studies has been regularly emphasized in the business and management literature (Kerr et al., 2016; Camerer et al., 2016). However, studies that establish the replicability of reported results remain rare (Burman et al., 2010; Ryan & Tipu, 2022). Based on the research of Köhler and Cortina (2019), an empirical study on this topic is carried out in Chapter 4 of this work.
Given this focus, researchers have published a large body of research on the impact of micro- and macro-level determinants on social inclusion, although it is still unclear whether these studies accurately reflect reality. It is important to provide conceptual underpinnings to the field through a reassessment of published results (Bettis et al., 2016). The results of this research make it abundantly clear that macro-level determinants support social entrepreneurship.
Because reproducibility is a crucial concern that requires attention, Chapter 5 considers the reproducibility of previous results, particularly on the topic of social entrepreneurship. We replicated the results of Stephan et al. (2015) to establish the trend of reproducibility and to validate the specific conclusions they drew. The literal and constructive replications in the dissertation inspired us to explore technical replication research on social entrepreneurship. Chapter 6 evaluates the fundamental characteristics that have proven to be key factors in the growth of social ventures. The discussion reviews and references literature that has specifically focused on the development of social entrepreneurship. An empirical analysis of factors directly related to the ambitious growth of social entrepreneurship is also carried out.
Numerous social entrepreneurial groups have been studied concerning this association. Chapter 6 compares the growth ambitions of social and traditional (commercial) entrepreneurship as consequences at the micro level. This study examined many characteristics of social and commercial entrepreneurs' growth ambitions. Scholars have claimed to some extent that the growth of social entrepreneurship differs from commercial entrepreneurial activities due to differences in objectives (Lumpkin et al., 2013; Garrido-Skurkowicz et al., 2022). Qualitative research has been used in studies to support the evidence on related topics; for example, Gupta et al. (2020) emphasized that research needs to focus on specific concepts of social entrepreneurship for the field to advance. Therefore, this study provides a quantitative, analysis-based assessment of facts and data. For this purpose, a data set from the Global Entrepreneurship Monitor (GEM) 2015 was used, which covers 12,695 entrepreneurs from 38 countries. Furthermore, this work conducted a regression analysis to evaluate the influence of various characteristics of social and commercial entrepreneurship on economic growth in developing countries. Chapter 7 briefly explains future directions and practical/theoretical implications.
Data fusions are becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to be able to analyse different characteristics that were not jointly observed in one data source. Record linkage of official data sources using unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to be able to jointly analyse different characteristics. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. Therefore, the central aim of this thesis is to evaluate a concrete set of possible fusion algorithms, comprising classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to this set of possible fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only offer univariate imputation solutions and, in the case of metric variables, impute artificially generated values instead of really observed ones. Therefore, Predictive Value Matching (PVM) is introduced as a new nearest-neighbour method based on statistical learning, which could overcome the distributional disadvantages of DT and RF, offers both a univariate and a multivariate imputation solution and, in addition, imputes real, previously observed values for metric characteristics. Any prediction method can form the basis of the new PVM approach; in this thesis, PVM based on Decision Trees (PVM-DT) and on Random Forest (PVM-RF) is considered.
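The PVM idea described above can be sketched on synthetic data. The toy donor/recipient studies, the variable names and the single metric target below are illustrative assumptions, not the thesis's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy donor study (X and Y observed) and recipient study (X only).
X_donor = rng.normal(size=(200, 3))
y_donor = X_donor @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)
X_recip = rng.normal(size=(50, 3))

# PVM-RF sketch: fit a prediction model on the donor data and predict
# the target for both studies.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_donor, y_donor)
pred_donor = rf.predict(X_donor)
pred_recip = rf.predict(X_recip)

# Match each recipient record to the donor record with the nearest
# *predicted* value and impute that donor's actually observed Y, so
# only real, previously observed values enter the fused data set.
nearest = np.abs(pred_recip[:, None] - pred_donor[None, :]).argmin(axis=1)
y_imputed = y_donor[nearest]
```

The nearest-neighbour step is what distinguishes PVM from imputing the raw RF predictions, which would insert artificially generated values.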
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that PVM-RF in particular is a promising and universal fusion approach when the CIA holds. This is because PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and offers a univariate as well as a multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only for metric characteristics. The results also imply that the application of statistical learning methods is both an opportunity and a risk. In the case of CIA violation, the potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful, whereas the other methods yield poor results. If the CIA is fulfilled, however, there is a risk that the prediction methods RM, DT and RF will overestimate correlations. The relative sizes of the studies to be fused, in turn, have a rather minor influence on the performance of fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as the donor study, as has previously been common practice.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
This thesis contains four parts that are all connected by their contributions to the Efficient Market Hypothesis and decision-making literature. Chapter two investigates how national stock market indices reacted to the news of national lockdown restrictions in the period from January to May 2020. The results show that lockdown restrictions led to different reactions in a sample of OECD and BRICS countries: there was a general negative effect resulting from the increase in lockdown restrictions, but the study finds strong evidence for underreaction during the lockdown announcement, followed by some overreaction that is corrected subsequently. This under-/overreaction pattern, however, is observed mostly during the first half of our time series, pointing to learning effects. Relaxation of the lockdown restrictions, on the other hand, had a positive effect on markets only during the second half of our sample, while for the first half of the sample, the effect was negative. The third chapter investigates the gender differences in stock selection preferences on the Taiwan Stock Exchange. By utilizing trading data from the Taiwan Stock Exchange over a span of six years, it becomes possible to analyze trading behavior while minimizing the self-selection bias that is typically present in brokerage data. To study gender differences, this study uses firm-level data. The percentage of male traders in a company is the dependent variable, while the company’s industry and fundamental/technical aspects serve as independent variables. The results show that the percentage of women trading a company rises with a company’s age, market capitalization, a company’s systematic risk, and return. Men trade more frequently and show a preference for dividend-paying stocks and for industries with which they are more familiar. The fourth chapter investigated the relationship between regret and malicious and benign envy. The relationship is analyzed in two different studies. 
In experiment 1, subjects filled out psychological scales measuring regret, the two types of envy, core self-evaluation, and the Big Five personality traits. In experiment 2, felt regret was measured in a hypothetical scenario and regressed on the other variables mentioned above. The two experiments revealed a positive direct relationship between regret and benign envy. The relationship between regret and malicious envy, on the other hand, is mostly an artifact of core self-evaluation and personality influencing both malicious envy and regret. The relationship can be explained by the common action tendency of self-improvement shared by regret and benign envy. Chapter five discusses the differences in green finance regulation and implementation between the EU and China. China introduced the Green Silk Road, while the EU adopted the Green Deal and started working with its own green taxonomy. The first difference concerns the definition of green finance, particularly with regard to coal-fired power plants, and especially the question of nation-states' responsibility for emissions abroad. China is promoting fossil fuel projects abroad through its Belt and Road Initiative, whereas the EU's Green Deal does not permit such actions. Furthermore, there are policies in both the EU and China that create contradictory incentives for economic actors: on the one hand, the EU and China are improving the framework conditions for green financing while, on the other hand, still allowing the promotion of conventional fuels. The role of central banks also differs between the EU and China. China's central bank is actively working towards aligning the financial sector with green finance. A possible new role for the European Central Bank, or priority financing of green sectors through political decision-making, is still being debated.
Semantic-Aware Coordinated Multiple Views for the Interactive Analysis of Neural Activity Data
(2024)
Visualizing brain simulation data is in many respects a challenging task. First, the data used in brain simulations and the resulting datasets are heterogeneous, and insight is derived by relating their many different kinds. Second, the analysis process changes rapidly while hypotheses about the results are being formed. Third, the scale of data entities in these heterogeneous datasets is manifold, reaching from single neurons to brain areas interconnecting millions of them. Fourth, the heterogeneous data consist of a variety of modalities, e.g., from time series data to connectivity data; from single parameters to sets of parameters spanning parameter spaces with multiple possible and biologically meaningful solutions; from geometrical data to hierarchies and textual descriptions, all on mostly different scales. Fifth, visualization includes finding suitable representations and providing real-time interaction while supporting varying analysis workflows. To this end, this thesis presents a scalable and flexible software architecture for visualizing, integrating and interacting with brain simulation data. The scalability and flexibility are achieved by interconnected services forming a series of Coordinated Multiple View (CMV) systems. Multiple use cases are presented, introducing views that leverage this architecture and extend its ecosystem, resulting in a Problem Solving Environment (PSE) from which custom-tailored CMV systems can be built. The construction of such CMV systems is assisted by semantic reasoning, hence the term semantic-aware CMVs.
After publishing graphic pictorial satires on current domestic and foreign policy topics in the 1750s and 1760s, William Hogarth was himself mocked and maligned in numerous caricatures. Starting from this observation, this dissertation asks what stance can be read from the artist's political prints and by what artistic means he expressed it. Analysing the political iconography makes it possible to describe the themes and actors; the reception-aesthetic method, supplemented by speech-act and image-act theory and by propaganda studies, decodes their tendentious statements and manipulative intentions.
In its affinity to the government, Hogarth's political art differs substantially from London's oppositional pictorial satire. This difference is reflected above all in the personal attacks with which contemporary satirists criticised Hogarth. Paul Sandby was the first to react ("The Painter's March from Finchly", 1753), responding to Hogarth's depiction of the Jacobite rising of 1745, with which Hogarth had supplied a justification for the military reform pursued by William Augustus, Duke of Cumberland ("March of the Guards to Finchley", 1751). For his Gin Act campaign ("Gin Lane" and "Beer Street", 1750/51), Hogarth extended the pro-gin iconography of the 1730s (Anonymous: "The lamentable Fall of Madam Geneva", 1736; Anonymous: "To those melancholly Sufferers the Destillers […] The Funeral Procession of Madam Geneva", 1751) in order to argue for state regulation of the distilleries. His publications on the Seven Years' War, with which he supported the policies of the respective governments under Thomas Pellham-Holles, Duke of Newcastle, and William Pitt (the Elder) ("The Invasion", 1756) or John Stuart, Earl of Bute ("The Times Pl. 1", 1763), reveal Hogarth's opportunism. Ultimately, his advocacy for the unpopular Tory government and his criticism of William Pitt became the occasion for Hogarth's denigration by Whig-loyal satire. After this rupture, both sides published defamatory portrait caricatures that aimed at character assassination of the opponent through criminalisation, deformation and demonisation (William Hogarth: "John Wilkes Esqr.", 1763; Anonymous: "Tit for Tat", 1763; Anonymous: "An Answer to the Print of John Wilkes Esqr. by WM Hogarth", 1763; Anonymous: "Pug the snarling cur chastised Or a Cure for the Mange", 1763).
The visual comparisons between Hogarth's political works and the reactions they provoked show that the difference lies not in the subject matter or the political iconography but in the direction of their political influence, above all in Hogarth's loyalty to the government. Consequently, the received scholarly view of a fundamentally critical stance on Hogarth's part must be revised, since he demonstrably positioned himself conservatively and abetted the actions of the government and the elites' retention of power.
This dissertation examines the propagandistic quality of Hogarth's works in comparison with his satirical contemporaries and makes their differing political thrusts visible. Revealing here is the use of artistic and caricatural means (the "how") for the purposes of burlesque (farce/parody) and ridicule (derision/mockery), up to outright agitation, both in Hogarth's works and in the caricatures directed against him. Since William Hogarth decisively shaped these stylistic devices and drove their development, this study subsumes them under the term Hogarthian Wit. With the methods and concepts of propaganda studies, intention and purpose (the "what") can be described as image acts: while the works were fundamentally instances of bias that influenced public opinion on the basis of an ideology, their force increased sharply in the 1760s; enigmatic statements gave way to personal and open criticism of public figures, up to character assassination. In doing so, the artists received one another's work and formed theses and antitheses. Hogarth's one-sided depictions were corrected and supplemented, and his political art was unmasked as propaganda; finally, he was accused of lies and slander. By indicting him, or by punishing him in effigie through secondary stigmatisation, the works demanded a punitive judgement from the viewer. The artistic means employed include a political iconography, stereotyped enemy images and national constructions; reception-aesthetic devices such as juxtapositions and figures of reception and identification; and rhetorical and speech-act devices, up to perlocutions.
The works can thus be described as propaganda, and hence as hierarchical communication, employing manipulative pictorial strategies that served not only to influence public opinion but also to force political action. Tellingly, both sides used the same iconography and the same stylistic, compositional and communicative means regardless of their political message, whereby the Hogarthian Wit was consolidated and continuously developed.
In machine learning, classification is the task of predicting a label for each point in a data set. When the class of each point in a labeled subset is already known, this information is used to recognize patterns and make predictions about the points in the remainder of the set, referred to as the unlabeled set. This scenario falls within the field of supervised learning.
However, the number of labeled points can be restricted, because, e.g., it is expensive to obtain this information. Besides, this subset may be biased, such as in the case of self-selection in a survey. Consequently, the classification performance for unlabeled points may be limited. To improve the reliability of the results, semi-supervised learning tackles the setting of labeled and unlabeled data. Moreover, in many cases, additional information about the size of each class can be available from undisclosed sources.
This cumulative thesis presents different studies that incorporate this external cardinality-constraint information into three important algorithms for binary classification in the supervised context: support vector machines (SVM), classification trees, and random forests. From a mathematical point of view, we focus on mixed-integer programming (MIP) models for semi-supervised approaches that, for each algorithm, include a cardinality constraint for each class.
Furthermore, since the proposed MIP models are computationally challenging, we also present techniques that simplify the process of solving these problems. In the SVM setting, we introduce a re-clustering method and further computational techniques to reduce the computational cost. In the context of classification trees, we provide correct values for certain bounds that play a crucial role in solver performance. For the random forest model, we develop preprocessing techniques and an intuitive branching rule to reduce the solution time. For all three methods, our numerical results show that our approaches achieve better statistical performance for biased samples than the standard approach.
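To illustrate what the cardinality constraint contributes, the following sketch labels unlabeled points greedily by SVM score while matching a known class size exactly. This greedy stand-in only mimics the effect of the constraint on synthetic data; the thesis instead solves the labeling jointly with the classifier training as a mixed-integer program, and all names and data below are invented for illustration:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

# Synthetic labeled sample (possibly biased) and unlabeled sample.
X_lab = rng.normal(size=(30, 2)) + np.repeat([[2.0, 2.0], [-2.0, -2.0]], 15, axis=0)
y_lab = np.repeat([1, 0], 15)
X_unl = rng.normal(size=(40, 2)) + np.repeat([[2.0, 2.0], [-2.0, -2.0]], 20, axis=0)

# External cardinality information: exactly 20 of the 40 unlabeled
# points are known to belong to class 1 (assumed for this sketch).
n_pos = 20

# Greedy stand-in for the MIP: train on the labeled points only, then
# assign class 1 to the n_pos highest-scoring unlabeled points.
svm = LinearSVC().fit(X_lab, y_lab)
scores = svm.decision_function(X_unl)
y_unl = np.zeros(len(X_unl), dtype=int)
y_unl[np.argsort(scores)[-n_pos:]] = 1
```

Unlike this two-step heuristic, the MIP formulation lets the class-size information influence the separating hyperplane itself, which is what helps under sampling bias.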
When humans encounter attitude objects (e.g., other people, objects, or constructs), they evaluate them. Often, these evaluations are based on attitudes. Whereas most research focuses on univalent (i.e., only positive or only negative) attitude formation, little research exists on ambivalent (i.e., simultaneously positive and negative) attitude formation. Following a general introduction to ambivalence, I present three original manuscripts investigating ambivalent attitude formation. The first manuscript addresses ambivalent attitude formation from previously univalent attitudes. The results indicate that responding to a univalent attitude object incongruently leads to ambivalence measured via mouse tracking but not ambivalence measured via self-report. The second manuscript addresses whether the same number of positive and negative statements presented block-wise in an impression formation task leads to ambivalence. The third manuscript also used an impression formation task and addresses the question of whether randomly presenting the same number of positive and negative statements leads to ambivalence. Additionally, the effect of the block size of same-valenced statements is investigated. The results of the last two manuscripts indicate that presenting all statements of one valence and then all statements of the opposite valence leads to ambivalence measured via self-report and mouse tracking. Finally, I discuss implications for attitude theory and research as well as future research directions.
Mixed-Integer Optimization Techniques for Robust Bilevel Problems with Here-and-Now Followers
(2025)
In bilevel optimization, some of the variables of an optimization problem have to be an optimal solution to another nested optimization problem. This specific structure renders bilevel optimization a powerful tool for modeling hierarchical decision-making processes, which arise in various real-world applications such as in critical infrastructure defense, transportation, or energy. Due to their nested structure, however, bilevel problems are also inherently hard to solve—both in theory and in practice. Further challenges arise if, e.g., bilevel problems under uncertainty are considered.
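In generic notation (an illustrative sketch, not the specific models of this thesis), the nested structure just described can be written as:

```latex
\min_{x,\,y} \; F(x,y)
\quad \text{s.t.} \quad G(x,y) \le 0, \qquad
y \in \operatorname*{arg\,min}_{\bar{y}} \left\{ f(x,\bar{y}) \,:\, g(x,\bar{y}) \le 0 \right\}
```

Here the leader chooses $x$ anticipating that the follower responds with a $\bar{y}$ that is optimal for the lower-level problem; uncertainty may enter, e.g., in the lower-level objective $f$.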
In this dissertation, we address different types of uncertainties in bilevel optimization using techniques from robust optimization. We study mixed-integer linear bilevel problems with lower-level objective uncertainty, which we tackle using the notion of Gamma-robustness. We present two exact branch-and-cut approaches to solve these Gamma-robust bilevel problems, along with cuts tailored to the important class of monotone interdiction problems. Given the overall hardness of the considered problems, we additionally propose heuristic approaches for mixed-integer, linear, and Gamma-robust bilevel problems. The latter rely on solving a linear number of deterministic bilevel problems so that no problem-specific tailoring is required. We assess the performance of both the exact and the heuristic approaches through extensive computational studies.
In addition, we study the problem of determining optimal tolls in a traffic network in which the network users hedge against uncertain travel costs in a robust way. The overall toll-setting problem can be seen as a single-leader multi-follower problem with multiple robustified followers. We model this setting as a mathematical problem with equilibrium constraints, for which we present a mixed-integer, nonlinear, and nonconvex reformulation that can be tackled using state-of-the-art general-purpose solvers. We further illustrate the impact of considering robustified followers on the toll-setting policies through a case study.
Finally, we highlight that the sources of uncertainty in bilevel optimization are much richer compared to single-level optimization. To this end, we study two aspects related to so-called decision uncertainty. First, we propose a strictly robust approach in which the follower hedges against erroneous observations of the leader's decision. Second, we consider an exemplary bilevel problem with a continuous but nonconvex lower level in which algorithmic necessities prevent the follower from making a globally optimal decision in an exact sense. The example illustrates that even very small deviations in the follower's decision may lead to arbitrarily large discrepancies between exact and computationally obtained bilevel solutions.
This dissertation examines the relevance of regimes for stock markets. In three research articles, we cover the identification and predictability of regimes and their relationships to macroeconomic and financial variables in the United States.
The initial two chapters contribute to the debate on the predictability of stock markets. While various approaches can demonstrate in-sample predictability, their predictive power diminishes substantially in out-of-sample studies. Parameter instability and model uncertainty are the primary challenges. However, certain methods have demonstrated efficacy in addressing these issues. In Chapters 1 and 2, we present frameworks that combine these methods meaningfully. Chapter 3 focuses on the role of regimes in explaining macro-financial relationships and examines the state-dependent effects of macroeconomic expectations on cross-sectional stock returns. Although it is common to capture the variation in stock returns using factor models, their macroeconomic risk sources are unclear. According to macro-financial asset pricing, expectations about state variables may be viable candidates to explain these sources. We examine their usefulness in explaining factor premia and assess their suitability for pricing stock portfolios.
In summary, this dissertation improves our understanding of stock market regimes in three ways. First, we show that it is worthwhile to exploit the regime dependence of stock markets. Markov-switching models and their extensions are valuable tools for filtering stock market dynamics and identifying and predicting regimes in real time. Moreover, accounting for regime-dependent relationships helps to examine the dynamic impact of macroeconomic shocks on stock returns. Second, we emphasize the usefulness of macro-financial variables for the stock market. Regime identification and forecasting benefit from their inclusion. This is particularly true in periods of high uncertainty, when information processing in financial markets is less efficient. Finally, we recommend addressing parameter instability, estimation risk, and model uncertainty in empirical models. Because it is difficult to find a single approach that meets all of these challenges simultaneously, it is advisable to combine appropriate methods in a meaningful way. The framework should be as complex as necessary but as parsimonious as possible to mitigate additional estimation risk. This is especially recommended when working with financial market data, which typically have a low signal-to-noise ratio.
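The regime-filtering step of a Markov-switching model can be sketched as follows. This is a minimal illustration of the generic Hamilton filter for a Gaussian two-regime model with hypothetical numbers, not the dissertation's actual specification:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P, xi0):
    """Filtered regime probabilities for a Gaussian Markov-switching model.

    y     : (T,) observed series
    mu    : (K,) regime-dependent means
    sigma : (K,) regime-dependent standard deviations
    P     : (K, K) transition matrix, P[i, j] = Prob(regime j | regime i)
    xi0   : (K,) initial regime probabilities
    """
    T, K = len(y), len(mu)
    xi = np.zeros((T, K))
    prev = xi0
    for t in range(T):
        pred = P.T @ prev                  # predicted regime probabilities
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        joint = pred * dens                # joint weight of regime and observation
        prev = joint / joint.sum()         # Bayes update: filtered probabilities
        xi[t] = prev
    return xi

# Toy series that switches from a low-mean to a high-mean regime
y = np.array([0.1, 3.0, 2.9, 0.2])
xi = hamilton_filter(y,
                     mu=np.array([0.0, 3.0]), sigma=np.array([1.0, 1.0]),
                     P=np.array([[0.9, 0.1], [0.1, 0.9]]),
                     xi0=np.array([0.5, 0.5]))
# xi[t] holds the filtered probability of each regime at time t.
```

Smoothed probabilities and parameter estimation add a backward pass and an EM or maximum-likelihood layer on top of this forward recursion.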
Veterinary antibiotics are used worldwide on a large scale to treat animal diseases. Because these agents are poorly absorbed in the animals' gut, they largely reach agricultural land unchanged via excretions. There they can be taken up by non-target organisms, such as vascular plants, and threaten their early development. In this context, research has so far focused mainly on crop plants, while wild plant species of ecologically important cultivated grassland, which come into contact with antibiotic substances primarily through slurry application, have received considerably less attention. This thesis therefore investigated the influence of realistic concentrations (0.1 - 20 mg/L) of two frequently used veterinary antibiotics, tetracycline and sulfamethazine, on the germination and early growth of typical species of temperate cultivated grassland. Since in nature several stressors often act on an organism simultaneously, two multi-stress scenarios were also examined, namely pharmaceutical mixtures and the interplay of a pharmaceutical agent with abiotic conditions (drought stress). Standardized laboratory experiments as well as more natural pot and field experiments were conducted in four thematic blocks.
The results showed that both germination and early growth were impaired by both agents, though more frequently by tetracycline. While germination was affected inconsistently with respect to effect direction, a strong, antibiotic- and concentration-dependent reduction in root length was observed, especially for tetracycline in the Petri dish experiments (up to 96% at 20 mg/L, in Dactylis glomerata). Above-ground growth (leaf length, plant height, biomass) was less affected, and often in a growth-promoting direction. Hormesis effects appeared repeatedly throughout the work, i.e. low concentrations acted as stimulants while higher concentrations were toxic. Contrary to expectation, the combinations of different factors did not clearly lead to stronger or exclusive effects. In individual cases such patterns emerged, but the loss of single effects in the combinations was also observed, as well as single effects that reappeared there.
Significant, albeit inconsistent, effects were found on the early developmental stages of typical wild plant species that are already in decline due to other factors. Especially in view of the repeated application of slurry and the potential accumulation of these highly persistent substances, veterinary antibiotics represent a further important factor endangering biodiversity and species composition, which is why an environmentally conscious handling of them is advised.
Partial differential equations are not always suited to model all physical phenomena, especially if long-range interactions are involved or if the actual solution might not satisfy the regularity requirements associated with the partial differential equation. One remedy to this problem is nonlocal operators, which typically consist of integrals that incorporate interactions between two separated points in space; the corresponding solutions to nonlocal equations have to satisfy fewer regularity conditions.
In PDE-constrained shape optimization the goal is to minimize or maximize an objective functional that is dependent on the shape of a certain domain and on the solution to a partial differential equation, which is usually also influenced by the shape of this domain. Moreover, parameters associated with the nonlocal model are oftentimes domain dependent and thus it is a natural next step to now consider shape optimization problems that are governed by nonlocal equations.
Therefore, an interface identification problem constrained by nonlocal equations is thoroughly investigated in this thesis. Here, we focus on rigorously developing the first and second shape derivative of the associated reduced functional. In addition, we study first- and second-order shape optimization algorithms in multiple numerical experiments.
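For orientation, the first shape derivative referred to here follows the standard definition (generic notation): for a shape functional $J$ and a sufficiently smooth perturbation field $V$,

```latex
dJ(\Omega)[V] \;:=\; \lim_{t \searrow 0} \frac{J(\Omega_t) - J(\Omega)}{t},
\qquad \Omega_t := \{\, x + t\,V(x) \;:\; x \in \Omega \,\}
```

The second shape derivative repeats this construction for a second perturbation direction, and gradient-type algorithms descend along fields $V$ for which $dJ(\Omega)[V]$ is negative.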
Moreover, we also propose Schwarz methods for nonlocal Dirichlet problems as well as regularized nonlocal Neumann problems. Particularly, we investigate the convergence of the multiplicative Schwarz approach and we conduct a number of numerical experiments, which illustrate various aspects of the Schwarz method applied to nonlocal equations.
Since applying the finite element method to solve nonlocal problems numerically can be quite costly, Local-to-Nonlocal couplings emerged, which combine the accuracy of nonlocal models on one part of the domain with the fast computation of partial differential equations on the remaining area. Therefore, we also examine the interface identification problem governed by an energy-based Local-to-Nonlocal coupling, which can be numerically computed by making use of the Schwarz method. Here, we again present a formula for the shape derivative of the associated reduced functional and investigate a gradient based shape optimization method.
Based on data collected from two surveys conducted in Germany and Taiwan, my first paper (Chapter 2) examines the impact of culture through language priming (Chinese vs. German or English) on individuals’ price fairness perception and attitudes towards government intervention and economic policy involving inequality. We document large cross-language differences: in both surveys, subjects who completed the survey in Chinese demonstrated significantly higher perceived price fairness in a free market mechanism than their counterparts who completed it in German or English. They were also more inclined to accept a Pareto improvement policy which increases social and economic inequality. In the second survey, answering in Chinese also induced a lower readiness to accept government intervention in markets through price limits than answering in English. Since language functions as a cultural mindset prime, our findings imply that culture plays an important role in fairness perception and preferences regarding social and economic inequality.
Chapter 3 of this work deals with patriotism priming. In two online experimental studies conducted in Germany and China, we tested three different priming methods for constructive and blind patriotism respectively. Subjects were randomly assigned to one of three treatments motivated by previous studies in different countries: a constructive patriotism priming treatment, a blind patriotism priming treatment, and a non-priming baseline. While the first experiment had a between-subject design, the second enabled both between-subject and within-subject comparisons, since individuals' level of patriotism was measured before and after priming. The design of the second survey also enabled a comparison among the three priming methods for constructive and blind patriotism. The results showed that the tested methods, especially national achievements as a priming mechanism, functioned well overall for constructive patriotism.
Surprisingly, the priming for blind patriotism did not work in either Germany or China; instead, the opposite results were observed. Discussion and implications for future studies are provided at the end of the chapter.
Using data from the same studies as in Chapter 3, Chapter 4 examines the impact of patriotism on individuals’ fairness perception and preferences regarding inequality and on their attitudes toward economic policy involving inequality. Across surveys and countries, a positive and significant effect of blind patriotism on economic individualism was found. For China, we also found a significant relationship between blind patriotism and the agreement to unequal economic policy. In contrast to blind patriotism, we did not find an association of constructive patriotism to economic individualism and to attitudes toward economic policy involving inequality. Political and economic implications based on the results are discussed.
The last chapter (Chapter 5) studies the self-serving bias (when an individual’s perception of fairness is biased by self-interest) in the context of price setting and profit distribution. Analyzing data from four surveys conducted in six countries, we found that the stated appropriate product price and the fair allocation of profit were significantly higher when the outcome was favorable to oneself. This self-serving bias in price fairness perception, however, differed significantly across countries and was significantly higher in Germany, Taiwan, and China than in Vietnam, Estonia, and Japan.
Although economic individualism and masculinity were found to have a significant negative effect on self-serving bias in price fairness judgments, they did not sufficiently explain the differences in this bias between countries. Furthermore, we also observed an increase in self-serving bias in profit allocation over time in time-series data for one country (Germany) covering 2011 to 2023.
All four papers are co-authored with Prof. Marc Oliver Rieger, and the first paper has been accepted for publication in the Review of Behavioral Economics.
The gender wage gap in labor market outcomes has been intensively investigated for decades, yet it remains a relevant and innovative research topic in labor economics. Chapter 2 of this dissertation explores the pressing issue of gender wage disparity in Ethiopia. By applying various empirical methodologies and measures of occupational segregation, this chapter aims to analyze the role of female occupational segregation in explaining the gender wage gap across the pay distribution. The findings reveal a significant difference in monthly wages, with women consistently earning lower wages across the wage distribution.
Importantly, the result indicates a negative association between female occupational segregation and the average earnings of both men and women. Furthermore, the estimation result shows that female occupational segregation partially explains the gender wage gap at the bottom of the wage distribution. I find that the magnitude of the gender wage gap in the private sector is higher than in the public sector.
In Chapter 3, the Ethiopian Demography and Health Survey data are leveraged to explore the causal relationship between female labor force participation and domestic violence. Domestic violence against women is a pervasive public health concern, particularly in Africa, including Ethiopia, where a significant proportion of women endure various forms of domestic violence perpetrated by intimate partners. Economic empowerment of women through increased participation in the labor market can be one of the mechanisms for mitigating the risk of domestic violence.
This study seeks to provide empirical evidence supporting this hypothesis. Using the employment rate of women at the community level as an instrumental variable, the finding suggests that employment significantly reduces the risk of domestic violence against women. More precisely, the result shows that women’s employment status significantly reduces domestic violence by about 15 percentage points. This finding is robust for different dimensions of domestic violence, such as physical, sexual, and emotional violence.
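The instrumental-variable logic used here can be illustrated with a generic just-identified 2SLS estimator. This is a hedged sketch on synthetic data, not the chapter's actual specification, instrument construction, or dataset:

```python
import numpy as np

def iv_estimate(y, X, Z):
    """Just-identified instrumental-variable (2SLS) estimator.

    y : (n,) outcome
    X : (n, k) regressors including the endogenous variable and a constant
    Z : (n, k) instruments (the exogenous columns of X plus the instrument)
    Returns beta_hat = (Z'X)^{-1} Z'y.
    """
    return np.linalg.solve(Z.T @ X, Z.T @ y)

# Synthetic example: x is endogenous (correlated with the error u),
# z is a valid instrument (shifts x, independent of u).
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogeneity enters via u
y = 1.0 + 2.0 * x + u                        # true slope is 2.0
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta_iv = iv_estimate(y, X, Z)               # consistent despite endogeneity
```

Plain OLS of y on x would be biased upward here because x and u are correlated; the instrument recovers the causal slope.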
By examining the employment outcomes of immigrants in the labor market, Chapter 4 extends the dissertation's inquiry to the dynamics of immigrant economic integration into the destination country. Drawing on data from the German Socio-Economic Panel, the chapter scrutinizes the employment gap between native-born individuals and two distinct groups of first-generation immigrants: refugees and other migrants. Through rigorous analysis, Chapter 4 aims to identify the factors contributing to disparities in employment outcomes among these groups. In this chapter, I aim to disentangle the heterogeneity characteristic of refugees and other immigrants in the labor market, thereby contributing to a deeper understanding of immigrant labor market integration in Germany.
The results show that refugees and other migrants are less likely to find employment than comparable natives. The refugee-native employment gap is much wider than the other-migrant-native gap. Moreover, the findings vary by gender and migration category. While other migrant men do not differ from native men in the probability of being employed, refugee women are the most disadvantaged group, compared to both other migrant women and native women. The study suggests that German language proficiency and permanent residence permits partially explain refugees' lower employment probability in the German labor market.
Chapter 5 (co-authored with Uwe Jirjahn) utilizes the same dataset to explore the immigrant-native trade union membership gap, focusing on the role of integration in the workplace and into society. The integration of immigrants into society and the workplace is vital not only to improve migrants' performance in the labor market but also to enable active participation in institutions such as trade unions. In this study, we argue that the incomplete integration of immigrants into the workplace and society implies that immigrants are less likely to be union members than natives. Our findings show that first-generation immigrants are less likely to be trade union members than natives. Notably, the analysis shows that the immigrant-native gap in union membership depends on immigrants’ integration into the workplace and society. The gap is smaller for immigrants working in firms with a works council and having social contacts with Germans. Moreover, the results reveal that the immigrant-native union membership gap narrows with years since arrival in Germany.
This dissertation describes the workflow of creating an augmented reality app for the project "ARmob" on Android devices. The app positions 3D objects, created with SfM techniques and reconstructed according to the latest state of research, at their original locations in the real world. The virtual objects are overlaid on the real world to match the viewer's position and viewing angle, creating the impression that the objects are part of reality. The positional accuracy of the display depends on the satellite availability of the GNSS and on the accuracy of the other sensors. The app is intended to serve as a basis and framework for further apps investigating spatial perception in the field of cartography.
Convex Duality in Consumption-Portfolio Choice Problems with Epstein-Zin Recursive Preferences
(2025)
This thesis deals with consumption-investment allocation problems with Epstein-Zin recursive utility, building upon the dualization procedure introduced by [Matoussi and Xing, 2018]. While their work exclusively focuses on truly recursive utility, we extend their procedure to include time-additive utility using results from general convex analysis. The dual problem is expressed in terms of a backward stochastic differential equation (BSDE), for which existence and uniqueness results are established. In this regard, we close a gap left open in previous works by extending results restricted to specific subsets of parameters to cover all parameter constellations within our duality setting.
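As background (the standard discrete-time textbook form, not the thesis's continuous-time BSDE formulation), Epstein-Zin utility is defined by the recursion

```latex
V_t \;=\; \Big[\,(1-\beta)\,c_t^{\,1-1/\psi}
\;+\; \beta\,\big(\mathbb{E}_t\big[V_{t+1}^{\,1-\gamma}\big]\big)^{\frac{1-1/\psi}{1-\gamma}}\Big]^{\frac{1}{1-1/\psi}}
```

with relative risk aversion $\gamma$, elasticity of intertemporal substitution $\psi$, and discount factor $\beta$. For $\gamma = 1/\psi$ the recursion reduces to a monotone transformation of time-additive CRRA utility, which is the boundary case the extended dualization covers.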
Using duality theory, we analyze the utility loss of an investor with recursive preferences, that is, the difference in utility between acting suboptimally in a given market and following her best possible (optimal) consumption-investment behaviour. In particular, we derive universal power utility bounds, presenting a novel and tractable approximation of the investor's optimal utility and of the welfare loss associated with specific investment-consumption choices. To address quantitative shortcomings of these power utility bounds, we additionally introduce one-sided variational bounds that offer a more effective approximation for recursive utilities. The theoretical value of our power utility bounds is demonstrated through their application in a new existence and uniqueness result for the BSDE characterizing the dual problem.
Moreover, we propose two approximation approaches for consumption-investment optimization problems with Epstein-Zin recursive preferences. The first approach directly formalizes the classical concept of least favorable completion, providing an analytic approximation fully characterized by a system of ordinary differential equations. In the special case of power utility, this approach can be interpreted as a variation of the well-known Campbell-Shiller approximation, improving some of its qualitative shortcomings with respect to the state dependence of the resulting approximate strategies. The second approach introduces a PDE-iteration scheme, reinterpreting artificial completion as a dynamic game in which the investor and a dual opponent interact until reaching an equilibrium that corresponds to an approximate solution of the investor's optimization problem. Despite the need for additional approximations within each iteration, this scheme is shown to be quantitatively and qualitatively accurate. Moreover, it is capable of approximating high-dimensional optimization problems, essentially avoiding the curse of dimensionality while providing analytical results.
Globalization significantly transforms labor markets. Advances in production technologies, transportation, and political integration reshape how and where goods and services are produced. Local economic conditions and diverse policy responses create varying speeds of change, affecting regions' attractiveness for living and working -- and promoting mobility.
Competition for talent necessitates a deep understanding of why individuals choose specific destinations, how to ensure their effective labor market integration, and what workplace factors affect workers' well-being.
This thesis focuses on two crucial aspects of labor market change -- migration and workplace technological change. It contributes to our understanding of the determinants of labor mobility, the factors facilitating migrant integration, and the role of workplace automation in worker well-being.
Chapter 2 investigates the relationship between minimum wages (MWs) and regional worker mobility in the EU. EU citizens are free to work anywhere in the common market, which allows them to take advantage of the significant variation in MWs across the EU. However, although MWs are set at the national level, their local relevance also varies substantially -- depending on factors such as the share of affected workers or the extent to which they shift local compensation levels. These variations may attract workers from elsewhere, whether from within a country or from abroad.
Analyzing regional variations in the Kaitz index, a measure of local MW impact, reveals that higher MWs can significantly increase inflows of low-skilled EU workers, particularly in central Europe.
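The Kaitz index used above is the ratio of the minimum wage to a measure of the typical local wage, commonly the median. A minimal sketch with hypothetical numbers:

```python
def kaitz_index(minimum_wage, wages):
    """Kaitz index: minimum wage relative to the median wage of a region."""
    wages = sorted(wages)
    n = len(wages)
    median = wages[n // 2] if n % 2 else (wages[n // 2 - 1] + wages[n // 2]) / 2
    return minimum_wage / median

# Illustrative hourly wage samples for two hypothetical regions
region_a = [9.5, 10.0, 11.0, 12.5, 14.0, 16.0, 20.0]   # low-wage region
region_b = [14.0, 16.0, 18.0, 20.0, 24.0, 28.0, 35.0]  # high-wage region
mw = 10.0
kaitz_a = kaitz_index(mw, region_a)  # the same national MW "bites" harder here
kaitz_b = kaitz_index(mw, region_b)
```

A higher index thus signals a stronger local bite of the same national minimum wage, which is the regional variation the chapter exploits.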
Chapter 3 examines the inequality in returns to skills experienced by immigrants, focusing on the role of linguistic proximity between migrants' origin and destination countries. Harmonized individual-level data from nine linguistically diverse migrant-hosting economies allows for an analysis of the wage gaps faced by immigrants from various origins, implicitly indicating how well they and their skills are integrated into the local labor markets. The analysis reveals that greater linguistic distance is associated with a higher wage penalty for highly skilled immigrants and a lower position in the wage distribution for those without tertiary education.
Chapter 4 investigates an institutional factor potentially relevant for the integration of immigrants -- the labor market impact of Confucius Institutes (CIs), Chinese government-sponsored institutions that promote Chinese language and culture abroad. CIs have been found to foster trade and cultural exchange, indicating their potential relevance in shaping natives' attitudes towards and trust in China and Chinese individuals. Examining the relationship between local CI presence and the wages of Chinese immigrants in local labor markets of the United States, the analysis reveals that CIs are associated with significantly reduced wages for Chinese immigrants residing nearby. An event study demonstrates that the mere announcement of a new CI negatively impacts local wages for Chinese immigrants, independent of the CI's actual opening.
Chapter 5 explores how working in automatable jobs affects life satisfaction in Germany. Following earlier literature, we classify occupations by their potential for automation and define the top third of occupations on this metric as automatable jobs. We find that workers in highly automatable jobs report lower life satisfaction. Moreover, we detect a non-linearity: workers in moderately automatable jobs (the second third of the distribution) show a positive association with life satisfaction. Overall, the negative relationship with automation is most pronounced among younger and blue-collar workers, irrespective of the non-linearity.
Some research findings show that emotional experiences influence, or are related to, cognitive domains. Building on these findings, two studies were designed. Study 1 examined the relationship between the valences of dispositional emotional experiences and the global self-assessment of memory (metamemory) in student teachers (N = 218). Dispositional experiences were measured with the German Positive and Negative Affect Schedule (PANAS) (Krohne, Egloff, Kohlmann & Tausch, 1996) and the global self-assessment of memory with the German Squire Subjective Memory Questionnaire (SSMQ) (Wolf, 2017). It was hypothesized that positive valence, in contrast to negative valence, would be positively related to higher memory self-assessment. The results confirm the hypotheses. In Study 2, current valence was induced using the Open Affective Standardized Image Set (OASIS) (Kurdi, Lozano & Banaji, 2017) to examine changes in metamemory and actual memory performance in student teachers (N = 44). It was hypothesized that positive valence would have a positive effect, negative valence a negative effect, and neutral valence no effect on metamemory and memory performance. Further relationships between metamemory and memory performance as well as between induced valence and memory performance were hypothesized. The measurement instruments from Study 1 remained the same. Memory performance was operationalized using a low-meaning syllable test following Ebbinghaus (1885). The results do not confirm the hypotheses. The emotion induction was unsuccessful, so the results cannot be attributed to a change in valence. As in Study 1, a relationship between dispositional experiences and metamemory emerged.
Further exploratory results, especially with regard to gender, are presented. The results are relevant for the professionalization of student teachers.
The Kunstschutz (art protection) unit of the German Wehrmacht in occupied Greece (1941-1944) consisted of conscripted German archaeologists. They had initially been fellows or staff members of the Archaeological Institute of the German Reich (AIDR) under the conditions of National Socialism before returning in Wehrmacht uniform during the Second World War. One subject of investigation is their biographies in the context of the Athens department, whose director until 1936 was Georg Karo, and of the institute's headquarters under Theodor Wiegand, president from 1932 to 1936. The dissertation shows the mutual interdependence of the NS regime's foreign-policy legitimation through the Olympic Games and the institute's most important science-policy success, the resumption of the Olympia excavation, which Wiegand and Karo had pursued since 1933 and achieved through their political networks in 1936. These acts of accommodation to the NS regime shaped the institute's own young archaeologists as well as Greek society.
Protective measures were only a small part of the Kunstschutz officers' activities but an important part of Wehrmacht propaganda. Institute president Martin Schede (1937 to 1945) requested staff above all for two AIDR projects: the production of aerial photographs covering, if possible, all of Greece, and excavations on Crete. These interim results alone justify the title "Kunstschutz as an Alibi".
The dissertation attempts to answer the question of why archaeological Kunstschutz could not be more than an alibi. It does so above all by considering the political as well as the military lines of tradition of German archaeology in Greece and Germany.
The goal of this work is to compare operators that are defined on possibly varying Hilbert spaces. Distance concepts as well as convergence concepts for such operators are explained and examined. For distances we present three main notions. All have in common that they use space-linking operators that connect the spaces. At first, we look at unitary maps and compare the unitary orbits of the operators. Then, we consider isometric embeddings, based on a concept of Joachim Weidmann. Finally, we look at contractions, but with more norm equations in the comparison. The latter idea is based on a concept of Olaf Post called quasi-unitary equivalence. Our main result is that the unitary and isometric distances are equal provided the operators are both self-adjoint and have 0 in their essential spectra. In the third chapter, we focus specifically on these distance notions for compact operators and operators in p-Schatten classes. In this case, the interpretation of the spectra as null sequences allows further distance investigations. Chapter four deals mainly with convergence notions for operators on varying Hilbert spaces. The analyses in this work deal exclusively with concepts of norm resolvent convergence. The main conclusion of the chapter is that the generalisation of norm resolvent convergence by Joachim Weidmann and the generalisation by Olaf Post, called quasi-unitary equivalence, are equivalent to each other. In addition, we specify error bounds and deal with the convergence speed of both concepts. Two important implications of these convergence notions are that the approximation is spectrally exact, i.e., the spectra converge suitably, and that the convergence transfers to the functional calculus of bounded functions vanishing at infinity.
The new millennium has been characterized by rising digitalization, the proliferation of shadow banking, and significant advancements in machine learning and natural language processing. These trends present both challenges and opportunities, which my dissertation addresses. This cumulative dissertation investigates critical aspects of financial stability, monetary policy, and the transition towards cashless economies through three distinct but interrelated studies.
The first paper examines the risk-taking channel of monetary policy transmission within the euro area, focusing on shadow banks. Through vector autoregressive models, it assesses the impact of conventional and unconventional monetary policy shocks on shadow banks' asset growth and risk asset ratios. The results indicate that lower interest rates lead to a portfolio reallocation towards riskier assets and a general expansion of assets in shadow banks. In the case of conventional monetary policy shocks, both effects last three times as long as in the case of unconventional monetary policy shocks. Country-specific as well as sector-specific estimations confirm these findings. This study bridges gaps in the existing literature, especially in the eurozone, by highlighting the significant role shadow banks play in monetary policy transmission, suggesting implications for financial regulation and stability.
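The mechanics of such a VAR exercise can be illustrated with a minimal sketch. The two-variable system, coefficient values, and sample size below are invented for illustration and are not the paper's specification; the sketch only shows how a VAR(1) is estimated by OLS and how impulse responses to a policy shock are read off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-variable VAR(1): [policy rate, shadow-bank risk-asset ratio].
# Coefficients are invented so that a rate cut raises the risk-asset ratio,
# mimicking a risk-taking channel; this is not the paper's specification.
A_true = np.array([[0.8, 0.0],
                   [-0.3, 0.7]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimate the VAR(1) coefficient matrix by ordinary least squares.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse responses to a -1pp policy-rate shock: irf_h = A_hat^h @ shock.
shock = np.array([-1.0, 0.0])
irf = np.array([np.linalg.matrix_power(A_hat, h) @ shock for h in range(12)])
print(irf[:, 1])  # the risk-asset ratio expands after the rate cut
```

In the actual studies, identification of the shocks and the lag order are of course far more involved; the sketch only conveys why the estimated coefficient matrix determines both the sign and the persistence of the responses.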
The second paper explores the influence of financial stability considerations on US monetary policy, particularly during the Great Recession. Utilizing natural language processing and machine learning techniques on congressional hearings, this study constructs indicators for financial stability sentiment expressed by the Federal Reserve Chairs. Empirical analysis is conducted using Taylor-rule models, revealing that negative financial stability sentiment is associated with a more accommodative monetary policy stance, even before the Great Recession. This work provides new insights into the integration of financial stability concerns into monetary policy frameworks, demonstrating the need for a balanced approach to economic stability. The article suggests that under a dual mandate, such as that of the Federal Reserve, financial stability can, to some extent, already be factored into monetary policy deliberations.
The third paper sheds new light on the ``cash paradox'' by uncovering factors of the cashless transition that have not been fully understood so far. Using a comprehensive dataset covering 65 countries, the study employs panel data models to explain the paradox (increasing demand for central bank money despite soaring digitalization), especially among technologically advanced countries such as Japan. Empirical evidence suggests that digitalization is not significantly associated with higher reliance on physical cash. The study uncovers a unique non-linear relationship between trust and cash usage (the ``Arch of Trust''), which holds after addressing potential endogeneity issues using 2SLS estimation. Contrary to widespread misinterpretations of Keynes' (1937) reasons for holding cash, the findings highlight that distrust is the key factor behind two distinct puzzles in economics, linking cash hoarding with ``missing'' funds on capital markets and the slower shift toward digital payments in low-trust societies. A key insight is the role of trust as a (social) insurance, cushion, or safety net that dampens the perception of risk and reduces the precautionary and transactional demand for physical cash while encouraging a shift towards riskier alternatives. This, in turn, is connected to a third puzzle, the ``paradox of prudence'': a shift from riskier investments to a safer asset, cash, may be prudent at the individual level but risky for the overall economy, a concern for macroprudential policymakers. Additionally, the research highlights the critical role of culture in driving the global movement towards cashless economies: societies that are more self-expression-oriented (the main cultural dimension considered) and culturally closer to Sweden tend to be less cash-intensive. These insights are vital for macroprudential regulators as well as for policymakers designing payment systems and CBDCs in culturally diverse regions like the Eurozone.
Collectively, these papers contribute to a deeper understanding of monetary policy, financial stability, and the transition from cash-based to (nearly) cashless societies, offering significant theoretical and practical implications for academics, regulators and central bankers.
Although universality has fascinated mathematicians over the last decades, there are still numerous open questions in this field that require further investigation. In this work, we mainly focus on classes of functions whose Fourier series are universal in the sense that they allow us to uniformly approximate any continuous function defined on a suitable subset of the unit circle.
The structure of this thesis is as follows. In the first chapter, we will initially introduce the most important notation which is needed for our following discussion. Subsequently, after recalling the notion of universality in a general context, we will revisit significant results concerning universality of Taylor series. The focus here is particularly on universality with respect to uniform convergence and convergence in measure. By a result of Menshov, we will transition to universality of Fourier series which is the central object of study in this work.
In the second chapter, we recall spaces of holomorphic functions which are characterized by the growth of their coefficients. In this context, we will derive a relationship to functions on the unit circle via an application of the Fourier transform.
In the second part of the chapter, our attention is devoted to the $\mathcal{D}_{\textup{harm}}^p$ spaces which can be viewed as the set of harmonic functions contained in the $W^{1,p}(\D)$ Sobolev spaces. In this context, we will also recall the Bergman projection. Thanks to the intensive study of the latter in relation to Sobolev spaces, we can derive a decomposition of $\mathcal{D}_{\textup{harm}}^p$ spaces which may be seen as analogous to the Riesz projection for $L^p$ spaces. Owing to this result, we are able to provide a link between $\mathcal{D}_{\textup{harm}}^p$ spaces and spaces of holomorphic functions on $\mathbb{C}_\infty \setminus \s$ which turns out to be a crucial step in determining the dual of $\mathcal{D}_{\textup{harm}}^p$ spaces.
The last section of this chapter deals with the Cauchy dual which has a close connection to the Fantappié transform. As an application, we will determine the Cauchy dual of the spaces $D_\alpha$ and $D_{\textup{harm}}^p$, two results that will prove to be very helpful later on. Finally, we will provide a useful criterion that establishes a connection between the density of a set in the direct sum $X \oplus Y$ and the Cauchy dual of the intersection of the respective spaces.
The subsequent chapter will delve into the theory of capacities and, consequently, potential theory, which will prove essential in formulating our universality results. In addition to introducing further necessary terminology, in the first section we will define capacities following [16], though in the framework of separable metric spaces, and revisit the most important results about them.
Simultaneously, we make preparations that allow us to define the $\mathrm{Li}_\alpha$-capacity, which will turn out to be equivalent to the classical Riesz $\alpha$-capacity but better adapted to the $D_\alpha$ spaces. In the course of our discussion it becomes apparent that the $\mathrm{Li}_\alpha$-capacity is essential for proving uniqueness results for the class $D_\alpha$. This leads to the centerpiece of this chapter: the energy formula for the $\mathrm{Li}_\alpha$-capacity on the unit circle. More precisely, this identity establishes a connection between the energy of a measure and its Fourier coefficients. We will briefly deal with the complement-equivalence of capacities before revisiting the concept of Bessel and Riesz capacities, this time in a much more general context, where we mainly rely on [1]. Since we defined capacities on separable metric spaces in the first section, we can draw a connection between Bessel capacities and $\mathrm{Li}_\alpha$-capacities. To conclude this chapter, we take a closer look at the geometric meaning of capacities: we point out a connection between the Hausdorff dimension and the polarity of a set and transfer it to the $\mathrm{Li}_\alpha$-capacity. Another aspect is the comparison of Bessel capacities across different dimensions, where the theory of Wolff potentials emerges as a crucial auxiliary tool.
In the fourth chapter of this thesis, we will turn our focus to the theory of sets of uniqueness, a subject within the broader field of harmonic analysis. This theory has a close relationship with sets of universality, a connection that will be further elucidated in the upcoming chapter.
The initial section of this chapter will be dedicated to the notion of sets of uniqueness that is specifically adapted to our current context. Building on this concept, we will recall some of the fundamental results of this theory.
In the subsequent section, we will primarily rely on techniques from previous chapters to determine the closed sets of uniqueness for the class $\mathcal{D}_{\alpha}$. The proofs we will discuss are largely influenced by [16, p.\ 178] and [9, pp.\ 82].
One more time, it will become evident that the introduction of the $\mathrm{Li}_\alpha$-capacity in the third chapter and the closely associated energy formula on the unit circle, were the pivotal factors that enabled us to carry out these proofs.
In the final chapter of our discourse, we will present our results on universality. To begin, we will recall a version of the universality criterion which traces back to the work of Grosse-Erdmann (see [26]). Coupled with an outcome from the second chapter, we will prove a result that allows us to obtain the universality of a class using the technique of simultaneous approximation. This tool will play a key role in the proof of our universality results which will follow hereafter.
Our attention will first be directed toward the class $D_\alpha$ with $\alpha$ in the interval $(0,1]$. Here, we show that universality with respect to uniform convergence occurs on closed $\alpha$-polar sets $E \subset \s$. Thanks to results of Carleson and further considerations, which rely in particular on the favorable behavior of the $\mathrm{Li}_\alpha$-kernel, we also find that this result is sharp. In particular, it may be seen as a generalization of the universality result for the harmonic Dirichlet space.
Following this, we will investigate the same class, however, this time for $\alpha \in [-1,0)$. In this case, it turns out that universality with respect to uniform convergence occurs on closed and $(-\alpha)$-complement-polar sets $E \subset \s$. In particular, these sets of universality can have positive arc measure. In the final section, we will focus on the class $D_{\textup{harm}}^p$. Here, we manage to prove that universality occurs on closed and $(1,p)$-polar sets $E \subset \s$. Through results of Twomey [68] combined with an observation by Girela and Pélaez [23], as well as the decomposition of $D_{\textup{harm}}^p$, we can deduce that the closed sets of universality with respect to uniform convergence of the class $D_{\textup{harm}}^p$ are characterized by $(1,p)$-polarity. We conclude our work with an application of the latter result to the class $D^p$. We will show that the closed sets of divergence for the class $D^p$ are given by the $(1,p)$-polar sets.
Ensuring fairness in machine learning models is crucial for ethical and unbiased automated decision-making. Classifications from fair machine learning models should not discriminate against sensitive variables such as sexual orientation and ethnicity. However, achieving fairness is complicated by biases inherent in training data, particularly when data is collected through group sampling, like stratified or cluster sampling as often occurs in social surveys. Unlike the standard assumption of independent observations in machine learning, clustered data introduces correlations that can amplify biases, especially when cluster assignment is linked to the target variable.
To address these challenges, this cumulative thesis focuses on developing methods to mitigate unfairness in machine learning models. We propose a fair mixed effects support vector machine algorithm, a Cluster-Regularized Logistic Regression, and a fair Generalized Linear Mixed Model based on boosting, all of which can handle grouped data and fairness constraints simultaneously. Additionally, we introduce a Julia package, FairML.jl, which provides a comprehensive framework for addressing fairness issues. This package offers a preprocessing technique based on resampling methods to mitigate biases in the data, as well as a post-processing method that seeks an optimal cut-off selection.
To improve fairness in classifications, both processes can be incorporated into any classification method available in the MLJ.jl package. Furthermore, FairML.jl incorporates in-processing approaches, such as optimization-based techniques for logistic regression and support vector machines, to directly address fairness during model training in both regular and mixed models.
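The post-processing idea of optimal cut-off selection can be sketched as follows. This is a hypothetical Python illustration with invented data, not FairML.jl's actual implementation or API: one threshold per group is grid-searched to maximize accuracy subject to a demographic-parity tolerance on the positive rates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scores from some classifier; group 1 receives a biased offset.
# All data and names here are illustrative, not the package's API.
n = 2000
group = rng.integers(0, 2, size=n)
labels = rng.integers(0, 2, size=n)
scores = np.clip(0.5 * labels + 0.25 * rng.normal(size=n) + 0.15 * group, 0, 1)
s0, s1 = scores[group == 0], scores[group == 1]

# Grid-search one cut-off per group: maximize accuracy subject to a
# demographic-parity gap (difference in positive rates) below a tolerance.
grid = np.arange(0.05, 0.951, 0.01)
best, best_acc = None, -1.0
for c0 in grid:
    for c1 in grid:
        gap = abs(float(np.mean(s0 >= c0)) - float(np.mean(s1 >= c1)))
        if gap > 0.02:
            continue  # violates the fairness tolerance
        pred = np.where(group == 0, scores >= c0, scores >= c1)
        acc = float(np.mean(pred == labels))
        if acc > best_acc:
            best, best_acc = (float(c0), float(c1)), acc
print(best, round(best_acc, 3))
```

The biased group ends up with a correspondingly shifted threshold, which is exactly the trade-off a post-processing method exposes: a small accuracy loss buys a bounded disparity in positive classification rates.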
By accounting for data complexities and implementing various fairness-enhancing strategies, our work aims to contribute to the development of more equitable and reliable machine learning models.
This dissertation addresses the measurement and evaluation of the energy and resource efficiency of software systems. Studies show that the environmental impact of Information and Communications Technologies (ICT) is steadily increasing and is already estimated to be responsible for 3 % of the total greenhouse gas (GHG) emissions. Although it is the hardware that consumes natural resources and energy through its production, use, and disposal, software controls the hardware and therefore has a considerable influence on the used capacities. Accordingly, it should also be attributed a share of the environmental impact. To address this software-induced impact, the focus is on the continued development of a measurement and assessment model for energy and resource-efficient software. Furthermore, measurement and assessment methods from international research and practitioner communities were compared in order to develop a generic reference model for software resource and energy measurements. The next step was to derive a methodology and to define and operationalize criteria for evaluating and improving the environmental impact of software products. In addition, a key objective is to transfer the developed methodology and models to software systems that cause high consumption or offer optimization potential through economies of scale. These include, e.g., Cyber-Physical Systems (CPS) and mobile apps, as well as applications with high demands on computing power or data volumes, such as distributed systems and especially Artificial Intelligence (AI) systems.
In particular, factors influencing the consumption of software along its life cycle are considered. These factors include the location (cloud, edge, embedded) where the computing and storage services are provided, the role of the stakeholders, application scenarios, the configuration of the systems, the used data, its representation and transmission, or the design of the software architecture. Based on existing literature and previous experiments, distinct use cases were selected that address these factors. Comparative use cases include the implementation of a scenario in different programming languages, using varying algorithms, libraries, data structures, protocols, model topologies, hardware and software setups, etc. From the selection, experimental scenarios were devised for the use cases to compare the methods to be analyzed. During their execution, the energy and resource consumption was measured, and the results were assessed. Subtracting baseline measurements of the hardware setup without the software running from the scenario measurements makes the software-induced consumption measurable and thus transparent. Comparing the scenario measurements with each other allows the identification of the more energy-efficient setup for the use case and, in turn, the improvement/optimization of the system as a whole. The calculated metrics were then also structured as indicators in a criteria catalog. These indicators represent empirically determinable variables that provide information about a matter that cannot be measured directly, such as the environmental impact of the software. Together with verification criteria that must be complied with and confirmed by the producers of the software, this creates a model with which the comparability of software systems can be established.
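The baseline-subtraction principle described above can be sketched in a few lines; the wattage readings and scenario duration below are made-up illustration values, not measurements from the thesis:

```python
# Sketch of baseline subtraction: power draw is sampled while the hardware is
# idle (baseline) and while the scenario runs; the difference, integrated over
# the scenario duration, is the software-induced energy consumption.
baseline_w = [30.1, 29.8, 30.3, 30.0, 29.9]     # idle hardware, watts at 1 Hz
scenario_w = [45.2, 47.1, 46.5, 46.0, 45.8]     # hardware + software, watts

mean_baseline = sum(baseline_w) / len(baseline_w)
mean_scenario = sum(scenario_w) / len(scenario_w)
duration_s = 120                                 # scenario runtime in seconds

induced_power_w = mean_scenario - mean_baseline
induced_energy_j = induced_power_w * duration_s  # joules = watts * seconds
print(f"software-induced: {induced_power_w:.2f} W, {induced_energy_j:.0f} J")
```

Comparing this induced-energy figure across two implementations of the same scenario is what identifies the more energy-efficient setup.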
The gained knowledge from the experiments and assessments can then be used to forecast and optimize the energy and resource efficiency of software products. This enables developers, but also students, scientists and all other stakeholders involved in the life cycle of software, to continuously monitor and optimize the impact of their software on energy and resource consumption. The developed models, methods, and criteria were evaluated and validated by the scientific community at conferences and workshops. The central outcomes of this thesis, including a measurement reference model and the criteria catalog, were disseminated in academic journals. Furthermore, the transfer to society has been driven forward, e.g., through the publication of two book chapters, the development and presentation of exemplary best practices at developer conferences, collaboration with industry, and the establishment of the eco-label “Blue Angel” for resource and energy-efficient software products. In the long term, the objective is to effect a change in societal attitudes and ultimately to achieve significant resource savings through economies of scale by applying the methods in the development of software in general and AI systems in particular.
In most textbooks, optimal sample allocation is tailored to rather theoretical examples. In practice, however, we often face large-scale surveys with conflicting objectives and many restrictions on quality and cost at the population and subpopulation levels. This multi-objective nature results in a multitude of efficient sample allocations, each giving different weight to a single survey purpose. Additionally, since the input data to the allocation problem often rely on supplementary information derived from estimation, historical data, or expert knowledge, allocations might be inefficient when applied in actual sampling.
This doctoral thesis presents a framework for optimal allocation under standard sampling schemes that allows for specifying the trade-off between different objectives and analyzing their sensitivity to other problem components, aiming to support a decision-maker in identifying a most preferred sample allocation. It dedicates a full chapter to each of the following core questions: 1) How to efficiently incorporate quality and cost constraints for large-scale surveys, say, with thousands of strata and hundreds of precision and cost constraints? 2) How to handle vector-valued objectives whose components address different, possibly conflicting survey purposes? 3) How to account for uncertainty in the input data?
The techniques presented can be used separately or in combination as a general problem-solving framework for constrained multivariate and multidomain, possibly uncertain, sample allocation. The main problem is formulated in a way that highlights the different components of optimal sample allocation and can be taken as a gateway to develop solution strategies to each of the questions above, while shifting the focus between different problem aspects. The first question is addressed through a conic quadratic reformulation, which can be efficiently solved for large problem instances using interior-point methods. Based on this the second question is tackled using a weighted Chebyshev minimization, which provides insight into the sensitivity of the problem and enables a stepwise procedure for considering nonlinear decision functionals. Lastly, uncertainty in the input data is addressed through regularization, chance constraints and robust problem formulations.
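As a toy illustration of box-constrained allocation, the classical Neyman allocation can be combined with a simple fix-and-reallocate heuristic for strata that hit their bounds. All numbers are invented, and this heuristic is only a baseline: the thesis itself solves such problems through a conic quadratic formulation with interior-point methods.

```python
# Illustrative Neyman-type allocation with box constraints per stratum.
# N: stratum population sizes, S: stratum standard deviations (made up).
N = [5000, 2000, 800, 200]
S = [1.0, 2.5, 4.0, 8.0]
n_total = 600
lo, hi = 100, 400                     # min/max sample size per stratum

n = [None] * len(N)
free = set(range(len(N)))
remaining = n_total
while True:
    weight = sum(N[h] * S[h] for h in free)
    trial = {h: remaining * N[h] * S[h] / weight for h in free}
    clipped = {h: min(max(trial[h], lo), hi) for h in free}
    hit = [h for h in free if clipped[h] != trial[h]]
    if not hit:                       # all free strata satisfy their bounds
        for h in free:
            n[h] = trial[h]
        break
    for h in hit:                     # fix bounded strata, reallocate the rest
        n[h] = clipped[h]
        remaining -= clipped[h]
        free.remove(h)
print([round(x) for x in n])
```

Here the smallest, most variable stratum is lifted to its minimum of 100 units and the remainder is re-spread Neyman-proportionally over the other strata, which is the qualitative behavior any constrained allocation method must reproduce.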
Biodiversity is threatened by a wide range of anthropogenic activities. Monitoring offers critical insights into how and why biodiversity is changing, helping to identify effective measures for maintaining biodiversity and its ecosystem services. However, conventional biodiversity monitoring methods are labor-intensive, and standardized long-term monitoring series are scarce. DNA-based approaches like metabarcoding environmental DNA (eDNA) promise rapid, cost-efficient, and highly resolved community data. At the same time, scientists are looking for alternative data sources that can compensate for the lack of long-term monitoring data to study past biodiversity changes. This work explores the potential of the German Environmental Specimen Bank (ESB), a pollution monitoring archive, which appears particularly promising for retrospective biodiversity monitoring. Biota samples from different ecosystems across the country are collected and archived at an exceptional level of standardization. Sampling species act as natural eDNA samplers, accumulating genetic traces from surrounding organisms. The cryogenic storage at the ESB preserves any eDNA in the samples in its original state. In this thesis, Chapter I serves as an introductory chapter, outlining the general chances and challenges of metabarcoding for assessing biodiversity. Chapter II focuses on primer design and testing the utility of ESB sampling species like mussels and macroalgae for characterizing the surrounding community. Both chapters form the basis for Chapters III to V, which report the use of ESB time series to uncover sample-associated communities and the changes they undergo. Chapter III illustrates the value of these time series by revealing the invasion trajectory of an alien barnacle into German coastal waters and linking the process to climate change. 
Chapter IV forms the core of this thesis by presenting an expanded measurement of biodiversity change in ESB time series across different taxonomic groups and ecosystem types. Here, a gradual compositional change (turnover) is reported from bacterial, fungal, microeukaryotic, and metazoan communities tending to either spatial homogenization or differentiation. Observed trends are tested for significance using a dynamic model of community ecology based on the equilibrium theory of island biogeography. The model reveals significantly accelerated turnover rates across all taxonomic groups and ecosystems investigated, suggesting a common, anthropogenically induced driver of biodiversity change. Since these analyses most likely include DNA derived from dead as well as from living organisms, Chapter V aims to separate both groups by metabarcoding both DNA and less stable ribosomal RNA from mussel samples. Contrary to the hypothesis, RNA is detectable from both living endobionts and dietary taxa. However, it outcompetes DNA in detecting microeukaryotic biodiversity. In summary, this thesis demonstrates the outstanding potential of archived ESB samples for retrospective biodiversity monitoring, a resource that offers many further untapped opportunities for future biodiversity research at multiple scales.
The present dissertation deals with variable stress patterns in English complex adjectives such as celebratory, identifiable or imaginative. This variation is usually described in terms of retaining the stress from the embedded base (idéntify -> idéntifiable) or deviating from the stress of the embedded base (idéntify -> identifíable). While several accounts have explored this variation, none of them have been able to identify a plausible reason for why it occurs. Additionally, the role of individual speaker differences has been disregarded in the discussion. This dissertation therefore explores the empirically observable extent of the variation and investigates possible causes of it with a special focus on individual differences between speakers. It uses data from a complex online experiment that included five different tasks to assess speakers' stress production, perception, morphological processing, vocabulary size and other factors. It furthermore tests the predictions of previous accounts on the large set of authentic utterances from speakers collected using this online experiment. The data show that individual differences in vocabulary size between speakers are a significant predictor of a speaker's tendency to retain the stress of the embedded base.
Biotic communities experienced significant changes in recent decades. Climate change, the overexploitation of natural resources and the immigration of invasive species are major drivers for this change and present unknown challenges for communities worldwide. To assess the impact of these drivers, standardised long-term studies are required, which are currently lacking for many species and ecosystems. Analysing environmental samples and the DNA of associated organisms using metabarcoding and high-throughput sequencing provides a cost-efficient and rapid way to generate the high-resolution biodiversity data which is so direly needed.
In this thesis, I demonstrate the great potential of using samples from the German Environmental Specimen Bank (ESB), a long-term monitoring archive that has been collecting and cryogenically storing highly standardised environmental samples since 1985. Modern analytical methods enable retrospective long-term biodiversity monitoring using these samples. In the first chapter, I illustrate metabarcoding as a central method, discussing its strengths and drawbacks, how to avoid them, and new application approaches. This chapter provides the methodological basis for the following studies.
In the subsequent chapters, I present time series analyses of communities associated with these environmental samples. While Chapter two focuses on terrestrial arthropod communities, Chapter three analyses aquatic and terrestrial communities across the tree of life; a null model was developed for this survey to allow robust conclusions. The studies cover the last three decades and reveal substantial compositional changes across all ecosystems. These changes deviated significantly from the model, indicating that they are occurring faster than expected. Moreover, a trend toward homogenization was uncovered in many terrestrial communities; climate change and the immigration of invasive species, combined with the loss of site-specific species, are suspected to be the main drivers. In a follow-up study, changes in arthropod communities in German and South Korean terrestrial ecosystems were compared using ESB leaf samples from the two countries. Since both ESBs are harmonised in sample collection and processing, comparative analyses were possible. This research covered the last decade and revealed substantial declines in species richness in Korea. Abiotic and biotic factors are discussed as potential drivers of these results.
Finally, the possibility of assessing tree health by analysing changes in functional fungal groups using German ESB samples was investigated. The results indicate that increasing infestation of specific functional groups is a proxy for declining tree health, with further analyses planned. In this dissertation, I present the great potential of samples from long-term monitoring archives to conduct retrospective biodiversity trend analyses across the tree of life. As technologies evolve, these samples will help to understand past and predict future ecosystem changes.
Almost 90 years after the publication of Paul Graindor's book "Bustes et Statues-Portraits d'Egypte Romaine", this dissertation is the first monographic study since then to be devoted to the marble portrait sculpture of the Roman province of Aegyptus, from its foundation in 30 BC to the end of the 3rd century AD. Based on a comprehensive compilation of known as well as previously unpublished portraits, together with new documentation of numerous objects, a reliable chronological and typological analysis of these portraits is achieved for the first time. While depictions in white marble form the central and, in quantitative terms, by far the largest group of material, portraits in other materials such as bronze, limestone, gypsum, and alabaster are also taken into account. Since the province, owing to its scant marble deposits, relied almost exclusively on imports, the marble portraits are an excellent object of research for investigating not only the trade of marble to Egypt and its distribution and further working within the province, but also the associated craft particularities, such as the frequently observed additions in stucco or stone. Beyond this, considerations are offered on the semantics of the material as well as on the origin and self-image of the persons depicted.
Building on Social Virtual Reality to Support Flexible Collaboration and Enrich Therapy Sessions
(2025)
Social virtual environments allow their users to meet and collaborate in a shared three-dimensional space, even when far apart from each other in the real world. Within these spaces, the appearance and interaction capabilities of both users and environments can be adapted and changed in a myriad of ways. To enable virtual environments to fulfill their potential of supporting a wide variety of collaboration use-cases, both the impacts of basic interaction design decisions and the individual needs of specific usage areas need to be explored further.
This thesis approaches this topic in two ways. First, the basic building blocks of collaboration in social virtual environments are explored by asking the question: "How can social virtual spaces that allow interaction beyond real-world constraints utilize the potential of mutual assistance and shared workflows between multiple users?". Going into further detail for a serious use-case in which direct collaborative interactions and their effect on the included users are especially important, it then explores the potential of collaborative virtual spaces in the therapy domain by asking "How can the potential of social virtual spaces be utilized to support and improve therapy encounters?"
With regards to the first research question, the thesis presents two theoretical frameworks detailing different aspects of supporting smooth and varied collaboration processes. In addition, several user studies on the topic of collaborative virtual interaction are described, focusing on the role that different users can play during shared interaction and the effects that this distribution of roles and responsibilities has on both the performance and experience of the involved user pairs.
The results presented for this first research question show that social virtual spaces have the potential to provide dedicated support for collaborative workflows. To enable users to adapt their working mode individually and as a team, interaction techniques should complement a team's natural interaction and communication. When presenting novel interactions to users, providing them with a way to support each other can ease their adaptation to these interactions. In these cases, the inclusion of all interested collaborators as active participants should be prioritized in order to let all users benefit from being immersed in a virtual environment.
Addressing the combination of social virtual spaces with therapy in relation to the second research question, this thesis presents the results of a series of interviews with practicing physio- and psychotherapists. Motivated by the recorded expert feedback, it also reports on two more detailed explorations of specific areas of interest. The work presented for the second research question demonstrates the promise of using virtual environments in both exercise- and conversation-based therapy practice. By investigating the potential of shared interactions, the exploration of virtual recordings, and the adaptation of virtual appearances, the presented work uncovered several topic areas that could be explored further regarding their possible use in the treatment of patients.
Taken together, the six research articles presented in this thesis show both the value of supporting and understanding shared interactions in virtual spaces and their potential place in serious use-cases like the therapy domain. When introducing shared virtual environments to new user groups, the opportunity for mutual support through shared interaction techniques could be a crucial building block towards making virtual spaces both accessible and attractive to a variety of users.
Three-Point Difference Schemes of High Order of Accuracy for Solving the Sturm-Liouville Problem
(2025)
The dissertation is devoted to the construction and justification of three-point difference schemes of high order of accuracy for solving the Sturm-Liouville problem. A new algorithmic realization of the exact three-point difference scheme on a non-uniform grid is developed. We show that to compute the coefficients of the exact scheme at an arbitrary grid node, it is necessary to solve two auxiliary Cauchy problems for a system of three linear ordinary differential equations of the first order. The coefficient stability of the exact three-point difference scheme is proved. If the Cauchy problems are solved numerically by any one-step method, a truncated three-point difference scheme is obtained. An accuracy estimate for the truncated three-point difference schemes is derived, and an algorithm for computing their solution is developed.
We also develop a new algorithmic realization of the exact three-point difference scheme for the Sturm-Liouville problem with singularities at the ends of the interval. As in the case of the classical Sturm-Liouville problem, finding the coefficients of the exact three-point difference scheme requires solving two auxiliary Cauchy problems for each grid node. The coefficient stability of the exact three-point difference scheme is proved. Since the Cauchy problems for the first and last grid nodes are singular, a Taylor series method is developed to solve them. An accuracy estimate for the truncated three-point difference schemes is obtained. To solve the resulting difference scheme, Newton's iterative method is used.
Numerical experiments are presented which confirm the efficiency of the proposed approach.
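For orientation, the classical second-order three-point scheme that the exact and truncated high-order schemes generalize can be written as follows; this is a textbook sketch on a uniform grid, not the scheme constructed in the dissertation:

```latex
% Sturm-Liouville problem: -u'' + q(x)\,u = \lambda u on (a,b), u(a)=u(b)=0.
% Classical three-point approximation on the uniform grid x_i = a + ih:
\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} + q(x_i)\,u_i = \lambda u_i,
\qquad i = 1, \dots, N-1.
```

The eigenpairs of the resulting tridiagonal matrix approximate the continuous eigenpairs to order $O(h^2)$; the schemes developed in the dissertation retain the three-point structure while achieving higher order on non-uniform grids.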
Entrepreneurship is recognized as an important discipline to achieve sustainable development and to address sustainability goals without losing sight of economic aspects. However, entrepreneurship rates are rather low in many industrialized countries with high income levels. Research clearly shows that there is a gap in the entrepreneurial process between intentions and subsequent actions. This means that not everyone with entrepreneurial ambitions also follows through and implements actions. This gap also exists for aspects of sustainability. As a result, there is a need to better understand the traditional and sustainability-focused entrepreneurial process in order to increase corresponding actions. This dissertation offers such a comprehensive perspective and sheds light on individual and contextual predictors for traditional and sustainability-focused behavior of entrepreneurs and self-employed across four studies.
The first three studies focus on individual predictors. By providing a systematic literature review with 107 articles, Chapter 2 highlights the ambivalent role of religion for the entrepreneurial process. Relying on the theory of planned behavior (TPB) as theoretical basis, religion can have positive effects on entrepreneurial attitudes and behavioral control, but also negative consequences for other aspects of behavioral control and subjective norms due to religious restrictions.
The quantitative empirical study in Chapter 3 similarly relies on the TPB and sheds light on individual perceptual factors influencing the sustainability-related intention-action gap in entrepreneurship. Using data from the 2021 Global Entrepreneurship Monitor (GEM) Adult Population Survey (APS) including 22,008 early-stage entrepreneurs from 44 countries worldwide, the results support our theoretical reasoning that sustainability-focused intentions are positively related to social entrepreneurial actions. In addition, it is demonstrated that positive perceptual moderators such as self-efficacy and knowing other entrepreneurs as role models strengthen this relationship while a negative perception such as fear of failure restricts social actions in early-stage entrepreneurship.
The next quantitative empirical study in Chapter 4 examines the behavioral consequences of well-being on a sample of 6,955 German self-employed during COVID-19. This chapter builds on two complementary behavioral perspectives to predict how reductions in financial and non-financial well-being relate to investments in venture development. Reductions in financial well-being are positively related to time investments, supporting the performance feedback perspective in terms of higher search efforts under negative performance. In contrast, reductions in non-financial well-being are negatively related to time and monetary investments, yielding support for the broaden-and-build perspective, which indicates that negative psychological experiences narrow the thought-action repertoire and hinder resource deployment. The insights across these first three studies about individual predictors indicate that many different subjective beliefs, perceptions, and emotional states can influence the entrepreneurial process, making entrepreneurship and self-employment highly individualized disciplines.
The last quantitative empirical study provides an explorative view on a large number of contextual predictors for social and ecological considerations in entrepreneurial actions. Combining GEM data from 2021 on country level with further information from the World Bank and the OECD, a machine learning approach is employed on a sample of 84 countries worldwide. The results suggest that governmental and regulatory as well as cultural factors are relevant to predict social and ecological considerations. Moreover, market-related aspects are shown to be relevant predictors, especially socio-economic factors for social considerations and economic factors for ecological considerations. Overall, the four studies in this dissertation highlight the complexity of the entrepreneurial process, which is determined by many different individual and contextual factors. Due to the multitude of potential predictors, this dissertation can only give an initial overview of a selection of factors, with many more aspects and interdependencies still to be examined by future research.
Small and medium-sized enterprises (SMEs) and mid-sized companies are vital contributors to the global economy, driving employment growth, fostering innovation, and enhancing international competitiveness. However, in the aftermath of the Great Financial Crisis (GFC) and the collapse of the large finance company CIT Group, which provided 60% of the loans to US middle-market firms, banks reduced their lending activities. Thus, it became challenging for firms to obtain long-term loans. The financing gap has increased further due to high interest rates, the COVID-19 pandemic, the unstable situation in the real estate market, as well as higher costs due to the adoption of digital infrastructure and sustainability goals. Therefore, the search for alternative financing solutions outside bank lending and public markets became unavoidable for SMEs and mid-sized companies. Private debt funds entered the market, and, since the GFC, they have played a crucial role in offering alternative financing for firms globally. Private debt fund managers raise capital commitments through closed-end funds (like private equity) and make senior loans (like banks) directly to, mostly, middle-market firms. The private debt market has experienced rapid growth in recent decades. Private debt funds' assets under management (AuM) increased by 380% from 2008 to 2022, reaching $1.5 trillion AuM in 2022. The high growth of private debt shows great interest from investors in this alternative asset class and lucrative investment opportunities.
Despite its substantial and growing size, the private debt market is relatively understudied. This dissertation introduces private debt as an important alternative financing source, provides an overview of private debt strategies, seniority, and structure, discusses the legal considerations concerning private debt, and briefly compares the two most mature private debt markets: Europe and the U.S. Moreover, it assesses the size of the European private debt market and compares its development in different European regions. Furthermore, it examines in detail the business model of private debt funds based on a survey of 191 European and U.S. private debt managers with private debt assets under management of over $390 billion. Finally, it delves deeper into the relationship between private debt and private equity funds and their role in buyouts.
To sum up, this dissertation provides a basis and inspiration for future research to expand upon and dive deeper into the world of private debt funds, their business model, and their impact on portfolio companies and the economy as a whole.
In this dissertation, I analyze how large players in financial markets exert influence on smaller players and how this affects the decisions of the large ones. I focus on how the large players process information in an uncertain environment, form expectations and communicate these to smaller players through their actions. I examine these relationships empirically in the foreign exchange market and in the context of a game-theoretic model of an investment project.
In Chapter 2, I investigate the relationship between the foreign exchange trading activity of large US-based market participants and the volatility of the nominal spot exchange rate. Using a novel dataset, I utilize the weekly growth rate of aggregate foreign currency positions of major market participants to proxy trading activity in the foreign exchange market. By estimating the heterogeneous autoregressive model of realized volatility (HAR-RV), I find evidence of a positive relationship between trading activity and volatility, which is mainly driven by unexpected changes in trading activity and is asymmetric for some of the currencies considered. My results contribute to the understanding of the drivers of exchange rate volatility and the role of large players in the flow of information in financial markets.
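The HAR-RV regression underlying this analysis can be sketched as follows. This is a minimal illustration of the baseline model of Corsi (2009), assuming a daily realized-volatility series `rv`; the specification actually estimated in the chapter additionally includes the trading-activity regressors and is not reproduced here.

```python
import numpy as np

def har_rv_fit(rv):
    """Fit the baseline HAR-RV model by OLS:
    RV_t = b0 + bd*RV_{t-1} + bw*mean(RV_{t-5..t-1}) + bm*mean(RV_{t-22..t-1}) + e_t,
    where the three regressors are the daily, weekly, and monthly
    realized-volatility components. `rv` is a 1-D array of daily RVs."""
    rv = np.asarray(rv, dtype=float)
    y, X = [], []
    for t in range(22, len(rv)):
        daily = rv[t - 1]
        weekly = rv[t - 5:t].mean()
        monthly = rv[t - 22:t].mean()
        X.append([1.0, daily, weekly, monthly])
        y.append(rv[t])
    X, y = np.array(X), np.array(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [b0, bd, bw, bm]
```

The cascade of daily, weekly, and monthly components is what lets the model capture the long-memory behaviour of volatility with a simple OLS fit.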
In Chapters 3 and 4, I consider a sequential global game of an investment project to examine how a large creditor influences the decisions of small creditors with her lending decision. I pay particular attention to the timing of the large player’s decision, i.e. whether she makes her decision to roll over a credit before or after the small players. I show that she faces a trade-off between signaling to and learning from small creditors. By being a focal point for coordination, her actions have a substantial impact on the probability of coordination failure and the failure of the investment project. I investigate the sensitivity of the equilibrium by comparing settings with perfect and imperfect learning. The results highlight the importance of signaling and provide a new perspective on the idea of catalytic finance and the influence of a lender-of-last-resort in self-fulfilling debt crises.
There is a wide range of methodologies for policy evaluation and socio-economic impact assessment. A fundamental distinction can be made between micro and macro approaches. In contrast to micro models, which focus on the micro-unit, macro models are used to analyze aggregate variables. The ability of microsimulation models to capture interactions occurring at the micro-level makes them particularly suitable for modeling complex real-world phenomena. The inclusion of a behavioral component into microsimulation models provides a framework for assessing the behavioral effects of policy changes.
The labor market is a primary area of interest for both economists and policy makers. The projection of labor-related variables is particularly important for assessing economic and social development needs, as it provides insight into the potential trajectory of these variables and can be used to design effective policy responses. As a result, the analysis of labor market behavior is a primary area of application for behavioral microsimulation models. Behavioral microsimulation models allow for the study of second-round effects, including changes in hours worked and participation rates resulting from policy reforms. It is important to note, however, that most microsimulation models do not consider the demand side of the labor market.
The combination of micro and macro models offers a possible solution, as it constitutes a promising way to integrate the strengths of both model types. Of particular relevance is the combination of microsimulation models with general equilibrium models, especially computable general equilibrium (CGE) models. CGE models are classified as structural macroeconomic models, which are defined by their basis in economic theory. Another important category of macroeconomic models are time series models. This thesis examines the potential for linking micro and macro models. The different types of microsimulation models are presented, with special emphasis on discrete-time dynamic microsimulation models. The concept of behavioral microsimulation is introduced to demonstrate the integration of a behavioral element into microsimulation models. To this end, the concept of utility is introduced and the random utility approach is described in detail. In addition, a brief overview of macro models is given, with a focus on general equilibrium models and time series models. Various approaches for linking micro and macro models, categorized as either sequential or integrated approaches, are presented. Furthermore, the concept of link variables is introduced, which play a central role in combining both models. The focus is on the most complex sequential approach, i.e., the bi-directional linking of behavioral microsimulation models with general equilibrium macro models.
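The random utility approach described above can be sketched briefly. Under the standard assumption of i.i.d. Gumbel-distributed error terms, choice probabilities take the conditional-logit form; the alternatives named below (labor-supply states) are a hypothetical illustration, not the thesis's actual model.

```python
import numpy as np

def logit_choice_probs(utilities):
    """Conditional-logit choice probabilities: with U_ij = V_ij + e_ij
    and i.i.d. Gumbel errors e_ij, alternative j is chosen with
    probability P_ij = exp(V_ij) / sum_k exp(V_ik)."""
    v = np.asarray(utilities, dtype=float)
    v = v - v.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(v)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical systematic utilities of three labor-supply alternatives
# (non-participation, part-time, full-time) for one individual:
probs = logit_choice_probs([0.0, 0.5, 1.2])
```

In a behavioral microsimulation, a policy reform changes the systematic utilities V_ij (e.g. via net incomes), and the resulting shift in these probabilities yields the second-round labor-supply response.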
In recent years, the use of drones has increased significantly. This is due, among other things, to their improved performance, good availability, and ease of use. As a result, research applications have become possible that were previously infeasible or very costly. In research, a camera is frequently used as the sensor for data acquisition. Combined with a drone, areas can be overflown easily and inexpensively in order to explore, observe, or monitor them. Besides conventional cameras, multispectral cameras and lidar are also commonly used as sensors. Radar, by contrast, is rarely employed on small drones. The aim of this research was to investigate whether state-of-the-art radar technology can add value to remote sensing with small drones.
For this purpose, modern radar sensors from the automotive sector were selected. Both quadrocopters and a fixed-wing drone were used as platforms, and MATLAB was used for analysing, computing, and evaluating the data. The first approach was based on a fixed-wing drone, which offers open access to its flight controller, so that special flight-control requirements can be accommodated. However, a fixed-wing drone cannot perform slow or even static aerial recordings, which would be needed to gain experience with the radar data. For this reason, a radar measurement system was subsequently designed that can be operated independently of the drone. Together with a quadrocopter, static radar measurements could thus be carried out to confirm the usability of the radar data in remote sensing. In this form, however, the measurement system could only be used for two-dimensional applications. The subsequent research investigated whether a radar sensor that measures only in two dimensions can be used to create three-dimensional recordings. A hut was chosen as the test object to be reconstructed from the radar data. To this end, an eleven-step data-processing pipeline was designed, with which the hut could be reconstructed to an accuracy of 0.6 metres. The final part of the research examined whether the accuracy of the measurement system could be increased in order to serve further use cases. For this purpose, a new radar sensor with higher accuracy was employed. The research focused on removing the dependence of the radar data on the inaccurate attitude sensor: the flight attitude was computed from the radar data themselves, which allows it to be determined more accurately than with the attitude sensor alone. Only then can the higher accuracy of the new radar sensor actually be exploited.
With the results of this research and the radar sensors presented, remote sensing with small drones will in future have radar sensors at its disposal alongside the classical sensors. Based on the measurement system and the insights gained, first specific applications are already being investigated in research projects. In addition, use cases outside remote sensing were identified. Further developments in autonomous driving will bring performance improvements for radar sensors, so that even better radar sensors will become available for remote sensing in the future.
In this thesis, the hedging behaviour of airlines from 2005 to 2019 is analysed using an unbalanced panel dataset consisting of a total of 78 airlines from 39 countries. The focus of the analysis is on financial and operational hedging as well as the influence of both on CO2 emissions and the development of emitted CO2 emissions. For the analysis, Probit models with random effects and OLS models with fixed effects were used.
The results regarding the relationship between leverage and financial hedging indicate a negative relationship between leverage and financial fuel hedging, and a non-linear convex relationship for highly leveraged airlines, which is contrary to the theory of financial distress.
In addition, the study provides evidence that airlines using other types of derivatives, such as interest rate derivatives, engage in more fuel hedging.
In terms of operational hedging, the analysis suggests that operating a diversified fleet is a complement to, rather than a substitute for, financial hedging. With regard to alliance membership, the results do not show that alliance membership is a substitute for financial hedging, as members of alliances are more likely to engage in hedging transactions and to a greater extent.
The analysis shows that the relative CO2 emissions fall in the period under review, but this does not apply to the absolute amount. No general statement can be made about the influence of financial and operational hedging on CO2 emissions, as the results are mixed.
Circularity and Circular Business Models in the Wood Industry: An Empirical Study
(2025)
The ecological state of the Earth is critical as a result of pollution, waste generation, and CO₂-driven climate change. The building and construction sector contributes substantially, around 40%, to global greenhouse gas emissions. Wood is considered a climate-friendly alternative to concrete and steel, but it too must be used sustainably. With reuse, the circular economy offers a forward-looking concept: roughly 45% of the wood arising from building demolition is potentially usable as a raw material. This opens up alternative raw-material sources and reduces waste.
Despite this potential, the circularity rate of the world economy currently stands at only 7.2%. Against this background, the dissertation investigates which competitive strategies and which organisational capabilities foster the development of circular business models. The focus is on the wood industry of the DACH region, which has historically been shaped by sustainable forestry but so far follows predominantly linear structures.
The work combines theoretical grounding, a four-year literature review, expert interviews, and, at its core, a quantitative company survey (n = 200). From this, an activity-oriented scale for assessing the circularity of a business model was developed. Three perspectives were analysed: capabilities, strategies, and stakeholders.
From the capability perspective, it was found that dynamic capabilities have positive implications for the implementation of circularity. Within the strategy perspective, it became clear that innovation leadership has positive effects on the implementation of the circular economy. Moreover, both innovation leadership and quality leadership exhibit a positive indirect effect on the development of circular business models via dynamic capabilities. From the stakeholder perspective, stakeholder pressure in combination with a green corporate image was found to act as a catalyst: the influence of stakeholders leads companies to translate a green image into a substantive implementation phase. Furthermore, stakeholder pressure emerged as a central driver of change: while the direct effects of dynamic capabilities decline under pressure, their indirect effects on achieving circularity increase. Finally, recommendations for companies as well as scientific implications and avenues for future research are derived.
Case-Based Reasoning (CBR) is a symbolic Artificial Intelligence (AI) approach that has been successfully applied across various domains, including medical diagnosis, product configuration, and customer support, to solve problems based on experiential knowledge and analogy. A key aspect of CBR is its problem-solving procedure, where new solutions are created by referencing similar experiences, which makes CBR explainable and effective even with small amounts of data. However, one of the most significant challenges in CBR lies in defining and computing meaningful similarities between new and past problems, which heavily relies on domain-specific knowledge. This knowledge, typically only available through human experts, must be manually acquired, leading to what is commonly known as the knowledge-acquisition bottleneck.
One way to mitigate the knowledge-acquisition bottleneck is through a hybrid approach that combines the symbolic reasoning strengths of CBR with the learning capabilities of Deep Learning (DL), a sub-symbolic AI method. DL, which utilizes deep neural networks, has gained immense popularity due to its ability to automatically learn from raw data to solve complex AI problems such as object detection, question answering, and machine translation. While DL minimizes manual knowledge acquisition by automatically training models from data, it comes with its own limitations, such as requiring large datasets, and being difficult to explain, often functioning as a "black box". By bringing together the symbolic nature of CBR and the data-driven learning abilities of DL, a neuro-symbolic, hybrid AI approach can potentially overcome the limitations of both methods, resulting in systems that are both explainable and capable of learning from data.
The focus of this thesis is on integrating DL into the core task of similarity assessment within CBR, specifically in the domain of process management. Processes are fundamental to numerous industries and sectors, with process management techniques, particularly Business Process Management (BPM), being widely applied to optimize organizational workflows. Process-Oriented Case-Based Reasoning (POCBR) extends traditional CBR to handle procedural data, enabling applications such as adaptive manufacturing, where past processes are analyzed to find alternative solutions when problems arise. However, applying CBR to process management introduces additional complexity, as procedural cases are typically represented as semantically annotated graphs, increasing the knowledge-acquisition effort for both case modeling and similarity assessment.
The key contributions of this thesis are as follows: It presents a method for preparing procedural cases, represented as semantic graphs, to be used as input for neural networks. Handling such complex, structured data represents a significant challenge, particularly given the scarcity of available process data in most organizations. To overcome the issue of data scarcity, the thesis proposes data augmentation techniques to artificially expand the process datasets, enabling more effective training of DL models. Moreover, it explores several deep learning architectures and training setups for learning similarity measures between procedural cases in POCBR applications. This includes the use of experience-based Hyperparameter Optimization (HPO) methods to fine-tune the deep learning models.
Additionally, the thesis addresses the computational challenges posed by graph-based similarity assessments in CBR. The traditional method of determining similarity through subgraph isomorphism checks, which compare nodes and edges across graphs, is computationally expensive. To alleviate this issue, the hybrid approach seeks to use DL models to approximate these similarity calculations more efficiently, thus reducing the computational complexity involved in graph matching.
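The idea of approximating graph similarity with an embedding model can be illustrated in a few lines: each graph is mapped to a fixed-size vector by message passing and pooling, and similarity reduces to a cheap vector comparison instead of subgraph isomorphism. This is a toy sketch with a fixed weight matrix, not the trained architecture used in the thesis.

```python
import numpy as np

def graph_embedding(adj, feats, w, rounds=2):
    """Toy message-passing embedding: in each round every node averages
    its neighbours' features, applies the linear map `w` and a ReLU;
    finally all node vectors are mean-pooled into one graph vector."""
    a = np.asarray(adj, dtype=float)
    h = np.asarray(feats, dtype=float)
    deg = np.maximum(a.sum(axis=1, keepdims=True), 1.0)  # avoid division by zero
    for _ in range(rounds):
        h = np.maximum((a @ h) / deg @ w, 0.0)
    return h.mean(axis=0)

def cosine_similarity(x, y):
    """Similarity of two graph embeddings in [-1, 1]."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
```

Once embeddings are precomputed for all cases, retrieval over the case base costs one vector comparison per case rather than one NP-hard matching per case, which is the source of the speed-up.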
The experimental evaluations of the corresponding contributions provide consistent results that indicate the benefits of using DL-based similarity measures and case retrieval methods in POCBR applications. The comparison with existing methods, e.g., based on subgraph isomorphism, shows several advantages but also some disadvantages of the compared methods. In summary, the methods and contributions outlined in this work enable more efficient and robust applications of hybrid CBR and DL in process management applications.
When natural phenomena and data-based relations are driven by dynamics which are not purely local, they cannot be described satisfactorily by partial differential equations. As a consequence, mathematical models governed by nonlocal operators are of interest. This thesis is concerned with nonlocal operators of the form
$\mathcal{L}u(x) = \operatorname{p.v.} \int_{\mathbb{R}^d} \big(u(x)-u(y)\big)\, K(x,\mathrm{d}y), \quad x \in \mathbb{R}^d$,
which are determined through a family of Borel measures $K=(K(x, \cdot))_{x \in \mathbb{R}^d}$ on $\mathbb{R}^d$ and which act on the vector space of Borel measurable functions $u: \mathbb{R}^d \rightarrow \mathbb{R}$. For a large class of families $K$, namely those where $K$ is a symmetric transition kernel satisfying a specific non-degeneracy condition, a variational theory for nonlocal equations of the type $\mathcal{L}u=f$ is established which builds upon tools from both measure theory and classical analysis. While measure theory is used to provide a nonlocal integration by parts formula that allows one to set up a reasonable variational formulation of the above equation depending on the particular boundary condition (Dirichlet, Robin, Neumann) considered, Hilbert space theory and fixed-point approaches are utilized to develop sufficient conditions for the existence of variational solutions. This theory is then applied to two specific realizations of $\mathcal{L}$ of interest before a weak maximum principle is established which is finally used to study overlapping domain decomposition methods for the nonlocal and homogeneous Dirichlet problem.
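A prototypical realization of $\mathcal{L}$ in this setting (given here for orientation, not necessarily one of the two realizations studied in the thesis) is the fractional Laplacian, obtained from the translation-invariant kernel $K(x,\mathrm{d}y) = C_{d,s}\,|x-y|^{-d-2s}\,\mathrm{d}y$:

```latex
(-\Delta)^s u(x) = C_{d,s}\, \operatorname{p.v.} \int_{\mathbb{R}^d}
  \frac{u(x) - u(y)}{|x - y|^{d + 2s}} \, \mathrm{d}y,
\qquad s \in (0, 1),
```

where the normalizing constant $C_{d,s}$ is chosen so that the operator has Fourier symbol $|\xi|^{2s}$.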
The application of machine learning and deep learning methods to hydrological modelling has advanced significantly in recent years, offering alternatives to traditional conceptual and physically based approaches. Among the numerous algorithms, long short-term memory (LSTM) networks have proven particularly useful for the task of streamflow modelling. This thesis provides a collection of publications that investigate the capabilities, limitations and interpretability of LSTM for the purpose of streamflow modelling and climate change impact assessment within the lowland Ems catchment in Northwest Germany.
Within a comparative performance evaluation, LSTM and its predecessor, the recurrent neural network, demonstrate superior accuracy compared to the conceptual HBV model across various statistical performance metrics. However, a decline in performance was observed during low-flow conditions in certain sub-catchments. The evaluation of the flow duration curve revealed that the ML models more effectively capture the water balance, while HBV better represents streamflow dynamics.
To enhance the interpretability of LSTM, six explainable artificial intelligence techniques were applied. These methods consistently identified seasonal patterns in the temporal relevance of hydroclimatic input data. In combination with an observed correlation between the internal LSTM states and catchment-scale soil moisture dynamics, the findings suggest that LSTM models are capable of implicitly learning the relevant hydrological processes.
Subsequently, the capability of LSTM to model climate change impact scenarios, particularly those extending beyond historically observed climate conditions, is addressed. An ensemble of climate change projections is provided as hydroclimatic input to evaluate the performance of LSTMs and conceptual models. While all models reveal heterogeneous alterations in streamflow under future climate conditions, significant differences emerge based on the model type. Results provide evidence that LSTMs, in combination with the temperature-based Haude formula for estimating potential evaporation, perform inadequately under altered climatic regimes, raising concerns about their applicability in long-term projections. The study also indicates the potential need to incorporate physical constraints into LSTM architectures to ensure model robustness and hydrological plausibility beyond the historical training range.
Collectively, this thesis contributes important insights into the applicability and interpretability of LSTM models in streamflow modelling. Despite the presence of a physically realistic representation of soil moisture dynamics of the Ems catchment, no robust change signals for streamflow under climate change can be derived. Those results underscore the potential of LSTM model approaches for accurate streamflow simulation, however, they require us to always critically question LSTM results, particularly when they are applied outside the training range.
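The LSTM cell at the core of these models can be sketched in a few lines. This is the standard gated recurrence, not the thesis's trained network; in the streamflow setting one would feed the hydroclimatic forcings as x at each time step, read streamflow off the hidden state h via a linear head, and interpret the cell state c as the learned memory (e.g. the soil-moisture-like dynamics reported above).

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step. The input, forget, and output gates and the
    candidate update are computed from the current input x and previous
    hidden state h; the cell state c carries long-term memory."""
    n = h.shape[0]
    z = W @ x + U @ h + b               # stacked pre-activations, shape (4n,)
    i = 1 / (1 + np.exp(-z[0:n]))       # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))   # output gate
    g = np.tanh(z[3*n:4*n])             # candidate cell update
    c_new = f * c + i * g               # gated memory update
    h_new = o * np.tanh(c_new)          # gated hidden state
    return h_new, c_new
```

The explicit forget gate f is what allows the cell state to retain information over the long lags that hydrological storage processes require.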
Bilevel problems are optimization problems for which parts of the variables
are constrained to be an optimal solution to another nested optimization
problem. This structure renders bilevel problems particularly well-suited for
modeling hierarchical decision-making processes. They are widely applicable
in areas such as energy markets, transportation systems, security planning,
and pricing. However, the hierarchical nature of these problems also makes
them inherently challenging to solve, both in theory and in practice.
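The hierarchical structure can be made concrete with a deliberately tiny, hypothetical integer example, solved by brute-force enumeration; realistic instances require dedicated methods such as the branch-and-cut approach developed later in this thesis.

```python
# Toy bilevel problem (all numbers invented for illustration):
#   leader:    max over x in {0,...,5} of  x + y(x)
#   follower:  y(x) = argmin over y in {0,...,5} of  (y - x)^2 + y
# The leader must anticipate the follower's optimal reaction y(x).

def follower_response(x):
    """Optimal solution of the nested (lower-level) problem for fixed x."""
    return min(range(6), key=lambda y: (y - x) ** 2 + y)

def solve_bilevel():
    """Enumerate the leader's choices, each evaluated at the follower's optimum."""
    best_x = max(range(6), key=lambda x: x + follower_response(x))
    return best_x, follower_response(best_x)
```

Note that `min` resolves ties in the follower's argmin to the smallest optimal y; which optimum the follower picks matters in general, which is exactly the distinction between optimistic and pessimistic bilevel formulations.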
In this thesis, we study different nonlinear problem settings for the
nested optimization problem. First, we focus on nonlinear but convex bilevel
problems with purely integer variables. We propose a solution algorithm that
uses a branch-and-cut framework with tailored cutting planes. We prove
correctness and finite termination of the method under suitable assumptions
and put it into the context of the existing literature. Moreover, we provide an
extensive numerical study to showcase the applicability of our method and
we compare it to the state-of-the-art approach for a less general setting on
suitable instances from the literature. Furthermore, we discuss challenges that
arise when we try to generalize our approach to the mixed-integer setting.
Next, we study mixed-integer bilevel problems for which the nested
problem has a nonconvex and quadratic objective function, linear constraints,
and continuous variables. We state and prove a complexity-theoretic hardness result for this
problem class and develop a lower and upper bounding scheme to solve
these problems. We prove correctness and finite termination of the proposed
method under suitable assumptions and test its applicability in a numerical
study.
Finally, we consider bilevel problems with continuous variables, where
the nested problem has a convex-quadratic objective function and linear
constraints. We reformulate them as single-level optimization problems using
necessary and sufficient optimality conditions for the nested problem. Then,
we explore the family of so-called P-split reformulations for this single-level
problem and test their applicability in a preliminary numerical study.
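The single-level reformulation idea rests on the fact that, for a convex-quadratic nested problem with linear constraints, the KKT conditions are both necessary and sufficient. A minimal, hypothetical one-dimensional illustration (not an instance from the thesis):

```python
# Lower-level problem for a fixed upper-level decision x:
#     min over y of  0.5 * y**2   subject to   y >= x
# KKT system: y - lam = 0,  lam >= 0,  y >= x,  lam * (y - x) = 0,
# which yields the closed-form follower response y*(x) = max(x, 0).

def lower_level_kkt(x):
    """Follower response derived from the KKT conditions."""
    return max(x, 0.0)

def lower_level_grid(x):
    """The same problem solved by a fine grid search, as a sanity check."""
    candidates = [x + 0.001 * k for k in range(5001)]  # feasible points y >= x
    return min(candidates, key=lambda y: 0.5 * y * y)
```

Because the lower level is convex, replacing it by its KKT system loses no solutions, which is what makes the single-level reformulation exact.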
Spatial microsimulation is an important tool for integrating geographical information into the evaluation of public policies and the analysis of social phenomena in urban regions. These models simulate the behavior of, and interactions between, units of the region, such as individuals, households or firms, under specific conditions that may or may not involve projections over time. This requires a representative base data set for the respective units.
In this thesis, we focus on the geo-referencing step of the population in the construction of this data set, where we define the location of the individuals so that the allocation obtained is representative in relation to the population of the region. To do this, we consider the assignment of households to dwellings with specific coordinates by solving a maximum weight matching problem where side constraints are included so that the allocation obtained satisfies statistical structures intrinsic to the considered region.
The model of this problem represents each feasible assignment of household to dwelling as a binary variable, which results in billions of variables for medium-sized municipalities such as the city of Trier, Germany. Therefore, standard solvers for mixed-integer linear optimization are not able to solve it due to their high time and memory consumption. Hence, we develop two approaches capable of producing high-quality allocations using a reasonable amount of computational resources, one based on specific decomposition algorithms, and the other characterized by the application of an approximation algorithm in the framework of Lagrangian relaxation of the side constraints.
We theoretically explore the allocations obtained by both approaches and perform an extensive computational study using synthetic data sets and real-world data sets associated with the city of Trier. The results show that the developed methods are able to obtain near-optimal solutions using significantly less memory and time than the solver Gurobi, which enables them to tackle significantly larger instances, with approximately 100 000 households and dwellings. Furthermore, the allocations obtained for the real-world data sets correspond to a realistic population distribution, which strengthens the practical applicability of our methods.
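The structure of the assignment model can be conveyed by a deliberately tiny instance with invented numbers: households are matched one-to-one to dwellings so that the total weight is maximal while a statistical side constraint holds. At this scale enumeration suffices; the instances in the thesis, with around 100 000 households, require the decomposition and Lagrangian approaches described above.

```python
from itertools import permutations

WEIGHTS = [[5, 3, 1],   # WEIGHTS[h][d]: fit of household h in dwelling d
           [2, 6, 2],
           [4, 1, 7]]
PERSONS = [1, 3, 5]          # household sizes (invented)
DISTRICT = [0, 0, 1]         # district of each dwelling
CAPACITY_DISTRICT_0 = 4      # side constraint: max persons in district 0

def solve():
    """Best feasible one-to-one assignment of households to dwellings."""
    best, best_value = None, float("-inf")
    for assign in permutations(range(3)):  # assign[h] = dwelling of household h
        persons_d0 = sum(PERSONS[h] for h in range(3)
                         if DISTRICT[assign[h]] == 0)
        if persons_d0 > CAPACITY_DISTRICT_0:
            continue  # violates the statistical side constraint
        value = sum(WEIGHTS[h][assign[h]] for h in range(3))
        if value > best_value:
            best, best_value = assign, value
    return best
```

In the binary-variable formulation from the thesis, each pair (h, d) becomes one variable, which is why instance sizes explode quadratically and enumeration or standard MIP solvers quickly become infeasible.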
In Vielfalt geeint? Europäische Identitätskonstruktionen im bundesdeutschen Diskurs seit 1990
(2025)
The thesis examines the Federal German discourse on European integration since 1990 from a discourse-linguistic perspective, understanding it as a space in which European identity constructions are negotiated. The starting point is the assumption that the institutional deepening and geographical enlargement of the EU cannot be understood solely as legally codified steps of integration, but always carry identity-political dimensions as well. The study aims to make visible the linguistic constitution of the EU as an identity-political frame of reference and thereby to provide a discourse-linguistic complement to interdisciplinary integration research. On the basis of a diachronic corpus covering key stages of integration policy and phases of crisis, a mixed-methods approach is developed that combines corpus-driven procedures with the hermeneutic annotation of discourse-linguistic categories. The analysis addresses not only lexical-semantic representations of Europe but above all basic discursive figures such as unity, diversity, self and other, as well as their connection to political ascriptions of meaning. The results show to what extent a stable identity-political point of reference to the EU has emerged in the German discourse, how normative guiding ideas and functional rationalities overlap, and how European integration is linguistically negotiated between symbolic charging and strategic instrumentalisation.
Extracellular enzymes in microbial communities play a central role in nutrient cycling and the degradation of (pollutant) substances in various natural and anthropogenic systems. Bound in aquatic biofilms, sludge aggregates, or even unbound at their interfaces, they are of great importance for both the environment and human health. In particular, in wastewater treatment plants and inland waters, hydrolytic activities influence the wide-reaching efficiency of nutrient removal and self-purification, thus contributing significantly to overall water quality.
The main goal of this dissertation project was to investigate the factors that influence enzymatic activity and the health of microbial communities in activated sludge and river systems, particularly in relation to anthropogenic influences and natural environmental conditions. The aim was to contribute to a better understanding of the sensitivity of our freshwater ecosystems and to support the long-term preservation of water quality and ecological stability. The development and optimization of appropriate methods, as well as their testing and applicability, were the focal points.
For this purpose, a fluorometric microplate assay was developed and adapted to determine both extracellular enzyme activities (EEAs) in activated sludge samples and in intact biofilms. Its suitability for field studies was subsequently tested. Inhibition and activity of selected hydrolases under different conditions were investigated to better understand the mechanisms and potential environmental risks posed by anthropogenic influences and seasonal fluctuations of hydrochemical and climatic parameters.
The first phase of the doctoral thesis involved studies on the inhibition of alkaline phosphatase in activated sludge by oxyanions. Using the fluorometric microplate assay, the inhibitory effect was sensitively detected over a pH range of 7.0 to 8.5. IC50 and IC20 concentrations were calculated from modeled dose-response functions. It was found that vanadate and tungstate caused strong inhibitory effects, while molybdate inhibited the enzyme only moderately. Increasing pH reduced the inhibitory effect of tungstate and molybdate, whereas the inhibitory effect of vanadate was not significantly affected by pH. In municipal wastewater, the concentrations of such metal ions are usually low, but industrial wastewater may carry pollutant loads that can significantly impair the removal of phosphorus-containing compounds and thus the efficiency of treatment plants.
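For orientation, under an assumed Hill-type dose-response model the IC20 follows directly from the IC50 and the Hill slope by inverting the curve. The thesis derives both values from modeled dose-response functions, which need not take exactly this form; the sketch below only illustrates the relationship.

```python
# Assumed Hill inhibition curve (an illustrative model, not the thesis's fit):
#     activity(c) = 100 / (1 + (c / ic50) ** hill)     (in % of the control)

def residual_activity(conc, ic50, hill):
    """Remaining enzyme activity in % at inhibitor concentration conc."""
    return 100.0 / (1.0 + (conc / ic50) ** hill)

def ic_f(f, ic50, hill):
    """Concentration causing f % inhibition, obtained by inverting the model."""
    return ic50 * (f / (100.0 - f)) ** (1.0 / hill)
```

By construction, `ic_f(50, ...)` returns the IC50 itself, and the residual activity at `ic_f(20, ...)` is 80 % of the control.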
In the second phase, an attempt was made to further adapt the developed methodology to investigate EEA and kinetics in intact freshwater biofilms. Four different types of bead materials (lava, glass, sintered quartz, and ceramics) fitting into a 96-well microplate were tested as carriers for biofilms on both the laboratory and field scale. The analysis included a total of seven hydrolases as representatives of key nutrient cycles such as phosphorus, carbon, and nitrogen: phosphatases, glucosidases, peptidases (two different types), and sulfatase. Experiments with increasing substrate concentrations led to classical kinetic profiles according to the Michaelis-Menten mechanism. This allowed for the prediction of the biofilm enzymes’ response to different substrate concentrations. Parameters such as Vmax and Km could be derived from the modeled curves.
Ceramic beads are particularly suitable for long-term studies due to their high stability, while sintered quartz beads should be preferred for use in stagnant media (material loss occurs under turbulent conditions). Lava and glass beads, on the other hand, proved suboptimal for uniform biofilm development due to their surface properties. The potential use of this fast and sensitive test for ecotoxicological or even human-toxicological studies was demonstrated by the effects of caffeine on the activity of PDE. The result of this part of the research represents a powerful tool for assessing environmental pollution and monitoring water quality.
The high application potential was clearly highlighted in the final phase of the project. The goal here was to deepen the understanding of interactions between seasonal factors, anthropogenic influences, and biofilm processes in rivers by investigating EEAs and biofilm parameters such as biomass and relating them to hydrochemical and climatic factors. Ceramic beads were exposed both upstream and downstream of a wastewater treatment plant discharge and sampled over a period of seven months. EEAs and biomass varied depending on the season and location, with higher microbial activity observed upstream in winter. Winter conditions led to the dilution of most nutrients as well as an increase in dissolved oxygen. Nutrient concentrations analyzed downstream were significantly higher in the summer. An accumulation of nutrients or pollutants during the summer months cannot be excluded, which may have led to a general reduction in enzyme activities.
Potential causes could be inhibitory effects on the enzymes or reduced enzyme activity due to a sufficiently high nutrient supply. In general, the sampling site upstream showed more pronounced seasonal dynamics, with a significant proportion of the variance in the biological parameters (activity and biomass) attributable to seasonal factors. A secondary component, likely reflecting the impact of the treatment plant discharge, explained a further portion of the data variance. Regardless of the season, high correlations between the biological parameters were observed upstream, while downstream the data were more decorrelated. This could be because the biofilms, under chronic stress, respond less dynamically to seasonal fluctuations.
This dissertation illustrates that in addition to anthropogenic stress factors, seasonal fluctuations of hydrochemical and climatic parameters should also be considered in "stress downstream the pipe" studies. The selected methods are recommended for explaining and considering the data variance, as they highlight the complex interplay between microbial enzymatic activity, environmental factors, and pollutants in the activated sludge of wastewater treatment plants and also in aquatic systems. The novel bead assay could pave the way for the future standardization of effect-oriented studies on intact aquatic biofilms.
Perennial crops eliminate soil disturbance and reduce the amount of synthetic chemicals that are applied to the soil, improving soil biodiversity and food web structure. Additionally, perennial cropping is characterised by all year-round surface coverage which benefits soil biota in terms of habitat and food sources. Perennial intermediate wheatgrass (Thinopyrum intermedium, IWG) was domesticated and commercialised by The Land Institute in Kansas as Kernza® and serves as an example for these nature-based solutions. It develops an extensive root system that has a higher nutrient retention, possibly reducing nutrient runoff. It thereby follows a more resource-conservative strategy with improved belowground-oriented resource allocation in its root system. This may reduce the need for excessive fertiliser as the crop has a higher nitrogen efficiency, among other things.
IWG promoted the earthworm community and its diversity, more specifically the occurrence of epigeic species (litter inhabitants), since those species benefit from the increased soil coverage and the elimination of soil disturbance. As IWG creates a dense and extensive root system, as shown by the increased occurrence of root-feeding nematodes, endogeic species (horizontal burrowers) are supported through the provision of a reliable food source. Nematode analysis characterised IWG as a mostly undisturbed system with a highly structured food web, expressed for example through the promotion of structure indicators, which are sensitive to soil disturbance and are therefore supported under no-till management. The root microbiome is continuously shaped by the host, as the crop regrows from its roots each vegetation period. This creates a symbiotic relationship and a beneficial feedback loop for the crop. As a result, the root-endophytic microbiome under IWG showed higher network complexity, connectivity and stability compared to annual wheat. Regrowth from the roots requires increased nutrient and energy storage, which was indicated by increased starch values. Correspondingly, the longer residence time of the roots in the soil resulted in higher lignin values. Furthermore, the decomposition pathway was dominated by fungivorous nematodes, which may correspond to stimulated nutrient cycling and a heterogeneous resource environment, as seen in low-input systems.
Overall, perennial wheat cultivation improved soil biodiversity after an establishment period of only 3-6 years. As these benefits were present in all three countries, the varying soil and climate conditions do not seem to interfere with the positive effect of perennial wheat on the soil ecosystem, demonstrating the wide transferability and adaptability of the crop to other study sites. The enhanced complexity and connectivity of the food web in comparison to annual wheat may indicate resistance to abiotic stress, suggesting IWG cultivation as a viable option for a sustainable and resilient agriculture. The improvement in nutrient cycling and the resource-efficient cultivation strategy of IWG could enable cultivation on marginal land where annual crop cultivation is not possible because the soils are susceptible to erosion and nutrient runoff. This opens up new possibilities for agricultural cultivation on previously unused land, thus contributing to future food security.
Modellierung von o-PO4- Einträgen in saarländische Oberflächenwasserkörper im Trockenwetterfall
(2025)
The availability of ortho-phosphate (o-PO₄) contributes substantially to the eutrophication of rivers and thereby jeopardises the achievement of the "good ecological status" required by the EU Water Framework Directive. Since municipal wastewater treatment plants are central input sources, reducing o-PO₄ at this point is gaining importance. Besides chemical phosphorus elimination, the fourth treatment stage in particular, although primarily designed to remove micropollutants, offers a synergy effect with potential phosphorus removal rates of up to 85 %.
To assess the influence of such a treatment stage, a model was developed for selected surface water bodies (OWK) in Saarland that represents dry-weather conditions as the scenario relevant to eutrophication. A central component is a newly developed retention approach that accounts for biochemical and physical processes such as adsorption, sedimentation, and biological assimilation. Based on the difference between the emission-side o-PO₄ balance and the measured o-PO₄ load, reduction rates per metre of watercourse were derived for each water body, and finally an equation was formulated to estimate retention as a function of catchment size. Validation shows sufficient model accuracy, although negative load differences in some waters point to additional inputs that cannot be clearly quantified, for instance from agriculture or sewer losses.
The scenario analysis shows that a fourth treatment stage contributes in principle to reducing o-PO₄ at the monitoring stations. However, the applicable guide value is only undershot if all treatment plants in a water body are upgraded, and even then only in some cases. The fourth treatment stage alone is therefore not a sufficient alternative to the measures of Saarland's third river basin management plan, but it can serve as a complementary strategy for reducing phosphorus inputs.
Price indices play a vital role in economic measurement as they reflect price levels
and measure price fluctuations. Price level measures are used with macroeconomic
indicators to express them in real terms. These measures are also used to index wages,
rents, and pensions. Furthermore, they are used as a reference for monetary policy
conducted by central banks. Therefore, the provision of accurate price indices is one
of the most important goals of National Statistical Institutes (NSIs), and numerous
studies have been devoted to this goal.
This cumulative dissertation also contributes to this goal. It contains four
chapters, each of which represents a separate research study. The first two
studies are devoted to the treatment of seasonal products using different price
index methods; the first of these is co-authored with Ken van Loon. The third
study is dedicated to finding the most accurate method for predicting the
prices of missing products. The fourth study focuses on the treatment of
products using different price index methods when the products' quality
characteristics are available.
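As background, a fixed-base Laspeyres index, one of the standard textbook formulas in this area, can be computed as follows. This is illustrative only and is not one of the specific index methods examined in the dissertation.

```python
# A minimal fixed-base Laspeyres price index: current-period prices weighted
# by base-period quantities (textbook illustration with invented numbers).
def laspeyres_index(base_prices, base_quantities, current_prices):
    """Return 100 * sum(p_t * q_0) / sum(p_0 * q_0)."""
    numerator = sum(p * q for p, q in zip(current_prices, base_quantities))
    denominator = sum(p * q for p, q in zip(base_prices, base_quantities))
    return 100.0 * numerator / denominator
```

For a two-product basket with base prices (2, 5), base quantities (10, 4) and current prices (2.2, 5.5), the index is 110, i.e. a 10 % average price rise relative to the base period.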
Measuring the economic activity of a country requires high-quality data on businesses. In the case of Germany, such data are required not only at the national level, but also at the federal-state level and for different economic sectors. Important sources of high-quality business data are the business register and, among others, 14 business surveys conducted by the Federal Statistical Office of Germany. However, the quality requirements of the Federal Statistical Office conflict with the interests of the businesses themselves: for them, answering a survey's questionnaire is an additional cost factor, also known as response burden. A high response burden should be avoided, since it can have a negative impact on the quality of the businesses' responses. Sample coordination can therefore be used as a method to control the distribution of response burden while securing high-quality data.
When applying existing business survey coordination systems, developed by different statistical institutes, the legal and administrative standards of German official statistics have to be taken into account. These standards involve the differing sampling fractions, rotation fractions, periodicities, and stratifications of the aforementioned 14 business surveys. The aim of this doctoral thesis is therefore to examine the existing business survey coordination systems for their applicability in the context of German official statistics and, where necessary, to modify them accordingly. These modifications include the introduction of individual burden indicators, which aim to take the individual perception of response burden into account.
For this purpose, several synthetic data sets were created to test the modified versions of the different business survey coordination systems in Monte Carlo simulation studies. These data sets include a large panel data set reflecting the business landscape of Rhineland-Palatinate and three smaller synthetic data sets. The latter were created with the R package BuSuCo, which was developed within the scope of this thesis. The simulation studies are evaluated using different measures of estimation quality as well as of the concentration and distribution of response burden.
Income composition can have a significant impact on workers’ well-being, productivity, and career paths. Wages often include a variety of components, such as unconditional bonuses, profit-sharing payments, and incentives based on the individual performance of employees. Each of these may influence employee labour outcomes differently and the worker composition may matter for managers when designing the salary package. Simultaneously, workers’ employment choices and well-being are influenced by income outside the job, such as inheritances and lottery winnings, as well as by external factors like technological change. This dissertation includes five empirical studies that investigate these issues, yielding new insights on the role of monetary gifts, financial incentives, labour market institutions, and technology disruptions in affecting employees’ labour and well-being outcomes.
Many developed countries, including Germany, face a steady rise in the share of
individuals obtaining higher education. While rising education itself bears a series
of advantages, as extensively studied in previous literature, it is also conceptually
linked to a higher likelihood of working in an occupation that does not match
one’s formal qualifications. Previous studies have predominantly evaluated
how demographic or job‐related aspects correlate with the likelihood of being
educationally (mis)matched. However, they have largely ignored institutional
facets of the educational system or industrial organization. Moreover, little is
known about how private wealth affects educational mismatch or whether job
satisfaction is homogeneously affected among individuals once such a mismatch
occurs. The five projects collected in this thesis aim to answer these open
questions in the literature for Germany, using data from the Socio‐Economic Panel
and employing different time intervals between 1984 and 2022.
Beginning with the educational system in early childhood, Chapter 2 evaluates
the impact of school‐starting age on the likelihood of over‐ and undereducation.
It exploits the exogenous variation in school‐entry rules across federal states
and years in Germany with regression discontinuity designs. The results report
a negative impact of school‐starting age on the likelihood of undereducation,
but no systematic relationship with overeducation.
Subsequently, Chapter 3 explores the variation in education costs by leveraging
the quasi‐experimental setting induced by the time‐limited introduction of tuition
fees in several German federal states between 2006 and 2014. The increase
in education costs among treated graduates results in a significantly higher
likelihood of overeducation, which endures even several years post‐graduation.
Chapter 4 focuses on the industrial relations system and examines the
correlation between trade union membership and the likelihood and extent of
educational (mis)match. The results reveal that trade union members report
significantly less overeducation at both the intensive and extensive margin
and also a higher likelihood of being matched compared to non‐members. Furthermore, the heterogeneity analysis provides evidence that this correlation
is driven by improved bargaining power instead of informational advantages.
Chapter 5 focuses on private wealth as a determinant of educational mismatch
by investigating the impact of a wealth shock through inheritances, lottery
winnings or gifts on the likelihood of over‐ and undereducation. Due to
the diminishing marginal returns of wages with increasing windfall gains, the
likelihood of undereducation is expected to decrease, while that of overeducation
is expected to increase. Empirically, these suppositions are supported for
overeducation, as its likelihood increases significantly after the windfall gain.
Further analyses reveal that this effect is driven by individuals switching
occupations while increasing their leisure time, and it materializes only for
medium to large windfall gains.
Contrary to the previous chapters, Chapter 6 focuses on educational mismatch,
more precisely on overeducation, as the independent variable. In particular, it
investigates the correlation between overeducation and job satisfaction. The
results confirm the previously established negative correlation, but only for
private-sector employees. In contrast, interaction and subsample analyses reveal a
positive correlation for public sector employees. This link is driven by individuals
with a high degree of altruistic motivation and family orientation.
This dissertation examines how individuals unlock their personal power by investigating individual differences in self-regulation, in particular, how situational conditions interact with the personality dispositions of action versus state orientation. Action-oriented individuals are well able to regulate their affective states and to bridge the intention–behavior gap, showing initiative, implementing demanding intentions, and resisting temptations. State-oriented individuals, by contrast, often struggle to regulate affect and experience difficulties enacting intentions, especially under demanding conditions, tending to hesitate and ruminate. While extensive research has highlighted the advantages of action orientation across various domains such as education and health, this thesis challenges the prevailing one-sided perspective that presents action orientation as inherently superior and frames state orientation negatively. Drawing on Personality Systems Interactions theory, the dissertation adopts a dynamic view that understands these dispositions as context-sensitive rather than fixed. The central assumption is that action and state orientation each require different kinds of situational conditions to fully unlock their potential. Across six empirical studies (overall N = 1,067) using a multimethod approach that combines experimental and survey-based research in diverse populations and contextual settings, this dissertation examines (1) action and state orientation as distinct dispositions, (2) their dynamic interaction with situational factors, and (3) ways to support each in mobilizing personal power. Overall, the findings show that each disposition offers unique advantages - they simply require different situational conditions for their potential to unfold.
The role of implicit motives for affective, cognitive and behavioral processes has been a focal part of psychological research for decades. Yet, the majority of research in this field has been concentrated on processes involving implicit motives in adulthood. The systematic investigation of developmental correlates of implicit motives remains largely uncharted. The studies cumulated in this thesis aim to add to the sparse research on implicit motives in childhood and adolescence. Specifically, the development of the implicit power motive in the transition of middle to late childhood as a function of parenting behavior (Chapter 4), the predictive value of the implicit achievement motive for objective swimming performance in children and adolescents (Chapter 5) and the role of motive congruence for successful goal realization in adolescent samples across two cultures (Chapter 6) were investigated. Results of Study 1 (Chapter 4) indicate a negative longitudinal association of authoritarian parenting with the implicit power motive in children that is moderated by children’s perception of psychologically controlling parenting. Study 2 (Chapter 5) extends existing research on the assumed positive association of the implicit achievement motive and sports performance and demonstrates the moderating role of competitive anxiety on this association. Finally, Study 3 (Chapter 6) illustrates a moderating effect of implicit motives on the association of goal commitment and successful goal realization in German and Zambian adolescents, however, this effect was only observed in the domain of power motivation. Findings from all three studies are discussed in the context of the significance of implicit motives for psychological research.
Transnational protest movements continue to expose the enduring legacies of colonial exploitation and institutionalised racism within and beyond European cities. They foreground the systemic conditions under which Black lives are rendered disproportionately vulnerable to premature death. In doing so, they expose the enduring entanglements of racial capitalism, state violence and spatial exclusion. Through their ongoing political agitation these movements highlight the need for spatio-temporally situated and relationally embedded engagements with Black urban lives. My thesis responds to that call by examining place-making practices of enclosure and refusal throughout Black London’s post-World War II development.
Grounded in the ethnographic narrative of “being halfway while shooting”, I explore how Black lives are enclosed by institutional racism, how this enclosure is spatialised and how Black and differently racialised Londoners refuse these spatial enclosures through everyday and collective place-making practices. At the intersection of structural constraint and the desire to enact Black freedom in London, I specifically foreground the emergence of fugitive place-making practices.
Conceptually, I bring (critical) urban geography scholarship, Black studies and Black (British)Geographies scholarship into conversation. I develop “being halfway while shooting” as a relational concept that foregrounds the production of racialised urban knowledges, the multiplicity of Black enclosures, and the plurality of place-based strategies committed to refusal. I do so by stressing the relevance of Black fugitive thinking to account for the ongoing refusals that mark the relationship between Blackness and the British city. Methodologically, I adopt a research-activist ethnographic approach, grounded in my long-term engagement with a housing campaign in East London that organises around the housing needs of London’s racialised and gendered urban poor. Using qualitative methods - archival research, interviews, (non-)participant observations, document and media analysis - I embed contemporary struggles into long and ongoing histories of racial-capitalist urban development as well as Black and multi-ethnic refusal.
The empirical chapters trace place-making practices of enclosure and refusal across London’s post-World War II urban development. By examining the aftermath of urban revolts and changing urban welfare regimes, I explore how racialised urban governance has been historically materialised in and through the city. At the same time, I foreground how within this racialised construction of the British city, Black and differently racialised Londoners continue to hold open the possibility of refusal through places in which communal care and self-determination can be enacted. I then turn to the struggle over housing in East London, showing how contemporary processes of racialised dehumanisation and ongoing displacement are both historically rooted and actively contested. In the final empirical chapter I accentuate the relevance of these findings for German-speaking critical urban geography debates.
The research shows that racial capitalist urbanism reproduces enclosures through practices of value extraction, spatial displacement, and the policing of Black subjectivities. In response, Black and differently racialised Londoners engage in fugitive place-making. Rooted in communal care, political organisation, collective education and cultural affirmation, these practices reassert Black presence and belonging. They offer an enduring mode of place-based refusal and the ongoing possibility to stay in the city differently. These findings not only demonstrate the academic significance of my research but also underscore the urgent need to support the place-making practices of Black and differently racialised urban communities, who continue to refuse the racialised enclosure of the British city from within.
From these empirical insights, I propose the concept of a fugitive sense of place - a theoretical lens that accounts for the racialised reproduction of urban space and the transformative place-making practices of those who refuse its logics. Rather than offering prescriptive policy recommendations, I call for a reorientation of urban geographical enquiry by centring Black spatial practices, knowledges and imaginations. Through the lens of “being halfway while shooting”, I argue for a rethinking of human habitation and urban theory through the lived experiences of Black survival and refusal. Attending to a fugitive sense of place, I propose new avenues for human geography research to explore how fugitive place-making practices reshape the meanings, conditions, and possibilities of urban life.
The theme of experience has long been a focus of service providers. This is especially true of tourism, an industry whose products consist to a significant degree of experiences. In line with the prominence of the topic, above all in tourism product development and marketing, it has been widely discussed in research.
Despite extensive publication activity, however, actual knowledge on the topic remains strikingly limited. A key problem is that the terminology surrounding experiences is not yet generally accepted or sharply delineated. In particular, a distinction must be drawn between momentary experiences (Erlebnisse) and lasting experiences (Erfahrungen). The former occur during the consumption of a tourism service and form the basis of the latter, which shape perception and are considered in the overall context of the trip. This distinction is usually given too little attention not only in the English-language literature, where both concepts are covered by the single term “experience”, but also in the German-language literature, with the result that publications often claim to address Erlebnisse while actually describing Erfahrungen. This is problematic above all because it means studying a phenomenon whose basis is almost entirely unknown. Important questions for understanding Erlebnisse, and thus also Erfahrungen, remain unanswered:
1) Which factors are at work in the genesis of momentary experiences?
2) How do these factors interact?
3) How is the intensity of a momentary experience determined?
4) How do momentary experiences become strong enough to shape the consumption of a tourism service and thus, where applicable, to become lasting experiences?
The present thesis answers these questions, taking a first step towards closing a research gap of no small importance for tourism studies.
To understand momentary experiences, the process of their genesis, and their evaluation by the guest, a triangulated, two-stage research process was devised and applied in a nature-tourism setting in the Vorpommersche Boddenlandschaft National Park. It follows a mixed-methods approach:
1) Inductive-qualitative study based on grounded theory
a. Aim: identification of the operative components and their interplay, and generation of a model
b. Methods: covert observation and narrative interviews
c. Results: models of the genesis of momentary experiences and of formative experiences
2) Deductive-quantitative study
a. Aim: testing and refinement of the models generated in 1)
b. Methods: questionnaire-based quantitative survey, analysed using multivariate methods
c. Results: integration of the two models into a final model of the genesis of momentary and lasting experiences
The outcome of this procedure is an empirically derived and validated, detailed model of the genesis of momentary experiences and of their evaluation by the experiencing person with regard to their capacity to become lasting experiences.
Beyond elaborating and specifying this process, the thesis also clarifies the much-debated role of expectations and product satisfaction in the evaluation of experiences. It was shown empirically that experiences based on surprise, on the unexpected, were particularly resistant to disruptive factors, and that positive experiences, while indeed related to product satisfaction, manifest themselves above all in an at least temporarily increased life satisfaction. This identified the main criterion for evaluating momentary experiences with regard to their suitability to become lasting experiences.
For further research, the present thesis, with its final model of experience genesis, offers a solid starting point: numerous factors in the model invite further investigation, and the results should be tested in other tourism contexts.
For tourism practice, the thesis provides numerous pointers. Generating experiences in a tourism context means more than merely meeting expectations: the most resilient experiences are those that manage to surprise the guest. A high-quality product that satisfies the guest is no more than a basic prerequisite. An experience-based approach is truly successful only if it manages to increase the guest’s life satisfaction.