Statistical matching offers a way to broaden the scope of analysis without the increase in respondent burden and costs that would result from conducting a new survey or adding variables to an existing one. It aims at combining two datasets A and B that refer to the same target population in order to jointly analyse variables, say Y and Z, that were not initially observed together. The matching is performed on matching variables X that are common to both datasets A and B, while Y is only observed in B and Z is only observed in A. To overcome the fact that no joint information on X, Y and Z is available, statistical matching procedures have to rely on suitable assumptions. As a theoretical foundation, most procedures rely on the conditional independence assumption (CIA), i.e. given X, Y is independent of Z.
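In the usual notation (a restatement of the CIA, not an addition of the thesis), the assumption reads

\[
Y \perp\!\!\!\perp Z \mid X
\quad\Longleftrightarrow\quad
f(x, y, z) = f(x)\, f(y \mid x)\, f(z \mid x),
\]

so that the unobservable joint distribution of (X, Y, Z) is fully determined by quantities estimable from A and B alone.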
The goal of this thesis is to encompass both the statistical matching process and the analysis of the matched dataset. More specifically, the aim is to estimate a linear regression model for Z given Y and possibly other covariates in data A. Since the validity of the assumptions underlying the matching process determines the validity of the obtained matched file, the accuracy of statistical inference hinges on the suitability of these assumptions. By putting the focus on these assumptions, this work proposes a systematic categorisation of approaches to statistical matching that relies on graphical representations in the form of directed acyclic graphs. These graphs are particularly useful for representing the dependencies and independencies which are at the heart of the statistical matching problem. The proposed categorisation distinguishes between (a) joint modelling of the matching and the analysis (integrated approach), and (b) matching subsequently followed by statistical analysis of the matched dataset (classical approach). Whereas the classical approach relies on the CIA, implementations of the integrated approach are only valid if they converge, i.e. if the specified models are identifiable and, in the case of MCMC implementations, if the algorithm converges to a proper distribution.
In this thesis an implementation of the integrated approach is proposed in which the imputation step and the estimation step are jointly modelled through a fully Bayesian MCMC estimation. It is based on a linear regression model for Z given Y and accounts for both a linear regression model and a random effects model for Y. Furthermore, it is valid when the instrumental variable assumption (IVA) holds. The IVA states that (a) Z is independent of a subset X' of X given Y and X*, where X* = X\X', and (b) Y is correlated with X' given X*. This thesis provides a proof that the joint Bayesian modelling of both the model for Z and the model for Y through an MCMC simulation converges to a proper distribution. In a first model-based simulation study, the proposed integrated Bayesian procedure is assessed with regard to the data situation, convergence issues, and underlying assumptions. Special interest lies in the interplay of the Y model and the Z model within the imputation process. It turns out that failure scenarios can be distinguished by comparing the CIA and the IVA in the completely observed dataset.
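Written out in the same notation as the CIA above (again only a restatement), the two parts of the IVA are

\[
\text{(a)}\ \ Z \perp\!\!\!\perp X' \mid (Y, X^{*}),
\qquad
\text{(b)}\ \ \operatorname{Corr}(Y, X' \mid X^{*}) \neq 0,
\qquad
X^{*} = X \setminus X',
\]

i.e. X' acts as an instrument: it is informative for Y but affects Z only through Y.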
Finally, both approaches to statistical matching, i.e. the classical approach and the integrated approach, are subjected to an extensive comparison in (1) a model-based simulation study and (2) a simulation study based on the AMELIA dataset, an openly available, very large synthetic dataset that is, by construction, similar to the EU-SILC survey. As an additional integrated approach, a Bayesian additive regression trees (BART) model is considered for modelling Y. These integrated procedures are compared to the classical approach, represented by predictive mean matching in the form of multiple imputation by chained equations. Suitably chosen, the first simulation framework makes it possible to clarify aspects related to the underlying assumptions by comparing the IVA and the CIA and by evaluating the impact of the matching variables. Within this simulation study, two related aspects are thus of special interest: the assumptions underlying each method and the incorporation of additional matching variables. The simulation on the AMELIA dataset offers a close-to-reality framework with the advantage that the whole setting, i.e. the complete data X, Y and Z, is known. Special interest lies in investigating the assumptions by adding and excluding auxiliary variables in order to enhance conditional independence and to assess the sensitivity of the methods to this issue. Furthermore, the benefit of having an overlap of units in data A and B for which information on X, Y and Z is available is investigated. It turns out that the integrated approach yields better results than the classical approach when the CIA clearly does not hold. Moreover, even when the classical approach obtains unbiased results for the regression coefficient of Y in the model for Z, it is the BART-based method that performs best across all coefficients.
In conclusion, this work constitutes a major contribution to the clarification of the assumptions essential to any statistical matching procedure. By introducing graphical models to characterise existing approaches to statistical matching combined with the subsequent analysis of the matched dataset, it offers an extensive overview, categorisation and extension of theory and application. Furthermore, in a setting where none of the assumptions are testable (since X, Y and Z are never observed together), the integrated approach is a valuable asset, as it offers an alternative to the CIA.
Algorithmen als Richter [Algorithms as Judges]
(2022)
Human decision-making authority is being challenged by algorithmic decision systems. In constitutional terms, this is particularly problematic in areas concerning state action. Owing to the special protection of Art. 92 ff. GG, the judiciary occupies a prominent position. Lydia Wolff therefore asks what answers the Grundgesetz holds ready for digital change in this area, and how an intrinsic value of human decisions in adjudication can be articulated in the face of technological change.
To this end, the work offers a contribution to the constitutional concept of the judge and places this established concept in the context of new digital challenges posed by algorithmic competition.
Insects constitute the most species-rich class of the animal kingdom, yet many species are threatened. Besides climate change, this is mainly due to the agricultural land use that has changed drastically over recent decades, leading to habitat destruction and fragmentation. The more intensive cultivation of favourable land on the one hand, and the abandonment of unprofitable land on the other, have severe consequences for insects adapted to extensively used cultivated areas, which becomes particularly apparent in the declining share of specialists. The Alps, with their small-scale mosaic of near-natural areas and anthropogenically created cultivated land along a large elevational gradient, play an important role for biodiversity, especially as a habitat for specialists of all species groups. Here, too, agricultural land-use change poses a major problem, which is why the extensively used cultivated habitats require sustainable protection. To clarify how sustainable mountain agriculture can be maintained in the future, the first chapter of this dissertation examines the regulatory frameworks of international, European, national and regional law. It turns out that the multifunctional approach of the Alpine Convention and its "Mountain Farming" protocol shows only little normative concretisation and is therefore not implemented to a sufficient degree in the EU's Common Agricultural Policy or in national law; as a consequence, these instruments cannot adequately counteract negative developments in mountain agriculture.
Beyond these legal foundations, however, the scientific basis for assessing the effects of agricultural land-use change on alpine and arctic animal species is also lacking. Studies of character species of these cultivated landscapes are therefore required, and butterflies, being sensitive to environmental change, are suitable indicators. The second chapter of the dissertation therefore examines the two sister taxa Boloria pales and B. napaea, which are typical of arctic and/or alpine grasslands. Their previously unknown phylogeography was investigated with two mitochondrial and two nuclear genes across the entire European range; in this context, the inter- and intraspecific splits were analysed and dated, and the dispersal patterns underlying them were deciphered. To identify specific adaptations to the species' arctic and alpine habitats and to correctly assess the consequences of agricultural land-use change, several populations of both species were studied ecologically in the field. While B. pales can emerge throughout the entire alpine summer and shows protandrous structures, B. napaea, lacking protandry and having a shortened emergence window, is rather adapted to the shorter arctic summers. Although both species use the same nectar sources, their differing needs lead to differences in nectar preferences between the sexes; intraspecific differences in dispersal behaviour were also found. Populations of both species can survive a short grazing period, although the timing of grazing matters: use towards the end of the emergence phase has a greater impact on the population.
In addition, a clear difference was found between areas with long-term grazing and areas without grazing. Besides a lower population density, on areas grazed year-round there is greater pressure to leave the habitat, and the flight distances covered there are also considerably larger.
The endemic argan stands of southern Morocco are the source of the valuable argan oil, but they are heavily overexploited, for example by overgrazing and illegal firewood extraction. Reforestation measures exist, but they are often unsuccessful because the associated irrigation and protection contracts run for too short a time. New growth can hardly emerge because the kernels are collected almost completely; felling and tree death reduce the canopy-covered area, while the bare areas between the trees increase.
The development of the argan stands over the period from 1972 to 2018 was examined with historical and recent satellite images; most of the trees changed little during this time. Condition surveys from 2018 showed that, as a result of overgrazing and logging, many of these trees grow only as shrubs and are thus stable in a degraded state.
Despite the degradation of some trees, the soil beneath the trees shows the highest contents of soil organic matter and nutrients on these plots, while the contents are lowest between two trees. The influence of a tree on the soil extends beyond its crown: northwards through shading from the midday sun, eastwards through wind drift of litter and soil particles, and downslope through the wash-off of material.
Experimental methods applied beneath and between the argan trees yielded insights into soil erosion. The hydraulic conductivity beneath trees is higher by a factor of 1.2-1.5 than between trees; surface runoff and soil loss are somewhat lower beneath the trees, and, for degraded trees, similar to the areas between trees. The different surface types were examined with a wind tunnel, which showed that freshly ploughed areas in particular cause high wind-borne emissions, whereas areas with high stone cover are hardly affected by wind erosion.
Surface runoff from the different surface types is discharged into the receiving channels. The sediment dynamics in these wadis are influenced mainly by the rainfall between measurements, the catchment area and the wadi length, and hardly by the different land uses.
This multi-method approach made it possible to analyse the argan landscape system at several levels.
This thesis is concerned with two classes of optimization problems which stem mainly from statistics: clustering problems and cardinality-constrained optimization problems. We are particularly interested in the development of computational techniques to exactly or heuristically solve instances of these two classes of optimization problems.
The minimum sum-of-squares clustering (MSSC) problem is widely used to find clusters within a set of data points. The problem is also known as the $k$-means problem, since the most prominent heuristic to compute a feasible point of this optimization problem is the $k$-means method. In many modern applications, however, the clustering suffers from uncertain input data due to, e.g., unstructured measurement errors: the clustering result then represents a clustering of the erroneous measurements instead of retrieving the true underlying clustering structure. We address this issue by applying robust optimization techniques: we derive the strictly and $\Gamma$-robust counterparts of the MSSC problem, which are as challenging to solve as the original model. Moreover, we develop alternating direction methods to quickly compute feasible points of good quality. Our experiments reveal that the more conservative strictly robust model consistently provides better clustering solutions than the nominal and the less conservative $\Gamma$-robust models.
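For reference, the nominal MSSC problem can be stated in a standard form (the thesis's exact formulation may differ): given data points $x_1, \dots, x_m \in \mathbb{R}^d$ and a number of clusters $k$,

\[
\min_{C_1, \dots, C_k} \ \sum_{j=1}^{k} \sum_{x_i \in C_j} \left\| x_i - \frac{1}{|C_j|} \sum_{x_\ell \in C_j} x_\ell \right\|_2^2,
\]

where $C_1, \dots, C_k$ partition the data and the inner sums measure the squared distances to the cluster centroids.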
In the context of clustering problems, however, using only a heuristic solution comes with severe disadvantages regarding the interpretation of the clustering. This motivates us to study globally optimal algorithms for the MSSC problem. We note that although some algorithms have already been proposed for this problem, it is still far from being "practically solved". Therefore, we propose mixed-integer programming techniques, which are mainly based on geometric ideas and which can be incorporated in a branch-and-cut based algorithm tailored to the MSSC problem. Our numerical experiments show that these techniques significantly improve the solution process of a state-of-the-art MINLP solver when applied to the problem.
We then turn to the study of cardinality-constrained optimization problems. We consider two famous problem instances of this class: sparse portfolio optimization and sparse regression problems. In many modern applications, it is common to consider problems with thousands of variables, so that globally optimal algorithms are not always computationally viable and the study of sophisticated heuristics is very desirable. Since these problems have a discrete-continuous structure, decomposition methods are particularly well suited. We apply a penalty alternating direction method that exploits this structure and provides very good feasible points in a reasonable amount of time. Our computational study shows that our methods are competitive with state-of-the-art solvers and heuristics.
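As an illustration of the decomposition idea, the following is a minimal sketch of a penalty alternating direction scheme for cardinality-constrained least squares (sparse regression); the function name, parameter choices and the hard-thresholding projection are illustrative assumptions, not the thesis's exact method.

import numpy as np

def sparse_least_squares_padm(A, b, k, rho=1.0, iterations=200):
    # Sketch for: min ||A x - b||^2  subject to  ||x||_0 <= k.
    # Split x into a continuous copy x and a sparse copy y, penalize ||x - y||^2,
    # and alternate between the two blocks while increasing the penalty rho.
    m, n = A.shape
    y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iterations):
        # Continuous block: unconstrained quadratic minimization in x.
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
        # Discrete block: projection onto the cardinality constraint
        # (keep the k entries of largest magnitude).
        y = np.zeros(n)
        support = np.argsort(-np.abs(x))[:k]
        y[support] = x[support]
        rho *= 1.05  # gradually enforce agreement of the two blocks
    return y

The continuous block is a cheap linear solve and the discrete block a trivial projection; this is exactly the discrete-continuous split that makes decomposition methods attractive here.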
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality and at the same time keep fieldwork costs low is a difficult task. The aim of survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology through a systematic review of the existing evidence along the dimensions of a central framework in the field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
Forest inventories provide significant monitoring information on forest health, biodiversity, resilience against disturbance, as well as on biomass and timber harvesting potential. For this purpose, modern inventories increasingly exploit the advantages of airborne laser scanning (ALS) and terrestrial laser scanning (TLS).
Although tree crown detection and delineation using ALS can be seen as a mature discipline, the identification of individual stems is a rarely addressed task. In particular, the informative value of stem attributes—especially the inclination characteristics—is hardly known. In addition, there is a lack of tools for the processing and fusion of forest-related data sources. This thesis addresses these research gaps in four peer-reviewed papers, with a focus on the suitability of ALS data for the detection and analysis of tree stems.
In addition to providing a novel post-processing strategy for geo-referencing forest inventory plots, the thesis shows that ALS-based stem detections are very reliable and that their positions are accurate. In particular, the stems proved suitable for studying prevailing trunk inclination angles and orientations: a species-specific down-slope inclination of the tree stems and a leeward orientation of conifers could be observed.
The effects of various hormones on the social behaviour of men and women are not fully understood, since their precise measurement, as well as the derivation of causal relationships, has long posed challenges for research. All the more important are studies that attempt to control for confounding aspects and to examine hormonal or endocrine effects on social behaviour and social cognition. While studies have already demonstrated effects of acute stress on social behaviour, the underlying neurobiological mechanisms are not fully known, since establishing them would require a purely pharmacological approach; the few studies that chose such an approach report contradictory findings. Previous investigations with psychosocial stressors, however, suggest prosocial tendencies after stress for both men and women. Moreover, research on female sex hormones and their influence on social behaviour and social cognition in women is particularly challenging owing to hormonal fluctuations during the menstrual cycle and to changes induced by the use of oral contraceptives. Studies that examined both cycle phases and the effects of oral contraceptives already point to differences between the phases, as well as between women with a natural cycle and women taking oral contraceptives.
The theoretical part describes the basics of the human stress response and the hormonal changes of the female sex hormones. A chapter on the current state of research into the effects of acute stress on social behaviour and social cognition then provides an overview of the existing findings. The first empirical study, which examines the effects of hydrocortisone on social behaviour and emotion recognition, is then placed within this body of findings and contributes to the less-explored branch of pharmacological studies. The second empirical study addresses the effects of female sex hormones on social behaviour and empathy, specifically how cycle phases and oral contraceptives (mediated via hormones) exert an influence in women. Finally, the effects of stress hormones in men, and the modulating properties of female sex hormones, cycle phases and oral contraceptives in women, are discussed with regard to social behaviour and social cognition.
Broadcast media such as television have spread rapidly worldwide in the last century. They provide viewers with access to new information and also represent a source of entertainment that unconsciously exposes them to different social norms and moral values. Although the potential impact of exposure to television content has been studied intensively in economic research in recent years, studies examining the long-term causal effects of media exposure are still rare. Chapters 2 to 4 of this thesis therefore contribute to a better understanding of the long-term effects of television exposure.
Chapter 2 empirically investigates whether access to reliable environmental information through television can influence individuals' environmental awareness and pro-environmental behavior. Analyzing exogenous variation in Western television reception in the German Democratic Republic shows that access to objective reporting on environmental pollution can enhance concerns regarding pollution and affect the likelihood of being active in environmental interest groups.
Chapter 3 utilizes the same natural experiment and explores the relationship between exposure to foreign mass media content and xenophobia. In contrast to the state television broadcaster in the German Democratic Republic, West German television regularly confronted its viewers with foreign (non-German) broadcasts. By applying multiple measures for xenophobic attitudes, our findings indicate a persistent mitigating impact of foreign media content on xenophobia.
Chapter 4 deals with another unique feature of West German television. In contrast to East German media, Western television programs regularly exposed their audience to unmarried and childless characters. The results suggest that exposure to different gender stereotypes contained in television programs can affect marriage, divorce, and birth rates. However, our findings indicate that mainly women were affected by the exposure to unmarried and childless characters.
Chapter 5 examines the influence of social media marketing on crowd participation in equity crowdfunding. By analyzing 26,883 investment decisions on three German equity crowdfunding platforms, our results show that startups can influence the success of their equity crowdfunding campaign through social media posts on Facebook and Twitter.
In Chapter 6, we incorporate the concept of habit formation into the theoretical literature on trade unions and contribute to a better understanding of how internal habit preferences influence trade union behavior. The results reveal that such internal reference points lead trade unions to raise wages over time, which in turn reduces employment. A numerical example illustrates that the wage effects and the decline in employment can be substantial.
Due to the transition towards climate neutrality, energy markets are rapidly evolving. New technologies are developed that allow electricity from renewable energy sources to be stored or to be converted into other energy commodities. As a consequence, new players enter the markets and existing players gain more importance. Market equilibrium problems are capable of capturing these changes and therefore enable us to answer contemporary research questions with regard to energy market design and climate policy.
This cumulative dissertation is devoted to the study of different market equilibrium problems that address such emerging aspects in liberalized energy markets. In the first part, we review a well-studied competitive equilibrium model for energy commodity markets and extend this model by sector coupling, by temporal coupling, and by a more detailed representation of physical laws and technical requirements. Moreover, we summarize our main contributions of the last years with respect to analyzing the market equilibria of the resulting equilibrium problems.
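As a point of reference (a stylized textbook form, not the thesis's exact model), such a competitive equilibrium can be characterized as the solution of a welfare maximization problem of the type

\[
\max_{q^{d},\, q^{s} \ge 0} \ \sum_{d} \int_{0}^{q^{d}_{d}} p_{d}(s)\, \mathrm{d}s \;-\; \sum_{s} c_{s}(q^{s}_{s})
\quad \text{s.t.} \quad \sum_{d} q^{d}_{d} = \sum_{s} q^{s}_{s},
\]

where demand utilities (via inverse demand functions $p_d$) and production costs $c_s$ are balanced by a market-clearing constraint whose dual variable yields the market price; sector and temporal coupling add further variables and constraints to this basic structure.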
For the extension regarding sector coupling, we derive sufficient conditions for ensuring uniqueness of the short-run equilibrium a priori and for verifying uniqueness of the long-run equilibrium a posteriori. Furthermore, we present illustrative examples showing that each of the derived conditions is indeed necessary to guarantee uniqueness in general.
For the extension regarding temporal coupling, we provide sufficient conditions for ensuring uniqueness of demand and production a priori. These conditions also imply uniqueness of the short-run equilibrium in the case of a single storage operator. However, in the case of multiple storage operators, examples illustrate that charging and discharging decisions are not unique in general. We conclude the equilibrium analysis with an a posteriori criterion for verifying uniqueness of a given short-run equilibrium. Since the computation of equilibria is much more challenging due to the temporal coupling, we briefly review why a tailored parallel and distributed alternating direction method of multipliers makes it possible to compute market equilibria efficiently.
For the extension regarding physical laws and technical requirements, we show that, in nonconvex settings, existence of an equilibrium is not guaranteed and that the fundamental welfare theorems therefore fail to hold. In addition, we argue that the welfare theorems can be re-established in a market design in which the system operator is committed to a welfare objective. For the case of a profit-maximizing system operator, we propose an algorithm that indicates existence of an equilibrium and that computes an equilibrium in the case of existence. Based on well-known instances from the literature on the gas and electricity sector, we demonstrate the broad applicability of our algorithm. Our computational results suggest that an equilibrium often exists for an application involving nonconvex but continuous stationary gas physics. In turn, integralities introduced due to the switchability of DC lines in DC electricity networks lead to many instances without an equilibrium. Finally, we state sufficient conditions under which the gas application has a unique equilibrium and the line switching application has finitely many.
In the second part, all preprints belonging to this cumulative dissertation are provided. These preprints, as well as two journal articles to which the author of this thesis contributed, are referenced within the extended summary in the first part and contain more details.
The main focus of this work is the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFAs). This problem asks, for a given DFA, whether there exists a word w that maps each state of the automaton to one single state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
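The defining property is easy to state operationally; the following is a minimal sketch (names and the toy automaton are illustrative, not taken from the thesis) that checks whether a word synchronizes a given DFA by tracking the image of the full state set:

def is_synchronizing(delta, states, word):
    # delta: dict mapping (state, symbol) -> state (total transition function)
    image = set(states)
    for symbol in word:
        image = {delta[(q, symbol)] for q in image}
    return len(image) == 1  # all runs collapsed into one state

# Toy example: 'a' sends every state to 0, so "a" is synchronizing.
delta = {(0, 'a'): 0, (1, 'a'): 0, (0, 'b'): 1, (1, 'b'): 0}
print(is_synchronizing(delta, [0, 1], "a"))  # True
print(is_synchronizing(delta, [0, 1], "b"))  # False: image stays {0, 1}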
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language, which yields the synchronization under regular constraint problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operate mode to a reset mode and then finally again into the operate mode.
The next generalization concerns the order of states in which a synchronizing word transitions the automaton. Here, a DFA A and a partial order R are given as input, and the question is whether there exists a word that synchronizes A and for which the induced state order is consistent with R. In this context, we study different ways in which a word can induce an order on the state set.
Then, we shift our focus from DFAs to push-down automata and generalize the synchronization problem to push-down automata and, in subsequent work, to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to one single state, but it further needs to fulfill some constraints on the stack. We study three different types of stack constraints where, after reading the synchronizing word, the stacks associated with each run in the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to synchronization: whether a given DFA A can be synchronized into a state q can be phrased as the non-emptiness of the intersection of the languages accepted by copies of A with different initial states and with q as their common final state, since that intersection is precisely the set of words synchronizing A into q.
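Intersection non-emptiness itself is decided by searching the product automaton; below is a minimal sketch (the representation and names are illustrative assumptions) that explores the product on the fly:

from collections import deque

def intersection_nonempty(dfas, alphabet):
    # Each DFA is a triple (delta, initial, finals) with
    # delta: dict mapping (state, symbol) -> state.
    start = tuple(q0 for _, q0, _ in dfas)
    seen, queue = {start}, deque([start])
    while queue:
        qs = queue.popleft()
        # Accept if every component state is final in its DFA.
        if all(q in finals for q, (_, _, finals) in zip(qs, dfas)):
            return True
        for a in alphabet:
            nxt = tuple(delta[(q, a)] for q, (delta, _, _) in zip(qs, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

The product state space grows exponentially in the number of DFAs, which is the source of the PSPACE-completeness mentioned below.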
For the intersection non-emptiness problem, we first study the complexity of this in general PSPACE-complete problem when restricted to subclasses of DFAs associated with the two well-known hierarchies of Straubing-Thérien and of Cohen-Brzozowski (dot-depth).
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
For decades, academics and practitioners have sought to understand whether and how (economic) events affect firm value. Optimally, these events occur exogenously, i.e. suddenly and unexpectedly, so that an accurate evaluation of the effects on firm value can be conducted. However, recent studies show that even the evaluation of exogenous events is often prone to many challenges that can lead to diverse interpretations, resulting in heated debates. Recently, there have been intense debates in particular on the impact of takeover defenses and of Covid-19 on firm value. The announcements of takeover defenses and the propagation of Covid-19 are exogenous events that occur worldwide and are economically important, but they have been insufficiently examined. By answering open research questions, this dissertation aims to provide a greater understanding of the heterogeneous effects that exogenous events such as the announcements of takeover defenses and the propagation of Covid-19 have on firm value. In addition, this dissertation analyzes the influence of certain firm characteristics on the effects of these two exogenous events and identifies influencing factors that explain contradictory results in the existing literature and can thus reconcile different views.
Hybrid Modelling, in general, describes the combination of at least two different methods to solve one specific task. As far as this work is concerned, Hybrid Models describe an approach that combines sophisticated, well-studied mathematical methods with Deep Neural Networks to solve parameter estimation tasks. To combine these two methods, the data structure of artificially generated acceleration data of an approximate vehicle model, the Quarter-Car-Model, is exploited. The acceleration of individual components within a coupled dynamical system can be described by a second-order ordinary differential equation, involving velocity and displacement of coupled states, scaled by the spring and damping coefficients of the system. An appropriate numerical integration scheme can then be used to simulate discrete acceleration profiles of the Quarter-Car-Model with a random variation of the parameters of the system. Given explicit knowledge about the data structure, one can then investigate under which conditions it is possible to estimate the parameters of the dynamical system for a set of randomly generated data samples. We test whether Neural Networks are capable of solving parameter estimation problems in general, or whether they can be used to solve several sub-tasks which support a state-of-the-art parameter estimation method. Hybrid Models are presented for parameter estimation under uncertainties, including for instance measurement noise or incompleteness of measurements, which combine knowledge about the data structure and several Neural Networks for robust parameter estimation within a dynamical system.
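To make the data-generation step concrete, here is a minimal simulation sketch of a quarter-car model in one common parameterization (the masses, stiffnesses, damping value and road excitation below are illustrative assumptions; the thesis's exact model may differ):

import numpy as np
from scipy.integrate import solve_ivp

def simulate_quarter_car(ms=300.0, mu=50.0, ks=2.0e4, cs=1.5e3, kt=2.0e5,
                         t_end=2.0, samples=500):
    # States: body displacement/velocity (zs, vs), wheel displacement/velocity (zu, vu).
    def road(t):
        return 0.05 * np.sin(2.0 * np.pi * 2.0 * t)  # assumed road excitation
    def rhs(t, y):
        zs, vs, zu, vu = y
        f_susp = ks * (zu - zs) + cs * (vu - vs)     # spring/damper coupling
        f_tire = kt * (road(t) - zu)                 # tire modelled as a spring
        return [vs, f_susp / ms, vu, (f_tire - f_susp) / mu]
    t = np.linspace(0.0, t_end, samples)
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0, 0.0, 0.0], t_eval=t)
    zs, vs, zu, vu = sol.y
    body_acc = (ks * (zu - zs) + cs * (vu - vs)) / ms  # discrete acceleration profile
    return t, body_acc

Varying, e.g., ks and cs randomly over such simulations yields labelled pairs of acceleration profiles and parameters for the estimation task.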
The present text was accepted as the framework paper (Mantelpapier) of a cumulative dissertation at Trier University. It serves to summarise, reflect on, and theoretically extend the individual empirical contributions, each of which addresses one aspect of the overall phenomenon "innovation laboratory for supporting entrepreneurial learning and the development of social service innovations". Throughout, the innovation laboratory is fundamentally understood as a personnel development measure. In a thought experiment, the results are transferred to organisations of adult and continuing education.
The distinctive feature of this framework paper is the combination of a relational understanding of space with a learning-theoretical underpinning of the subject "innovation laboratory" from the perspective of organisational pedagogy and adult education. The results show the laboratory as a learning space set apart from working life, a semi-autonomously attached space where learning processes at different levels take place and are initiated. The laboratory is discussed as a heterotopic (learning) space. Also new is the inclusion of a critical perspective that has so far been missing from the discourse on innovation laboratories: the laboratory is characterised as a precarious learning space. This work thus provides a fundamental elaboration of the laboratory as a learning space, opening up numerous avenues for further research.
Issues in Price Measurement
(2022)
This thesis focuses on the issues in price measurement and consists of three chapters. Due to outdated weighting information, a Laspeyres-based consumer price index (CPI) is prone to accumulating upward bias. Therefore, chapter 1 introduces and examines simple and transparent revision approaches that retrospectively address the source of the bias. They provide a consistent long-run time series of the CPI and require no additional information. Furthermore, a coherent decomposition of the bias into the contributions of individual product groups is developed. In a case study, the approaches are applied to a Laspeyres-based CPI. The empirical results confirm the theoretical predictions. The proposed revision approaches are adoptable not only to most national CPIs but also to other price-level measures such as the producer price index or the import and export price indices.
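For context (a standard definition, not a result of the thesis): a Laspeyres price index compares current prices with base-period prices using base-period quantity weights,

\[
P_L^{0,t} = \frac{\sum_i p_i^t q_i^0}{\sum_i p_i^0 q_i^0},
\]

so when consumers substitute away from products whose prices rise, the outdated weights $q_i^0$ overstate inflation; this is the source of the accumulating upward bias addressed by the revision approaches.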
Chapter 2 is dedicated to the measurement of import and export price indices. Such indices are complicated by the impact of exchange rates and are usually also compiled as Laspeyres-type indices, so substitution bias is an issue here as well; the terms of trade (the ratio of the export and import price indices) are therefore also likely to be distorted, and the underlying substitution bias accumulates over time. This chapter applies a simple and transparent retroactive correction approach that addresses the source of the substitution bias and produces meaningful long-run time series of import and export price levels and, therefore, of the terms of trade. Furthermore, an empirical case study is conducted that demonstrates the efficacy and versatility of the correction approach.
Chapter 3 leaves the field of index revision and studies another issue in price measurement, namely the economic evaluation, in monetary terms, of digital products that have zero market prices. The chapter explores different methods of economic valuation and pricing of free digital products and proposes an alternative way to calculate the economic value and a shadow price of free digital products: the Usage Cost Model (UCM). The goal of the chapter is, first of all, to formulate a theoretical framework and to incorporate an alternative measure of the value of free digital products. An empirical application is also presented to show how the theoretical model works, and some conclusions on applicability are drawn at the end of the chapter.
The ability to acquire knowledge helps humans cope with the demands of the environment, and supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients' health has not been comprehensively studied. This dissertation aims at closing these knowledge gaps with three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students' motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into the prerequisites, processes, and outcomes of knowledge acquisition. Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Even though time is in most cases a good metric to measure the costs of algorithms, there are cases where theoretical worst-case time and experimental running time do not match. Since modern CPUs feature an innate memory hierarchy, the location of data is another factor to consider: when most operations of an algorithm are executed on data which is already in the CPU cache, the running time is significantly faster than for algorithms where most operations have to load the data from memory. The topic of this thesis is a new metric to measure the costs of algorithms, called memory distance, which can be seen as an abstraction of this aspect. We show that there are simple algorithms which exhibit a discrepancy between measured running time and theoretical time, but not between measured time and memory distance. Moreover, we show that in some cases it is sufficient to optimize the input of an algorithm with regard to memory distance (while treating the algorithm as a black box) to improve running times. Furthermore, we show the relation between worst-case time, memory distance and space, and sketch how to define "the usual" memory distance complexity classes.
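A closely related, classical quantity is the reuse distance of a memory access trace; the sketch below computes it as the number of distinct addresses touched between two accesses to the same address. This is offered only as an intuition pump for cache behaviour; the thesis's memory distance metric is its own definition and may differ.

def reuse_distances(trace):
    # For each access, count the distinct addresses touched since the
    # previous access to the same address; None marks a cold (first) access.
    last_seen = {}
    result = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            result.append(len(set(trace[last_seen[addr] + 1 : i])))
        else:
            result.append(None)
        last_seen[addr] = i
    return result

# Small distances indicate cache-friendly access patterns:
print(reuse_distances(['a', 'b', 'a', 'c', 'b']))  # [None, None, 1, None, 2]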
Let K be a compact subset of the complex plane. Then the family of polynomials P is dense in A(K), the space of all continuous functions on K that are holomorphic on the interior of K, endowed with the uniform norm, if and only if the complement of K is connected. This is the statement of Mergelyan's celebrated theorem.
There are, however, situations where not all polynomials are required to approximate every f ∈ A(K) but where there are strict subspaces of P that are still dense in A(K). If, for example, K is a singleton, then the subspace of all constant polynomials is dense in A(K). On the other hand, if 0 is an interior point of K, then no strict subspace of P can be dense in A(K).
In between these extreme cases, the situation is much more complicated. It turns out that it is mostly determined by the geometry of K and its location in the complex plane which subspaces of P are dense in A(K). In Chapter 1, we give an overview of the known results.
Our first main theorem, which we will give in Chapter 3, deals with the case where the origin is not an interior point of K. We will show that if K is a compact set with connected complement and if 0 is not an interior point of K, then any subspace Q ⊂ P which contains the constant functions and all but finitely many monomials is dense in A(K).
There is a close connection between lacunary approximation and the theory of universality. At the end of Chapter 3, we will illustrate this connection by applying the above result to prove the existence of certain universal power series. To be specific, if K is a compact set with connected complement, if 0 is a boundary point of K and if A_0(K) denotes the subspace of A(K) of those functions that satisfy f(0) = 0, then there exists an A_0(K)-universal formal power series s, where A_0(K)-universal means that the family of partial sums of s forms a dense subset of A_0(K).
In addition, we will show that no formal power series is simultaneously universal for all such K.
The condition on the subspace Q in the main result of Chapter 3 is quite restrictive, but this should not be too surprising: The result applies to the largest possible class of compact sets.
In Chapter 4, we impose a further restriction on the compact sets under consideration, and this will allow us to weaken the condition on the subspace Q. The result that we are going to give is similar to one of those presented in the first chapter, namely the one due to Anderson. In his article “Müntz-Szasz type approximation and the angular growth of lacunary integral functions”, he gives a criterion for a subspace Q of P to be dense in A(K) where K is entirely contained in some closed sector with vertex at the origin.
We will consider compact sets with connected complement that are -- with the possible exception of the origin -- entirely contained in some open sector with vertex at the origin. What we are going to show is that if K\{0} is contained in an open sector of opening angle 2α and if Λ is some subset of the nonnegative integers, then the span of {z → z^λ : λ ∈ Λ} is dense in A(K) whenever 0 ∈ Λ and some Müntz-type condition is satisfied.
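For orientation, the condition is of the same flavour as the one in the classical Müntz theorem, quoted here only as background (the thesis's precise condition additionally involves the opening angle 2α): the span of the monomials $x^{\lambda}$, $\lambda \in \Lambda$, together with the constants, is dense in $C[0,1]$ if and only if

\[
\sum_{\lambda \in \Lambda,\ \lambda > 0} \frac{1}{\lambda} = \infty .
\]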
Conversely, we will show that if a similar condition is not satisfied, then we can always find a compact set K with connected complement such that K\{0} is contained in some open sector of opening angle 2α and such that the span of {z → z^λ : λ ∈ Λ} fails to be dense in A(K).
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the linguistic directness level, cross-cultural differences between the Chinese and the two Western series are obvious from particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural training in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
In common shape optimization routines, deformations of the computational mesh usually suffer from a decrease of mesh quality or even destruction of the mesh. To mitigate this, we propose a theoretical framework using so-called pre-shape spaces. This opens the door to a unified theory of shape optimization and of problems related to parameterization and mesh quality. With this, we stay in the free-form approach of shape optimization, in contrast to parameterized approaches that limit possible shapes. The concept of pre-shape derivatives is defined, and corresponding structure and calculus theorems are derived, which generalize classical shape optimization and its calculus. Tangential and normal directions are featured in pre-shape derivatives, in contrast to classical shape derivatives featuring only normal directions on shapes. Techniques from classical shape optimization and calculus are shown to carry over to this framework, and are collected in generality for future reference.
A pre-shape parameterization tracking problem class for mesh quality is introduced, which is solvable by use of pre-shape derivatives. This class allows for non-uniform, user-prescribed adaptations of the shape and hold-all domain meshes. It acts as a regularizer for classical shape objectives. Existence of regularized solutions is guaranteed, and corresponding optimal pre-shapes are shown to correspond to optimal shapes of the original problem, which additionally achieve the user-prescribed parameterization.
We present shape gradient system modifications which allow simultaneous numerical shape optimization with mesh quality improvement. Further, consistency of modified pre-shape gradient systems is established. The computational burden of our approach is limited, since no additional solution of possibly larger (non-)linear systems for regularized shape gradients is necessary. We implement and compare these pre-shape gradient regularization approaches for a 2D problem which is prone to mesh degeneration. As our approach does not depend on the choice of forms to represent shape gradients, we employ and compare weak linear elasticity and weak quasilinear p-Laplacian pre-shape gradient representations.
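Schematically (a generic form of such gradient representations, assuming standard notation rather than the thesis's exact systems), a gradient field U is obtained from the derivative DJ by solving a weak problem of the type

\[
a_p(U, V) \;=\; DJ(\Omega)[V] \qquad \text{for all admissible test fields } V,
\]

where $a_p$ is, e.g., the bilinear form of linear elasticity or the quasilinear $p$-Laplacian form $a_p(U,V) = \int_{\Omega} |\nabla U|^{p-2}\, \nabla U : \nabla V \, \mathrm{d}x$; the choice of $a_p$ changes the descent direction and thereby the mesh behaviour, but not the underlying derivative.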
We also introduce a Quasi-Newton-ADM inspired algorithm for mesh quality, which guarantees sufficient adaption of meshes to user specification during the routines. It is applicable in addition to simultaneous mesh regularization techniques.
Unrelated to mesh regularization techniques, we consider shape optimization problems constrained by elliptic variational inequalities of the first kind, so-called obstacle-type problems. In general, standard necessary optimality conditions cannot be formulated in a straightforward manner for such semi-smooth shape optimization problems. Under appropriate assumptions, we prove existence and convergence of adjoints for smooth regularizations of the VI constraint. Moreover, we derive shape derivatives for the regularized problem and prove convergence to a limit object. Based on this analysis, an efficient optimization algorithm is devised and tested numerically.
All previous pre-shape regularization techniques are applied to a variational inequality constrained shape optimization problem, where we also create customized targets for increased mesh adaptation of changing embedded shapes and active set boundaries of the constraining variational inequality.