The Belt and Road Initiative (BRI) has had a significant impact on China in political, economic, and cultural terms. This study focuses on the cultural domain, especially on scholarship students from the countries that signed bilateral cooperation agreements with China under the BRI. Using an integrated approach combining the difference-in-differences method and the gravity model, we explore the correlation between the BRI and the increasing number of international scholarship students funded by the Chinese government, as well as the determinants of students' decisions to study in China. Panel data from 2010 to 2018 show that the launch of the BRI has had a positive impact on the number of scholarship students from BRI countries. The number of scholarship recipients from non-BRI countries also increased, but at a much slower rate. The sole exception is the United States, where the numbers of both state-funded and self-funded students have trended downward.
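The difference-in-differences logic behind this result can be sketched in a few lines; the numbers below are invented and only illustrate the estimator, not the study's data.

```python
# Illustrative difference-in-differences (DiD) estimate on a toy panel,
# mirroring the study's setup: BRI vs. non-BRI sending countries,
# before/after the launch. All numbers below are made up.

def did_estimate(panel):
    """panel: list of (group, period, outcome) with group in {'treat','control'}
    and period in {'pre','post'}. Returns the DiD estimate."""
    def mean(group, period):
        vals = [y for g, p, y in panel if g == group and p == period]
        return sum(vals) / len(vals)
    return (mean('treat', 'post') - mean('treat', 'pre')) - \
           (mean('control', 'post') - mean('control', 'pre'))

toy_panel = [
    ('treat', 'pre', 100), ('treat', 'pre', 110),
    ('treat', 'post', 180), ('treat', 'post', 190),
    ('control', 'pre', 100), ('control', 'pre', 100),
    ('control', 'post', 120), ('control', 'post', 130),
]
print(did_estimate(toy_panel))  # (185-105) - (125-100) = 55.0
```

The treatment effect is the extra growth of the treated group over and above the growth the control group would imply.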
The outbreak of the COVID-19 pandemic has also given rise to many conspiracy theories. While the origin of the pandemic in China led some, including former US president Donald Trump, to dub the pathogen the “Chinese virus” and to support anti-Chinese conspiracy narratives, it led Chinese state officials to openly support anti-US conspiracy theories about the “true” origin of the virus. In this article, we study whether nationalism, or more precisely uncritical patriotism, is related to belief in conspiracy theories among ordinary people. Based on group identity theory and motivated reasoning, we hypothesize that, for the particular case of conspiracy theories about the origin of COVID-19, such a relation should be stronger for Chinese than for German respondents. To test this hypothesis, we use survey data from Germany and China, including data from the Chinese community in Germany. We also examine relations to other factors, in particular media consumption and xenophobia.
Despite significant advances in terms of the adoption of formal Intellectual Property Rights (IPR) protection, enforcement of and compliance with IPR regulations remains a contested issue in one of the world's major contemporary economies—China. The present review seeks to offer insights into possible reasons for this discrepancy as well as possible paths of future development by reviewing prior literature on IPR in China. Specifically, it focuses on the public's perspective, which is a crucial determinant of the effectiveness of any IPR regime. It uncovers possible differences from public perspectives in other countries and points to mechanisms (e.g., political, economic, cultural, and institutional) that may foster transitions over time both in formal IPR regulation and in the public perception of and compliance with IPR in China. On this basis, the review advances suggestions for future research to improve scholars' understanding of the public's perspective on IPR in China, its antecedents, and its implications.
Similarity-based retrieval of semantic graphs is a core task of Process-Oriented Case-Based Reasoning (POCBR) with applications in real-world scenarios, e.g., in smart manufacturing. The involved similarity computation is usually complex and time-consuming, as it requires some kind of inexact graph matching. To tackle these problems, we present an approach to modeling similarity measures based on embedding semantic graphs via Graph Neural Networks (GNNs). To this end, we first examine how arbitrary semantic graphs, including node and edge types and their knowledge-rich semantic annotations, can be encoded in a numeric format that is usable by GNNs. Given this, the architecture of two generic graph embedding models from the literature is adapted to enable their usage as a similarity measure for similarity-based retrieval. One of the two models is optimized for fast similarity prediction, while the other targets knowledge-intensive, more expressive predictions. The evaluation examines the quality and performance of these models in preselecting retrieval candidates and in approximating the ground-truth similarities of a graph-matching-based similarity measure for two semantic graph domains. The results show the great potential of the approach for use in a retrieval scenario, either as a preselection model or as an approximation of a graph similarity measure.
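As a toy illustration of the embedding idea (not the authors' GNN architecture), one round of neighbour averaging can stand in for a message-passing layer, with mean pooling and cosine similarity providing a retrieval score:

```python
import math

# Minimal sketch: neighbour averaging as a stand-in for a GNN layer,
# mean pooling of node features into a graph embedding, and cosine
# similarity as the retrieval score. Graph encoding is assumed:
# node -> feature vector, plus a list of directed edges.

def propagate(features, edges):
    """One message-passing step: each node averages itself with its neighbours."""
    out = {}
    for node, feat in features.items():
        neigh = [features[m] for n, m in edges if n == node]
        msgs = [feat] + neigh
        out[node] = [sum(vals) / len(msgs) for vals in zip(*msgs)]
    return out

def graph_embedding(features, edges, rounds=2):
    for _ in range(rounds):
        features = propagate(features, edges)
    # mean pooling over nodes yields one vector per graph
    return [sum(vals) / len(features) for vals in zip(*features.values())]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Two toy graphs with two node "types" vs. a uniform graph:
g1 = ({'a': [1.0, 0.0], 'b': [0.0, 1.0]}, [('a', 'b'), ('b', 'a')])
g2 = ({'x': [1.0, 0.0], 'y': [1.0, 0.0]}, [('x', 'y'), ('y', 'x')])
sim = cosine(graph_embedding(*g1), graph_embedding(*g2))
print(round(sim, 3))  # ~0.707: similar but not identical graphs
```

In a retrieval setting, such scores would be computed between the query-graph embedding and all case-base embeddings, and the top-ranked cases passed on to the exact graph-matching measure.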
A model-based temperature adjustment scheme for wintertime sea-ice production retrievals from MODIS
(2022)
Knowledge of wintertime sea-ice production in Arctic polynyas is an important requirement for estimates of dense water formation, which drives vertical mixing in the upper ocean. Satellite-based techniques incorporating comparatively high-resolution thermal-infrared data from MODIS in combination with atmospheric reanalysis data have proven to be a strong tool for monitoring large and regularly forming polynyas and for resolving narrow thin-ice areas (i.e., leads) along the shelf breaks and across the entire Arctic Ocean. However, the selection of the atmospheric data set has a large influence on the derived polynya characteristics due to its impact on the calculation of the heat loss to the atmosphere, which is determined by the local thin-ice thickness. To overcome this methodological ambiguity, we present a MODIS-assisted temperature adjustment (MATA) algorithm that yields corrections of the 2 m air temperature and hence decreases differences between the atmospheric input data sets. The adjustment algorithm is based on atmospheric model simulations. We focus on the Laptev Sea region for detailed case studies and present time series of polynya characteristics in the winter season 2019/2020. Applying the empirically derived correction decreases the difference between the utilized atmospheric products significantly, from 49% to 23%. Additional filter strategies are applied that aim at increasing the capability to include leads in the quasi-daily and persistence-filtered thin-ice thickness composites. More generally, the winter of 2019/2020 featured high polynya activity in the eastern Arctic and less activity in the Canadian Arctic Archipelago, presumably as a result of the particularly strong polar vortex in early 2020.
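The abstract does not spell out the retrieval chain, but a simplified energy-balance sketch shows where the 2 m air temperature enters: thin-ice thickness follows from equating the conductive flux through the ice with the heat loss to the atmosphere (here reduced to the sensible-heat term only). All constants and input values below are assumed toy numbers, not the MATA algorithm itself.

```python
# Assumed simplified balance: k_ice * (T_freeze - T_surf) / h = Q_atm
#   =>  h = k_ice * (T_freeze - T_surf) / Q_atm,
# with Q_atm approximated by a bulk sensible-heat flux that depends on
# the 2 m air temperature, which is where an air-temperature correction
# propagates into the retrieved thickness.

RHO_AIR = 1.3      # kg m^-3, air density (assumed)
CP_AIR = 1004.0    # J kg^-1 K^-1, specific heat of air
C_H = 1.5e-3       # bulk transfer coefficient (assumed)
K_ICE = 2.03       # W m^-1 K^-1, thermal conductivity of sea ice
T_FREEZE = -1.9    # deg C, freezing point of sea water

def thin_ice_thickness(t_surf, t_air_2m, wind_10m):
    """Thin-ice thickness (m) from an ice-surface temperature and a
    2 m air temperature, via a sensible-heat-only energy balance."""
    q_sensible = RHO_AIR * CP_AIR * C_H * wind_10m * (t_surf - t_air_2m)
    return K_ICE * (T_FREEZE - t_surf) / q_sensible

# A 2 K adjustment of the air temperature visibly shifts the retrieval:
h_cold = thin_ice_thickness(t_surf=-12.0, t_air_2m=-25.0, wind_10m=8.0)
h_adj = thin_ice_thickness(t_surf=-12.0, t_air_2m=-23.0, wind_10m=8.0)
print(round(h_cold, 3), round(h_adj, 3))  # ~0.101 vs. ~0.119 m
```

A warmer (adjusted) air temperature reduces the computed heat loss and therefore yields a thicker retrieved ice cover for the same surface temperature, which is why the choice of atmospheric product matters so much.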
This socio-pragmatic study investigates organisational conflict talk between superiors and subordinates in three medical dramas from China, Germany and the United States. It explores what types of sociolinguistic realities the medical dramas construct by ascribing linguistic behaviour to different status groups. The study adopts an enhanced analytical framework based on John Gumperz’ discourse strategies and Spencer-Oatey’s rapport management theory. This framework detaches directness from politeness, defines directness based on preference and polarity and explains the use of direct and indirect opposition strategies in context.
The findings reveal that the three hospital series draw on 21 opposition strategies, which can be categorised into mitigating, intermediate and intensifying strategies. While the status identity of superiors is commonly characterised by a higher frequency of direct strategies than that of subordinates, both status groups manage conflict in a primarily direct manner across all three hospital shows. The high percentage of direct conflict management is related to the medical context, which is characterised by a focus on transactional goals, complex role obligations and potentially severe consequences of medical mistakes and delays. While the results reveal unexpected similarities between the three series with regard to the level of linguistic directness, cross-cultural differences between the Chinese and the two Western series are evident in particular sociopragmatic conventions. These conventions particularly include the use of humour, imperatives, vulgar language and incorporated verbal and para-verbal/multimodal opposition. Noteworthy differences also appear in the underlying patterns of strategy use. They show that the Chinese series promotes a greater tolerance of hierarchical structures and a partially closer social distance in asymmetrical professional relationships. These disparities are related to different perceptions of power distance, role relationships, face and harmony.
The findings challenge existing stereotypes of Chinese, US American and German conflict management styles and emphasise the context-specific nature of verbal conflict management in every culture. Although cinematic aspects affect the conflict management in the fictional data, the results largely comply with recent research on conflict talk in real-life workplaces. As such, the study contributes to intercultural trainings in medical contexts and provides an enhanced analytical framework for further cross-cultural studies on linguistic strategies.
Extension of an Open GEOBIA Framework for Spatially Explicit Forest Stratification with Sentinel-2
(2022)
Spatially explicit information about forest cover is fundamental for operational forest management and forest monitoring. Although open satellite-based Earth observation data at high spatial resolution (i.e., Sentinel-2, ≤10 m) can cover some information needs, very high-resolution imagery (i.e., aerial imagery, ≤2 m) is needed to generate maps at a scale suitable for regional and local applications. In this study, we present the development, implementation, and evaluation of a Geographic Object-Based Image Analysis (GEOBIA) framework to stratify forests (needleleaved, broadleaved, non-forest) in Luxembourg. The framework is exclusively based on open data and free and open-source geospatial software. While aerial imagery is used to derive image objects with a minimum size of 0.05 ha, Sentinel-2 scenes from 2020 form the basis for random forest classifications in different single-date and multi-temporal feature setups. These setups are compared with each other and used to evaluate the framework against classifications based on features derived from aerial imagery. The highest overall accuracy (89.3%) was achieved with classification on a Sentinel-2-based vegetation index time series (n = 8). Similar accuracies were achieved with classifications based on two (88.9%) or three (89.1%) Sentinel-2 scenes in the greening phase of broadleaved forests. A classification based on color-infrared aerial imagery and derived texture measures achieved an accuracy of only 74.5%. Integrating the texture measures into the Sentinel-2-based classification did not improve its accuracy. Our results indicate that high-resolution image objects can successfully be stratified based on lower-spatial-resolution Sentinel-2 single-date and multi-temporal features, and that those setups outperform classifications based on aerial imagery only.
The conceptual framework of spatially high-resolution image objects enriched with features from lower-resolution imagery facilitates frequent and reliable updates thanks to the higher spectral and temporal resolution. The framework also holds the potential to derive additional information layers (e.g., forest disturbance) as derivatives of the features attached to the image objects, thus providing up-to-date information on the state of the observed forests.
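The object-enrichment idea can be sketched as follows (the data layout and the NDVI feature are assumptions for illustration, not the authors' implementation): per-object means of a Sentinel-2 NDVI series become the feature vectors attached to aerial-imagery-derived objects.

```python
import numpy as np

# Sketch: image objects (a label raster from aerial imagery) are enriched
# with a Sentinel-2 NDVI time series by averaging NDVI per object and
# scene, yielding one feature vector per object for a classifier.

def ndvi(red, nir):
    """Normalized difference vegetation index from reflectances."""
    return (nir - red) / (nir + red)

def object_features(object_ids, red_stack, nir_stack):
    """object_ids: (H, W) label image; *_stack: (T, H, W) reflectances.
    Returns {object_id: [mean NDVI per scene]}."""
    series = ndvi(red_stack, nir_stack)            # (T, H, W)
    feats = {}
    for oid in np.unique(object_ids):
        mask = object_ids == oid                   # (H, W) boolean
        feats[oid] = series[:, mask].mean(axis=1).tolist()
    return feats

# Two objects, two scenes (toy reflectances):
labels = np.array([[1, 1], [2, 2]])
red = np.array([[[0.1, 0.1], [0.3, 0.3]],
                [[0.1, 0.1], [0.2, 0.2]]])
nir = np.array([[[0.5, 0.5], [0.4, 0.4]],
                [[0.6, 0.6], [0.5, 0.5]]])
feats = object_features(labels, red, nir)
print(feats)
```

The resulting per-object feature vectors could then be fed to any classifier (the study uses random forests); the object geometry keeps the aerial-imagery detail while the features carry the Sentinel-2 temporal signal.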
We study planned changes in protective routines after the COVID-19 pandemic: in a survey in Germany among more than 650 respondents, we find that a majority plans to use face masks in certain situations even after the end of the pandemic. We observe that this willingness is strongly related to the perception that there is something to be learned from East Asians’ handling of pandemics, even when controlling for perceived protection by wearing masks. Given strong empirical evidence that face masks help prevent the spread of respiratory diseases, and given the considerable estimated health and economic costs of such diseases even before the pandemic, this would be a very positive side effect of the current crisis.
Soil organic matter (SOM) is an indispensable component of terrestrial ecosystems. Soil organic carbon (SOC) dynamics are influenced by a number of well-known abiotic factors such as clay content, soil pH, or pedogenic oxides. These parameters interact with each other and vary in their influence on SOC depending on local conditions. To investigate the latter, we statistically assessed the dependence of SOC accumulation on parameters and parameter combinations that vary on a local scale with parent material, soil texture class, and land use. To this end, topsoils were sampled from arable and grassland sites in south-western Germany in four regions with different soil parent material. Principal component analysis (PCA) revealed a distinct clustering of data according to parent material and soil texture that varied largely between the local sampling regions, while land use explained the PCA results only to a small extent. The PCA clusters were differentiated into total clusters, which contain the entire dataset or major proportions of it, and local clusters, which represent only a smaller part of the dataset. All clusters were analysed for the relationships between SOC concentrations (SOC %) and mineral-phase parameters in order to identify specific parameter combinations explaining SOC and its labile fractions, hot-water-extractable C (HWEC) and microbial biomass C (MBC). Analyses focused on soil parameters that are known as possible predictors of the occurrence and stabilization of SOC (e.g. fine silt plus clay and pedogenic oxides). For the total clusters, we found significant bivariate relationships between SOC, its labile fractions HWEC and MBC, and the applied predictors. However, the partly low explained variances indicated the limited suitability of bivariate models. Hence, mixed-effects models were used to identify specific parameter combinations that significantly explain SOC and its labile fractions in the different clusters.
Comparing measured and mixed-effects-model-predicted SOC values revealed acceptable to very good coefficients of determination (R² = 0.41–0.91) and low to acceptable root mean square errors (RMSE = 0.20 %–0.42 %). The predictors and predictor combinations clearly differed between models obtained for the whole dataset and for the different cluster groups. At a local scale, site-specific combinations of parameters explained the variability of organic carbon notably better, while applying the total models to local clusters resulted in less explained variance and a higher RMSE. Independently of that, the variance explained by marginal fixed effects decreased in the order SOC > HWEC > MBC, showing that the labile fractions depend less on soil properties and presumably more on processes such as organic carbon input and turnover in soil.
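The reported fit statistics follow from the standard definitions; a minimal sketch with invented SOC values:

```python
# R-squared and RMSE from observed vs. predicted values; the SOC numbers
# below are toy values, not the study's data.

def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def rmse(obs, pred):
    """Root mean square error."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

soc_obs = [1.2, 1.8, 2.5, 3.1, 1.0]    # SOC %, invented
soc_pred = [1.3, 1.7, 2.3, 3.3, 1.1]
print(round(r_squared(soc_obs, soc_pred), 2),
      round(rmse(soc_obs, soc_pred), 2))  # 0.96 0.15
```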
The process of land degradation needs to be understood at various spatial and temporal scales in order to protect ecosystem services and the communities that depend directly on them. This is especially true for regions in sub-Saharan Africa, where socio-economic and political factors exacerbate ecological degradation. This study identifies spatially explicit land change dynamics in the Copperbelt province of Zambia in a local context using satellite vegetation index time series derived from the MODIS sensor. Three sets of parameters, namely monthly series, annual peaking magnitude, and annual mean growing season, were developed for the period 2000 to 2019. Trends were estimated by applying harmonic regression to the monthly series and linear least-squares regression to the annually aggregated series. The estimated spatial trends were further used as a basis to map endemic land change processes. Our observations were as follows: (a) 15% of the study area, predominantly in the east, showed positive trends; (b) 3% of the study area, predominantly in the west, showed negative trends; (c) natural regeneration in mosaic landscapes (post shifting cultivation) and land management in forest reserves were chiefly responsible for positive trends; and (d) degradation of intact miombo woodland and cultivation areas contributed to negative trends. Additionally, lower productivity over areas with semi-permanent agriculture and a shift of new encroachment into woodlands from the east to the west of the Copperbelt were observed. Pivot agriculture was not a main driver of land change. Although overall greening trends prevailed across the study site, the risk of intact woodlands being exposed to various disturbances remains high. The outcome of this study can provide insights into natural and assisted landscape restoration, specifically addressing the miombo ecoregion.
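The two trend estimators mentioned above can be sketched as follows (the exact model forms are assumptions for illustration): a linear least-squares trend for annually aggregated series, and a harmonic regression that adds a seasonal sine/cosine pair for monthly series so the trend coefficient is not contaminated by the seasonal cycle.

```python
import numpy as np

# Sketch of the two trend estimators (assumed forms, toy data):
# 1) linear least squares on an annual series,
# 2) harmonic regression (intercept + trend + one annual harmonic)
#    on a monthly vegetation-index series.

def linear_trend(values):
    t = np.arange(len(values), dtype=float)
    slope, _intercept = np.polyfit(t, values, 1)
    return slope

def harmonic_trend(values, period=12):
    t = np.arange(len(values), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t / period),
                         np.sin(2 * np.pi * t / period)])
    coeffs, *_ = np.linalg.lstsq(X, values, rcond=None)
    return coeffs[1]   # the trend coefficient

# Synthetic monthly vegetation index: slight greening trend + seasonality.
t = np.arange(120)
vi_monthly = 0.4 + 0.001 * t + 0.2 * np.sin(2 * np.pi * t / 12)
print(round(float(harmonic_trend(vi_monthly)), 4))  # recovers ~0.001
```

On the synthetic series the harmonic model recovers the imposed trend despite the strong seasonal cycle, which is the point of using it on monthly data.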
Measurements of dust emissions and the modeling of dissipation dynamics and total values are subject to great uncertainties. Agricultural activity, especially soil cultivation, may be an essential component for calculating and modeling local and regional dust dynamics and may even connect to the global dust cycle. To budget total dust and to assess the impact of tillage, measurements of mobilized and transported dust are an essential but rare basis. In this study, a simple measurement concept with Modified Wilson and Cooke samplers was applied for dust measurements on a small temporal and spatial scale on steep-slope vineyards in the Moselle area. Without mechanical impact, a mean horizontal flux of 0.01 g m−2 min−1 was measured, while row tillage produced a mean horizontal flux of 5.92 g m−2 min−1 of mobilized material and 4.18 g m−2 min−1 of dust emitted from the site (i.e., soil loss). Compared on this singular-event basis, emissions during tillage operations generated 99.89% of the total dust emitted from the site under low mean wind velocities. The results also indicate a differing impact of specific cultivation operations, mulching, and tillage tools, as well as the additional influence of environmental conditions, with the highest emissions on dry soil and with additional wind impact. The dust source function is strongly associated with cultivation operations, implying highly dynamic but also regular and thus predictable and projectable emission peaks of total suspended particles. Detailed knowledge of the effects of mechanical impulses and a reliable quantification of the local dust emission inventory form a basis for analysing risk potential and choosing adequate management options.
The larval stage of the European fire salamander (Salamandra salamandra) inhabits both lentic and lotic habitats. In the latter, larvae are constantly exposed to unidirectional water flow, which has been shown to cause downstream drift in a variety of taxa. In this study, a closed artificial creek, which allowed us to keep the water flow constant over time and, at the same time, to simulate flood events with predefined water quantities and durations, was used to examine the individual movement patterns of marked larval fire salamanders exposed to unidirectional flow. Movements were tracked by individually marking the larvae with VIAlpha tags and by using downstream and upstream traps. Most individuals remained stationary, while downstream drift dominated the overall movement pattern. Upstream movements were rare and occurred only over short distances of about 30 cm; downstream drift distances exceeded 10 m (to the next downstream trap). The simulated flood events increased drift rates significantly, even several days after the flood simulation experiments. Drift probability increased with decreasing body size and decreasing nutritional status. Our results support the production hypothesis as an explanation for the movements of European fire salamander larvae within creeks.
Low-level jets (LLJs) are climatological features in polar regions. It is well known that katabatic winds over the slopes of the Antarctic ice sheet are associated with strong LLJs. Barrier winds occurring, e.g., along the Antarctic Peninsula may also show LLJ structures. A few observational studies show that LLJs occur over sea-ice regions. We present a model-based climatology of the wind field, low-level inversions, and LLJs in the Weddell Sea region of the Antarctic for the period 2002–2016. The sensitivity of the LLJ detection to the selection of the wind speed maximum is investigated. The common criterion of an anomaly of at least 2 m/s is extended by a relative criterion on the wind speed decrease above and below the LLJ. The LLJ frequencies are sensitive to the choice of the relative criterion, particularly when the required relative decrease exceeds 15%. The LLJs are evaluated with respect to the frequency distributions of height, speed, directional shear, and stability for different regions. LLJs are most frequent in the katabatic wind regime over the ice sheet and in barrier wind regions. During winter, katabatic LLJs occur with frequencies of more than 70% in many areas. Katabatic LLJs show a narrow range of heights (mostly below 200 m) and speeds (typically 10–20 m/s), while LLJs over the sea ice cover a broad range of speeds and heights. LLJs are associated with surface inversions or low-level lifted inversions. LLJs in the katabatic wind and barrier wind regions can last several days during winter. The duration of LLJs is sensitive to the LLJ definition criteria. We propose to use only the absolute criterion for model studies.
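The two detection criteria can be sketched for a single wind profile (the profile handling is an assumption for illustration; the thresholds follow the text):

```python
# Sketch of LLJ detection on one vertical profile: the wind speed maximum
# must exceed the minima below and above it by an absolute anomaly
# (default 2 m/s) and, optionally, by a relative decrease.

def detect_llj(heights, speeds, abs_crit=2.0, rel_crit=None):
    """Return (jet_height, jet_speed) or None.
    heights/speeds: lists ordered from the surface upwards."""
    k = max(range(len(speeds)), key=speeds.__getitem__)
    if k == 0 or k == len(speeds) - 1:
        return None                    # maximum at profile edge: no jet
    below = min(speeds[:k])
    above = min(speeds[k + 1:])
    for m in (below, above):
        if speeds[k] - m < abs_crit:
            return None
        if rel_crit is not None and (speeds[k] - m) / speeds[k] < rel_crit:
            return None
    return heights[k], speeds[k]

profile_h = [10, 50, 100, 200, 400, 800]      # m
profile_v = [4.0, 9.0, 14.0, 12.0, 9.0, 11.0]  # m/s
print(detect_llj(profile_h, profile_v))                 # (100, 14.0)
print(detect_llj(profile_h, profile_v, rel_crit=0.40))  # None: decrease above is only ~36%
```

This makes the sensitivity result tangible: tightening the relative criterion rejects jets that the absolute criterion alone would accept.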
Digital technologies have become central to social interaction and accessing goods and services. Development strategies and approaches to governance have increasingly deployed self-labelled ‘smart’ technologies and systems at various spatial scales, often promoted as rectifying social and geographic inequalities and increasing economic and environmental efficiencies. These have been accompanied by similarly digitalized commercial and non-profit offers, particularly within the sharing economy. Concern has grown, however, over possible inequalities linked to their introduction. In this paper we critically analyse sharing economies' contribution to more inclusive, socially equitable and spatially just transitions. Conceptually, this paper brings together literature on sharing economies, smart urbanism and just transitions. Drawing on an explorative database of sharing initiatives within the cross-border region of Luxembourg and Germany, we discuss aspects of sustainability as they relate to distributive justice through spatial accessibility, intended benefits, and their operationalization. The regional analysis shows the diversity of sharing models, how they are appropriated in different ways, and how intent and operationalization matter in terms of potential benefits. The results emphasize the need for more fine-grained, qualitative research revealing who is, and is not, participating in and benefitting from sharing economies.
The present study examined associations between fathers’ masculinity orientation and their anticipated reaction toward their child’s coming out as lesbian or gay (LG). Participants were 134 German fathers (28 to 60 years old) of a minor child. They were asked how they would personally react if, one day, their child disclosed their LG identity to them. As hypothesized, fathers with a stronger masculinity orientation (i.e., adherence to traditional male gender norms, such as independence, assertiveness, and physical strength) reported that they would be more likely to reject their LG child. This association was serially mediated by two factors: fathers’ general anti-LG attitudes (i.e., level of homophobia) and their emotional distress due to their child’s coming out (e.g., feelings of anger, shame, or sadness). The result pattern was independent of the child’s gender or age. The discussion centers on the problematic role of traditional masculinity when it comes to fathers’ acceptance of their non-heterosexual child.
Amphibian diversity in the Amazonian floating meadows: a Hanski core-satellite species system
(2021)
The Amazon catchment is the largest river basin on Earth, and up to 30% of its waters flow across floodplains. In its open waters, floating plants known as floating meadows abound. They can act as vectors of dispersal for their associated fauna and, therefore, can be important for the spatial structure of communities. Here, we focus on amphibian diversity in the Amazonian floating meadows over large spatial scales. We recorded 50 amphibian species over 57 sites, covering around 7000 km along river courses. Using multi-site generalised dissimilarity modelling of zeta diversity, we tested Hanski's core-satellite hypothesis and identified the existence of two functional groups of species operating under different ecological processes in the floating meadows. ‘Core' species are associated with floating meadows, while ‘satellite' species are associated with adjacent environments, being only occasional or accidental occupants of the floating vegetation. At large scales, amphibian diversity in floating meadows is mostly determined by stochastic (i.e. random/neutral) processes, whereas at regional scales, climate and deterministic (i.e. niche-based) processes are central drivers. Compared with the turnover of ‘core' species, the turnover of ‘satellite' species increases much faster with distance and is also controlled by a wider range of climatic features. Distance is not a limiting factor for ‘core' species, suggesting that they have a stronger dispersal ability even over large distances. This is probably related to the existence of passive long-distance dispersal of individuals along rivers via vegetation rafts. In this sense, Amazonian rivers can facilitate dispersal, and this effect should be stronger for species associated with riverine habitats such as floating meadows.
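The zeta-diversity statistic underlying this modelling can be sketched directly from its definition (the site-by-species data below are invented): zeta_k is the mean number of species shared by every combination of k sites, and its decay with k separates widely shared ‘core' species from narrowly distributed ‘satellite' species.

```python
from itertools import combinations

# Exact zeta diversity on a toy presence/absence dataset:
# zeta_k = mean size of the species intersection over all k-site combinations.
# (Exhaustive enumeration; real analyses sample combinations instead.)

def zeta(site_species, k):
    sites = list(site_species.values())
    shared = [len(set.intersection(*combo))
              for combo in combinations(sites, k)]
    return sum(shared) / len(shared)

sites = {
    's1': {'sp1', 'sp2', 'sp3'},
    's2': {'sp1', 'sp2'},
    's3': {'sp1', 'sp4'},
}
# zeta_1 (mean richness) > zeta_2 > zeta_3; species like 'sp1' that
# survive deep intersections behave like 'core' species.
print(zeta(sites, 1), zeta(sites, 2), zeta(sites, 3))
```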
Background
The morphology of anuran larvae is suggested to differ between species with tadpoles living in standing (lentic) and running (lotic) waters. To explore which character combinations within the general tadpole morphospace are associated with these habitats, we studied categorical and metric larval data of 123 Madagascan anurans, one third of which come from lotic environments.
Results
Using univariate and multivariate statistics, we found that certain combinations of fin height, body musculature, and eye size prevail in larvae from either lentic or lotic environments.
Conclusion
Evidence for adaptation to lotic conditions in larvae of Madagascan anurans is presented. While lentic tadpoles typically show narrow to moderate oral discs, small to medium sized eyes, convex or moderately low fins and non-robust tail muscles, tadpoles from lotic environments typically show moderate to broad oral discs, medium to big sized eyes, low fins and a robust tail muscle.
The main focus of this work is to study the computational complexity of generalizations of the synchronization problem for deterministic finite automata (DFAs). This problem asks, for a given DFA, whether there exists a word w that maps every state of the automaton to one and the same state. We call such a word w a synchronizing word. A synchronizing word brings a system from an unknown configuration into a well-defined configuration and thereby resets the system.
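A minimal sketch of this base problem (the DFA encoding as a transition dictionary is an assumption for illustration): checking whether a word synchronizes, and finding a shortest synchronizing word by breadth-first search over subsets of states.

```python
from collections import deque

# Toy encoding: transitions as a dict (state, letter) -> state.
# The subset BFS is exponential in the number of states, which is fine
# for toy automata but of course not a polynomial algorithm.

def apply_word(delta, states, word):
    """Image of a state set under a word."""
    current = set(states)
    for letter in word:
        current = {delta[(q, letter)] for q in current}
    return current

def is_synchronizing(delta, states, word):
    return len(apply_word(delta, states, word)) == 1

def shortest_sync_word(delta, states, alphabet):
    start = frozenset(states)
    seen = {start}
    queue = deque([(start, '')])
    while queue:
        subset, word = queue.popleft()
        if len(subset) == 1:
            return word
        for letter in alphabet:
            nxt = frozenset(delta[(q, letter)] for q in subset)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + letter))
    return None   # the DFA is not synchronizing

states = {0, 1, 2}
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 2,
         (0, 'b'): 0, (1, 'b'): 0, (2, 'b'): 2}
w = shortest_sync_word(delta, states, 'ab')
print(w, is_synchronizing(delta, states, w))  # aa True
```

Reading "aa" from any of the three states ends in state 2, so the word resets this toy automaton regardless of its current configuration.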
We generalize this problem in four different ways.
First, we restrict the set of potential synchronizing words to a fixed regular language associated with the synchronization under regular constraint problem.
The motivation here is to control the structure of a synchronizing word so that, for instance, it first brings the system from an operate mode to a reset mode and then finally again into the operate mode.
The next generalization concerns the order of states in which a synchronizing word transitions the automaton. Here, a DFA A and a partial order R are given as input, and the question is whether there exists a word that synchronizes A and whose induced state order is consistent with R. In this context, we study different ways in which a word can induce an order on the state set.
Then, we shift our focus from DFAs to push-down automata and generalize the synchronization problem first to push-down automata and subsequently to visibly push-down automata. Here, a synchronizing word still needs to map each state of the automaton to one state, but it must additionally fulfill constraints on the stack. We study three types of stack constraints, where after reading the synchronizing word, the stacks associated with each run in the automaton must be (1) empty, (2) identical, or (3) arbitrary.
We observe that the synchronization problem for general push-down automata is undecidable and study restricted sub-classes of push-down automata where the problem becomes decidable. For visibly push-down automata we even obtain efficient algorithms for some settings.
The second part of this work studies the intersection non-emptiness problem for DFAs. This problem is related to the question of whether a given DFA A can be synchronized into a state q, since the set of words synchronizing A into q is the intersection of the languages accepted by copies of A with different initial states and q as their sole final state.
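The classic product construction behind this reduction can be sketched as follows (toy DFAs and encoding assumed): the intersection is non-empty iff a tuple of final states is reachable in the product automaton.

```python
from collections import deque

# BFS on the product automaton: each product state is a tuple holding one
# state per DFA; the intersection is non-empty iff some reachable tuple
# consists entirely of final states.

def intersection_nonempty(dfas, alphabet):
    """dfas: list of (delta, initial, finals) with delta[(state, letter)] -> state."""
    start = tuple(init for _, init, _ in dfas)
    seen = {start}
    queue = deque([start])
    while queue:
        states = queue.popleft()
        if all(q in finals for q, (_, _, finals) in zip(states, dfas)):
            return True
        for letter in alphabet:
            nxt = tuple(delta[(q, letter)]
                        for q, (delta, _, _) in zip(states, dfas))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy DFAs: L(A1) = words with an even number of a's, L(A2) = words ending in b.
d1 = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}
d2 = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
print(intersection_nonempty([(d1, 0, {0}), (d2, 0, {1})], 'ab'))  # True: "b" works
```

The product construction is polynomial for a fixed number of DFAs; the PSPACE-hardness of the general problem comes from the number of automata being part of the input.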
For the intersection non-emptiness problem, we first study how the complexity of this problem, PSPACE-complete in general, behaves when restricted to subclasses of DFAs associated with the two well-known Straubing–Thérien and Cohen–Brzozowski dot-depth hierarchies.
Finally, we study the problem whether a given minimal DFA A can be represented as the intersection of a finite set of smaller DFAs such that the language L(A) accepted by A is equal to the intersection of the languages accepted by the smaller DFAs. There, we focus on the subclass of permutation and commutative permutation DFAs and improve known complexity bounds.
The endemic argan tree (Argania spinosa) populations in southern Morocco are highly degraded due to overbrowsing, illegal firewood extraction and the expansion of intensive agriculture. Bare areas between the isolated trees increase due to limited regrowth; however, it is unknown if the trees influence the soil of the intertree areas. Hypothetically, spatial differences in soil parameters of the intertree area should result from the translocation of litter or soil particles (by runoff and erosion or wind drift) from canopy-covered areas to the intertree areas. In total, 385 soil samples were taken around the tree from the trunk along the tree drip line (within and outside the tree area) and the intertree area between two trees in four directions (upslope, downslope and in both directions parallel to the slope) up to 50 m distance from the tree. They were analysed for gravimetric soil water content, pH, electrical conductivity, percolation stability, total nitrogen content (TN), content of soil organic carbon (SOC) and C/N ratio. A total of 74 tension disc infiltrometer experiments were performed near the tree drip line, within and outside the tree area, to measure the unsaturated hydraulic conductivity. We found that the tree influence on its surrounding intertree area is limited, with, e.g., SOC and TN content decreasing significantly from tree trunk (4.4 % SOC and 0.3 % TN) to tree drip line (2.0 % SOC and 0.2 % TN). However, intertree areas near the tree drip line (1.3 % SOC and 0.2 % TN) differed significantly from intertree areas between two trees (1.0 % SOC and 0.1 % TN) yet only with a small effect. Trends for spatial patterns could be found in eastern and downslope directions due to wind drift and slope wash. Soil water content was highest in the north due to shade from the midday sun; the influence extended to the intertree areas. 
The unsaturated hydraulic conductivity also showed significant differences between areas within and outside the tree area near the tree drip line. This was the case on sites under different land uses (silvopastoral and agricultural), slope gradients and tree densities. Although only limited influence of the tree on its intertree area was found, the spatial pattern around the tree suggests that reforestation measures should target areas around tree shelters in northern or eastern directions, where soil water content, TN or SOC content are higher, to ensure seedling survival, along with measures to prevent overgrazing.
This paper mainly studies two topics: linear complementarity problems for modeling electricity market equilibria and optimization under uncertainty. We consider both perfectly competitive and Nash–Cournot models of electricity markets and study their robustifications using strict robustness and the Γ-approach. For three out of the four combinations of economic competition and robustification, we derive algorithmically tractable convex optimization counterparts that have a clear-cut economic interpretation. In the case of perfect competition, this result corresponds to the two classic welfare theorems, which also apply in both considered robust cases that again yield convex robustified problems. Using the mentioned counterparts, we can also prove the existence and, in some cases, uniqueness of robust equilibria. Surprisingly, it turns out that there is no such economically sensible counterpart for the case of Γ-robustifications of Nash–Cournot models. Thus, an analog of the welfare theorems does not hold in this case. Finally, we provide a computational case study that illustrates the different effects of the combination of economic competition and uncertainty modeling.
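As a toy illustration of the linear complementarity problems (LCPs) mentioned above, and not of the robustified market models themselves, the following sketch solves a small LCP by brute-force enumeration of complementarity patterns; exact rational arithmetic avoids tolerance issues. This is only viable for a handful of variables:

```python
from fractions import Fraction
from itertools import combinations

def solve_linear(A, b):
    """Gauss-Jordan elimination over Fractions; returns x with A x = b,
    or None if A is singular."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lcp_enumerate(M, q):
    """Solve the LCP  z >= 0, w = M z + q >= 0, z^T w = 0  by enumerating
    all 2^n complementarity patterns (which z_i are allowed positive)."""
    n = len(q)
    for k in range(n + 1):
        for basis in combinations(range(n), k):
            z = [Fraction(0)] * n
            if k > 0:
                A = [[M[i][j] for j in basis] for i in basis]
                b = [-q[i] for i in basis]
                zb = solve_linear(A, b)
                if zb is None:
                    continue
                for idx, i in enumerate(basis):
                    z[i] = zb[idx]
            w = [sum(M[i][j] * z[j] for j in range(n)) + q[i]
                 for i in range(n)]
            if all(zi >= 0 for zi in z) and all(wi >= 0 for wi in w):
                return z
    return None
```

For instance, M = [[2, 1], [1, 2]], q = [-5, -6] yields the interior solution z = (4/3, 7/3) with w = 0; the example values are illustrative only.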
Institutional and cultural determinants of speed of government responses during COVID-19 pandemic
(2021)
This article examines institutional and cultural determinants of the speed of government responses during the COVID-19 pandemic. We define the speed as the marginal rate of stringency index change. Based on cross-country data, we find that collectivism is associated with a higher speed of government response. We also find a moderating role of trust in government, i.e., the association of individualism-collectivism with speed is stronger in countries with higher levels of trust in government. We do not find significant predictive power of democracy, media freedom and power distance on the speed of government responses.
The state-of-the-art finite element software Plaxis 3D was applied in a real-world study site of the Turaida castle mound to investigate the slope stability of the mound and understand the mechanisms triggering landslides there. During the simulation, the stability of the castle mound was analysed and the most landslide-susceptible zones of the hillslopes were determined. The 3D finite-element stability analysis has significant advantages over conventional 2D limit-equilibrium methods, where locations of 2D stability sections are arbitrarily selected. Two modelling scenarios of the slope stability were elaborated, considering deep-seated slides in bedrock and shallow landslides in the colluvial material of slopes. The model shows that shallow slides in colluvium are more probable. In the finite-element model, slope failure occurs along the weakest zone in colluvium, similarly to the situation observed in previous landslides in the study site. The physical basis of the model allows results to be obtained that are very close to natural conditions and delivers valuable insight into the triggering mechanisms of landslides.
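For orientation only, a drastically simplified 1D infinite-slope limit-equilibrium check (a Mohr-Coulomb factor of safety, far from the 3D finite-element analysis performed in Plaxis); all parameter values used below are hypothetical, not from the Turaida site:

```python
from math import radians, sin, cos, tan

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg, u=0.0):
    """Factor of safety of an infinite slope (Mohr-Coulomb):
    c        effective cohesion [kPa]
    phi_deg  effective friction angle [deg]
    gamma    unit weight of soil [kN/m^3]
    z        vertical depth of the slip surface [m]
    beta_deg slope angle [deg]
    u        pore water pressure on the slip surface [kPa]
    FS < 1 indicates failure in this simplified model."""
    b = radians(beta_deg)
    sigma_n = gamma * z * cos(b) ** 2      # normal stress on slip plane
    tau = gamma * z * sin(b) * cos(b)      # driving shear stress
    return (c + (sigma_n - u) * tan(radians(phi_deg))) / tau
```

Raising the pore pressure `u` (e.g. after heavy rain) lowers the factor of safety, which is the classic triggering mechanism for shallow slides in colluvium.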
Background: The body-oriented therapeutic approach Somatic Experiencing® (SE) treats posttraumatic symptoms by changing the interoceptive and proprioceptive sensations associated with the traumatic experience. Filling a gap in the landscape of trauma treatments, SE has recently attracted growing interest in research and therapeutic practice.
Objective: To date, there is no literature review of the effectiveness and key factors of SE. This review aims to summarize initial findings on the effectiveness of SE and to outline method-specific key factors of SE.
Method: To gain a first overview of the literature, we conducted a scoping review including studies published until 13 August 2020. We identified 83 articles, of which 16 met the inclusion criteria and were systematically analysed.
Results: Findings provide preliminary evidence for positive effects of SE on PTSD-related symptoms. Moreover, initial evidence suggests that SE has a positive impact on affective and somatic symptoms and measures of well-being in both traumatized and non-traumatized samples. Practitioners and clients identified resource-orientation and the use of touch as method-specific key factors of SE. Yet an overall assessment of study quality as well as a Cochrane risk-of-bias analysis indicate that the overall study quality is mixed.
Conclusions: The results concerning effectiveness and method-specific key factors of SE are promising yet require more support from unbiased RCT research. Future research should focus on filling this gap.
Intense, southward low-level winds are common in Nares Strait, between Ellesmere Island and northern Greenland. The steep topography along Nares Strait leads to channelling effects, resulting in an along-strait flow. This study presents a 30-year climatology of the flow regime from simulations of the COSMO-CLM climate model. The simulations are available for the winter periods (November–April) 1987/88 to 2016/17 and thus cover a period long enough to give robust long-term characteristics of Nares Strait. The horizontal resolution of 15 km is high enough to represent the complex terrain and the meteorological conditions realistically. The 30-year climatology shows that low-level jets (LLJs) associated with gap flows are a climatological feature of Nares Strait. The maximum of the mean 10-m wind speed is around 12 m s-1 and is located at the southern exit of Smith Sound. The wind speed is strongly related to the pressure gradient. Single events reach wind speeds of 40 m s-1 in the daily mean. The LLJs are associated with gap flows within the narrowest parts of the strait under stably stratified conditions, with the main LLJ occurring at 100–250 m height. With increasing mountain Froude number, the LLJ wind speed and height increase. The frequency of strong wind events (>20 m s-1 in the daily mean) for the 10-m wind shows a strong interannual variability, with an average of 15 events per winter. Channelled winds have a strong impact on the formation of the North Water polynya.
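The mountain Froude number referred to above is commonly taken as Fr = U / (N h), with the Brunt-Väisälä frequency N characterizing the stable stratification; conventions vary, and the values below are illustrative rather than taken from the study:

```python
from math import sqrt

G = 9.81  # gravitational acceleration [m s^-2]

def brunt_vaisala(theta, dtheta_dz):
    """Brunt-Vaisala frequency N [s^-1] for potential temperature
    theta [K] and its vertical gradient dtheta_dz [K m^-1]
    (stable stratification: dtheta_dz > 0)."""
    return sqrt(G / theta * dtheta_dz)

def mountain_froude(u, n, h):
    """Non-dimensional mountain Froude number Fr = U / (N h) for
    upstream wind speed u [m s^-1], stability N [s^-1] and
    barrier height h [m]; Fr > 1 favours flow over the barrier."""
    return u / (n * h)
```

With, say, theta = 260 K, a gradient of 0.005 K/m, u = 12 m/s and an 800-m barrier, Fr comes out slightly above 1, i.e. near the transitional regime.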
Introduction: In patients with common variable immunodeficiency (CVID), immunological response is compromised. Knowledge about COVID-19 in CVID patients is sparse. We here synthesize current research addressing the level of threat COVID-19 poses to CVID patients and the best-known treatments.
Method: Review of 14 publications.
Results: The number of CVID patients with moderate to severe (~29%) and critical infection courses (~10%), and the number of fatal cases (~13%), are increased compared to the general picture of COVID-19 infection. However, this might be an overestimate. Systematic cohort-wide studies are lacking, and asymptomatic or mild cases among CVID patients occur that can easily remain unnoticed. Regular immunoglobulin replacement therapy was administered in almost all patients, potentially explaining why the numbers of critical and fatal cases were not higher. In addition, the application of convalescent plasma was demonstrated to have positive effects.
Conclusions: COVID-19 poses an elevated threat to CVID patients. However, only systematic studies can provide robust information on the extent of this threat. Regular immunoglobulin replacement therapy is beneficial to combat COVID-19 in CVID patients, and the best treatment after infection includes the use of convalescent plasma in addition to common medication.
This intervention study explored the effects of a newly developed intergenerational encounter program on cross-generational age stereotyping (CGAS). Based on a biographical-narrative approach, participants (secondary school students and nursing home residents) were invited to share ideas about existential questions of life (e.g., about one’s core experiences, future plans, and personal values). To this end, the dyadic Life Story Interview (LSI) was translated into a group format (the Life Story Encounter Program, LSEP), consisting of ten 90-minute sessions. Analyses verified that LSEP participants of both generations showed more favorable CGAS immediately after, but also 3 months after, the end of the program. Such change in CGAS was absent in a control group (no LSEP participation). The LSEP-driven short- and long-term effects on CGAS could be partially explained by two program benefits, the feeling of comfort with and the experience of learning from the other generation.
Food waste is the origin of major social and environmental issues. In industrial societies, domestic households are the biggest contributors to this problem. But why do people waste food although they buy and value it? Answering this question is essential for designing effective interventions against food waste. So far, however, many interventions have not been based on theoretical knowledge. Integrating food waste literature and ambivalence research, we propose that domestic food waste can be understood via the concept of ambivalence—the simultaneous presence of positive and negative associations towards the same attitude object. In support of this notion, we demonstrated in three pre-registered experiments that people experienced ambivalence towards non-perishable food products with expired best before dates. The experience of ambivalence was in turn associated with an increased willingness to waste food. However, two informational interventions aiming to prevent people from experiencing ambivalence did not work as intended (Experiment 3). We hope that the outlined conceptualization inspires theory-driven research on why and when people dispose of food and on how to design effective interventions.
Background
Identifying pain-related response patterns and understanding functional mechanisms of symptom formation and recovery are important for improving treatment.
Objectives
We aimed to replicate pain-related avoidance-endurance response patterns associated with the Fear-Avoidance Model, and its extension, the Avoidance-Endurance Model, and examined their differences in secondary measures of stress, action control (i.e., dispositional action vs. state orientation), coping, and health.
Methods
Latent profile analysis (LPA) was conducted on self-report data from 536 patients with chronic non-specific low back pain at the beginning of an inpatient rehabilitation program. Measures of stress (i.e., pain, life stress) and action control were analyzed as covariates regarding their influence on the formation of different pain response profiles. Measures of coping and health were examined as dependent variables.
Results
Partially in line with our assumptions, we found three pain response profiles of distress-avoidance, eustress-endurance, and low-endurance responses that are depending on the level of perceived stress and action control. Distress-avoidance responders emerged as the most burdened, dysfunctional patient group concerning measures of stress, action control, maladaptive coping, and health. Eustress-endurance responders showed one of the highest levels of action versus state orientation, as well as the highest levels of adaptive coping and physical activity. Low-endurance responders reported lower levels of stress as well as equal levels of action versus state orientation, maladaptive coping, and health compared to eustress-endurance responders; however, equally low levels of adaptive coping and physical activity compared to distress-avoidance responders.
Conclusions
Apart from the partially supported assumptions of the Fear-Avoidance and Avoidance-Endurance Model, perceived stress and dispositional action versus state orientation may play a crucial role in the formation of pain-related avoidance-endurance response patterns that vary in degree of adaptiveness. Results suggest tailoring interventions based on behavioral and functional analysis of pain responses in order to more effectively improve patients' quality of life.
Evaluation of an eye tracking setup for studying visual attention in face-to-face conversations
(2021)
Many eye tracking studies use facial stimuli presented on a display to investigate attentional processing of social stimuli. To introduce a more realistic approach that allows interaction between two real people, we evaluated a new eye tracking setup in three independent studies in terms of data quality, short-term reliability and feasibility. Study 1 measured the robustness, precision and accuracy for calibration stimuli compared to a classical display-based setup. Study 2 used the identical measures with an independent study sample to compare the data quality for a photograph of a face (2D) and the face of the real person (3D). Study 3 evaluated data quality over the course of a real face-to-face conversation and examined the gaze behavior on the facial features of the conversation partner. Study 1 provides evidence that quality indices for the scene-based setup were comparable to those of a classical display-based setup. Average accuracy was better than 0.4° visual angle. Study 2 demonstrates that eye tracking quality is sufficient for 3D stimuli and robust against short interruptions without re-calibration. Study 3 confirms the long-term stability of tracking accuracy during a face-to-face interaction and demonstrates typical gaze patterns for facial features. Thus, the eye tracking setup presented here seems feasible for studying gaze behavior in dyadic face-to-face interactions. Eye tracking data obtained with this setup achieves an accuracy that is sufficient for investigating behavior such as eye contact in social interactions in a range of populations including clinical conditions, such as autism spectrum and social phobia.
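Accuracy figures like the 0.4° of visual angle above are typically mean angular offsets between the measured gaze direction and the known target direction. A minimal sketch of that computation (not the evaluated setup's actual software):

```python
from math import acos, degrees, sqrt

def angular_error_deg(gaze, target):
    """Angle in degrees between two 3-D direction vectors
    (e.g. measured gaze vector vs. vector to a calibration target)."""
    dot = sum(g * t for g, t in zip(gaze, target))
    ng = sqrt(sum(g * g for g in gaze))
    nt = sqrt(sum(t * t for t in target))
    # clamp for numerical safety before acos
    c = max(-1.0, min(1.0, dot / (ng * nt)))
    return degrees(acos(c))

def mean_accuracy(samples):
    """Mean angular offset [deg] over (gaze, target) vector pairs."""
    errs = [angular_error_deg(g, t) for g, t in samples]
    return sum(errs) / len(errs)
```

A gaze vector that misses a target one metre away by 5 mm corresponds to an offset of roughly 0.29°, i.e. within the sub-0.4° accuracy reported above.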
Optimal mental workload plays a key role in driving performance. Thus, driver-assisting systems that automatically adapt to a driver's current mental workload via brain–computer interfacing might greatly contribute to traffic safety. To design economic brain–computer interfaces that do not compromise driver comfort, it is necessary to identify brain areas that are most sensitive to mental workload changes. In this study, we used functional near-infrared spectroscopy and subjective ratings to measure mental workload in two virtual driving environments with distinct demands. We found that demanding city environments induced both higher subjective workload ratings as well as higher bilateral middle frontal gyrus activation than less demanding country environments. A further analysis with higher spatial resolution revealed a center of activation in the right anterior dorsolateral prefrontal cortex, an area highly involved in spatial working memory processing. Thus, a main component of drivers’ mental workload in complex surroundings might stem from the fact that large amounts of spatial information about the course of the road as well as other road users have to be constantly maintained, processed and updated. We propose that the right middle frontal gyrus might be a suitable region for the application of powerful small-area brain–computer interfaces.
Detection of Preferential Water Flow by Electrical Resistivity Tomography and Self-Potential Method
(2021)
This study explores the hydrogeological conditions of a landslide-prone hillslope in the Upper Mosel valley, Luxembourg. The investigation program included the monitoring of piezometer wells, hydrogeological field tests, analysis of drillcore records, and geophysical surveys. Monitoring and field testing in some of the observation wells indicated very pronounced preferential flow. Electrical resistivity tomography (ERT) and self-potential geophysical methods were employed in the study area for exploration of the morphology of preferential flowpaths. Possible signals associated with flowing groundwater in the subsurface were detected; however, they were diffusively spread over a relatively large zone, which did not allow for the determination of an exact morphology of the conduit. Analysis of drillcore records indicated that flowpaths are caused by the dissolution of thin gypsum interlayers in marls. For better understanding of the site’s hydrogeological settings, a 3D hydrogeological model was compiled. By applying different subsurface flow mechanisms, a hydrogeological model with thin, laterally extending flowpaths embedded in a porous media matrix showed the best correspondence with field observations. Simulated groundwater heads in a preferential flow conduit exactly corresponded with the observed heads in the piezometer wells. This study illustrates how hydrogeological monitoring and geophysical surveys in conjunction with the newest hydrogeological models allow for better conceptualization and parametrization of preferential flow.
Using a dendrochronological approach, we determined the resistance, recovery and resilience of the radial stem increment towards episodes of growth decline, and the accompanying variation of 13C discrimination against atmospheric CO2 (Δ13C) in tree rings of two palaeotropical pine species. These species co-occur in the mountain ranges of south–central Vietnam (1500–1600 m a.s.l.), but differ largely in their areas of distribution (Pinus kesiya from northeast India to the Philippines; P. dalatensis only in south and central Vietnam and in some isolated populations in Laos). For P. dalatensis, a robust growth chronology covering the past 290 years could be set up for the first time in the study region. For P. kesiya, the 140-year chronology constructed was the longest that could be established to date in that region for this species. In the first 40 years of the trees’ lives, the stem diameter increment was significantly larger in P. kesiya, but levelled off and even decreased after 100 years, whereas P. dalatensis exhibited a continuous growth up to an age of almost 300 years. Tree-ring growth of P. kesiya was negatively related to temperature in the wet months and season of the current year and in October (humid transition period) of the preceding year and to precipitation in August (monsoon season), but positively to precipitation in December (dry season) of the current year. The P. dalatensis chronologies exhibited no significant correlation with temperature or precipitation. Negative correlations between BAI and Δ13C indicate a lack of growth impairment by drought in both species. Regression analyses revealed a lower resilience of P. dalatensis upon episodes of growth decline compared to P. kesiya, but, contrary to our hypothesis, mean values of the three sensitivity parameters did not differ significantly between these species. Nevertheless, the vigorous growth of P. kesiya, which does not fall behind that of P. 
dalatensis even at the margin of its distribution area under below-optimum edaphic conditions, is indicative of a relatively high plasticity of this species towards environmental factors compared to P. dalatensis, which, in tendency, is less resilient upon environmental stress even in the “core” region of its occurrence.
In 2014/2015 a one-year field campaign at the Tiksi observatory in the Laptev Sea area was carried out using Sound Detection and Ranging/Radio Acoustic Sounding System (SODAR/RASS) measurements to investigate the atmospheric boundary layer (ABL) with a focus on low-level jets (LLJ) during the winter season. In addition to SODAR/RASS-derived vertical profiles of temperature, wind speed and direction, a suite of complementary measurements at the Tiksi observatory was available. Data of a regional atmospheric model were used to put the local data into the synoptic context. Two case studies of LLJ events are presented. The statistics of LLJs for six months show that in about 23% of all profiles LLJs were present with a mean jet speed and height of about 7 m/s and 240 m, respectively. In 3.4% of all profiles LLJs exceeding 10 m/s occurred. The main driving mechanism for LLJs seems to be the baroclinicity, since no inertial oscillations were found. LLJs with heights below 200 m are likely influenced by local topography.
Perennial energy crops (PECs) are increasingly used as feedstock to produce energy in an environmentally friendly way. Compared to traditional conversion strategies like thermal use, sophisticated technologies such as biomethanation impose different requirements on the feedstock. Whereas the first concept relies on dry, woody material, biomethanation requires a moist feedstock. Thus, over time, the spectrum of species used as PECs has widened. Moreover, harvest dates were adjusted to provide the feedstock at suitable moisture contents. It is well known that perennial, lignocellulose-based energy crops, compared to annual, sugar- and starch-based ones, offer ecological advantages such as, inter alia, improving biodiversity in the landscape, protecting soil against erosion, and protecting groundwater from nutrient inputs. However, one of the main arguments for PEC cultivation has been their undemanding nature concerning external inputs. With respect to the broader spectrum of PEC species and changed harvest dates, the question arises whether the concept of PECs as low-input energy crops is still valid. This also implies the question of suitable growing conditions and sustainable management. The aims of this opinion paper were to classify different PECs according to their life-form strategy, to compare nutrient exports when harvested in different maturation stages, and to discuss the results in the context of sustainable PEC cultivation on marginal land. This study revealed that nutrient exports with the yield biomass of PECs harvested in a green state are in the same range as those of annual energy crops and thus several times higher than those of PECs harvested in a brown state or of woody short-rotation coppices. Thus, PECs cannot universally be claimed to be low-input energy crops. These results also have implications for the cultivation of PECs on marginal land. Finally, the question has to be raised whether the term PEC should in future be specified more precisely in both written and spoken usage.
Surveys play a major role in studying social and behavioral phenomena that are difficult to observe. Survey data provide insights into the determinants and consequences of human behavior and social interactions. Many domains rely on high-quality survey data for decision making and policy implementation, including politics, health, business, and the social sciences. Given a certain research question in a specific context, finding the most appropriate survey design to ensure data quality while keeping fieldwork costs low is a difficult task. The aim of examining survey research methodology is to provide the best evidence to estimate the costs and errors of different survey design options. The goal of this thesis is to support and optimize the accumulation and sustainable use of evidence in survey methodology in four steps:
(1) Identifying the gaps in meta-analytic evidence in survey methodology by a systematic review of the existing evidence along the dimensions of a central framework in the field
(2) Filling in these gaps with two meta-analyses in the field of survey methodology, one on response rates in psychological online surveys, the other on panel conditioning effects for sensitive items
(3) Assessing the robustness and sufficiency of the results of the two meta-analyses
(4) Proposing a publication format for the accumulation and dissemination of meta-analytic evidence
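The meta-analyses in step (2) presumably pool effect sizes across primary studies. A common textbook approach (not necessarily the one used in this thesis) is DerSimonian-Laird random-effects pooling, sketched here:

```python
from math import sqrt

def dl_random_effects(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    effects:   per-study effect estimates
    variances: per-study sampling variances
    Returns (pooled effect, its standard error, tau^2 heterogeneity)."""
    w = [1.0 / v for v in variances]               # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances] # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

With homogeneous inputs tau^2 collapses to zero and the estimate reduces to the fixed-effect (inverse-variance) mean.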
Natural hazards are diverse and unevenly distributed in time and space; understanding their complexity is therefore key to saving human lives and conserving natural ecosystems. Condensing the outputs obtained after each modelling analysis is key to presenting the results to stakeholders, land managers and policymakers. The main goal of this study was therefore to present a method that synthesizes three natural hazards in one multi-hazard map and to evaluate it for hazard management and land use planning. To test this methodology, we took the Gorganrood Watershed, located in the Golestan Province (Iran), as study area. First, an inventory map of three different types of hazards, including floods, landslides, and gullies, was prepared using field surveys and different official reports. To generate the susceptibility maps, a total of 17 geo-environmental factors were selected as predictors using the MaxEnt (Maximum Entropy) machine learning technique. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and calculating the area under the ROC curve (AUC). The MaxEnt model not only performed excellently in terms of goodness of fit but also achieved significant predictive performance. The variable importance for the three studied types of hazards showed that river density, distance from streams, and elevation were the most important factors for floods; lithological units, elevation, and annual mean rainfall were relevant for detecting landslides; and annual mean rainfall, elevation, and lithological units were used for gully erosion mapping in this study area. Finally, by combining the flood, landslide, and gully erosion susceptibility maps, an integrated multi-hazard map was created. The results demonstrated that 60% of the area is subject to hazards, with landslides alone covering up to 21.2% of the whole territory. We conclude that this type of multi-hazard map may be a useful tool for local administrators to identify areas susceptible to hazards at large scales, as we demonstrated in this research.
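The AUC used above to evaluate the susceptibility maps is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen presence cell receives a higher model score than a randomly chosen background cell. A self-contained sketch (hazard presences labelled 1, background 0; ties counted as half):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    scores: model outputs (e.g. MaxEnt suitability values)
    labels: 1 = hazard presence, 0 = background."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every presence outranks every background cell; 0.5 is chance level. This O(n²) pairwise version is fine for illustration, while rank-based implementations scale better.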
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys which aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such types of complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance and gives a first guideline for HFCS data users regarding the use of replicate weight sets for variance estimation using a variant of the bootstrap. A further investigation of the issue necessitates a design-based Monte Carlo simulation study. To this end, the already existing large close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to the generation of synthetic micro data in the context of the extension of a synthetic simulation population that was originally based on a different data source. We propose an additional approach that is suitable for the generation of highly skewed synthetic micro data in such a setting using a multiply-imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we then construct new survey designs for AMELIA that share core features of the HFCS survey designs. 
A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be further analysed in future research.
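Variance estimation with bootstrap replicate weights, as discussed above for the HFCS, follows a simple pattern: re-estimate the statistic under each replicate weight set and average the squared deviations from the full-sample estimate. A sketch for a weighted mean, assuming any rescaling is already folded into the replicate weights (conventions differ across surveys):

```python
def weighted_mean(y, w):
    """Weighted mean of observations y with weights w."""
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

def replicate_variance(y, w_main, w_reps):
    """Bootstrap replicate-weight variance of the weighted mean:
    var = (1/R) * sum_r (theta_r - theta)^2,
    where theta uses the main weights and each theta_r one of the
    R replicate weight sets."""
    theta = weighted_mean(y, w_main)
    thetas = [weighted_mean(y, wr) for wr in w_reps]
    return sum((t - theta) ** 2 for t in thetas) / len(thetas)
```

The same pattern applies to any point estimator, including regression coefficients: re-fit under each replicate weight set instead of recomputing a mean.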
The daily dose of health information: A psychological view on the health information seeking process
(2021)
The search for health information is becoming increasingly important in everyday life and is both socially and scientifically relevant. Previous studies have mainly focused on the design and communication of information. However, the view of the seeker as well as individual differences in skills and abilities have been neglected topics so far. A psychological perspective on the process of searching for health information would provide important starting points for promoting the general dissemination of relevant information and thus improving health behaviour and health status. Within the present dissertation, the process of seeking health information was thus divided into sequential stages to identify relevant personality traits and skills. Accordingly, three studies are presented that each focus on one stage of the process and empirically test potentially crucial traits and skills.
Study I investigates possible determinants of an intention to search comprehensively for health information. Building an intention is considered the basic step of the search process. Motivational dispositions and self-regulatory skills were related to each other in a structural equation model and empirically tested based on theoretical considerations. The model showed an overall good fit, and specific direct and indirect effects of approach and avoidance motivation on the intention to seek comprehensively could be found, which supports the theoretical assumptions. The results show that as early as the formation of intention, the psychological perspective reveals influential personality traits and skills.
Study II deals with the subsequent step, the selection of information sources. The preference for basic characteristics of information sources (i.e., accessibility, expertise, and interaction) is related to health information literacy as a collective term for relevant skills and to intelligence as a personality trait. Furthermore, the study considers the influence of possible over- or underestimation of these characteristics. The results show not only different predictive contributions of health literacy and intelligence, but also the relevance of subjective and objective measurement.
Finally, Study III deals with the selection and evaluation of the health information previously found. The phenomenon of selective exposure is analysed, as it can be considered problematic in the health context. For this purpose, an experimental design was implemented in which a varying health threat was suggested to the participants; relevant information was presented and the selective choice of this information was assessed. Health literacy was tested as a moderator of the effect of the induced threat and perceived vulnerability, which trigger defence motives, on the degree of bias. Findings show the importance of considering these defence motives, which can cause a bias in the form of selective exposure. Furthermore, health literacy even seems to amplify this effect.
Results of the three studies are synthesized and discussed, general conclusions are drawn, and implications for further research are determined.
The ability to acquire knowledge helps humans to cope with the demands of the environment. Supporting knowledge acquisition processes is among the main goals of education. Empirical research in educational psychology has identified several processes through which prior knowledge affects learning. However, the majority of studies investigated cognitive mechanisms mediating between prior knowledge and learning and neglected that motivational processes might also mediate this influence. In addition, the impact of successful knowledge acquisition on patients’ health has not been comprehensively studied. This dissertation aims at closing knowledge gaps on these topics with three studies. The first study is a meta-analysis that examined motivation as a mediator of individual differences in knowledge before and after learning. The second study investigated in greater detail the extent to which motivation mediated the influence of prior knowledge on knowledge gains in a sample of university students. The third study is a second-order meta-analysis synthesizing the results of previous meta-analyses on the effects of patient education on several health outcomes. The findings of this dissertation show that (a) motivation mediates individual differences in knowledge before and after learning; (b) interest and academic self-concept stabilize individual differences in knowledge more than academic self-efficacy, intrinsic motivation, and extrinsic motivation; (c) test-oriented instruction closes knowledge gaps between students; (d) students’ motivation can be independent of prior knowledge in high-aptitude students; (e) knowledge acquisition affects motivational and health-related outcomes; and (f) evidence on prior knowledge and motivation can help develop effective interventions in patient education. The results of the dissertation provide insights into prerequisites, processes, and outcomes of knowledge acquisition. 
Future research should address covariates of learning and environmental impacts for a better understanding of knowledge acquisition processes.
Teamwork is ubiquitous in the modern workplace. However, it is still unclear whether various behavioral economic factors decrease or increase team performance. Therefore, Chapters 2 to 4 of this thesis aim to shed light on three research questions that address different determinants of team performance.
Chapter 2 investigates the idea of an honest workplace environment as a positive determinant of performance. In a work group, two out of three co-workers can obtain a bonus in a dice game. Because a secret die roll can be misreported, cheating without exposure is possible in the game. Contrary to claims about the importance of honesty at work, we do not observe a reduction in the performance of the third co-worker, who is an uninvolved bystander when cheating takes place.
Chapter 3 analyzes the effect of team size on performance in a workplace environment in which either two or three individuals perform a real-effort task. Our main result shows that the difference in team size is not harmful to task performance on average. In our discussion of potential mechanisms, we provide evidence of peer effects: peers appear to be able to alleviate the potential free-rider problem that emerges from working in a larger team.
In Chapter 4, the role of perceived co-worker attractiveness for performance is analyzed. The results show that task performance decreases as the perceived attractiveness of co-workers increases, but only in opposite-sex constellations.
The following Chapter 5 analyzes the effect of offering an additional payment option in a fundraising context. Chapter 6 focuses on privacy concerns of research participants.
In Chapter 5, we conduct a field experiment in which participants have the opportunity to donate for the continuation of an art exhibition either in cash or in cash plus an additional cashless payment option (CPO). The treatment manipulation is completed by framing the act of giving either as a donation or as a pay-what-you-want contribution. Our results show that donors shy away from using the CPO in all treatment conditions. Despite that, the CPO has no negative effect on the frequency of financial support or its magnitude.
In Chapter 6, I conduct an experiment to test whether increased transparency of data processing affects data disclosure and whether the results change if it is indicated that the implementation of the GDPR happened involuntarily. I find that increased transparency raises the number of participants who do not disclose personal data by 21 percent. However, this is not the case in the involuntary-signal treatment, where the share of non-disclosures is relatively high in both conditions.
This thesis contributes to the economic literature on India and specifically focuses on investment project (IP) location choice. I study three topics that naturally arise in sequence: geographic concentration of investment projects, the determinants of the location choices, and the impact these choices have on project success.
In Chapter 2, I analyze the geographic concentration of IPs. I find that investments were concentrated over the period of observation (1996–2015), although the degree of concentration was decreasing. Additionally, I analyze different subsamples of the data set by ownership (Indian private, Indian public, and foreign) and project status (completed or dropped). Foreign projects in all industries are more concentrated than Indian private and public ones, while for the latter two categories I identify only minor differences in concentration levels. Additionally, I find that the location patterns of completed and dropped investments are similar to the overall distribution and to the distributions of their respective industries, with completed IPs being somewhat more concentrated.
In Chapter 3, I study the determinants of project location choices with the focus on an important highway upgrade, the Golden Quadrilateral (GQ). In line with the existing literature, the GQ construction is connected to higher levels of investment in the affected non-nodal GQ districts in 2002–2016. I also provide suggestive evidence on changes in firm behavior after the GQ construction: Firms located in the non-nodal GQ districts became less likely to invest in their neighbor districts after the GQ completion compared to firms located in districts unaffected by the GQ construction.
Finally, in Chapter 4, I investigate the characteristics of IPs that may contribute to discontinuation of their implementation by comparing completed investments to dropped ones, defined as abandoned, shelved, and stalled investments as identified on the date of the data download. Controlling for local and business cycle conditions, as well as various investor and project characteristics, I show that projects located in close proximity to the investor offices (i.e., in the same district) are more likely to achieve the completion stage than more remote projects.
While women's evolving contribution to entrepreneurship is irrefutable, gender disparity remains a reality of entrepreneurship in almost all nations. Its social and economic outcomes make women's entrepreneurship an important area for scholars and governments. In attempts to explain this gender disparity, academic scholars have evaluated various factors and recognised perceptual variables as having outstanding explanatory value for understanding women's entrepreneurship. To advance our knowledge of gender disparity in entrepreneurship, the present study explores the influence of entrepreneurial perceptual variables on women's entrepreneurship and considers the critical role of country-level institutional contexts in women's entrepreneurial propensity. Therefore, this study examines the impact of perceptual variables in different nations. It also offers connections between entrepreneurial perceptions, women's entrepreneurship, and institutional contexts as a critical topic for future studies.
Drawing on the importance of perceptual factors, this dissertation investigates whether and how individuals' perception of entrepreneurial networks influences their decision to initiate a new venture. Prior scholars considered exposure to entrepreneurial role models one of the most influential factors in women's inclination towards entrepreneurship; a systemized analysis thus makes it possible to identify existing research gaps related to this perception. Hence, to draw a clear picture of the relationship between entrepreneurial role models and entrepreneurship, this dissertation provides a systemized overview of prior studies. Subsequently, Chapter 2 structures the existing literature on entrepreneurial role models and reveals that past literature has focused on the different types of role models, the stage of life at which exposure to role models occurs, and the context of the exposure. Current discourse argues that women's lower access to entrepreneurial role models negatively influences their inclination towards entrepreneurship.
Additionally, although the research on women entrepreneurship has proliferated in recent years, little is known about how entrepreneurial perceptual variables form women's propensity towards entrepreneurship in various institutional contexts. The work of Koellinger et al. (2013), hereafter KMS, is one of the most influential papers that investigated the influence of perceptual variables, and it showed that a lower rate of women entrepreneurship is associated with a lower level of their entrepreneurial network, perceived entrepreneurial capability, and opportunity evaluation and with a higher fear of entrepreneurial failure. Thus, this dissertation replicates the work of KMS. Chapter 3 explicitly investigates the influence of the above perceptions on women's entrepreneurial propensity. This research has drawn data from the Global Entrepreneurship Monitor, a cross-national individual-level data set (2001-2006) covering 236,556 individuals across 17 countries. The results of this chapter suggest that gender disparities in entrepreneurial propensity are conditioned by differences in entrepreneurial perceptual variables. Women's lower levels of perceived entrepreneurial capability, entrepreneurial role models and opportunity evaluation and their higher fear of failure lead to lower entrepreneurial propensity.
To extend and generalise the relationship between perceptions and women's entrepreneurial propensity, in Chapter 4, two studies are conducted based on the replicated research. Extension 1 generalises the results of KMS by applying the same analysis to more recent data. Accordingly, this research implemented the same analysis on 372,069 individuals across the same countries (2011-2016). The recent data show that although gender disparity became significantly weaker, the gender gap is still in men's favour. However, similarly to the replicated study, this research revealed that perceptual factors explain a larger part of the gender disparity. To strengthen the prior empirical evidence, in Extension 2, utilising a sample of 1,029,863 individuals from 71 countries (2011-2016), the study applied the same measures and analysis in a more global setting. When developing countries were included, gender disparity in entrepreneurial propensity decreased significantly. The study revealed that the relative significance of the influence of perceptions differs significantly across nations; however, perceptions have a worldwide effect. Moreover, this research found that the ratio of nascent women entrepreneurs in less developed countries to those in more developed nations is 2. More precisely, a higher level of economic development negatively influences the impact of perceptions on women's entrepreneurial propensity.
Whereas prior scholars increasingly underlined the importance of perceptions in explaining a large part of gender disparities in entrepreneurship, most of the prior investigations focused on nascent (early-stage) entrepreneurship, and evidence on the relationship between perceptions and other types of self-employment, such as innovative entrepreneurship, is scant. Innovation is a confirmed key driver of a firm's sustainability, higher competitive capability, and growth. Therefore, Chapter 5 investigates the influence of perceptions on women's innovative entrepreneurship. The chapter points out that entrepreneurial perceptions are the main determinants of the women's decision to offer a new product or service. This chapter also finds that women's innovative entrepreneurship is associated with the country's specific economic setting.
Overall, by underlining the critical role of institutional contexts, this dissertation provides considerable insights into the interaction between perceptions and women's entrepreneurship, and its results have implications for policymakers and practitioners, who may find it helpful to consider the systemic challenges facing women's entrepreneurship. Formal and informal barriers affect women's entrepreneurial perceptions and can differ from one country to another. In this sense, it is crucial to design operational plans to mitigate formal and stereotypical challenges so that more women are able to start a business, particularly in developing countries, in which women comprise a significantly smaller portion of the labour markets. This type of policy could write the "rules of the game" such that these rules enhance women's propensity towards entrepreneurship.
The present work explores how theories of motivation can be used to enhance video game research. Currently, Flow-Theory and Self-Determination Theory are the most common approaches in the field of Human-Computer Interaction. The dissertation provides an in-depth look into Motive Disposition Theory and how to utilize it to explain interindividual differences in motivation. Different players have different preferences and make different choices when playing games, and not every player experiences the same outcomes when playing the same game. I provide a short overview of the current state of the research on motivation to play video games. Next, Motive Disposition Theory is applied in the context of digital games in four different research papers, featuring seven studies, totaling 1197 participants. The constructs of explicit and implicit motives are explained in detail while focusing on the two social motives (i.e., affiliation and power). As dependent variables, behaviour, preferences, choices, and experiences are used in different game environments (i.e., Minecraft, League of Legends, and Pokémon). The four papers are followed by a general discussion about the seven studies and Motive Disposition Theory in general. Finally, a short overview is provided about other theories of motivation and how they could be used to further our understanding of the motivation to play digital games in the future. 
This thesis proposes that 1) Motive Disposition Theory represents a valuable approach to understand individual motivations within the context of digital games; 2) there is a variety of motivational theories that can and should be utilized by researchers in the field of Human-Computer Interaction to broaden the currently one-sided perspective on human motivation; 3) researchers should aim to align their choice of motivational theory with their research goals by choosing the theory that best describes the phenomenon in question and by carefully adjusting each study design to the theoretical assumptions of that theory.
In her poems, Tawada constructs liminal speaking subjects – voices from the in-between – which disrupt entrenched binary thought processes. Synthesising relevant concepts from theories of such diverse fields as lyricology, performance studies, border studies, cultural and postcolonial studies, I develop ‘voice’ and ‘in-between space’ as the frameworks to approach Tawada’s multifaceted poetic output, from which I have chosen 29 poems and two verse novels for analysis. Based on the body speaking/writing, sensuality is central to Tawada’s use of voice, whereas the in-between space of cultures and languages serves as the basis for the liminal ‘exophonic’ voices in her work. In the context of cultural alterity, Tawada focuses on the function of language, both its effect on the body and its role in subject construction, while her feminist poetry follows the general development of feminist academia from emancipation to embodiment to queer representation. Her response to and transformation of écriture féminine in her verse novels transcends the concept of the body as the basis of identity, moving to literary and linguistic, plural self-construction instead. While few poems are overtly political, the speaker’s personal and contextual involvement in issues of social conflict reveal the poems’ potential to speak of, and to, the multiply identified citizens of a globalised world, who constantly negotiate physical as well as psychological borders.
With the ongoing trend towards deep learning in the remote sensing community, classical pixel-based algorithms are often outperformed by convolution-based image segmentation algorithms. This performance has mostly been validated spatially, by splitting training and validation pixels within a given year. Though generalizing models temporally is potentially more difficult, it has been a recent trend to transfer models from one year to another, and therefore to validate temporally. The study argues that it is always important to check both, in order to generate models that are useful beyond the scope of the training data. It shows that convolutional neural networks have the potential to generalize better than pixel-based models, since they do not rely on phenological development alone but can also consider object geometry and texture. The UNET classifier achieved the highest F1 scores, averaging 0.61 on temporal validation samples and 0.77 on spatial validation samples. The theoretical risk of overfitting geometry, i.e. simply memorizing the shapes of maize fields, was shown to be insignificant in practical applications. In conclusion, kernel-based convolutions can contribute substantially to making agricultural classification models more transferable, both to other regions and to other years.
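The spatial-versus-temporal distinction above comes down to how the validation set is split before the F1 score is computed. A minimal, purely illustrative sketch follows; the per-field labels and predictions are hypothetical, not the study's data:

```python
def f1(y_true, y_pred, positive=1):
    """Binary F1 score: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-field maize labels (1 = maize), keyed by year.
labels = {2018: [1, 1, 0, 0, 1, 0], 2019: [1, 0, 0, 1, 1, 0]}

# Spatial validation: predictions on held-out fields from the training year.
pred_spatial = [1, 1, 0, 0, 0, 0]
# Temporal validation: the same model applied to fields of an unseen year.
pred_temporal = [1, 0, 1, 0, 1, 0]

print(round(f1(labels[2018], pred_spatial), 2))   # -> 0.8
print(round(f1(labels[2019], pred_temporal), 2))  # -> 0.67
```

A model that memorized field shapes would score well on the spatial split but degrade on the temporal one, which is why the study checks both.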
Many people are aware of the negative consequences of plastic use for the environment. Nevertheless, they use plastic because of its functionality. In the present paper, we hypothesized that this leads to the experience of ambivalence—the simultaneous existence of positive and negative evaluations of plastic. In two studies, we found that participants showed greater ambivalence toward plastic-packed food than toward unpacked food. Moreover, they evaluated plastic-packed food less favorably than unpacked food. In Study 2, we tested whether one-sided (only positive vs. only negative) information interventions could effectively influence ambivalence. Results showed that ambivalence is resistant to (social) influence. Directions for future research are discussed.
Energy transition strategies in Germany have led to an expansion of energy crop cultivation in the landscape, with silage maize as the most valuable feedstock. The changes in the traditional cropping systems, with increasing shares of maize, raised concerns about the sustainability of agricultural feedstock production with regard to threats to soil health. However, spatially explicit data about silage maize cultivation are missing; thus, implications for soil cannot be estimated in a precise way. With this study, we firstly aimed to track the fields cultivated with maize based on remote sensing data. Secondly, available soil data were target-specifically processed to determine the site-specific vulnerability of the soils to erosion and compaction. The generated, spatially explicit data served as the basis for a differentiated analysis of the development of the agricultural biogas sector, the associated maize cultivation, and its implications for soil health. In the study area, located in a low mountain range region in Western Germany, the number and capacity of biogas-producing units increased by 25 installations and 10,163 kW from 2009 to 2016. The remote sensing-based classification approach showed that the maize cultivation area was expanded by 16% from 7305 to 8447 hectares. Thus, maize cultivation accounted for about 20% of the arable land use, albeit with distinct local differences. Significant shares, about 30%, of the maize cultivation took place on fields that show at least high potential for soil erosion, exceeding 25 t soil ha⁻¹ a⁻¹. Furthermore, about 10% of the maize cultivation took place on fields that pedogenetically show an elevated risk of soil compaction. In order to reach more sustainable cultivation systems of feedstock for anaerobic digestion, changes in cultivated crops and management strategies are urgently required, particularly in view of the first signs of climate change.
The presented approach can regionally be modified in order to develop site-adapted, sustainable bioenergy cropping systems.
The parameterization of ocean/sea-ice/atmosphere interaction processes is a challenge for regional climate models (RCMs) of the Arctic, particularly for wintertime conditions, when small fractions of thin ice or open water cause strong modifications of the boundary layer. The treatment of sea ice and sub-grid flux parameterizations in RCMs is thus of crucial importance. However, verification data sets over sea ice for wintertime conditions are rare. In the present paper, data from the ship-based experiment Transarktika 2019, collected at the end of the Arctic winter over thick one-year ice, are presented. The data are used for the verification of the regional climate model COSMO-CLM (CCLM). In addition, Moderate Resolution Imaging Spectroradiometer (MODIS) data are used for comparison with ice surface temperature (IST) simulations of the CCLM sea ice model. CCLM is used in a forecast mode (nested in ERA5) for the Norwegian and Barents Seas at 5 km resolution and is run with different configurations of the sea ice model and sub-grid flux parameterizations. The use of a new set of parameterizations yields improved results in the comparisons with in-situ data. Comparisons with MODIS IST allow for verification over large areas and also show a good performance of CCLM. Comparisons with twice-daily radiosonde ascents during Transarktika 2019, hourly microwave water vapor measurements of the lowest 5 km of the atmosphere, and hourly temperature profiler data show that CCLM represents the temperature, humidity and wind structure of the whole troposphere very well.
Social innovation has become a widely discussed topic in politics, research funding programs, and business development. Recent European and US economic and science policies have set aside significant funds to generate and foster social innovation. In view of current challenges such as digitization, Work 4.0, inclusion or migrant integration, the question of how organizations can be empowered to develop new and innovative approaches and service models for social challenges is becoming increasingly urgent. This especially applies to organizations in the fields of education and social services. In education, implementing new ideas and concepts is usually discussed as educational reform, which mostly addresses changes in policy agendas with consequences for national and international education systems. The concept of social innovation, however, has a different starting point: the sources of new ideas and services are newly identified or re-conceptualized emergent needs in society. Such need-based perspectives might bring new impulses to the field of education. Therefore, this paper identifies important existing strands of social innovation research, which need to be considered in the emerging academic discourse on social innovation in education. Looking at social innovation through an education research lens reveals the close relation between learning, creativity, and innovation. Individuals, teams, and even organizations learn and engage in creative problem solving to create new and innovative products and services. From an organizational education perspective, the questions arise of how social innovation emerges and, even more importantly, how the process of developing social innovation can be supported. After a brief introduction to the concept of social innovation, the paper therefore discusses the sites where social innovation emerges, social innovators, approaches to fostering social innovation, and promoting and hindering factors for social innovation.
Designing a Randomized Trial with an Age Simulation Suit—Representing People with Health Impairments
(2020)
Due to demographic change, there is an increasing demand for professional care services, whereby this demand cannot be met by available caregivers. To enable adequate care by relieving informal and formal care, the independence of people with chronic diseases has to be preserved for as long as possible. Assistance approaches can be used that support promoting physical activity, which is a main predictor of independence. One challenge is to design and test such approaches without affecting the people in focus. In this paper, we propose a design for a randomized trial to enable the use of an age simulation suit to generate reference data of people with health impairments with young and healthy participants. Therefore, we focus on situations of increased physical activity.
Digitalization primarily takes place in and through organizations. Despite this prominent role, however, the importance of organizational structure-building processes in the digital transformation is still underexposed in the discourse. Because of the attraction of the semantics of the digital revolution, it is regularly overlooked that ongoing digitalization is linked to an established phenomenon with its own logic. The digital revolution and the reordering of societal relationships, though, manifest themselves primarily in processes of reorganization. Structural automation processes in the ongoing digital transformation are limiting the scope for action, necessitating forms of structural structurelessness in organizations that cultivate opportunities for chance. Organizations realize their operations as a dual of structure and individual; the principle of organization is therefore based on the complementarity of structural formality and unpredictable informality. The paper discusses the topicality of the classical form of modern organization in the digital age and reflects on approaches to a contemporary design of spaces of opportunity. The reflexive handling of future openness is the central task of management and leadership in order to enable variation and innovation in organizations.
Primary focal hyperhidrosis (PFH, OMIM %144110) is a genetically influenced condition characterised by excessive sweating. Prevalence varies between 1.0–6.1% in the general population, dependent on ethnicity. The aetiology of PFH remains unclear but an autosomal dominant mode of inheritance, incomplete penetrance and variable phenotypes have been reported. In our study, nine pedigrees (50 affected, 53 non-affected individuals) were included. Clinical characterisation was performed at the German Hyperhidrosis Centre, Munich, by using physiological and psychological questionnaires. Genome-wide parametric linkage analysis with GeneHunter was performed based on the Illumina genome-wide SNP arrays. Haplotypes were constructed using easyLINKAGE and visualised via HaploPainter. Whole-exome sequencing (WES) with 100x coverage in 31 selected members (24 affected, 7 non-affected) from our pedigrees was achieved by next generation sequencing. We identified four genome-wide significant loci, 1q41-1q42.3, 2p14-2p13.3, 2q21.2-2q23.3 and 15q26.3-15q26.3 for PFH. Three pedigrees map to a shared locus at 2q21.2-2q23.3, with a genome-wide significant LOD score of 3.45. The chromosomal region identified here overlaps with a locus at chromosome 2q22.1-2q31.1 reported previously. Three families support 1q41-1q42.3 (LOD = 3.69), two families share a region identical by descent at 2p14-2p13.3 (LOD = 3.15) and another two families at 15q26.3 (LOD = 3.01). Thus, our results point to considerable genetic heterogeneity. WES did not reveal any causative variants, suggesting that variants or mutations located outside the coding regions might be involved in the molecular pathogenesis of PFH. We suggest a strategy based on whole-genome or targeted next generation sequencing to identify causative genes or variants for PFH.
Laboratory landslide experiments enable the observation of specific properties of these natural hazards. However, these observations are limited by the traditional techniques in use: high-speed video analysis and wired sensors (e.g. for displacement). These techniques have the drawback that either only the surface and 2D profiles can be observed, or wires confine the motion behaviour. In contrast, an unconfined observation of the total spatiotemporal dynamics of landslides is needed for an adequate understanding of these natural hazards.
The present study introduces an autonomous and wireless probe to characterize motion features of single clasts within laboratory-scale landslides. The Smartstone probe is based on an inertial measurement unit (IMU) and records acceleration and rotation at a sampling rate of 100 Hz. The recording ranges are ±16 g (accelerometer) and ±2000° s⁻¹ (gyroscope). The plastic tube housing is 55 mm long with a diameter of 10 mm. The probe is controlled, and data are read out, via active radio frequency identification (active RFID) technology. Due to this technique, the probe works under low-power conditions, enabling the use of small button cell batteries and minimizing its size.
Using the Smartstone probe, the motion of single clasts (gravel size, median particle diameter d50 of 42 mm) within approx. 520 kg of a uniformly graded pebble material was observed in a laboratory experiment. Single pebbles were equipped with probes and placed embedded in, or superficially on, the material. In a first analysis step, the data of one pebble are interpreted qualitatively, allowing for the determination of different transport modes, such as translation, rotation and saltation. In a second step, the motion is quantified by means of derived movement characteristics: the analysed pebble moves mainly in the vertical direction during the first motion phase, with a maximal vertical velocity of approx. 1.7 m s⁻¹. A strong acceleration peak of approx. 36 m s⁻² is interpreted as a pronounced hit and leads to a complex rotational-motion pattern. In a third step, displacement is derived and amounts to approx. 1.0 m in the vertical direction. The deviation compared to laser distance measurements was approx. −10%. Furthermore, a full 3D spatiotemporal trajectory of the pebble is reconstructed and visualized, supporting the interpretations. Finally, it is demonstrated that multiple pebbles can be analysed simultaneously within one experiment. Compared to other observation methods, Smartstone probes allow for the quantification of internal movement characteristics and, consequently, motion sampling in landslide experiments.
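Displacement estimates of this kind are typically obtained by double integration of the acceleration record. A minimal sketch, assuming a gravity-corrected vertical acceleration signal sampled at the probe's 100 Hz rate; the input values are hypothetical, not the experiment's data:

```python
def integrate(samples, dt):
    """Cumulative trapezoidal integration of a uniformly sampled signal."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

DT = 0.01  # 100 Hz sampling interval, as used by the probe

# Hypothetical gravity-corrected vertical acceleration (m/s^2): a constant
# -9.81 m/s^2 for 0.2 s approximates a short free-fall (saltation) phase.
accel = [-9.81] * 21

vel = integrate(accel, DT)   # vertical velocity (m/s)
disp = integrate(vel, DT)    # vertical displacement (m)
print(round(vel[-1], 3), round(disp[-1], 4))  # -> -1.962 -0.1962
```

In practice, IMU-derived displacements accumulate drift from sensor bias, which is why the study cross-checks them against laser distance measurements.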
Currently, business models created in the sharing economy differ considerably, and they differ in how trust is formed as well. Whether and how trust can be created is shown by a comparison of two examples that diverge in their founding philosophy. The chosen example of a community-based economy, Community Supported Agriculture (CSA), no longer trusts the capitalist system and therefore distances itself from it and creates its own environment, including a new business model. It is implemented within rather small groups, where trust is created by personal relations and face-to-face communication. By contrast, the example of a platform economy, the accommodation-provider company Airbnb, shows trust in the system and pushes technological innovations through the use of platform applications. It promotes trust and confidence in the progress of technology. For the conceptual analysis, the distinction between personal trust and system trust defined by Niklas Luhmann is adopted. The analysis describes two different modes of trust formation and how they push distrust or improve trust. Grounded in these analyses, assumptions about the process of trust formation within varying models of the sharing economy are formulated, and a hypothesis about possible developments is introduced for further research.
The study analyzes the long-term trends (1998–2019) of concentrations of the air pollutants ozone (O3) and nitrogen oxides (NOx) as well as meteorological conditions at forest sites in German midrange mountains to evaluate changes in O3 uptake conditions for trees over time at a plot scale. O3 concentrations did not show significant trends over the course of 22 years, unlike NO2 and NO, whose concentrations decreased significantly since the end of the 1990s. Temporal analyses of meteorological parameters found increasing global radiation at all sites and decreasing precipitation, vapor pressure deficit (VPD), and wind speed at most sites (temperature did not show any trend). A principal component analysis revealed strong correlations between O3 concentrations and global radiation, VPD, and temperature. Examination of the atmospheric water balance, a key parameter for O3 uptake, identified some unusually hot and dry years (2003, 2011, 2018, and 2019). With the help of a soil water model, periods of plant water stress were detected. These periods were often in synchrony with periods of elevated daytime O3 concentrations and usually occurred in mid and late summer, but occasionally also in spring and early summer. This suggests that drought protects forests against O3 uptake and that, in humid years with moderate O3 concentrations, the O3 flux was higher than in dry years with higher O3 concentrations.
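In the simplest case, trend estimates of the kind reported above reduce to a least-squares slope per pollutant series. A minimal sketch with a hypothetical NO2 series; the abstract does not specify the study's actual trend test, so this is only an illustration:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

years = list(range(1998, 2020))  # the 22-year observation period

# Hypothetical annual mean NO2 concentrations (ppb), declining linearly.
no2 = [30.0 - 0.5 * (y - 1998) for y in years]

print(round(ols_slope(years, no2), 2))  # -> -0.5 (ppb per year)
```

A slope near zero, as found here for O3, would then be judged against its standard error before being called a significant trend.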
Although gravitropism forces trees to grow vertically, stems have been shown to prefer specific orientations. Apart from wind deforming the tree shape, lateral light can result in prevailing inclination directions. In recent years, a species-dependent interaction between gravitropism and phototropism, resulting in trunks leaning down-slope, has been confirmed, but terrestrial investigation of such factors has been limited to small-scale surveys. Airborne laser scanning (ALS) offers the opportunity to investigate trees remotely. This study aims to clarify whether tree trunks detected by ALS can be used to identify prevailing trunk inclinations. In particular, the effects of topography, wind, soil properties and scan direction are investigated empirically using linear regression models. 299,000 significantly inclined stems were investigated. Species-specific prevailing trunk orientations could be observed. About 58% of the inclination and 19% of the orientation could be explained by the linear models, with tree species, tree height, aspect and slope identified as significant factors. The models indicate that deciduous trees tend to lean down-slope, while conifers tend to lean leeward. This study has shown that ALS is suitable for investigating trunk orientation at larger scales. It provides empirical evidence for the effect of phototropism and wind on trunk orientation.
Soil degradation due to erosion is a significant worldwide problem at different spatial (from pedon to watershed) and temporal scales. All stages and factors in the erosion process must be detected and evaluated to reduce this environmental issue and protect existing fertile soils and natural ecosystems. Laboratory studies using rainfall simulators allow single factors and interactive effects to be investigated under controlled conditions during extreme rainfall events. In this study, three main factors (rainfall intensity, inclination, and rainfall duration) were assessed to obtain empirical data for modeling water erosion during single rainfall events. Each factor was divided into three levels (−1, 0, +1), which were applied in different combinations using a rainfall simulator on beds (6 × 1 m) filled with soil from a study plot located in the arid Sistan region, Iran. The rainfall duration levels tested were 3, 5, and 7 min, the rainfall intensity levels were 30, 60, and 90 mm/h, and the inclination levels were 5, 15, and 25%. The results showed that the highest rainfall intensity tested (90 mm/h) applied for the longest duration (7 min) caused the highest runoff (62 mm³/s) and soil loss (1580 g/m²/h). Based on the empirical results, a quadratic function was the best mathematical model (R² = 0.90) for predicting runoff (Q) and soil loss. Single-factor analysis revealed that rainfall intensity was more influential for runoff production than changes in time and inclination, while rainfall duration was the most influential single factor for soil loss. Modeling and three-dimensional depictions of the data revealed that sediment production was high and runoff production was low at the beginning of the experiment, but this trend was reversed over time as the soil became saturated. These results indicate that the initial stage of erosion is critical, so all soil protection measures should be taken to reduce the impact at this stage.
The final stages of erosion appeared too complicated to be modeled, because different factors showed differing effects on erosion.
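The three-level factorial design and the quadratic model can be sketched as follows; a minimal least-squares fit of a full quadratic response surface in the three coded factors (−1, 0, +1). The coefficients here are hypothetical placeholders, not the study's fitted runoff model:

```python
import numpy as np
from itertools import product

# Coded factor levels (-1, 0, +1) for intensity, inclination and duration,
# as in a three-level full factorial rainfall-simulation design (27 runs).
levels = [-1.0, 0.0, 1.0]
X_raw = np.array(list(product(levels, repeat=3)))

def quadratic_design_matrix(X):
    """Full quadratic model: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([
        np.ones(len(X)),            # intercept
        x1, x2, x3,                 # linear terms
        x1 * x2, x1 * x3, x2 * x3,  # two-factor interactions
        x1**2, x2**2, x3**2,        # quadratic terms
    ])

# Hypothetical coefficients standing in for a fitted runoff response.
beta_true = np.array([30.0, 12.0, 4.0, 8.0, 2.0, 3.0, 1.0, -1.5, 0.5, 2.5])
A = quadratic_design_matrix(X_raw)
y = A @ beta_true                  # noiseless synthetic response

# Least-squares fit recovers the coefficients on noiseless data
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

A three-level design is the minimum needed to estimate the squared terms; with two levels only a linear-plus-interaction model would be identifiable.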
Up-to-date information about the type and spatial distribution of forests is an essential element in both sustainable forest management and environmental monitoring and modelling. The OpenStreetMap (OSM) database contains vast amounts of spatial information on natural features, including forests (landuse=forest). The OSM data model includes descriptive tags for its contents, e.g., the leaf type of forest areas (leaf_type=broadleaved). Although the leaf type tag is common, the vast majority of forest areas are tagged with the leaf type mixed, amounting to 87% of the total landuse=forest area in the OSM database. These areas constitute an important information source for deriving and updating forest type maps. In order to leverage this information content, a methodology for the stratification of leaf types inside these areas has been developed, using image segmentation on aerial imagery and subsequent classification of leaf types. The presented methodology achieves an overall classification accuracy of 85% for the leaf types needleleaved and broadleaved in the selected forest areas. The resulting stratification demonstrates that, through approaches such as the one presented, the derivation of forest type maps from OSM would be feasible with an extended and improved methodology. It also suggests that an improved methodology might be able to provide updates of leaf type to the OSM database with contributor participation.
Climate change is expected to cause mountain species to shift their ranges to higher elevations. Due to the decreasing amounts of habitats with increasing elevation, such shifts are likely to increase their extinction risk. Heterogeneous mountain topography, however, may reduce this risk by providing microclimatic conditions that can buffer macroclimatic warming or provide nearby refugia. As aspect strongly influences the local microclimate, we here assess whether shifts from warm south-exposed aspects to cool north-exposed aspects in response to climate change can compensate for an upward shift into cooler elevations.
A lack of ability to inhibit prepotent responses, or more generally a lack of impulse control, is associated with several disorders, such as attention-deficit/hyperactivity disorder and schizophrenia, as well as with general damage to the prefrontal cortex. The stop-signal task (SST) is a reliable and established measure of response inhibition. However, using the SST as an objective assessment in diagnostic or research-focused settings places significant stress on participants, as the task itself requires concentration and cognitive effort and is not particularly engaging. This can lead to decreased motivation to follow task instructions and poor data quality, which can affect assessment efficacy and might increase drop-out rates. Gamification—the application of game-based elements in nongame settings—has been shown to improve engaged attention to a cognitive task, thus increasing participant motivation and data quality.
Ability self-concept (SC) and self-efficacy (SE) are central competence-related self-perceptions that affect students’ success in educational settings. Both constructs show conceptual differences, but their empirical differentiation in higher education has not been sufficiently demonstrated. In the present study, we investigated the empirical differentiation of SC and SE in higher education with N = 1,243 German psychology students (81% female; age M = 23.62 years), taking into account central methodological requirements that, in part, have been neglected in prior studies. SC and SE were assessed at the same level of specificity, only cognitive SC items were used, and multiple academic domains were considered. We modeled the structure of SC and SE, allowing for a multidimensional and/or hierarchical structure, and investigated the empirical differentiation of both constructs on different levels of generality (i.e., domain-specific and domain-general). Results supported the empirical differentiation of SC and SE, with medium-sized positive latent correlations (r = .57–.68) between SC and SE on different levels of generality. Knowledge about the internal structure of students’ SC and SE and the differentiation of both constructs can help to develop construct-specific and domain-specific intervention strategies. Future empirical comparisons of the predictive power of SC and SE can provide further evidence that both represent empirically different constructs.
Sea ice leads represent a key feature of polar regions: by bringing the relatively warm ocean into contact with the cold atmosphere, they control the heat exchange through increased fluxes of turbulent sensible and latent heat. Sea ice leads contribute to sea ice production and are sources for the formation of dense water, which affects the ocean circulation. Atmospheric and ocean models strongly rely on observational data to describe the state of the sea ice, since numerical models are not able to produce sea ice leads explicitly. For the Arctic, some lead datasets are available, but for the Antarctic no such data exist yet. Our study presents a new algorithm with which leads are automatically identified in satellite thermal infrared images. A variety of lead metrics is used to distinguish between true leads and detection artefacts with the use of fuzzy logic. We evaluate the outputs and provide pixel-wise uncertainties. Our data yield daily sea ice lead maps at a resolution of 1 km² for the winter months November–April 2002/03–2018/19 (Arctic) and April–September 2003–2019 (Antarctic), respectively. The long-term averages of the lead frequency distributions show distinct features related to bathymetric structures in both hemispheres.
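The first stage of such a detection, flagging local positive anomalies in the ice surface temperature, might look like the following sketch. The window size, the threshold factor anomaly_k and the brute-force moving window are illustrative choices, not the published algorithm, which adds texture metrics and fuzzy-logic filtering on top:

```python
import numpy as np

def lead_candidates(ist, window=5, anomaly_k=1.5):
    """Flag potential sea-ice leads as local positive anomalies in an
    ice-surface-temperature (IST) image: pixels warmer than the local
    mean by more than anomaly_k local standard deviations."""
    pad = window // 2
    padded = np.pad(ist, pad, mode="reflect")
    h, w = ist.shape
    local_mean = np.empty_like(ist)
    local_std = np.empty_like(ist)
    # brute-force moving window; fine for a sketch, too slow for swath data
    for i in range(h):
        for j in range(w):
            block = padded[i:i + window, j:j + window]
            local_mean[i, j] = block.mean()
            local_std[i, j] = block.std()
    return ist > local_mean + anomaly_k * local_std
```

In the real algorithm, such a candidate mask would then be screened against cloud artefacts before a lead map is produced.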
The parameterization of the boundary layer is a challenge for regional climate models of the Arctic. In particular, the stable boundary layer (SBL) over Greenland, being the main driver for substantial katabatic winds over the slopes, is simulated differently by different regional climate models or using different parameterizations of the same model. However, verification data sets with high-resolution profiles of the katabatic wind are rare. In the present paper, detailed aircraft measurements of profiles in the katabatic wind and automatic weather station data during the experiment KABEG (Katabatic wind and boundary-layer front experiment around Greenland) in April and May 1997 are used for the verification of the regional climate model COSMO-CLM (CCLM) nested in ERA-Interim reanalyses. CCLM is used in a forecast mode for the whole Arctic with 15 km resolution and is run in the standard configuration of SBL parameterization and with modified SBL parameterization. In the modified version, turbulent kinetic energy (TKE) production and the transfer coefficients for turbulent fluxes in the SBL are reduced, leading to higher stability of the SBL. This leads to a more realistic representation of the daily temperature cycle and of the SBL structure in terms of temperature and wind profiles for the lowest 200 m.
Roof and wall slates are fine-grained rocks with slaty cleavage, and it is often difficult to determine their mineral composition. A new norm mineral calculation called slatecalculation allows the determination of a virtual mineral composition based on full chemical analysis, including the amounts of carbon dioxide (CO2), carbon (C), and sulfur (S). Derived norm minerals include feldspars, carbonates, micas, hydro-micas, chlorites, ore-minerals, and quartz. The mineral components of the slate are assessed with superior accuracy compared to the petrographic analysis based on the European Standard EN 12326. The inevitable methodical inaccuracies in the calculations are limited and transparent. In the present paper, slates, shales, and phyllites from worldwide occurrences were examined. This also gives an overview of the rocks used for discontinuous roofing and external cladding.
This study investigated correlative, factorial, and structural relationships between scores for ability emotional intelligence in the workplace (measured with the Geneva Emotional Competence Test) and fluid and crystallized abilities (measured with the Intelligence Structure Battery) in a sample of 188 students. Confirming existing research, recognition, understanding, and management of emotions were related primarily to crystallized ability tests measuring general knowledge, verbal fluency, and knowledge of word meaning. Meanwhile, emotion regulation was the least correlated with any other cognitive or emotional ability. In line with research on the trainability of emotional intelligence, these results may support the notion that emotional abilities are subject to acquired knowledge, where situational (i.e., workplace-specific) emotional intelligence may depend on accumulating relevant experiences.
The nonhydrostatic regional climate model CCLM was used for a long-term hindcast run (2002–2016) for the Weddell Sea region with resolutions of 15 and 5 km and two different turbulence parametrizations. CCLM was nested in ERA-Interim data and used in forecast mode (a suite of consecutive 30 h simulations with 6 h spin-up). We prescribed the sea ice concentration from satellite data and used a thermodynamic sea ice model. The performance of the model was evaluated in terms of temperature and wind using data from Antarctic stations, automatic weather stations (AWSs), an operational forecast model, reanalysis data, and lidar wind profiles. For the reference run, we found a warm bias for the near-surface temperature over the Antarctic Plateau. This bias was removed in the second run by adjusting the turbulence parametrization, which resulted in a more realistic representation of the surface inversion over the plateau but also in a negative bias for some coastal regions. A comparison with measurements over the sea ice of the Weddell Sea by three AWS buoys for one year showed small biases for temperature of around ±1 K and for wind speed of around 1 m s−1. Comparisons with radio soundings showed a model bias around 0 and an RMSE of 1–2 K for temperature and 3–4 m s−1 for wind speed. The comparison of CCLM simulations at resolutions down to 1 km with wind data from Doppler lidar measurements during December 2015 and January 2016 yielded almost no bias in wind speed and an RMSE of ca. 2 m s−1. Overall, CCLM shows a good representation of temperature and wind for the Weddell Sea region. Based on these encouraging results, CCLM at high resolution will be used for the investigation of the regional climate in the Antarctic and of atmosphere–ice–ocean interaction processes in a forthcoming study.
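The verification scores quoted above, bias (mean error) and RMSE, are standard and can be computed as in this minimal sketch for paired model/observation series:

```python
import math

def bias_rmse(model, obs):
    """Pointwise verification scores for paired model output and
    observations: mean error (bias) and root-mean-square error (RMSE)."""
    diffs = [m - o for m, o in zip(model, obs)]
    n = len(diffs)
    bias = sum(diffs) / n                              # mean error
    rmse = math.sqrt(sum(d * d for d in diffs) / n)    # quadratic error
    return bias, rmse
```

Bias can vanish while RMSE stays large, which is why both scores are reported together.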
In order to classify smooth foliated manifolds, i.e. smooth manifolds equipped with a smooth foliation, we introduce the de Rham cohomologies of smooth foliated manifolds. These cohomologies are built in a similar way to the de Rham cohomologies of smooth manifolds. We develop some tools to compute these cohomologies. For example, we prove a Mayer–Vietoris theorem for foliated de Rham cohomology and show that these cohomologies are invariant under integrable homotopy. A generalization of a known Künneth formula, which relates the cohomologies of a product foliation to those of its factors, is discussed. In particular, this involves a splitting theory of sequences between Fréchet spaces and a theory of projective spectra. We also prove that the foliated de Rham cohomology is isomorphic to the Čech-de Rham cohomology and to the Čech cohomology of leafwise constant functions of an underlying so-called good cover.
Soils in forest ecosystems bear a high potential as carbon (C) sinks in the mitigation of climate change. The amount and characteristics of soil organic matter (SOM) are driven by the input, transformation, degradation and stabilization of organic substances. While tree species fuel the C cycle by producing aboveground and belowground litter, soil microorganisms are crucial for litter degradation as well as for the formation and stabilization of SOM. Nonetheless, our knowledge about the effect of tree species on the SOM status is limited, inconsistent and blurred. Investigating tree species effects on SOM is challenging because, in long-established forest ecosystems, the spatial distribution of tree species is a result of the interplay of environmental factors including climate, geomorphology and soil chemistry. Moreover, tree distribution can further vary with forest successional stage and silvicultural management. Since these factors also directly affect the soil C status, it is difficult to identify a pure “tree species effect” on the SOM status at regular forested sites. It therefore remains unclear to what extent tree species-specific litter of differing quality influences the microbially driven turnover and formation of SOM.
Tree species effects on SOM and related soil microbial properties were investigated by examining soil profiles (comprising organic forest floor horizons and mineral soil layers) in different forest stands at the recultivated spoil heap ‘Sophienhöhe’ located at the lignite open-cast mine Hambach near Jülich, Germany. The afforested sites comprised monocultural stands of Douglas fir (Pseudotsuga menziesii), black pine (Pinus nigra), European beech (Fagus sylvatica) and red oak (Quercus rubra) as well as a mixed deciduous stand site planted mainly with hornbeam (Carpinus betulus), lime (Tilia cordata) and common oak (Quercus robur) that were grown for 35 years under identical soil and geomorphological conditions. Because the parent material used for site recultivation was free from organic matter or coal material, the SOM accumulation is entirely the result of in situ soil development due to the impact of tree species.
The first study revealed that tree species had a significant effect on soil organic carbon (SOC) stocks, on the stoichiometric patterns of C, nitrogen (N), sulfur (S), hydrogen (H) and oxygen (O), and on the microbial biomass carbon (MBC) content in the forest floor and the top mineral soil layers (0–5 cm, 5–10 cm, 10–30 cm). In general, forest floor SOC stocks were significantly higher at coniferous forest stands than under deciduous tree species, whereas in the mineral soil layers the differences were smaller. Thus, the impact of tree species decreased with increasing soil depth. By linking the natural abundance of 13C and 15N in the soil depth gradients with C:N and O:C stoichiometry, the second study showed that the differences in SOC stocks and SOM quality resulted from a tree species-dependent turnover of SOM. The significantly higher turnover of organic matter in soils under deciduous tree species was explained to 46% by the quality of litterfall and root inputs (N content, C:N and O:C ratios) and by the initial isotopic signatures of litterfall. Hence, SOM composition and turnover also depend on additional – presumably microbially driven – factors. The results of the third study revealed that differences in SOM composition and related soil microbial properties were linked to different microbial communities. Phospholipid fatty acid (PLFA) patterns in the soil profiles indicated that the supply and availability of C- and nutrient-rich substrates drive the distribution of fungi, Gram-positive (G+) bacteria and Gram-negative (G−) bacteria between tree species and along the soil depth gradients. The fourth study investigated the molecular composition of extractable soil microbial biomass-derived (SMB) and SOM-derived compounds by electrospray ionization Fourier transform ion cyclotron resonance mass spectrometry (ESI-FT-ICR-MS). This was complemented by the analysis of nine monosaccharides of microbial or plant origin.
Microbially derived compounds substantially contributed to SOM and the contribution increased with soil depth. The supply of tree species-specific substrates resulted in different chemical composition of SMB with largest differences between deciduous and coniferous stands. At the same time, microorganisms contributed to SOM resulting in a strong similarity in the composition of SOM and SMB.
Overall, the complex interplay of tree species-specific litter inputs and the ability, activity and efficiency of the associated soil fauna and microbial community in metabolizing the organic substrates leads to significant differences in the amount, distribution, quality and consequently, the stability of SOM. These findings are useful for a targeted cultivation of tree species to optimize soil C sequestration and other forest ecosystems services.
This intervention study explored the effects of a newly developed intergenerational encounter program on cross-generational age stereotyping (CGAS). Based on a biographical-narrative approach, participants (secondary school students and nursing home residents) were invited to share ideas about existential questions of life (e.g., about one’s core experiences, future plans, and personal values). To this end, the dyadic Life Story Interview (LSI) was translated into a group format (the Life Story Encounter Program, LSEP) consisting of ten 90-min sessions. Analyses verified that LSEP participants of both generations showed more favorable CGAS both immediately and 3 months after the end of the program. Such change in CGAS was absent in a control group (no LSEP participation). The short- and long-term effects of the LSEP on CGAS could be partially explained by two program benefits: the feeling of comfort with, and the experience of learning from, the other generation.
Estimation, and therefore prediction -- both in traditional statistics and machine learning -- often encounters problems when performed on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of this additional noise on the estimation procedure depend on the specific probability law of the subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of survey design therein: First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey design is discussed. The focus lies on mixed models, which are able to capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
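The core of design-weighted estimation can be sketched as follows; note this minimal sketch covers only the fixed-effects (weighted least squares) case, not the survey-weighted mixed-model algorithm developed in the thesis:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Design-weighted least squares: solve (X'WX) beta = X'Wy, where W
    holds the survey weights (e.g. inverse inclusion probabilities).
    A simplified stand-in for survey-weighted model estimation."""
    W = np.diag(w)
    XtW = X.T @ W
    return np.linalg.solve(XtW @ X, XtW @ y)
```

With weights equal to inverse inclusion probabilities, the score equations mimic those of the finite population, which is the usual pseudo-likelihood motivation.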
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework in deciding on the reliability of survey data. When the survey design is complex and the number of variables for which an estimated total is required is large, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied here for the first time, both theoretically and empirically.
Third, the self-organizing map -- an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation -- is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not yet been established, and a density estimator is derived. The latter is evaluated by means of a Monte Carlo simulation and then applied to real-world business data.
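The basic sequential training of a self-organizing map can be sketched as follows; a minimal one-dimensional version with illustrative hyperparameters, not the thesis's density-estimation variant:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0):
    """Minimal 1-D self-organizing map with sequential (online) training.
    Codebook vectors are pulled toward each sample, weighted by a
    Gaussian neighborhood around the best-matching unit (BMU)."""
    # initialize codebook vectors along the data range
    w = np.linspace(data.min(), data.max(), n_units)
    idx = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.abs(w - x)))
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma**2))
            w += lr * h * (x - w)                 # neighborhood-weighted update
    return w
```

The neighborhood function is what distinguishes the SOM from plain k-means and is also the ingredient that suggests the Markov-random-field reading mentioned above.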
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic has economic relevance. As retirement entry shifts the labour supply of experienced workers to zero, this issue is particularly relevant for employers and retirees themselves, as well as for policymakers in charge of the design of the pension system. Giving birth has comprehensive economic relevance for women. Parental leave and subsequent part-time work lead to a direct loss of income. Lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has a decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are key determinants not only in private life but also in the work domain, for example in the area of job performance. Furthermore, partnership quality determines the duration of a partnership. And in general, partnerships enable the pooling of (financial) resources compared to being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as from the application of economic theory to non-standard economic research topics. The results of the three chapters show that this multidisciplinary approach yields better predictions of human behaviour than the single disciplines on their own. The results of the first chapter show that both interpersonal conflict with superiors and the individual’s health status play a significant role in retirement decisions. The chapter further contributes to the existing literature by showing the moderating role of health in retirement decision-making: On the one hand, all employees are more likely to retire when they have conflicts with their superior. On the other hand, among healthy employees, the same conflict raises retirement intentions even more.
That means good health is a necessary, but not a sufficient, condition for continued working. It may be that conflicts with superiors raise retirement intentions more if the worker is healthy. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates from evidence from the US; this chapter contrasts it with evidence from Germany. Additionally, the chapter contributes by integrating miscarriages and abortions rather than limiting the analysis to births, and it benefits from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework for analysing sexual well-being. According to this theory, personality may play a dual role by shaping a person’s preferences for sex as well as the person’s behaviour in a sexual relationship. Results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
Although it first emerged in the late 20th century, the Islamic State is arguably the most prominent Islamist insurgent group to have attracted increased international attention in recent years, largely as a result of its significant territorial conquests in Iraq and Syria and the proclamation of its own global caliphate in June 2014 (Tønnessen 2018: 60). While research on the Islamic State's ideology, propaganda, financing, military strategy, recruitment of foreign fighters, and use of the Internet and social media has been conducted extensively across a variety of disciplines, including political science, sociology, media studies, criminology, Islamic studies and history, systematic and in-depth analysis of the Islamic State's rebel governance, though not entirely unexplored, has remained comparatively under-researched.
This thesis builds on the above-mentioned issues and employs existing insights and concepts from Rebel Governance to systematically examine the transformation of the Islamic State’s territorial control into functional governance. In addition, through a comprehensive analysis of Islamic State administrative documents, which are continuously contextualized using secondary literature, this thesis develops a comprehensive portrait of the Islamic State's rebel governance. The following research questions are consequently derived from this approach: in what ways did the Islamic State engage in rebel governance during the height of its territorial control in Iraq and Syria between 2014 and 2017, and how can the utilization of concepts and insights from Rebel Governance, and the qualitative analysis of Islamic State administrative documents, improve our knowledge of the Islamic State's rebel governance and help to generate new insights into it?
Our goal is to approximate energy forms on suitable fractals by discrete graph energies and by energy forms on certain metric measure spaces, using the notion of quasi-unitary equivalence. Quasi-unitary equivalence generalises the two concepts of unitary equivalence and norm resolvent convergence to the case of operators and energy forms defined on varying Hilbert spaces.
More precisely, we prove that the canonical sequence of discrete graph energies (associated with the fractal energy form) converges to the energy form (induced by a resistance form) on a finitely ramified fractal in the sense of quasi-unitary equivalence. Moreover, we allow a perturbation by magnetic potentials and we specify the corresponding errors.
This approach approximates the fractal from within (by an increasing sequence of finitely many points). The natural next step is to ask whether one can also approximate fractals from outside, i.e., by a suitable sequence of shrinking supersets. We partly answer this question by restricting ourselves to a very specific structure of the approximating sets, namely so-called graph-like manifolds that respect the structure of the fractals and of the underlying discrete graphs, respectively. Again, we show that the canonical (properly rescaled) energy forms on such a sequence of graph-like manifolds converge to the fractal energy form (in the sense of quasi-unitary equivalence).
From the quasi-unitary equivalence of energy forms, we conclude the convergence of the associated linear operators, convergence of the spectra and convergence of functions of the operators – thus essentially the same as in the case of the usual norm resolvent convergence.
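For orientation, the shape of the definition can be sketched as follows; a simplified schematic version for energy forms on two Hilbert spaces, stated with generic constants and norms that may differ in detail from the thesis:

```latex
% Energy forms \mathcal{E}_1, \mathcal{E}_2 on Hilbert spaces H_1, H_2,
% with form norms \|f\|_{\mathcal{E}_k}^2 = \|f\|_{H_k}^2 + \mathcal{E}_k(f).
% Schematically, \mathcal{E}_1 and \mathcal{E}_2 are \delta-quasi-unitarily
% equivalent if there are bounded identification operators J\colon H_1 \to H_2
% and J'\colon H_2 \to H_1 such that, for all f, u in the form domains,
\begin{align*}
  \|f - J'Jf\|_{H_1} &\le \delta \|f\|_{\mathcal{E}_1}, &
  \|u - JJ'u\|_{H_2} &\le \delta \|u\|_{\mathcal{E}_2},\\
  |\langle Jf, u\rangle_{H_2} - \langle f, J'u\rangle_{H_1}|
    &\le \delta \|f\|_{H_1}\|u\|_{H_2}, &
  |\mathcal{E}_2(Jf, u) - \mathcal{E}_1(f, J'u)|
    &\le \delta \|f\|_{\mathcal{E}_1}\|u\|_{\mathcal{E}_2}.
\end{align*}
% For \delta = 0 this reduces to unitary equivalence of the forms;
% convergence of a sequence of forms means \delta_n \to 0.
```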
The polar regions are characterized by harsh environmental conditions with extremely cold temperatures and strong winds. Especially during the polar night, temperatures as low as -89.2°C have been observed on the Antarctic Plateau. As a consequence of the strong cooling, the ocean water begins to freeze and ice production starts. The Antarctic Ocean is characterized by pronounced interannual and intra-annual variability, with the ice cover varying between 2.07 * 10^6 km^2 in summer and 20.14 * 10^6 km^2 in winter. Ice production and ice melt influence the atmospheric and oceanic circulation. Dynamic processes lead to the formation of cracks in the ice and ultimately to the development of leads. Leads are elongated fractures that are at least several meters wide and can be hundreds of meters to hundreds of kilometers long. Within these leads, the warm ocean water is in contact with the cold atmosphere, which strongly increases the exchange rates of sensible and latent heat, moisture and gases. Leads contribute to ice production in the polar regions and are a habitat for numerous animals. Leads, the central subject of the presented study, have so far been insufficiently investigated and observed in the Southern Ocean. The aim is therefore to develop an algorithm that automatically identifies leads in remote sensing data. For this purpose, thermal infrared satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used, which is mounted on the two satellites Aqua and Terra and has provided satellite imagery since 2000 (Terra) and 2002 (Aqua), respectively. The individual satellite images contain the ice surface temperature of the MOD/MYD 29 product, which is processed in a two-stage algorithm for the period April to September, 2003 to 2019.
In the first stage, potential leads are identified based on local positive temperature anomalies. Because of artifacts, additional temperature- and texture-based parameters are derived and combined into daily composites. These are used in the second processing stage to separate cloud artifacts from true lead observations. Here, fuzzy logic is applied and an Antarctic-specific configuration is defined, in which selected input data from the first processing level are used to compute a final proxy, the Lead Score (LS). The LS is then converted into an uncertainty estimate by means of manual quality control. The artifacts identified in this way can be used in addition to the MODIS cloud mask.
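A minimal sketch of the first stage, the local positive temperature anomaly test, on a toy ice-surface temperature grid. The window size and threshold here are illustrative placeholders, not the configuration used in the thesis:

```python
import numpy as np

def lead_anomaly_mask(ist, window=5, threshold=2.0):
    """Flag pixels whose ice-surface temperature exceeds the local
    background by more than `threshold` kelvin (a positive anomaly).

    ist: 2-D array of ice-surface temperatures in kelvin.
    window: edge length of the square neighborhood (odd number).
    """
    pad = window // 2
    padded = np.pad(ist, pad, mode="edge")
    h, w = ist.shape
    # Local background: mean over a sliding window around each pixel.
    background = np.empty_like(ist, dtype=float)
    for i in range(h):
        for j in range(w):
            background[i, j] = padded[i:i + window, j:j + window].mean()
    return (ist - background) > threshold

# Toy scene: 240 K background with a warm, elongated lead at 250 K.
scene = np.full((20, 20), 240.0)
scene[10, 2:18] = 250.0
mask = lead_anomaly_mask(scene)
```

The warm row is flagged because it stands out against the windowed background, while homogeneous ice is not; the texture-based parameters and the fuzzy-logic stage of the actual algorithm are not part of this sketch.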
Based on the lead observations, a climatological reference data set is compiled that shows the representative lead distribution in the Antarctic Ocean for the winter months April to September, 2003 to 2019. It reveals that leads occur more systematically in some areas than in others, above all in the regions along the coast, along the continental shelf break, and over some rises and channels in the deep sea. The increased frequencies along the shelf break are of particular interest, and the role of atmospheric and oceanic drivers is investigated. A regional ice-ocean model is used to relate oceanic influences to increased lead frequencies.
The present study also provides a comprehensive overview of the large-scale variability of Antarctic sea ice. Daily sea ice concentration data derived from passive microwave observations for the period 1979 to 2018 are used for the classification. The k-means algorithm is applied to identify ten representative ice classes. The geographical distribution of these classes is presented as a map, in which the typical annual ice cycle of each class becomes visible.
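The classification step can be sketched with a plain k-means (Lloyd's algorithm) on synthetic annual concentration cycles. The two-regime toy data and all parameters below are illustrative, not the thesis's ten-class setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(cycles, k, iters=50):
    """Plain Lloyd's k-means on annual sea-ice concentration cycles.

    cycles: (n_pixels, n_days) array, each row one annual cycle in [0, 1].
    Returns (labels, centroids); each centroid is a class-typical cycle.
    """
    centroids = cycles[rng.choice(len(cycles), size=k, replace=False)]
    for _ in range(iters):
        # Assign each cycle to the nearest centroid (Euclidean distance).
        d = np.linalg.norm(cycles[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned cycles.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = cycles[labels == c].mean(axis=0)
    return labels, centroids

# Toy data: two obvious regimes, perennial ice (~0.9) vs seasonal ice (~0.2).
perennial = 0.9 + 0.01 * rng.standard_normal((50, 365))
seasonal = 0.2 + 0.01 * rng.standard_normal((50, 365))
labels, centroids = kmeans(np.vstack([perennial, seasonal]), k=2)
```

With two well-separated regimes the algorithm recovers the grouping; mapping each pixel's label back onto its coordinates yields the kind of class map described above.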
Changes in the spatial occurrence of the ice classes are identified and interpreted qualitatively. Positive deviations towards higher ice classes are found in the Weddell and Ross Seas and in some regions of East Antarctica, while negative deviations occur in the Amundsen-Bellingshausen Sea. The newly developed Climatological Sea Ice Anomaly Index is used to identify class deviations in the time series. On this basis, three years (1986, 2007, 2014) are selected for a case study and examined in relation to atmospheric data from ERA-Interim and to ice drift data. For 1986 and 2007, specific atmospheric circulation patterns can be identified that influenced the corresponding ice classification; for 2014, no particularly pronounced atmospheric anomalies can be found.
In the future, the ice class data set can complement existing studies and serve for the validation of sea ice models; applications relating to the lead data set are especially promising.
Phylogeographic analyses point to long-term survival on the spot in micro-endemic Lycian salamanders
(2020)
Lycian salamanders (genus Lyciasalamandra) constitute an exceptional case of microendemism of an amphibian on the Asian Minor mainland. These viviparous salamanders are confined to karstic limestone formations along the southern Anatolian coast and some islands. We here study the genetic differentiation within and among 118 populations of all seven Lyciasalamandra species across the entire genus’ distribution. Based on circa 900 base pairs of fragments of the mitochondrial 16S rDNA and ATPase genes, we analysed the spatial haplotype distribution as well as the genetic structure and demographic history of populations. We used 253 geo-referenced populations and CHELSA climate data to infer species distribution models, which we projected onto climatic conditions of the Last Glacial Maximum (LGM). Within all but one species, distinct phyloclades were identified, which only in part matched current taxonomy. Most haplotypes (78%) were private to single populations. Population genetic parameters sometimes showed contradictory results, although in several cases they indicated recent population expansion of phyloclades. Climatic suitability of localities currently inhabited by salamanders was significantly lower during the LGM compared to recent climate. All data indicated a strong degree of isolation among Lyciasalamandra populations, even within phyloclades. Given the sometimes high degree of haplotype differentiation between adjacent populations, they must have survived periods of deteriorated climates during the Quaternary on the spot. However, the alternative explanation of male-biased dispersal combined with pronounced female philopatry can only be excluded if independent nuclear data confirm this result.
At the political level, the financing of micro, small, and medium-sized enterprises (SMEs) has gained great importance in the wake of the European financial and economic crisis, as more than 99% of all European firms belong to this category. In response to the often difficult financing situation of SMEs, which can substantially endanger the innovative capacity and the development of the European economy, dedicated government support programs have been launched. Despite the increased political and academic interest in SME financing, there is little empirical evidence at the European level. In five empirical studies, this dissertation therefore addresses current research gaps regarding the financing of micro, small, and medium-sized enterprises in Europe and new financing instruments for innovative firms and start-ups.
First, based on two empirical studies (Chapters 2 and 3), the status quo of SME financing in Europe is laid out. SME financing in Europe is highly heterogeneous. On the one hand, SMEs are not a homogeneous group: micro (< 10 employees), small (10-49 employees), and medium-sized (50-249 employees) enterprises differ not only in their characteristics but also in their financing options and needs. On the other hand, there are cross-country differences in SME financing in Europe. The results of these two studies (Chapters 2 and 3), which are based on a survey by the European Central Bank and the European Commission (the "SAFE survey"), illustrate this: SMEs in Europe use different financing patterns and employ them as complements or substitutes. The various financing patterns are in turn characterized by firm-, product-, and country-specific characteristics, but also by macroeconomic variables (e.g., inflation rates).
Chapter 3 of the dissertation specifically examines how the financing of micro enterprises differs from that of small and medium-sized enterprises. While small and medium-sized enterprises use a variety of financing instruments in parallel (e.g., subsidized bank loans alongside bank loans, overdrafts, and trade credit), micro enterprises rely on only a few instruments at a time (in particular short-term debt). Consequently, micro enterprises finance themselves either internally or via overdrafts. The results of the dissertation thus show that SME financing is not homogeneous: micro enterprises in particular should be treated as a distinct group within the SME category, with characteristic financing patterns.
Innovative firms and start-ups are considered an important engine of regional economic development. They, too, are frequently associated in the academic literature with financing difficulties that hamper their growth and survival. The second part of the dissertation therefore comprises two empirical studies on this topic. Chapter 4 first examines the regional and firm-specific factors that increase intellectual property output. Regional factors in particular have so far been insufficiently studied, although they are of special relevance for policy makers. The results of this study show that, besides firm size, the receipt of venture capital has a significant influence on the amount of intellectual property. While technical universities play no role for this output, the student rate has a significantly positive effect on the respective intellectual property output. Building on these results, a second study focuses on venture capital as a financing instrument and distinguishes between different VC types: governmental, independent, and corporate venture capital firms. The results show that regions with a supply of qualified human capital in particular attract governmental venture capital investments. Furthermore, corporate and governmental venture capital firms in particular increasingly invest in rural regions.
In recent years, the Initial Coin Offering (ICO) has emerged as a new financing instrument for particularly innovative entrepreneurs, which Chapter 5 examines in more detail. Using a time series analysis, market cycles of ICO campaigns and of Bitcoin and Ether prices are analyzed. The results of this study show that past ICOs positively influence subsequent ICOs. Moreover, ICOs have a negative influence on the cryptocurrencies Bitcoin and Ether, whereas the Bitcoin price has a positive effect on the Ether price.
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion for the solution components, we derive approximations of the original interface problem. The approximate systems differ according to the chosen scaling of the density ratio in their physical behavior allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which makes it possible to use a mean field approach to formulate the continuity equation for the particle probability density function. Coupling the latter with the approximation of the fluid momentum equation then yields a macroscopic suspension description that accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulations involve the solution of large non-linear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener Process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
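The core numerical kernel mentioned above, Newton's method for a nonlinear system, can be sketched on a tiny 2-D example; the systems arising from the discretized beam are of course far larger, and this toy problem is purely illustrative:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a nonlinear system f(x) = 0.

    Each step solves the linear system J(x) dx = -f(x) and updates x,
    giving locally quadratic convergence near a regular root.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# Tiny 2-D example: intersect the unit circle with the line y = x.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[1] - x[0]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [-1.0, 1.0]])
root = newton(f, jac, x0=[1.0, 0.0])
```

From the starting point (1, 0) the iteration converges to (√2/2, √2/2) in a handful of steps; the estimation and smoothing techniques of the thesis aim precisely at reducing the cost of many such solves.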
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
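For intuition, a branching diffusion representation can be sketched for the classical McKean example, the KPP equation u_t = ½u_xx + u² − u with u(0, ·) = g, where u(t, x) = E[∏ᵢ g(Xᵢ(t))] over a branching Brownian motion. This is a much simpler setting than the mixed local-nonlocal nonlinearities of the chapter, and all parameters below are illustrative:

```python
import math, random

def kpp_estimate(g, t, x, n_samples=2000, seed=1):
    """Monte Carlo estimate of u(t, x) for u_t = 0.5*u_xx + u**2 - u,
    u(0, .) = g in [0, 1], via the McKean branching representation:
    particles perform Brownian motion and branch at rate 1 into two
    offspring; the estimator averages prod_i g(X_i(t)).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        particles = [(x, t)]  # (position, remaining time to diffuse)
        prod = 1.0
        while particles:
            pos, remaining = particles.pop()
            lifetime = rng.expovariate(1.0)  # exponential branching clock
            if lifetime >= remaining:
                # Particle survives to time t: diffuse and evaluate g.
                prod *= g(pos + rng.gauss(0.0, math.sqrt(remaining)))
            else:
                # Branch: diffuse to the branching time, spawn two copies.
                pos += rng.gauss(0.0, math.sqrt(lifetime))
                particles.append((pos, remaining - lifetime))
                particles.append((pos, remaining - lifetime))
        total += prod
    return total / n_samples

# Sanity check: with g identically 1 every product is 1, so u = 1 exactly.
est_trivial = kpp_estimate(lambda y: 1.0, t=0.5, x=0.0)
```

For constant g ≡ c, the representation reduces to E[c^N(t)] with N(t) a Yule branching count, which matches the logistic ODE solution c / (c + (1 − c)eᵗ), a convenient way to validate the sampler.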
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to a system of forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008), we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time quasi-variational inequality (QVI), and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
Traditionally, random sample surveys are designed so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rest largely on asymptotic properties. For small sample sizes, as encountered in small areas (domains or subpopulations), these estimation methods are less suitable, which is why dedicated model-based small area estimation methods have been developed for this application. The latter may be biased, but they often have a smaller mean squared error than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, albeit to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, regardless of the sample size. These considerations illustrate that the assumed model, or more precisely the quality of the modeling, is of crucial importance for the quality of small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, massive biases and/or inefficient estimates may result.
This thesis addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and a breakdown point that is as high as possible. Put simply, robust methods are only marginally affected by outliers and other anomalies in the data. The robustness investigation focuses on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation is based on a Gaussian mixed linear model (MLM) with a block-diagonal covariance matrix. In contrast to, for example, a multiple linear regression model, MLMs possess no notable invariance properties, so that contamination of the dependent variable inevitably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which exhibit only a small bias under contaminated data and are considerably more efficient than the maximum likelihood method. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 method (this also applies to the proposal of Sinha and Rao) prove to be notoriously unreliable. In this thesis, the convergence problems of the existing procedures are first discussed, and a numerical method with considerably better numerical properties is then proposed. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical example on the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be regarded as a special case of an MLM with a block-diagonal covariance matrix, with the particularity that the variances of the random effects for the small areas need not be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. In this thesis, however, an alternative approach is developed, starting from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation and formulate an analogous robust Bayesian procedure. If one chooses, in the robust Bayesian formulation, the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior for the location values of the small areas, the resulting Bayes estimator [i.e., the estimator with the smallest Bayes risk] is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because, as a Bayes estimator, it depends on unknown parameters. These unknown parameters can, however, be estimated from the marginal distribution of the dependent variable following the empirical Bayes approach. One has to keep in mind (and this has been neglected in the literature) that under the least favorable prior the marginal distribution is not a normal distribution but is itself described by Huber's (1964) least favorable distribution.
It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with Huber's psi function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that under contaminated data the estimated LTR (with parameters obtained by M-estimation) is optimal, and that the LTR is an integral part of the estimation methodology (rather than an "add-on" or the like, as it is treated elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies show. To achieve robustness also in the presence of influential observations in the independent variables, generalized M-estimators for the Fay-Herriot model were developed.
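The building block shared by these proposals, an M-estimate based on Huber's psi function, can be sketched for the simplest case of a location parameter with known unit scale. This is a toy illustration of the downweighting mechanism, not the Fay-Herriot fitting procedure developed in the thesis:

```python
import statistics

def huber_psi(r, k=1.345):
    """Huber's psi: the identity for small residuals, clipped at +/- k."""
    return max(-k, min(k, r))

def huber_location(y, k=1.345, tol=1e-8, max_iter=200):
    """M-estimate of location with Huber's psi, unit scale assumed.

    Solves sum_i psi(y_i - mu) = 0 by iteratively reweighted averaging:
    each observation gets weight psi(r_i)/r_i, so outliers are
    downweighted instead of dominating the mean.
    """
    mu = statistics.median(y)  # robust starting value
    for _ in range(max_iter):
        weights = []
        for yi in y:
            r = yi - mu
            weights.append(1.0 if abs(r) < 1e-12 else huber_psi(r, k) / r)
        new_mu = sum(w * yi for w, yi in zip(weights, y)) / sum(weights)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

# One gross outlier barely moves the M-estimate, unlike the sample mean.
clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [100.0]
robust = huber_location(contaminated)
```

Here the contaminated sample mean is pulled to about 25, while the Huber M-estimate stays near 10: the bounded psi function is exactly what gives the bounded influence function discussed above.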
In recent decades, Border Studies have gained importance and developed noticeably. This manifests itself in increased institutionalization, a differentiation of the areas of research interest, and a conceptual reorientation that is interested in examining processes. So far, however, little attention has been paid to questions about the (inter)disciplinary self-perception and methodological foundations of Border Studies and the associated consequences for research activities. This thematic issue addresses these desiderata and brings together articles that deal with their (inter)disciplinary foundations as well as method(olog)ical and practical research questions. The authors also provide sound insights into a disparate field of work, disclose practical research strategies, and present methodologically sophisticated systematizations.
B/ordering the Anthropocene: Inter- and Transdisciplinary Perspectives on Nature-Culture Relations
(2020)
In and with this thematic issue we would like to invite you to engage in productive boundary work and to critically examine the relationship between nature and culture in the Anthropocene. A few years ago, the term Anthropocene was proposed by Paul Crutzen as a term for the current geological epoch, in which humankind (the ‘anthropos’) is seen as the central driving force for global changes in ecological systems. This epoch is characterized by the blurring of boundaries between society and nature, science and politics, as well as by the increased drawing of boundaries between social groups, lifestyles, and the Global North and Global South. With this issue, we would like to give an impetus to explore boundary phenomena in the relationship between nature and society, which up to now have not been the focus of Border Studies. The challenges and problems of the Anthropocene require cross-border thinking and research that stimulates a new reflexivity and commitment, to which the multidisciplinary field of Border Studies can contribute.
Human behavior in regard to financial issues has long been explained in the light of the efficient market hypothesis. Following the strict interpretation of this theory, investors in the financial markets take into account that all relevant information is already included in the market price of an asset. Accordingly, information from the past does not affect future prices as all information is instantly incorporated. However, focussing on the actual behavior of humans, our empirical results indicate that the existing market conditions influence the behavior of stock market investors.
In the introductory chapter, we describe the difficulties of the efficient markets hypothesis in explaining the behavior of investors within a strictly rational frame. In the second chapter, we show that investors do consider the previous market development for their upcoming investment decisions. First, stock market patterns with predominantly positive days trigger significantly more trades than patterns with negative days. And second, after recent upward movements, investors sell proportionally more stocks than they buy. In the third chapter, we expound a theoretical framework that connects investment-related triggers of arousal, such as the performance of own stocks and the general market environment, with investors’ risk appetite in the decision-making processes. Our model predicts that aroused investors accept higher risks by holding stocks longer in comparison to their less aroused peers. In the fourth chapter, we show how two extreme market environments, the bull and the bear market, affect the disposition effect and especially learning to avoid this behavioral bias. Investors are subject to the bias in each market phase but with a far stronger propensity during the bear market. However, we show that investors also make the greatest progress in avoiding the disposition effect during this period.
These results suggest that future studies about investors’ behavior in the financial markets should consider the market environment as an important determinant.
This thesis sheds light on the heterogeneous hedging behavior of airlines. The focus lies on financial hedging, operational hedging and selective hedging. The unbalanced panel data set includes 74 airlines from 39 countries. The period of analysis is 2005 until 2014, resulting in 621 firm years. The random effects probit and fixed effects OLS models provide strong evidence of a convex relation between derivative usage and a firm’s leverage, opposing the existing financial distress theory. Airlines with lower leverage had higher hedge ratios. In addition, the results show that airlines with interest rate and currency derivatives were more likely to engage in fuel price hedging. Moreover, the study results support the argument that operational hedging is a complement to financial hedging. Airlines with more heterogeneous fleet structures exhibited higher hedge ratios.
Also, airlines which were members of a strategic alliance were more likely to be hedging airlines. As alliance airlines are rather financially sound, the positive relation between alliance membership and hedging mirrors the negative result on the leverage ratio. Lastly, the study presents determinants of an airline's selective hedging behavior. Airlines with prior-period derivative losses, recognized in income, changed their hedge portfolios more frequently. Moreover, the sample airlines acted in accordance with herd behavior theory: changes in the regional hedge portfolios influenced the hedge portfolio of the individual airline in the same direction.
The formerly communist countries in Central and Eastern Europe (transitional economies in Europe and the Soviet Union, e.g., East Germany, the Czech Republic, Hungary, Lithuania, Poland, and Russia) and the transitional economies in Asia (e.g., China and Vietnam) had centrally planned economies, which did not allow entrepreneurial activity. Despite the political and socioeconomic transformations in transitional economies around 1989, these countries retain an institutional heritage that affects individuals' values and attitudes, which in turn influence intentions, behaviors, and actions, including entrepreneurship.
While prior studies on the long-lasting effects of socialist legacy on entrepreneurship have focused on limited geographical regions (e.g., East-West Germany, and East-West Europe), this dissertation focuses on the Vietnamese context, which offers a unique quasi-experimental setting. In 1954, Vietnam was divided into the socialist North and the non-socialist South, and it was then reunified under socialist rule in 1975. Thus, the intensity of differences in socialist treatment in North-South Vietnam (about 21 years) is much shorter than that in East-West Germany (about 40 years) and East-West Europe (about 70 years when considering former Soviet Union countries).
To assess the relationship between socialist history and entrepreneurship in this unique setting, we survey more than 3,000 Vietnamese individuals. This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and are less likely to take over an existing business than those from the South of Vietnam. The long-lasting effect of formerly socialist institutions on entrepreneurship thus appears even deeper than previously found in the prominent cases of East-West Germany and East-West Europe.
In the second empirical investigation, this dissertation focuses on how succession intentions differ from other career intentions (e.g., founding and employee intentions) with regard to career choice motivation, and on the effect of the three main elements of the theory of planned behavior (entrepreneurial attitude, subjective norms, and perceived behavioral control) in a transition economy, Vietnam. The findings of this thesis suggest that intentional founders are characterized by innovation motives, intentional successors by role motives, and intentional employees by a social mission. Additionally, this thesis reveals that entrepreneurial attitude and perceived behavioral control are positively associated with founding intention, whereas this effect does not differ between succession and employee intentions.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate feasible solutions and select the best one. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method for solving graph problems is to formulate integer linear programs such that their solution yields an optimal solution to the original optimization problem in the graph. In order to solve integer linear programs, one can start by relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program with a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
Many classes of valid inequalities are of exponential size. For instance, a graph can have exponentially many odd cycles in general and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes possible to check in polynomial time if some given point violates any of the exponentially many inequalities. This is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate one of the (potentially exponentially many) inequalities. This thesis can be divided into two parts. The first part is the main part and it contains various new results. We present new extended formulations for several optimization problems, i.e. the maximum stable set problem, the nonconvex quadratic program with box
constraints and the p-median problem. In the second part, we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We suggest three alternative versions of this algorithm and compare their strengths and weaknesses against the original version.
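The idea of enumerating solutions while discarding fruitless parts of the search space can be sketched for the maximum clique problem. This is a generic branch-and-bound skeleton for illustration only, not the fast sparse-graph algorithm modified in the thesis:

```python
def max_clique(adj):
    """Simple branch and bound: grow a partial clique by candidate
    vertices, pruning branches that cannot beat the best clique found.
    adj maps each vertex to the set of its neighbours."""
    best = []

    def expand(clique, candidates):
        nonlocal best
        if len(clique) > len(best):
            best = clique[:]
        for i, v in enumerate(candidates):
            # Bound: even taking all remaining candidates cannot improve.
            if len(clique) + len(candidates) - i <= len(best):
                return
            expand(clique + [v],
                   [u for u in candidates[i + 1:] if u in adj[v]])

    expand([], sorted(adj))
    return best
```

The pruning test compares the size of the current clique plus the number of remaining candidates against the best clique found so far and abandons branches that cannot improve on it; the restriction of the candidate list to neighbours of `v` keeps every enumerated set a clique.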
The world's second-oldest system of judicial review of national legislation emerged through court practice in the very first years after the adoption of the Constitution of Norway in 1814. The review is exercised by the ordinary courts at all levels, with the single Supreme Court as the last instance; no specialized constitutional court has been established. The independence of the judiciary is generally recognized as high. But what degree of legitimacy should judges, appointed to conduct ordinary judicial business, enjoy when exercising a basically political function such as reviewing and possibly setting aside acts of Parliament?
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In the following dissertation, I address these two issues in a series of ten studies. Firstly, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Secondly, I developed and empirically tested pictures to specifically assess the implicit autonomy motive and examined their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of the studies solidified the foundation of the OMT and how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
Imagery-based techniques have received increasing interest in psychotherapy research. Whereas their effectiveness has been shown for various psychological disorders, their underlying mechanisms remain unclear. Current research predominantly investigates intrapersonal processes, while interpersonal processes have received no attention to date. The aim of the current dissertation was to fill this lacuna. The three interrelated studies comprising this dissertation were the first to examine the effectiveness of imagery-based techniques in the treatment of test anxiety, relate physiological arousal to emotional processing, and investigate the association between physiological synchrony and multiple process measures.
Study I investigated the feasibility of a newly developed protocol, which integrates imagery-based and cognitive-behavioral components, to treat test anxiety in a sample of 31 students. The results indicated the protocol as acceptable, feasible, and effective in the treatment of test anxiety. Additionally, the imagery-based component was positively associated with therapeutic bond, session evaluation, and emotional experience.
Study II shifted the focus from the effectiveness of imagery-based techniques to client-therapist physiological synchrony as a putative mechanism of change in the same sample. The results suggested that physiological synchrony was greater than chance during both imagery-based and cognitive-behavioral components. Variability of physiological synchrony on the session-level during the imagery-based components and variability on both levels (session and dyad) during the cognitive-behavioral components were demonstrated. Furthermore, physiological synchrony of the imagery-based segments was positively associated with therapeutic bond. No association was found for the cognitive-behavioral components.
Study III examined both intrapersonal (i.e., clients’ electrodermal activity) and interpersonal (i.e., client-therapist electrodermal activity synchrony) processes and their associations with emotional processing in a sample of 49 client-therapist-dyads. The results suggested that higher client physiological arousal and a moderate level of physiological synchrony were associated with deeper emotional processing.
Taken together, the results highlight the effectiveness of imagery-based techniques in the treatment of test anxiety. Furthermore, the results of Studies II and III support the idea of physiological synchrony as a mechanism of change in imagery with and without rescripting. The current dissertation takes an important step towards optimizing process research within psychotherapy and contributes to a better understanding of the potency and mechanisms of change of imagery-based techniques. We hope that these studies’ implications will support everyday clinical practice.
Data used for the purpose of machine learning are often erroneous. In this thesis, p-quasinorms (p<1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo-rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
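A minimal sketch of such a loss function in plain NumPy (the thesis's actual training setup, the Armijo rule, and the proximal point and Frank-Wolfe solvers are not reproduced here):

```python
import numpy as np

def p_quasinorm_loss(residuals, p=0.5):
    """Sum of |r_i|^p with 0 < p < 1. Because t^p grows sub-linearly,
    a single large (erroneous) residual dominates the total loss far
    less than under the squared loss, which is the source of the
    robustness to faulty training data."""
    return float(np.sum(np.abs(residuals) ** p))
```

For residuals (0.1, 10.0), for example, the squared loss is 100.01 and almost entirely driven by the outlier, while the p = 0.5 loss is about 3.48; the price is that the loss is non-convex and non-smooth at zero, which is what motivates the enhanced optimization algorithms.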
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study of the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the profound analysis of the algebraic setting after space-discretization in relation to the infinite-dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated with the network character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Amongst the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model-order- and complexity-reduction-step represents a major focus of this work. Similar as in the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from literature are not applicable due to our rather unconventional parametrization of the solution.
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Then moment matching corresponds to a one-dimensional interpolation problem rather than to a multi-dimensional interpolation as for the multivariate approaches, i.e., it involves fewer interpolation frequencies to be chosen. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussions on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g. what the optimal common liability is. Other questions arise for the time after the introduction: the impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified, which would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
The present thesis is devoted to a construction which defies generalisations about the prototypical English noun phrase (NP) to such an extent that it has been termed the Big Mess Construction (Berman 1974). As illustrated by the examples in (1) and (2), the NPs under study involve premodifying adjective phrases (APs) which precede the determiner (always realised in the form of the indefinite article a(n)) rather than following it.
(1) NoS had not been hijacked – that was too strong a word. (BNC: CHU 1766)
(2) He was prepared for a battle if the porter turned out to be as difficult a customer as his wife. (BNC: CJX 1755)
Previous research on the construction is largely limited to contributions from the realms of theoretical syntax and a number of cursory accounts in reference grammars. No comprehensive investigation of its realisations and uses has as yet been conducted. My thesis fills this gap by means of an exhaustive analysis of the construction on the basis of authentic language data retrieved from the British National Corpus (BNC). The corpus-based approach allows me to examine not only the possible but also the most typical uses of the construction. Moreover, while previous work has almost exclusively focused on the formal realisations of the construction, I investigate both its forms and functions.
It is demonstrated that, while the construction is remarkably flexible as concerns its possible realisations, its use is governed by probabilistic constraints. For example, some items occur much more frequently inside the degree item slot than others (as, too and so stand out for their particularly high frequency). Contrary to what is assumed in most previous descriptions, the slot is not restricted in its realisation to a fixed number of items. Rather than representing a specialised structure, the construction is furthermore shown to be distributed over a wide range of possible text types and syntactic functions. On the other hand, it is found to be much less typical of spontaneous conversation than of written language; Big Mess NPs further display a strong preference for the function of subject complement. Investigations of the internal structural complexity of the construction indicate that its obligatory components can be enriched by a remarkably wide range of optional (if infrequent) elements. In an additional analysis of the realisations of the obligatory but lexically variable slots (head noun and head of AP), the construction is shown to represent a productive pattern. With the help of the methods of Collexeme Analysis (Stefanowitsch and Gries 2003) and Co-varying Collexeme Analysis (Gries and Stefanowitsch 2004b, Stefanowitsch and Gries 2005), the two slots are, however, revealed to be strongly associated with general nouns and ‘evaluative’ and ‘dimension’ adjectives, respectively. On the basis of an inspection of the most typical adjective-noun combinations, I identify the prototypical semantics of the Big Mess Construction.
The analyses of the constructional functions centre on two distinct functional areas. First, I investigate Bolinger’s (1972) hypothesis that the construction fulfils functions in line with the Principle of Rhythmic Alternation (e.g. Selkirk 1984: 11, Schlüter 2005). It is established that rhythmic preferences co-determine the use of the construction to some extent, but that they clearly do not suffice to explain the phenomenon under study. In a next step, the discourse-pragmatic functions of the construction are scrutinised. Big Mess NPs are demonstrated to perform distinct information-structural functions in that the non-canonical position of the AP serves to highlight focal information (compare De Mönnink 2000: 134-35). Additionally, the construction is shown to place emphasis on acts of evaluation. I conclude the construction to represent a contrastive focus construction.
My investigations of the formal and functional characteristics of Big Mess NPs each include analyses which compare individual versions of the construction to one another (e.g. the As Big a Mess, Too Big a Mess and So Big a Mess Constructions). It is revealed that the versions are united by a shared core of properties while differing from one another at more abstract levels of description. The question of the status of the constructional versions as separate constructions further receives special emphasis as part of a discussion in which I integrate my results into the framework of usage-based Construction Grammar (e.g. Goldberg 1995, 2006).
The dissertation includes three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation as to the nature of a comprehensive search for health information and enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify the most fundamental source characteristics to provide a point of reference when it comes to relating to the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals have when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, based on which four goals (understanding, action planning, hope and reassurance) were classified. The final version of the questionnaire comprised four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples, and the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate goals of health information seeking (HIS), which are likely to inform the formation of intentions on how to organize the search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information.
Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem and emotion focused coping goals and the intention to seek comprehensively (as outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good and hypotheses with regard to specific direct and indirect effects were confirmed. Last, the findings of all three articles are synthesized, the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are determined.
Entrepreneurial ventures are associated with economic growth, job creation, and innovation. Most entrepreneurial ventures need external funding to succeed. However, they often find it difficult to access traditional forms of financing, such as bank loans. To overcome this hurdle and to provide entrepreneurial ventures with badly-needed external capital, many types of entrepreneurial finance have emerged over the past decades and continue to emerge today. Inspired by these dynamics, this postdoctoral thesis contains five empirical studies that address novel questions regarding established (e.g., venture capital, business angels) and new types of entrepreneurial finance (i.e., initial coin offerings).
In the course of the COVID-19 pandemic, borders have become relevant (again) in political action and in people's everyday lives within a very short time. This was especially true for the inhabitants of border regions, whose cross-border life worlds were suddenly disrupted by closed borders and police controls. However, the COVID-19 pandemic also made social, cultural, economic, health and mobility boundaries beyond national borders more evident, which raised pressing questions about social inequalities. The authors shed light on these dynamics from the perspective of territorial borders, social boundaries and (dis)continuities in border regions through a variety of thematic and spatial approaches. The critical observations and scientific comments were made during the lockdown in April and May 2020 and provide insights into the events during the global pandemic.
With two-thirds to three-quarters of all companies, family firms are the most common firm type worldwide and employ around 60 percent of all employees, making them of considerable importance for almost all economies. Despite this high practical relevance, academic research took notice of family firms as intriguing research subjects comparatively late. However, the field of family business research has grown eminently over the past two decades and has established itself as a mature research field with a broad thematic scope. In addition to questions relating to corporate governance, family firm succession and the consideration of entrepreneurial families themselves, researchers mainly focused on the impact of family involvement in firms on their financial performance and firm strategy. This dissertation examines the financial performance and capital structure of family firms in various meta-analytical studies. Meta-analysis is a suitable method for summarizing existing empirical findings of a research field as well as identifying relevant moderators of a relationship of interest.
First, the dissertation examines the question whether family firms show better financial performance than non-family firms. A replication and extension of the study by O’Boyle et al. (2012) based on 1,095 primary studies reveals a slightly better performance of family firms compared to non-family firms. Investigating the moderating impact of methodological choices in primary studies, the results show that outperformance holds mainly for large and publicly listed firms and with regard to accounting-based performance measures. Concerning country culture, family firms show better performance in individualistic countries and countries with a low power distance.
Furthermore, this dissertation investigates the sensitivity of family firm performance with regard to business cycle fluctuations. Family firms show a pro-cyclical performance pattern, i.e. their relative financial performance compared to non-family firms is better in economically good times. This effect is particularly pronounced in Anglo-American countries and emerging markets.
In the next step, a meta-analytic structural equation model (MASEM) is used to examine the market valuation of public family firms. In this model, profitability and firm strategic choices are used as mediators. On the one hand, family firm status itself does not have an impact on firms' market value. On the other hand, this study finds a positive indirect effect via higher profitability levels and a negative indirect effect via lower R&D intensity. A split consideration of family ownership and management shows that these two effects are mainly driven by family ownership, while family management results in less diversification and internationalization.
Finally, the dissertation examines the capital structure of public family firms. Univariate meta-analyses indicate on average lower leverage ratios in family firms compared to non-family firms. However, there is significant heterogeneity in mean effect sizes across the 45 countries included in the study. The results of a meta-regression reveal that family firms use leverage strategically to secure their controlling position in the firm. While strong creditor protection leads to lower leverage ratios in family firms, strong shareholder protection has the opposite effect.
The dissertation is titled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for estimating aggregate-specific indicators at high geographic or contextual resolution on the basis of small samples, a setting commonly discussed in the literature under the term small area estimation. The core of the thesis is the analysis of the effects of regularized parameter estimation in regression models that are commonly used for small area estimation. The analysis is primarily theoretical: the statistical properties of these estimation procedures are characterized and proven mathematically. In addition, the results are illustrated by numerical simulations and critically assessed against the background of empirical applications. The dissertation is divided into three parts. Each part addresses an individual methodological problem in the context of small area estimation that can be solved by using regularized estimation procedures. In the following, each problem is briefly introduced and the benefit of regularization is explained.
The first problem is small area estimation in the presence of unobserved measurement errors. Regression models typically describe endogenous variables on the basis of statistically related exogenous variables. Such a description postulates a functional relationship between the variables, characterized by a set of model parameters that must be estimated from observed realizations of the respective variables. If the observations are distorted by measurement errors, however, the estimation process yields biased results, and small area estimates produced subsequently are unreliable. Methodological adjustments exist in the literature, but they usually require restrictive assumptions about the measurement error distribution. The dissertation proves that, in this context, regularization is equivalent to an estimation that is robust against measurement errors, regardless of the measurement error distribution. This equivalence is then used to derive robust variants of well-known small area models. For each model, an algorithm for robust parameter estimation is constructed. Moreover, a new approach is developed that quantifies the uncertainty of small area estimates in the presence of unobserved measurement errors. It is additionally shown that this form of robust estimation has the desirable property of statistical consistency.
The second problem is small area estimation based on data sets that contain auxiliary variables at different resolutions. Regression models for small area estimation are usually specified either for person-level observations (unit level) or for aggregate-level observations (area level). Given the steadily growing availability of data, however, situations in which data are available on both levels are becoming increasingly common. This holds great potential for small area estimation, as new multi-level models with high explanatory power can be constructed. From a methodological point of view, however, linking the levels is complicated: central steps of inference, such as variable selection and parameter estimation, must be performed on both levels simultaneously, and the literature offers hardly any generally applicable methods for this. The dissertation shows that using level-specific regularization terms in the modelling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that efficiently uses the information from all levels under adaptive regularization. In addition, parametric procedures are presented for assessing the uncertainty of estimates produced by this method. Building on this, it is proven that, given an adequate regularization term, the developed approach is consistent both in estimation and in variable selection.
The third problem is small area estimation of proportions under strong distributional dependencies within the covariates. Such dependencies arise when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity); the literature also subsumes situations in which several covariates are strongly correlated (quasi-multicollinearity) under this term. If a regression model is specified on such a data basis, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting small area estimates are imprecise. The effect is particularly pronounced when the quantity to be modelled is nonlinear, such as a proportion, because the underlying likelihood function is then no longer available in closed form and must be approximated. The dissertation shows that using an L2 regularization significantly stabilizes the estimation process in this context. Using two nonlinear small area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for measuring the uncertainty of estimates obtained in this way are described.
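The stabilizing effect of L2 regularization under (quasi-)multicollinearity can be sketched for the linear case (the thesis treats nonlinear small area models; this is only the linear analogue, with `ridge_fit` a hypothetical helper name):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge estimate (X'X + lam*I)^{-1} X'y. The lam*I
    term bounds the smallest eigenvalue of the system matrix away from
    zero, so the normal equations remain solvable and well conditioned
    even when columns of X are (nearly) collinear."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
```

With exactly collinear columns, the unregularized normal equations X'X b = X'y are singular and ordinary least squares is not identified, while the ridge system above still has a unique solution that shrinks the coefficients towards zero.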
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the small area estimation literature. The procedures developed here are robust and versatile, making them helpful tools for empirical data analysis.
In current times, the coronavirus is spreading and taking its toll all over the world. In spite of having developed into a global pandemic, COVID-19 is oftentimes met with local national(ist) reactions. Many states pursue isolationist politics by closing and enforcing borders and by focusing entirely on their own functioning in this moment of crisis. This nationalist/nationally-oriented rebordering politics goes hand in hand with what might be termed ‘linguistic rebordering,’ i.e. the attempt to construct the disease as something foreign-grown and to apportion the blame to ‘the other.’ This paper aims at laying bare the interconnectedness of these geopolitical and linguistic/discursive rebordering politics. It questions their efficacy and makes a plea for cross-border solidarity.
The object of the current Thematic Issue is not to focus on the individuals (the cross-border commuters) but on the organization of the cross-border labor markets. We move from a micro perspective to a macro perspective in order to underline the diversity of the cross-border labor markets (at the French borders, for example) and shed light on the many aspects that impact cross-border supply or demand. Trying to understand the whole system that goes beyond the cross-border flows, the question we address in this thematic issue is about the organization of the labor markets: is the system organized in a cross-border way? Or do the borders still prevent a genuinely integrated cross-border labor market?