Designing a Randomized Trial with an Age Simulation Suit—Representing People with Health Impairments
(2020)
Due to demographic change, there is an increasing demand for professional care services, a demand that cannot be met by the available caregivers. To enable adequate care by relieving informal and formal caregivers, the independence of people with chronic diseases has to be preserved for as long as possible. Assistance approaches can be used to promote physical activity, a main predictor of independence. One challenge is to design and test such approaches without burdening the people in focus. In this paper, we propose a design for a randomized trial that enables the use of an age simulation suit with young and healthy participants to generate reference data for people with health impairments. We therefore focus on situations of increased physical activity.
Primary focal hyperhidrosis (PFH, OMIM %144110) is a genetically influenced condition characterised by excessive sweating. Prevalence varies between 1.0 and 6.1% in the general population, depending on ethnicity. The aetiology of PFH remains unclear, but an autosomal dominant mode of inheritance, incomplete penetrance and variable phenotypes have been reported. In our study, nine pedigrees (50 affected, 53 non-affected individuals) were included. Clinical characterisation was performed at the German Hyperhidrosis Centre, Munich, using physiological and psychological questionnaires. Genome-wide parametric linkage analysis with GeneHunter was performed based on the Illumina genome-wide SNP arrays. Haplotypes were constructed using easyLINKAGE and visualised via HaploPainter. Whole-exome sequencing (WES) with 100x coverage in 31 selected members (24 affected, 7 non-affected) from our pedigrees was achieved by next-generation sequencing. We identified four genome-wide significant loci, 1q41-1q42.3, 2p14-2p13.3, 2q21.2-2q23.3 and 15q26.3, for PFH. Three pedigrees map to a shared locus at 2q21.2-2q23.3, with a genome-wide significant LOD score of 3.45. The chromosomal region identified here overlaps with a locus at chromosome 2q22.1-2q31.1 reported previously. Three families support 1q41-1q42.3 (LOD = 3.69), two families share a region identical by descent at 2p14-2p13.3 (LOD = 3.15) and another two families at 15q26.3 (LOD = 3.01). Thus, our results point to considerable genetic heterogeneity. WES did not reveal any causative variants, suggesting that variants or mutations located outside the coding regions might be involved in the molecular pathogenesis of PFH. We suggest a strategy based on whole-genome or targeted next-generation sequencing to identify causative genes or variants for PFH.
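The genome-wide significant loci above are reported as parametric LOD scores. For illustration, a two-point LOD score is the log10 likelihood ratio of linkage at a recombination fraction θ versus free recombination (θ = 0.5); the meiosis counts below are invented for the sketch, not data from the study:

```python
import math

def lod_score(recombinants: int, non_recombinants: int, theta: float) -> float:
    """Two-point LOD score: log10 likelihood ratio of linkage at
    recombination fraction `theta` vs. free recombination (theta = 0.5)."""
    n = recombinants + non_recombinants
    likelihood_linked = (theta ** recombinants) * ((1 - theta) ** non_recombinants)
    likelihood_unlinked = 0.5 ** n
    return math.log10(likelihood_linked / likelihood_unlinked)

# Hypothetical fully informative meioses: 1 recombinant out of 12.
# Scan a grid of theta values and keep the maximum LOD.
best = max((lod_score(1, 11, t / 100), t / 100) for t in range(1, 50))
print(best)  # maximum LOD and the theta at which it is reached
```

By convention, a LOD above 3 (odds of 1000:1 in favour of linkage) is taken as genome-wide significant, which is the threshold the reported scores of 3.01–3.69 clear.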
Laboratory landslide experiments enable the observation of specific properties of these natural hazards. However, such observations are limited by the traditional techniques of high-speed video analysis and wired sensors (e.g. displacement transducers): either only the surface and 2D profiles can be observed, or the wires confine the motion behaviour. In contrast, an unconfined observation of the total spatiotemporal dynamics of landslides is needed for an adequate understanding of these natural hazards.
The present study introduces an autonomous and wireless probe to characterize motion features of single clasts within laboratory-scale landslides. The Smartstone probe is based on an inertial measurement unit (IMU) and records acceleration and rotation at a sampling rate of 100 Hz. The recording ranges are ±16 g (accelerometer) and ±2000° s⁻¹ (gyroscope). The plastic tube housing is 55 mm long with a diameter of 10 mm. The probe is controlled, and data are read out, via active radio frequency identification (active RFID) technology. Thanks to this technique, the probe works under low-power conditions, enabling the use of small button cell batteries and minimizing its size.
Using the Smartstone probe, the motion of single clasts (gravel size, median particle diameter d50 of 42 mm) within approx. 520 kg of a uniformly graded pebble material was observed in a laboratory experiment. Single pebbles were equipped with probes and placed embedded in or superficially on the material. In a first analysis step, the data of one pebble are interpreted qualitatively, allowing for the determination of different transport modes, such as translation, rotation and saltation. In a second step, the motion is quantified by means of derived movement characteristics: the analysed pebble moves mainly in the vertical direction during the first motion phase, with a maximal vertical velocity of approx. 1.7 m s⁻¹. A strong acceleration peak of approx. 36 m s⁻² is interpreted as a pronounced hit and leads to a complex rotational-motion pattern. In a third step, the displacement is derived and amounts to approx. 1.0 m in the vertical direction; the deviation compared to laser distance measurements was approx. −10 %. Furthermore, a full 3D spatiotemporal trajectory of the pebble is reconstructed and visualized, supporting the interpretations. Finally, it is demonstrated that multiple pebbles can be analysed simultaneously within one experiment. Compared to other observation methods, Smartstone probes allow for the quantification of internal movement characteristics and, consequently, motion sampling in landslide experiments.
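The displacement figures above are derived from the IMU records. As an illustration of the principle (not the Smartstone processing chain), acceleration sampled at 100 Hz can be double-integrated numerically; the series below is synthetic, and gravity is assumed to be already removed:

```python
import numpy as np

FS = 100.0      # Smartstone sampling rate in Hz
dt = 1.0 / FS

# Synthetic vertical acceleration: a constant -2 m/s^2 for 1 s (free
# parameters for the sketch, not Smartstone data).
t = np.arange(0, 1.0 + dt, dt)
acc = np.full_like(t, -2.0)

# Double trapezoidal integration: acceleration -> velocity -> displacement
vel = np.concatenate(([0.0], np.cumsum((acc[1:] + acc[:-1]) / 2 * dt)))
disp = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) / 2 * dt)))

print(vel[-1], disp[-1])  # analytically: v = -2.0 m/s, s = -1.0 m
```

In practice the raw accelerations must first be rotated into a fixed frame using the gyroscope data and gravity must be subtracted, which is where most of the −10 % deviation noted above would originate.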
Currently, new business models created in the sharing economy differ considerably, and so does the way they form trust. Whether and how trust can be created is shown by a comparison of two examples that diverge in their founding philosophy. The chosen example of a community-based economy, Community Supported Agriculture (CSA), no longer trusts the capitalist system; it therefore distances itself and creates its own environment, including a new business model. It is implemented within rather small groups, where trust is created through personal relations and face-to-face communication. By contrast, the example of a platform economy, the accommodation-provider company Airbnb, shows trust in the system and pushes technological innovation through the use of platform applications. It promotes trust and confidence in the progress of technology. For the conceptual analysis, Niklas Luhmann's distinction between personal trust and system trust is adopted. The analysis describes two different modes of trust formation and how they foster distrust or improve trust. Based on these analyses, assumptions about the process of trust formation within varying models of the sharing economy are formulated, and a hypothesis about possible developments is introduced for further research.
The study analyzes the long-term trends (1998–2019) of concentrations of the air pollutants ozone (O3) and nitrogen oxides (NOx), as well as meteorological conditions, at forest sites in the German low mountain ranges, to evaluate changes in O3 uptake conditions for trees over time at the plot scale. O3 concentrations did not show significant trends over the course of 22 years, unlike NO2 and NO, whose concentrations have decreased significantly since the end of the 1990s. Temporal analyses of meteorological parameters found increasing global radiation at all sites and decreasing precipitation, vapor pressure deficit (VPD), and wind speed at most sites (temperature did not show any trend). A principal component analysis revealed strong correlations between O3 concentrations and global radiation, VPD, and temperature. Examination of the atmospheric water balance, a key parameter for O3 uptake, identified some unusually hot and dry years (2003, 2011, 2018, and 2019). With the help of a soil water model, periods of plant water stress were detected. These periods were often in synchrony with periods of elevated daytime O3 concentrations and usually occurred in mid and late summer, but occasionally also in spring and early summer. This suggests that drought protects forests against O3 uptake and that, in humid years with moderate O3 concentrations, the O3 flux was higher than in dry years with higher O3 concentrations.
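The trend statements above rest on fitting long-term series at each site. A minimal sketch of such a trend test on a synthetic annual NO2 series (invented numbers, ordinary least squares with a t-statistic for the slope):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1998, 2020)  # the 22-year study period

# Synthetic annual mean NO2 concentrations with a downward trend
# (illustrative numbers, not the measured series)
no2 = 20.0 - 0.3 * (years - 1998) + rng.normal(0, 0.5, years.size)

# OLS trend: slope estimate and a t-statistic for slope != 0
x = years - years.mean()
slope = np.sum(x * (no2 - no2.mean())) / np.sum(x ** 2)
resid = no2 - no2.mean() - slope * x
se = np.sqrt(np.sum(resid ** 2) / (years.size - 2) / np.sum(x ** 2))
print(slope, slope / se)  # strongly negative t-statistic => significant decline
```

A series without a real trend, such as the O3 concentrations described above, would yield a t-statistic near zero under the same test.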
Although gravitropism forces trees to grow vertically, stems have been shown to prefer specific orientations. Apart from wind deforming the tree shape, lateral light can result in prevailing inclination directions. In recent years, a species-dependent interaction between gravitropism and phototropism, resulting in trunks leaning down-slope, has been confirmed, but terrestrial investigation of such factors is limited to small-scale surveys. Airborne laser scanning (ALS) offers the opportunity to investigate trees remotely. This study clarifies whether tree trunks detected by ALS can be used to identify prevailing trunk inclinations. In particular, the effects of topography, wind, soil properties and scan direction are investigated empirically using linear regression models. A total of 299,000 significantly inclined stems were investigated. Species-specific prevailing trunk orientations could be observed. About 58% of the inclination and 19% of the orientation could be explained by the linear models, with tree species, tree height, aspect and slope identified as significant factors. The models indicate that deciduous trees tend to lean down-slope, while conifers tend to lean leeward. This study has shown that ALS is suitable for investigating trunk orientation at larger scales. It provides empirical evidence for the effect of phototropism and wind on trunk orientation.
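A linear model of the kind described can be sketched as follows; the predictors, coefficients and simulated inclination values are hypothetical stand-ins for the ALS-derived variables:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical per-tree predictors (not the study's ALS data)
slope_deg = rng.uniform(0, 30, n)      # terrain slope
height_m = rng.uniform(10, 40, n)      # tree height
is_conifer = rng.integers(0, 2, n)     # species-group dummy

# Synthetic trunk inclination: down-slope leaning, weaker for conifers
incl = 2.0 + 0.15 * slope_deg - 0.05 * slope_deg * is_conifer \
       + 0.02 * height_m + rng.normal(0, 0.8, n)

# Ordinary least squares fit and coefficient of determination R^2
X = np.column_stack([np.ones(n), slope_deg, height_m, is_conifer])
beta, *_ = np.linalg.lstsq(X, incl, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((incl - pred) ** 2) / np.sum((incl - np.mean(incl)) ** 2)
print(beta, r2)
```

The reported 58% explained variance for inclination corresponds to R² = 0.58 in such a model.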
Soil degradation due to erosion is a significant worldwide problem at different spatial (from pedon to watershed) and temporal scales. All stages and factors in the erosion process must be detected and evaluated to reduce this environmental issue and protect existing fertile soils and natural ecosystems. Laboratory studies using rainfall simulators allow single factors and interactive effects to be investigated under controlled conditions during extreme rainfall events. In this study, three main factors (rainfall intensity, inclination, and rainfall duration) were assessed to obtain empirical data for modeling water erosion during single rainfall events. Each factor was divided into three levels (−1, 0, +1), which were applied in different combinations using a rainfall simulator on beds (6 × 1 m) filled with soil from a study plot located in the arid Sistan region, Iran. The rainfall duration levels tested were 3, 5, and 7 min, the rainfall intensity levels were 30, 60, and 90 mm/h, and the inclination levels were 5, 15, and 25%. The results showed that the highest rainfall intensity tested (90 mm/h) for the longest duration (7 min) caused the highest runoff (62 mm³/s) and soil loss (1580 g/m²/h). Based on the empirical results, a quadratic function was the best mathematical model (R² = 0.90) for predicting runoff (Q) and soil loss. Single-factor analysis revealed that rainfall intensity was more influential for runoff production than changes in time and inclination, while rainfall duration was the most influential single factor for soil loss. Modeling and three-dimensional depictions of the data revealed that sediment production was high and runoff production low at the beginning of the experiment, but this trend was reversed over time as the soil became saturated. These results indicate that avoiding the initial stage of erosion is critical, so all soil protection measures should be taken to reduce the impact at this stage. The final stages of erosion appeared too complicated to be modeled, because different factors showed differing effects on erosion.
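A quadratic model over coded factor levels, as selected in the study, can be fitted by least squares; the design below is a full 3³ grid with an invented response, not the experimental runoff data:

```python
import numpy as np
from itertools import product

# Coded levels (-1, 0, +1) for intensity, duration and inclination:
# a full 3^3 design (the study applied selected combinations; this is
# only an illustration of the fitting step)
levels = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
x1, x2, x3 = levels.T

# Hypothetical runoff response with interaction and curvature terms
rng = np.random.default_rng(2)
y = 30 + 12 * x1 + 8 * x2 + 4 * x3 + 3 * x1 * x2 + 2 * x1 ** 2 \
    + rng.normal(0, 1.0, len(levels))

# Full quadratic model: intercept, linear, two-way interactions, squares
X = np.column_stack([np.ones_like(x1), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3,
                     x1 ** 2, x2 ** 2, x3 ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)
```

The coded (−1, 0, +1) levels correspond to the physical settings listed above, e.g. −1/0/+1 for intensity maps to 30/60/90 mm/h.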
Up-to-date information about the type and spatial distribution of forests is an essential element in both sustainable forest management and environmental monitoring and modelling. The OpenStreetMap (OSM) database contains vast amounts of spatial information on natural features, including forests (landuse=forest). The OSM data model includes descriptive tags for its contents, e.g. the leaf type of forest areas (leaf_type=broadleaved). Although the leaf type tag is common, the vast majority of forest areas are tagged with the leaf type mixed, amounting to 87% of the total landuse=forest area in the OSM database. These areas comprise an important information source from which forest type maps can be derived and updated. In order to leverage this information content, a methodology for the stratification of leaf types inside these areas has been developed, using image segmentation on aerial imagery and subsequent classification of leaf types. The presented methodology achieves an overall classification accuracy of 85% for the leaf types needleleaved and broadleaved in the selected forest areas. The resulting stratification demonstrates that, through approaches such as the one presented, the derivation of forest type maps from OSM would be feasible with an extended and improved methodology. It also suggests that an improved methodology might be able to provide leaf type updates to the OSM database with contributor participation.
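The segmentation step can be illustrated with a minimal region-growing routine on a toy image; the actual methodology operates on aerial imagery within an object-based (GEOBIA) pipeline, so this is only a sketch of the core idea:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose value lies
    within `tol` of the seed pixel (a minimal sketch of region growing,
    not the pipeline used for the OSM forest areas)."""
    h, w = img.shape
    seed_val = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(img[nr, nc] - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy image: a dark "broadleaved" block (0.2) inside a brighter background (0.8)
img = np.full((8, 8), 0.8)
img[2:6, 2:6] = 0.2
mask = region_grow(img, (3, 3), tol=0.1)
print(mask.sum())  # 16 pixels: the 4x4 dark block
```

Each grown segment would then be classified as needleleaved or broadleaved from its spectral and textural properties.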
Climate change is expected to cause mountain species to shift their ranges to higher elevations. Due to the decreasing amount of habitat with increasing elevation, such shifts are likely to increase their extinction risk. Heterogeneous mountain topography, however, may reduce this risk by providing microclimatic conditions that can buffer macroclimatic warming or provide nearby refugia. As aspect strongly influences the local microclimate, we assess here whether shifts from warm south-exposed aspects to cool north-exposed aspects in response to climate change can compensate for an upward shift into cooler elevations.
A lack of ability to inhibit prepotent responses, or more generally a lack of impulse control, is associated with several disorders, such as attention-deficit/hyperactivity disorder and schizophrenia, as well as with general damage to the prefrontal cortex. The stop-signal task (SST) is a reliable and established measure of response inhibition. However, using the SST as an objective assessment in diagnostic or research-focused settings places significant stress on participants, as the task itself requires concentration and cognitive effort and is not particularly engaging. This can lead to decreased motivation to follow task instructions and poor data quality, which can affect assessment efficacy and might increase drop-out rates. Gamification—the application of game-based elements in nongame settings—has been shown to improve engaged attention to a cognitive task, thus increasing participant motivation and data quality.
Ability self-concept (SC) and self-efficacy (SE) are central competence-related self-perceptions that affect students' success in educational settings. Both constructs show conceptual differences, but their empirical differentiation in higher education has not been sufficiently demonstrated. In the present study, we investigated the empirical differentiation of SC and SE in higher education with N = 1,243 German psychology students (81% female; age M = 23.62 years), taking into account central methodological requirements that, in part, have been neglected in prior studies. SC and SE were assessed at the same level of specificity, only cognitive SC items were used, and multiple academic domains were considered. We modeled the structure of SC and SE, taking into account a multidimensional and/or hierarchical structure, and investigated the empirical differentiation of both constructs on different levels of generality (i.e., domain-specific and domain-general). Results supported the empirical differentiation of SC and SE, with medium-sized positive latent correlations (r = .57–.68) between SC and SE on different levels of generality. Knowledge about the internal structure of students' SC and SE and the differentiation of both constructs can help us develop construct-specific and domain-specific intervention strategies. Future empirical comparisons of the predictive power of SC and SE can provide further evidence that both represent empirically different constructs.
Leads in the sea ice cover represent a key feature of polar regions: they control the heat exchange between the relatively warm ocean and the cold atmosphere through increased fluxes of turbulent sensible and latent heat. Sea ice leads contribute to sea ice production and are sources for the formation of dense water, which affects the ocean circulation. Atmospheric and ocean models strongly rely on observational data to describe the respective state of the sea ice, since numerical models are not able to produce sea ice leads explicitly. For the Arctic, some lead datasets are available, but for the Antarctic, no such data yet exist. Our study presents a new algorithm with which leads are automatically identified in satellite thermal infrared images. A variety of lead metrics is used to distinguish between true leads and detection artefacts with the use of fuzzy logic. We evaluate the outputs and provide pixel-wise uncertainties. Our data yield daily sea ice lead maps at a resolution of 1 km² for the winter months November–April 2002/03–2018/19 (Arctic) and April–September 2003–2019 (Antarctic), respectively. The long-term averages of the lead frequency distributions show distinct features related to bathymetric structures in both hemispheres.
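The fuzzy-logic step can be sketched as combining membership functions over lead metrics; the two metrics, thresholds and candidate values below are invented for illustration (the study uses a larger set of metrics):

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear fuzzy membership rising from 0 at `lo` to 1 at `hi`."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical metrics for three lead candidates
warm_anomaly_K = np.array([4.0, 0.5, 3.0])  # surface temp. above background
elongation = np.array([8.0, 9.0, 1.2])      # length/width ratio

# Fuzzy AND (minimum) of the memberships -> lead score in [0, 1]
score = np.minimum(ramp(warm_anomaly_K, 1.0, 3.0),
                   ramp(elongation, 2.0, 6.0))
print(score)  # candidate 1: clear lead; 2: too cold; 3: too compact
```

Scores between 0 and 1 also provide a natural basis for the pixel-wise uncertainties mentioned above.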
The parameterization of the boundary layer is a challenge for regional climate models of the Arctic. In particular, the stable boundary layer (SBL) over Greenland, being the main driver for substantial katabatic winds over the slopes, is simulated differently by different regional climate models or using different parameterizations of the same model. However, verification data sets with high-resolution profiles of the katabatic wind are rare. In the present paper, detailed aircraft measurements of profiles in the katabatic wind and automatic weather station data during the experiment KABEG (Katabatic wind and boundary-layer front experiment around Greenland) in April and May 1997 are used for the verification of the regional climate model COSMO-CLM (CCLM) nested in ERA-Interim reanalyses. CCLM is used in a forecast mode for the whole Arctic with 15 km resolution and is run in the standard configuration of SBL parameterization and with modified SBL parameterization. In the modified version, turbulent kinetic energy (TKE) production and the transfer coefficients for turbulent fluxes in the SBL are reduced, leading to higher stability of the SBL. This leads to a more realistic representation of the daily temperature cycle and of the SBL structure in terms of temperature and wind profiles for the lowest 200 m.
Roof and wall slates are fine-grained rocks with slaty cleavage, and it is often difficult to determine their mineral composition. A new norm mineral calculation called slatecalculation allows the determination of a virtual mineral composition based on full chemical analysis, including the amounts of carbon dioxide (CO2), carbon (C), and sulfur (S). Derived norm minerals include feldspars, carbonates, micas, hydro-micas, chlorites, ore-minerals, and quartz. The mineral components of the slate are assessed with superior accuracy compared to the petrographic analysis based on the European Standard EN 12326. The inevitable methodical inaccuracies in the calculations are limited and transparent. In the present paper, slates, shales, and phyllites from worldwide occurrences were examined. This also gives an overview of the rocks used for discontinuous roofing and external cladding.
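The idea of a norm mineral calculation can be illustrated by allocating a measured CO2 content to normative calcite; slatecalculation itself distributes the chemistry over several carbonate and silicate phases, so this one-mineral version is only a sketch:

```python
# Molar masses in g/mol
M_CO2 = 44.01
M_CALCITE = 100.09  # CaCO3

def normative_calcite(co2_wt_percent: float) -> float:
    """wt% CaCO3 implied by the measured CO2 content, assuming all
    carbonate is bound as calcite (an illustrative simplification)."""
    return co2_wt_percent * M_CALCITE / M_CO2

print(round(normative_calcite(1.5), 2))  # 1.5 wt% CO2 -> ~3.41 wt% calcite
```

Analogous stoichiometric allocations for S (to ore minerals) and the major oxides (to feldspars, micas, chlorites and quartz) make up the rest of such a norm calculation.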
This study investigated correlative, factorial, and structural relationships between scores for ability emotional intelligence in the workplace (measured with the Geneva Emotional Competence Test) and fluid and crystallized abilities (measured with the Intelligence Structure Battery) in a sample of 188 students. Confirming existing research, recognition, understanding, and management of emotions were related primarily to crystallized ability tests measuring general knowledge, verbal fluency, and knowledge of word meaning. Meanwhile, emotion regulation was the least correlated with any other cognitive or emotional ability. In line with research on the trainability of emotional intelligence, these results may support the notion that emotional abilities are subject to acquired knowledge, where situational (i.e., workplace-specific) emotional intelligence may depend on accumulating relevant experiences.
The nonhydrostatic regional climate model CCLM was used for a long-term hindcast run (2002–2016) for the Weddell Sea region with resolutions of 15 and 5 km and two different turbulence parametrizations. CCLM was nested in ERA-Interim data and used in forecast mode (a suite of consecutive 30 h simulations with 6 h spin-up). We prescribed the sea ice concentration from satellite data and used a thermodynamic sea ice model. The performance of the model was evaluated in terms of temperature and wind using data from Antarctic stations, automatic weather stations (AWSs), an operational forecast model, reanalysis data, and lidar wind profiles. For the reference run, we found a warm bias in the near-surface temperature over the Antarctic Plateau. This bias was removed in the second run by adjusting the turbulence parametrization, which results in a more realistic representation of the surface inversion over the plateau but in a negative bias for some coastal regions. A comparison with measurements over the sea ice of the Weddell Sea by three AWS buoys for one year showed small biases for temperature of around ±1 K and for wind speed of 1 m s⁻¹. Comparisons with radio soundings showed a model bias of around 0 and an RMSE of 1–2 K for temperature and of 3–4 m s⁻¹ for wind speed. The comparison of CCLM simulations at resolutions down to 1 km with wind data from Doppler lidar measurements during December 2015 and January 2016 yielded almost no bias in wind speed and an RMSE of ca. 2 m s⁻¹. Overall, CCLM shows a good representation of temperature and wind for the Weddell Sea region. Based on these encouraging results, CCLM at high resolution will be used for the investigation of the regional climate in the Antarctic and of atmosphere–ice–ocean interaction processes in a forthcoming study.
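The verification scores quoted above are the usual bias and root-mean-square error; a minimal sketch with invented model and observation values:

```python
import numpy as np

def bias_rmse(model, obs):
    """Bias (model minus observation) and RMSE, the two scores used to
    verify CCLM against station and AWS data (values below are invented)."""
    diff = np.asarray(model) - np.asarray(obs)
    return diff.mean(), np.sqrt((diff ** 2).mean())

model_t = np.array([-20.1, -18.4, -22.0, -19.5])  # hypothetical 2 m temps, degC
obs_t = np.array([-20.6, -18.0, -21.2, -20.1])
b, r = bias_rmse(model_t, obs_t)
print(b, r)
```

A near-zero bias with a nonzero RMSE, as reported for the radio soundings, means the model has little systematic offset but still scatters around the observations.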
Estimation, and therefore prediction, in both traditional statistics and machine learning often encounters problems when applied to survey data, i.e. data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of this additional noise on the estimation procedure depend on the specific probability law of the subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of the survey design therein. First, consider micro-econometrics using business surveys: regression analysis under the peculiarities of non-normal data and complex survey designs is discussed. The focus lies on mixed models, which can capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. The variance estimation of estimators for (finite) population totals plays a major role in this framework when deciding on the reliability of survey data. When the survey design is complex and an estimated total is required for a large number of variables, generalized variance functions are popular for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied here for the first time, both theoretically and empirically.
Third, the self-organizing map, an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation, is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not been established before, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
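A minimal self-organizing map in the spirit of the third topic can be written in a few lines; the grid size, decay schedules and toy data below are arbitrary choices for the sketch, not those of the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: two 2-D clusters (stand-ins for business survey records)
data = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                  rng.normal(1.0, 0.1, (100, 2))])

# 1-D self-organizing map with 10 units
n_units = 10
weights = rng.uniform(0, 1, (n_units, 2))
grid = np.arange(n_units)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / 50) + 0.5   # decaying neighbourhood width
    for x in rng.permutation(data):
        bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))  # best match
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood
        weights += lr * h[:, None] * (x - weights)           # pull towards x

# After training, the units should cover both clusters
print(weights.round(2))
```

Density estimation on top of a trained map, as derived in the dissertation, then amounts to interpreting the unit activations probabilistically rather than merely as a clustering.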
Retirement, fertility and sexuality are three key life stage events that are embedded in the framework of population economics in this dissertation. Each topic has economic relevance. As retirement entry shifts the labour supply of experienced workers to zero, this issue is particularly relevant for employers, for retirees themselves and for policymakers in charge of the design of the pension system. Giving birth has comprehensive economic relevance for women: parental leave and subsequent part-time work lead to a direct loss of income, while lower levels of employment, work experience, training and career opportunities result in indirect income losses. Sexuality has a decisive influence on the quality of partnerships, subjective well-being and happiness. Well-being and happiness, in turn, are key determinants not only in private life but also in the work domain, for example for job performance. Furthermore, partnership quality determines the duration of a partnership, and partnerships in general enable the pooling of (financial) resources compared to being single. The contribution of this dissertation emerges from the integration of social and psychological concepts into economic analysis as well as from the application of economic theory to non-standard economic research topics. The results of the three chapters show that this multidisciplinary approach yields better predictions of human behaviour than the single disciplines on their own. The results of the first chapter show that both interpersonal conflict with superiors and the individual's health status play a significant role in retirement decisions. The chapter further contributes to the existing literature by showing the moderating role of health in retirement decision-making: on the one hand, all employees are more likely to retire when they are having conflicts with their superior; on the other hand, among healthy employees, the same conflict raises retirement intentions even more. That means good health is a necessary, but not a sufficient, condition for continued working; conflicts with superiors may raise retirement intentions more if the worker is healthy. The key findings of the second chapter reveal a significant influence of religion on contraceptive and fertility-related decisions. A large part of the research on religion and fertility originates in evidence from the US; this chapter contrasts it with evidence from Germany. Additionally, the chapter contributes by integrating miscarriages and abortions, rather than limiting the analysis to births, and it gains from rich prospective data on the fertility biographies of women. The third chapter provides theoretical insights on how to incorporate psychological variables into an economic framework that aims to analyse sexual well-being. According to this theory, personality may play a dual role by shaping a person's preferences for sex as well as the person's behaviour in a sexual relationship. The results of the econometric analysis reveal detrimental effects of neuroticism on sexual well-being, while conscientiousness seems to create a win-win situation for a couple. Extraversion and openness have ambiguous effects on romantic relationships, enhancing sexual well-being on the one hand but raising commitment problems on the other. Agreeable persons seem to gain sexual satisfaction even if they perform worse in sexual communication.
The polar regions are characterized by harsh environmental conditions with extremely cold temperatures and winds. Especially during the polar night, temperatures as low as -89.2 °C have been observed on the Antarctic Plateau. As a consequence of this strong cooling, the ocean water begins to freeze and ice production sets in. The Antarctic Ocean is characterized by pronounced interannual and intra-annual variability, with the ice cover varying between 2.07 x 10^6 km^2 in summer and 20.14 x 10^6 km^2 in winter. Ice production and ice melt influence the atmospheric and oceanic circulation. Dynamic processes lead to the formation of cracks in the ice and ultimately to the development of leads. Leads are elongated fractures that are at least several meters wide and can be hundreds of meters to hundreds of kilometers long. In these leads, the warm ocean water is in contact with the cold atmosphere, which strongly enhances the exchange rates of sensible and latent heat, moisture, and gases. Leads contribute to ice production in the polar regions and serve as a habitat for numerous animals. Leads, the central subject of the presented study, are still insufficiently researched and observed in the Southern Ocean. The aim is therefore to develop an algorithm that automatically identifies leads in remote sensing data. For this purpose, thermal infrared satellite data from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used, which is mounted on the two satellites Aqua and Terra and has provided satellite imagery since 2000 (Terra) and 2002 (Aqua), respectively. The individual satellite images contain the ice surface temperature of the MOD/MYD 29 product, which is processed in a two-stage algorithm for the period April to September, 2003 to 2019.
In the first step, potential leads are identified by means of their local positive temperature anomaly. Because of artifacts, additional temperature- and texture-based parameters are derived and merged into daily composites. These are used in the second processing stage to separate cloud artifacts from true lead observations. Here, fuzzy logic is applied with an Antarctic-specific configuration, in which selected input data from the first processing level are used to compute a final proxy, the lead score (LS). The LS is finally converted into an uncertainty measure by means of manual quality control. The artifacts identified in this way can additionally be used to complement the MODIS cloud mask.
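The first processing step, identifying potential leads as local positive temperature anomalies, can be sketched in a few lines. The window size, the 2 K threshold, and the toy scene below are illustrative assumptions, not the configuration used in the thesis.

```python
def detect_lead_candidates(ist, window=2, threshold=2.0):
    """Flag pixels whose ice-surface temperature (IST, in kelvin) exceeds
    the median of a (2*window+1)^2 neighborhood by `threshold` kelvin.
    Returns a boolean grid of potential lead pixels."""
    rows, cols = len(ist), len(ist[0])
    flags = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            nbhd = [ist[r][c]
                    for r in range(max(0, i - window), min(rows, i + window + 1))
                    for c in range(max(0, j - window), min(cols, j + window + 1))]
            nbhd.sort()
            median = nbhd[len(nbhd) // 2]
            flags[i][j] = ist[i][j] - median > threshold
    return flags

# Toy scene: cold pack ice at 240 K with a warm, lead-like stripe at 250 K.
scene = [[240.0] * 8 for _ in range(8)]
for r in range(8):
    scene[r][4] = 250.0
candidates = detect_lead_candidates(scene)
```

Pixels on the warm stripe stand out against the neighborhood median, while uniform pack ice is not flagged; the real algorithm additionally has to reject cloud artifacts, which is what the second, fuzzy-logic stage is for.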
Based on the lead observations, a climatological reference dataset is created that shows the representative lead distribution in the Antarctic Ocean for the winter months April to September, 2003 to 2019. It reveals that leads occur more systematically in some regions than in others, above all along the coastline, the continental shelf break, and some rises and troughs in the deep sea. The increased frequencies along the shelf break are particularly interesting, and the role of atmospheric and oceanic drivers is investigated. A regional ice-ocean model is used to relate oceanic influences to the increased lead frequencies.
The present study also provides a comprehensive overview of the large-scale variability of Antarctic sea ice. Daily sea-ice concentration data derived from passive microwave observations for the period 1979 to 2018 are used for the classification. The k-means algorithm is applied to identify ten representative ice classes. The geographic distribution of these classes is presented as a map in which the typical annual ice cycle of each class is visible.
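The classification idea can be illustrated with a minimal, dependency-free k-means (Lloyd's algorithm). The four synthetic "annual cycles" and k = 2 below are illustrative; the thesis works with daily concentration data and ten classes.

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its members."""
    centroids = [list(p) for p in points[:k]]  # deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for idx, members in enumerate(clusters):
            if members:
                centroids[idx] = [sum(col) / len(members) for col in zip(*members)]
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(d.index(min(d)))
    return labels, centroids

# Synthetic "annual cycles" of sea-ice concentration (fractions over 4 seasons):
# two perennially ice-covered cells and two seasonal-ice cells.
cycles = [
    [0.90, 1.00, 1.00, 0.95],
    [0.85, 0.95, 1.00, 0.90],
    [0.00, 0.10, 0.80, 0.20],
    [0.05, 0.15, 0.75, 0.25],
]
labels, _ = kmeans(cycles, k=2)
```

The perennial and seasonal cells end up in different clusters; applied per grid cell, the cluster label plays the role of the ice class mapped in the thesis.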
Changes in the spatial occurrence of the ice classes are identified and interpreted qualitatively. Positive deviations towards higher ice classes are identified in the Weddell and Ross Seas and in some regions of East Antarctica. Negative deviations are present in the Amundsen-Bellingshausen Sea. The newly developed Climatological Sea Ice Anomaly Index is used to identify class deviations in the time series. On this basis, three years (1986, 2007, 2014) are selected for a case study and examined in relation to atmospheric data from ERA-Interim and ice-drift data. For the years 1986 and 2007, specific atmospheric circulation patterns can be identified that influenced the corresponding ice classification. For 2014, no particularly pronounced atmospheric anomalies can be found.
In the future, the ice-class dataset can be used to complement existing studies and to validate sea-ice models. Applications relating to the lead dataset are particularly promising.
Phylogeographic analyses point to long-term survival on the spot in micro-endemic Lycian salamanders
(2020)
Lycian salamanders (genus Lyciasalamandra) constitute an exceptional case of amphibian microendemism on the mainland of Asia Minor. These viviparous salamanders are confined to karstic limestone formations along the southern Anatolian coast and some islands. We here study the genetic differentiation within and among 118 populations of all seven Lyciasalamandra species across the entire genus' distribution. Based on circa 900 base pairs of fragments of the mitochondrial 16S rDNA and ATPase genes, we analysed the spatial haplotype distribution as well as the genetic structure and demographic history of populations. We used 253 geo-referenced populations and CHELSA climate data to infer species distribution models, which we projected onto the climatic conditions of the Last Glacial Maximum (LGM). Within all but one species, distinct phyloclades were identified, which only partly matched current taxonomy. Most haplotypes (78%) were private to single populations. Population genetic parameters sometimes showed contradictory results, although in several cases they indicated recent population expansion of phyloclades. The climatic suitability of localities currently inhabited by salamanders was significantly lower during the LGM than under recent climate. All data indicated a strong degree of isolation among Lyciasalamandra populations, even within phyloclades. Given the sometimes high degree of haplotype differentiation between adjacent populations, these populations must have survived periods of deteriorated climate during the Quaternary on the spot. However, the alternative explanation of male-biased dispersal combined with pronounced female philopatry can only be excluded if independent nuclear data confirm this result.
This work studies typical mathematical challenges occurring in the modeling and simulation of manufacturing processes of paper or industrial textiles. In particular, we consider three topics: approximate models for the motion of small inertial particles in an incompressible Newtonian fluid, effective macroscopic approximations for a dilute particle suspension contained in a bounded domain accounting for a non-uniform particle distribution and particle inertia, and possibilities for a reduction of computational cost in the simulations of slender elastic fibers moving in a turbulent fluid flow.
We consider the full particle-fluid interface problem given in terms of the Navier-Stokes equations coupled to the momentum equations of a small rigid body. By choosing an appropriate asymptotic scaling for the particle-fluid density ratio and using an asymptotic expansion of the solution components, we derive approximations of the original interface problem. The approximate systems differ in their physical behavior according to the chosen scaling of the density ratio, allowing the characterization of different inertial regimes.
We extend the asymptotic approach to the case of many particles suspended in a Newtonian fluid. Under specific assumptions on the combination of particle size and particle number, we derive asymptotic approximations of this system. The approximate systems describe the particle motion, which allows the use of a mean field approach to formulate the continuity equation for the particle probability density function. Coupling the latter with the approximation of the fluid momentum equation then yields a macroscopic suspension description that accounts for non-uniform particle distributions in space and for small particle inertia.
A slender fiber in a turbulent air flow can be modeled as a stochastic inextensible one-dimensionally parametrized Kirchhoff beam, i.e., by a stochastic partial differential algebraic equation. Its simulations involve the solution of large non-linear systems of equations by Newton's method. In order to decrease the computational time, we explore different methods for the estimation of the solution. Additionally, we apply smoothing techniques to the Wiener Process in order to regularize the stochastic force driving the fiber, exploring their respective impact on the solution and performance. We also explore the applicability of the Wiener chaos expansion as a solution technique for the simulation of the fiber dynamics.
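As an illustration of the Newton iteration at the core of such simulations (on a 2-D toy system, not the actual discretized beam equations), each step solves a linear system with the Jacobian, here via Cramer's rule:

```python
def newton_2d(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton's method for a 2x2 system: solve J(x) dx = -f(x), update x += dx."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)          # Jacobian J = [[a, b], [c, d]]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det   # Cramer's rule for J [dx, dy]^T = -[f1, f2]^T
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Toy system: x^2 + y^2 = 2 and x = y, with root (1, 1).
f = lambda x, y: (x * x + y * y - 2.0, x - y)
jac = lambda x, y: (2 * x, 2 * y, 1.0, -1.0)
root = newton_2d(f, jac, (2.0, 0.5))
```

In the fiber simulations the same idea applies, except that the systems are large and sparse, which is why cheap initial guesses and smoothed stochastic forcing pay off.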
This thesis addresses three different topics from the fields of mathematical finance, applied probability and stochastic optimal control. Correspondingly, it is subdivided into three independent main chapters each of which approaches a mathematical problem with a suitable notion of a stochastic particle system.
In Chapter 1, we extend the branching diffusion Monte Carlo method of Henry-Labordère et al. (2019) to the case of parabolic PDEs with mixed local-nonlocal analytic nonlinearities. We investigate branching diffusion representations of classical solutions, and we provide sufficient conditions under which the branching diffusion representation solves the PDE in the viscosity sense. Our theoretical setup directly leads to a Monte Carlo algorithm, whose applicability is showcased in two stylized high-dimensional examples. As our main application, we demonstrate how our methodology can be used to value financial positions with defaultable, systemically important counterparties.
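For flavor, a minimal sketch of a branching diffusion representation, using the classical McKean/FKPP example rather than the mixed local-nonlocal setting of the chapter: the solution of u_t = u_xx/2 + u^2 - u with initial datum u0 equals the expectation of the product of u0 evaluated at the time-t positions of a binary branching Brownian motion.

```python
import math, random

def branching_estimate(u0, t, x, n_samples=20000, seed=1):
    """Monte Carlo for u_t = u_xx/2 + u^2 - u, u(0,.) = u0, via McKean's
    representation: u(t,x) = E[ prod_i u0(X_i) ], where the X_i are the
    time-t positions of a binary branching Brownian motion started at x
    (unit branching rate, each branching event produces two offspring)."""
    rng = random.Random(seed)

    def particle_product(pos, remaining):
        lifetime = rng.expovariate(1.0)
        if lifetime >= remaining:          # particle survives to the horizon
            end = pos + rng.gauss(0.0, math.sqrt(remaining))
            return u0(end)
        # particle branches: diffuse until the split, then recurse on offspring
        split = pos + rng.gauss(0.0, math.sqrt(lifetime))
        rest = remaining - lifetime
        return particle_product(split, rest) * particle_product(split, rest)

    return sum(particle_product(x, t) for _ in range(n_samples)) / n_samples

# With the constant initial condition u0 = 0.5, u solves the ODE u' = u^2 - u,
# whose exact value at t = 1 is 1 / (e + 1), roughly 0.2689.
est = branching_estimate(lambda y: 0.5, t=1.0, x=0.0)
```

The chapter's method generalizes this picture: the branching mechanism encodes the nonlinearity, and nonlocal terms add jumps to the particle dynamics.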
In Chapter 2, we formulate and analyze a mathematical framework for continuous-time mean field games with finitely many states and common noise, including a rigorous probabilistic construction of the state process. The key insight is that we can circumvent the master equation and reduce the mean field equilibrium to forward-backward systems of (random) ordinary differential equations by conditioning on common noise events. We state and prove a corresponding existence theorem, and we illustrate our results in three stylized application examples. In the absence of common noise, our setup reduces to that of Gomes, Mohr and Souza (2013) and Cecchin and Fischer (2020).
In Chapter 3, we present a heuristic approach to tackle stochastic impulse control problems in discrete time. Based on the work of Bensoussan (2008) we reformulate the classical Bellman equation of stochastic optimal control in terms of a discrete-time QVI, and we prove a corresponding verification theorem. Taking the resulting optimal impulse control as a starting point, we devise a self-learning algorithm that estimates the continuation and intervention region of such a problem. Its key features are that it explores the state space of the underlying problem by itself and successively learns the behavior of the optimally controlled state process. For illustration, we apply our algorithm to a classical example problem, and we give an outlook on open questions to be addressed in future research.
Traditionally, random sample surveys are designed so that national statistics can be estimated reliably with adequate precision. For this purpose, primarily design-based, model-assisted estimation methods are used, which rely largely on asymptotic properties. These estimation methods are not well suited to the small sample sizes encountered in small areas (domains or subpopulations), which is why dedicated model-based small area estimation methods have been developed for this application. The latter may be biased, but they often have a smaller mean squared error than design-based estimators. Model-assisted and model-based methods have in common that they rely on statistical models, though to different degrees. Model-assisted methods are usually constructed so that the contribution of the model is small for very large sample sizes (and vanishes in the limit). In model-based methods, the model always plays a central role, irrespective of the sample size. These considerations illustrate that the assumed model, or more precisely the quality of the modeling, is of crucial importance for the quality of small area statistics. If the empirical data cannot be described by a suitable model and estimated with the corresponding methods, severe biases and/or inefficient estimates can result.
This thesis addresses the central question of the robustness of small area estimation methods. Statistical methods are called robust if they have a bounded influence function and a breakdown point that is as high as possible. Put simply, robust methods are only marginally affected by outliers and other anomalies in the data. The investigation of robustness focuses on the following models and estimation methods:
i) model-based estimators for the Fay-Herriot model (Fay and Herriot, 1979, J. Amer. Statist. Assoc.) and the basic unit-level model (cf. Battese et al., 1988, J. Amer. Statist. Assoc.);
ii) direct, model-assisted estimators under the assumption of a linear regression model.
The unit-level model for mean estimation is based on a Gaussian linear mixed model (MLM) with block-diagonal covariance matrix. In contrast to, for example, a multiple linear regression model, MLMs possess no noteworthy invariance properties, so that contamination of the dependent variable inevitably leads to biased parameter estimates. For the maximum likelihood method, the resulting bias can become almost arbitrarily large. For this reason, Richardson and Welsh (1995, Biometrics) developed the robust estimation methods RML 1 and RML 2, which show only a small bias under contaminated data and are considerably more efficient than the maximum likelihood method. A variant of RML 2 was proposed by Sinha and Rao (2009, Canad. J. Statist.) for the robust estimation of unit-level models. However, the numerical procedures commonly used to compute the RML 2 method (this also applies to the proposal of Sinha and Rao) prove to be notoriously unreliable. In this thesis, the convergence problems of the existing procedures are first discussed, and then a numerical procedure with considerably better numerical properties is proposed. Finally, the proposed estimation method is examined in a simulation study and illustrated with an empirical application on the estimation of above-ground biomass in Norwegian municipalities.
The Fay-Herriot model can be regarded as a special case of an MLM with block-diagonal covariance matrix, although the variances of the random effects for the small areas do not have to be estimated but are treated as known quantities. This property can be exploited to transfer the robustification of the unit-level model proposed by Sinha and Rao (2009) directly to the Fay-Herriot model. In this thesis, however, an alternative proposal is developed, starting from the following observation: Fay and Herriot (1979) motivated their model as a generalization of the James-Stein estimator, making use of an empirical Bayes approach. We take up this motivation and formulate an analogous robust Bayesian procedure. If, in the robust Bayesian problem formulation, one chooses the least favorable distribution of Huber (1964, Ann. Math. Statist.) as the prior distribution for the location values of the small areas, the resulting Bayes estimator (i.e., the estimator with the smallest Bayes risk) is the limited translation rule (LTR) of Efron and Morris (1971, J. Amer. Statist. Assoc.). In the context of frequentist statistics, the limited translation rule cannot be used because, as a Bayes estimator, it depends on unknown parameters. Following the empirical Bayes approach, however, the unknown parameters can be estimated from the marginal distribution of the dependent variable. It should be noted (and this has been neglected in the literature) that under the least favorable prior the marginal distribution is not a normal distribution but is itself described by Huber's (1964) least favorable distribution.
It is then not surprising that the maximum likelihood estimators of the regression coefficients and the model variance under this marginal distribution are M-estimators with the Huber psi-function.
Our theory-driven derivation of robust estimators for the Fay-Herriot model shows that, for contaminated data, the estimated LTR (with parameters estimated by the M-estimation methodology) is optimal, and that the LTR is an integral part of the estimation methodology (rather than an "add-on" or the like, as it is treated elsewhere). The proposed M-estimators are robust in the presence of atypical small areas (outliers), as the simulation and case studies also show. To achieve robustness against influential observations in the independent variables as well, generalized M-estimators were developed for the Fay-Herriot model.
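The two building blocks just mentioned can be written down compactly. The sketch below uses generic notation (direct estimate, synthetic prediction, shrinkage weight, standard error) and is not the thesis' exact parametrization: the Huber psi-function clips the standardized quantity, and the limited translation rule bounds how far shrinkage may move the direct estimate.

```python
def huber_psi(u, c=1.345):
    """Huber's psi-function: the identity on [-c, c], clipped outside."""
    return max(-c, min(c, u))

def limited_translation(direct, synthetic, shrink, se, c=1.345):
    """Limited translation rule (Efron-Morris style sketch): shrink the
    direct estimate towards the synthetic prediction, but translate it
    by at most c standard errors."""
    return direct - se * huber_psi(shrink * (direct - synthetic) / se, c)

# A typical area is shrunk as usual; an outlying area is moved at most c*se.
typical = limited_translation(1.0, 0.0, shrink=0.5, se=1.0)    # plain shrinkage
outlying = limited_translation(10.0, 0.0, shrink=0.5, se=1.0)  # capped move
```

For the typical area the rule reproduces ordinary shrinkage (0.5 here); for the outlying area plain shrinkage would give 5.0, while the LTR only moves 10.0 by 1.345 standard errors, which is exactly the robustness property the chapter builds on.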
B/ordering the Anthropocene: Inter- and Transdisciplinary Perspectives on Nature-Culture Relations
(2020)
In and with this thematic issue we would like to invite you to engage in productive boundary work and to critically examine the relationship between nature and culture in the Anthropocene. A few years ago, the term Anthropocene was proposed by Paul Crutzen as a term for the current geological epoch, in which humankind (the ‘anthropos’) is seen as the central driving force for global changes in ecological systems. This epoch is characterized by the blurring of boundaries between society and nature, science and politics, as well as by the increased drawing of boundaries between social groups, lifestyles, and the Global North and Global South. With this issue, we would like to give an impetus to explore boundary phenomena in the relationship between nature and society, which up to now have not been the focus of Border Studies. The challenges and problems of the Anthropocene require cross-border thinking and research that stimulates a new reflexivity and commitment, to which the multidisciplinary field of Border Studies can contribute.
The formerly communist countries in Central and Eastern Europe (for example, East Germany, the Czech Republic, Hungary, Lithuania, Poland, and Russia) and the transitional economies in Asia (for example, China and Vietnam) had centrally planned economies, which did not allow entrepreneurial activity. Despite the political and socioeconomic transformations in transitional economies around 1989, they still carry an institutional heritage that affects individuals' values and attitudes, which, in turn, influence intentions, behaviors, and actions, including entrepreneurship.
While prior studies on the long-lasting effects of socialist legacy on entrepreneurship have focused on a limited set of regions (e.g., East-West Germany and East-West Europe), this dissertation focuses on the Vietnamese context, which offers a unique quasi-experimental setting. In 1954, Vietnam was divided into the socialist North and the non-socialist South, and it was reunified under socialist rule in 1975. The duration of the difference in socialist treatment between North and South Vietnam (about 21 years) is thus much shorter than in East-West Germany (about 40 years) and East-West Europe (about 70 years when considering former Soviet Union countries).
To assess the relationship between socialist history and entrepreneurship in this unique setting, we survey more than 3,000 Vietnamese individuals. This thesis finds that individuals from North Vietnam have lower entrepreneurship intentions, are less likely to enroll in entrepreneurship education programs, and are less likely to take over an existing business than those from South Vietnam. The long-lasting effect of formerly socialist institutions on entrepreneurship thus appears to run even deeper than previously documented for the prominent cases of East-West Germany and East-West Europe.
In its second empirical investigation, this dissertation examines how succession intentions differ from other career intentions (e.g., founding and employee intentions) with respect to career choice motivation, and how the three main elements of the theory of planned behavior (entrepreneurial attitude, subjective norms, and perceived behavioral control) affect them in the context of a transition economy, Vietnam. The findings suggest that intentional founders are characterized by innovation motives, intentional successors by role motives, and intentional employees by a social mission. Additionally, this thesis reveals that entrepreneurial attitude and perceived behavioral control are positively associated with founding intentions, whereas this effect does not differ between succession and employee intentions.
Many NP-hard optimization problems that originate from classical graph theory, such as the maximum stable set problem and the maximum clique problem, have been extensively studied over the past decades and involve the choice of a subset of edges or vertices. There usually exist combinatorial methods that can be applied to solve them directly in the graph.
The simplest method is to enumerate all feasible solutions and select the best. Unsurprisingly, this method is often very slow, so the task is to cleverly discard fruitless parts of the search space during the search. An alternative method for solving graph problems is to formulate integer linear programs such that their solution yields an optimal solution to the original optimization problem in the graph. To solve integer linear programs, one can start by relaxing the integrality constraints and then try to find inequalities that cut off fractional extreme points. In the best case, it would be possible to describe the convex hull of the feasible region of the integer linear program by a set of inequalities. In general, giving a complete description of this convex hull is out of reach, even if it has a polynomial number of facets. Thus, one tries to strengthen the (weak) relaxation of the integer linear program as much as possible via strong inequalities that are valid for the convex hull of feasible integer points.
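The enumeration baseline is easy to state: test every vertex subset for stability and keep the largest, which is exponential in the number of vertices. A minimal sketch on a 5-cycle, whose LP relaxation with edge inequalities alone would allow the fractional value 2.5 (all variables at 1/2) that the odd cycle inequality cuts down to the true optimum 2:

```python
from itertools import combinations

def max_stable_set(vertices, edges):
    """Brute-force maximum stable set: enumerate subsets, largest first,
    and return the first one containing no edge. Exponential in |V|."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), -1, -1):
        for subset in combinations(vertices, size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# The 5-cycle C5: maximum stable sets have size (|C| - 1) / 2 = 2.
C5 = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
S = max_stable_set(C5, edges)
```

Even this toy instance shows why one wants either clever pruning or strong inequalities: the subset loop visits up to 2^|V| candidates.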
Many classes of valid inequalities are of exponential size. For instance, a graph can in general have exponentially many odd cycles, and therefore the number of odd cycle inequalities for the maximum stable set problem is exponential. It is sometimes possible to check in polynomial time whether a given point violates any of the exponentially many inequalities; this is indeed the case for the odd cycle inequalities for the maximum stable set problem. If a polynomial time separation algorithm is known, there exists a formulation of polynomial size that contains a given point if and only if it does not violate any of the (potentially exponentially many) inequalities. This thesis is divided into two parts. The first and main part contains various new results: we present new extended formulations for several optimization problems, namely the maximum stable set problem, the nonconvex quadratic program with box constraints, and the p-median problem. In the second part we modify a very fast algorithm for finding a maximum clique in very large sparse graphs. We propose three alternative versions of this algorithm and compare their strengths and weaknesses against the original version.
Evidence points to autonomy as having a place next to affiliation, achievement, and power as one of the basic implicit motives; however, there is still some research that needs to be conducted to support this notion.
The research in this dissertation aimed to address this issue. I have specifically focused on two issues that help solidify the foundation of work that has already been conducted on the implicit autonomy motive, and will also be a foundation for future studies. The first issue is measurement. Implicit motives should be measured using causally valid instruments (McClelland, 1980). The second issue addresses the function of motives. Implicit motives orient, select, and energize behavior (McClelland, 1980). If autonomy is an implicit motive, then we need a valid instrument to measure it and we also need to show that it orients, selects, and energizes behavior.
In the following dissertation, I address these two issues in a series of ten studies. Firstly, I present studies that examine the causal validity of the Operant Motive Test (OMT; Kuhl, 2013) for the implicit affiliation and power motives using established methods. Secondly, I developed and empirically tested pictures to specifically assess the implicit autonomy motive and examined their causal validity. Thereafter, I present two studies that investigated the orienting and energizing effects of the implicit autonomy motive. The results of the studies solidified the foundation of the OMT and how it measures nAutonomy. Furthermore, this dissertation demonstrates that nAutonomy fulfills the criteria for two of the main functions of implicit motives. Taken together, the findings of this dissertation provide further support for autonomy as an implicit motive and a foundation for intriguing future studies.
Imagery-based techniques have received increasing interest in psychotherapy research. Whereas their effectiveness has been shown for various psychological disorders, their underlying mechanisms remain unclear. Current research predominantly investigates intrapersonal processes, while interpersonal processes have received no attention to date. The aim of the current dissertation was to fill this lacuna. The three interrelated studies comprising this dissertation were the first to examine the effectiveness of imagery-based techniques in the treatment of test anxiety, relate physiological arousal to emotional processing, and investigate the association between physiological synchrony and multiple process measures.
Study I investigated the feasibility of a newly developed protocol, which integrates imagery-based and cognitive-behavioral components, to treat test anxiety in a sample of 31 students. The results indicated that the protocol was acceptable, feasible, and effective in the treatment of test anxiety. Additionally, the imagery-based component was positively associated with therapeutic bond, session evaluation, and emotional experience.
Study II shifted the focus from the effectiveness of imagery-based techniques to client-therapist physiological synchrony as a putative mechanism of change in the same sample. The results suggested that physiological synchrony was greater than chance during both the imagery-based and the cognitive-behavioral components. Variability of physiological synchrony at the session level during the imagery-based components, and at both levels (session and dyad) during the cognitive-behavioral components, was demonstrated. Furthermore, physiological synchrony during the imagery-based segments was positively associated with therapeutic bond. No such association was found for the cognitive-behavioral components.
Study III examined both intrapersonal (i.e., clients’ electrodermal activity) and interpersonal (i.e., client-therapist electrodermal activity synchrony) processes and their associations with emotional processing in a sample of 49 client-therapist-dyads. The results suggested that higher client physiological arousal and a moderate level of physiological synchrony were associated with deeper emotional processing.
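A common way to quantify such physiological synchrony is windowed correlation of the two time series. The sketch below, with an illustrative window length and toy signals rather than the studies' actual analysis pipeline, averages Pearson correlations over consecutive non-overlapping windows:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equally long sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def windowed_synchrony(client, therapist, window=4):
    """Mean Pearson correlation of the two signals over consecutive
    non-overlapping windows of the given length."""
    rs = [pearson(client[s:s + window], therapist[s:s + window])
          for s in range(0, len(client) - window + 1, window)]
    return sum(rs) / len(rs)

# Perfectly in-phase toy signals yield synchrony 1; anti-phase yields -1.
client = [0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 2.0, 1.0]
in_phase = windowed_synchrony(client, [x * 2 + 3 for x in client])
anti_phase = windowed_synchrony(client, [-x for x in client])
```

Real analyses typically add lagged windows and compare the observed value against surrogate (shuffled) pairings to establish that synchrony exceeds chance, as Study II does.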
Taken together, the results highlight the effectiveness of imagery-based techniques in the treatment of test anxiety. Furthermore, the results of Studies II and III support the idea of physiological synchrony as a mechanism of change in imagery with and without rescripting. The current dissertation takes an important step towards optimizing process research within psychotherapy and contributes to a better understanding of the potency and mechanisms of change of imagery-based techniques. We hope that these studies’ implications will support everyday clinical practice.
Data used for the purpose of machine learning are often erroneous. In this thesis, p-quasinorms (p<1) are employed as loss functions in order to increase the robustness of training algorithms for artificial neural networks. Numerical issues arising from these loss functions are addressed via enhanced optimization algorithms (proximal point methods; Frank-Wolfe methods) based on the (non-monotonic) Armijo-rule. Numerical experiments comprising 1100 test problems confirm the effectiveness of the approach. Depending on the parametrization, an average reduction of the absolute residuals of up to 64.6% is achieved (aggregated over 100 test problems).
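The robustness mechanism can be illustrated in a few lines: raising absolute residuals to a power p < 1 makes a gross outlier contribute far less to the loss than under the squared loss. The epsilon smoothing and the toy residuals below are illustrative assumptions, not the thesis' training setup:

```python
def p_quasinorm_loss(residuals, p=0.5, eps=1e-8):
    """Sum of |r|^p for p < 1; eps smooths the non-differentiable point at 0,
    which is one of the numerical issues the enhanced optimizers address."""
    return sum((abs(r) + eps) ** p for r in residuals)

def squared_loss(residuals):
    return sum(r * r for r in residuals)

# A single gross outlier dominates the squared loss but barely moves the
# p-quasinorm loss: this insensitivity is the robustness being exploited.
clean = [0.1, -0.2, 0.15, 0.05]
with_outlier = clean + [10.0]
ratio_sq = squared_loss(with_outlier) / squared_loss(clean)
ratio_p = p_quasinorm_loss(with_outlier) / p_quasinorm_loss(clean)
```

The price of this robustness is that the loss is non-convex and its gradient blows up near zero residuals, which motivates the proximal point and Frank-Wolfe variants with the Armijo rule mentioned above.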
In this thesis we study structure-preserving model reduction methods for the efficient and reliable approximation of dynamical systems. A major focus is the approximation of a nonlinear flow problem on networks, which can, e.g., be used to describe gas network systems. Our proposed approximation framework guarantees so-called port-Hamiltonian structure and is general enough to be realizable by projection-based model order reduction combined with complexity reduction. We divide the discussion of the flow problem into two parts, one concerned with the linear damped wave equation and the other one with the general nonlinear flow problem on networks.
The study around the linear damped wave equation relies on a Galerkin framework, which allows for convenient network generalizations. Notable contributions of this part are the thorough analysis of the algebraic setting after space discretization in relation to the infinite-dimensional setting and its implications for model reduction. In particular, this includes the discussion of differential-algebraic structures associated with the network character of our problem and the derivation of compatibility conditions related to fundamental physical properties. Amongst the different model reduction techniques, we consider the moment matching method to be a particularly well-suited choice in our framework.
The Galerkin framework is then appropriately extended to our general nonlinear flow problem. Crucial supplementary concepts are required for the analysis, such as the partial Legendre transform and a more careful discussion of the underlying energy-based modeling. The preservation of the port-Hamiltonian structure after the model order reduction and complexity reduction steps represents a major focus of this work. As in the analysis of the model order reduction, compatibility conditions play a crucial role in the analysis of our complexity reduction, which relies on a quadrature-type ansatz. Furthermore, energy-stable time-discretization schemes are derived for our port-Hamiltonian approximations, as structure-preserving methods from the literature are not applicable due to our rather unconventional parametrization of the solution.
Apart from the port-Hamiltonian approximation of the flow problem, another topic of this thesis is the derivation of a new extension of moment matching methods from linear systems to quadratic-bilinear systems. Most system-theoretic reduction methods for nonlinear systems rely on multivariate frequency representations. Our approach instead uses univariate frequency representations tailored towards user-defined families of inputs. Moment matching then corresponds to a one-dimensional rather than a multi-dimensional interpolation problem, i.e., fewer interpolation frequencies need to be chosen than for the multivariate approaches. The notion of signal-generator-driven systems, variational expansions of the resulting autonomous systems as well as the derivation of convenient tensor-structured approximation conditions are the main ingredients of this part. Notably, our approach allows for the incorporation of general input relations in the state equations, not only affine-linear ones as in existing system-theoretic methods.
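As background for this extension, classical moment matching for a linear system with transfer function $H(s) = C\,(sE - A)^{-1} B$ chooses the reduced model $\hat{H}$ so that (generic notation, for illustration only):

```latex
\left.\frac{\mathrm{d}^{k}}{\mathrm{d}s^{k}}\, H(s)\right|_{s = s_0}
=
\left.\frac{\mathrm{d}^{k}}{\mathrm{d}s^{k}}\, \hat{H}(s)\right|_{s = s_0},
\qquad k = 0, \dots, r-1,
```

i.e., the first $r$ moments (Taylor coefficients) of the transfer function agree at the expansion point $s_0$. The univariate approach described above retains this one-dimensional interpolation character for quadratic-bilinear systems, at the price of fixing a family of admissible inputs in advance.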
Structured Eurobonds - Optimal Construction, Impact on the Euro and the Influence of Interest Rates
(2020)
Structured Eurobonds are a prominent topic in the discussion on how to complete the monetary and fiscal union. This work sheds light on several issues that go hand in hand with the introduction of common bonds. A first crucial question concerns the optimal construction, e.g. the optimal degree of common liability. Other questions concern the time after the introduction: the impact on several exchange rates is examined in this work. Finally, an approximation bias in forward-looking DSGE models is quantified; it would lead to an adjustment of central bank interest rates and therefore has an impact on the other two topics.
The present thesis is devoted to a construction which defies generalisations about the prototypical English noun phrase (NP) to such an extent that it has been termed the Big Mess Construction (Berman 1974). As illustrated by the examples in (1) and (2), the NPs under study involve premodifying adjective phrases (APs) which precede the determiner (always realised in the form of the indefinite article a(n)) rather than following it.
(1) NoS had not been hijacked – that was too strong a word. (BNC: CHU 1766)
(2) He was prepared for a battle if the porter turned out to be as difficult a customer as his wife. (BNC: CJX 1755)
Previous research on the construction is largely limited to contributions from the realms of theoretical syntax and a number of cursory accounts in reference grammars. No comprehensive investigation of its realisations and uses has as yet been conducted. My thesis fills this gap by means of an exhaustive analysis of the construction on the basis of authentic language data retrieved from the British National Corpus (BNC). The corpus-based approach allows me to examine not only the possible but also the most typical uses of the construction. Moreover, while previous work has almost exclusively focused on the formal realisations of the construction, I investigate both its forms and functions.
It is demonstrated that, while the construction is remarkably flexible as concerns its possible realisations, its use is governed by probabilistic constraints. For example, some items occur much more frequently inside the degree item slot than others (as, too and so stand out for their particularly high frequency). Contrary to what is assumed in most previous descriptions, the slot is not restricted in its realisation to a fixed number of items. Rather than representing a specialised structure, the construction is furthermore shown to be distributed over a wide range of possible text types and syntactic functions. On the other hand, it is found to be much less typical of spontaneous conversation than of written language; Big Mess NPs further display a strong preference for the function of subject complement. Investigations of the internal structural complexity of the construction indicate that its obligatory components can be enriched by a remarkably wide range of optional (if infrequent) elements. In an additional analysis of the realisations of the obligatory but lexically variable slots (head noun and head of AP), the construction is shown to represent a productive pattern. With the help of the methods of Collexeme Analysis (Stefanowitsch and Gries 2003) and Co-varying Collexeme Analysis (Gries and Stefanowitsch 2004b, Stefanowitsch and Gries 2005), the two slots are, however, revealed to be strongly associated with general nouns and ‘evaluative’ and ‘dimension’ adjectives, respectively. On the basis of an inspection of the most typical adjective-noun combinations, I identify the prototypical semantics of the Big Mess Construction.
The analyses of the constructional functions centre on two distinct functional areas. First, I investigate Bolinger’s (1972) hypothesis that the construction fulfils functions in line with the Principle of Rhythmic Alternation (e.g. Selkirk 1984: 11, Schlüter 2005). It is established that rhythmic preferences co-determine the use of the construction to some extent, but that they clearly do not suffice to explain the phenomenon under study. In a next step, the discourse-pragmatic functions of the construction are scrutinised. Big Mess NPs are demonstrated to perform distinct information-structural functions in that the non-canonical position of the AP serves to highlight focal information (compare De Mönnink 2000: 134-35). Additionally, the construction is shown to place emphasis on acts of evaluation. I conclude that the construction represents a contrastive focus construction.
My investigations of the formal and functional characteristics of Big Mess NPs each include analyses which compare individual versions of the construction to one another (e.g. the As Big a Mess, Too Big a Mess and So Big a Mess Constructions). It is revealed that the versions are united by a shared core of properties while differing from one another at more abstract levels of description. The question of the status of the constructional versions as separate constructions further receives special emphasis as part of a discussion in which I integrate my results into the framework of usage-based Construction Grammar (e.g. Goldberg 1995, 2006).
The dissertation includes three published articles on which the development of a theoretical model of motivational and self-regulatory determinants of the intention to comprehensively search for health information is based. The first article focuses on building a solid theoretical foundation as to the nature of a comprehensive search for health information and enabling its integration into a broader conceptual framework. Based on subjective source perceptions, a taxonomy of health information sources was developed. The aim of this taxonomy was to identify the most fundamental source characteristics to provide a point of reference when it comes to relating to the target objects of a comprehensive search. Three basic source characteristics were identified: expertise, interaction and accessibility. The second article reports on the development and evaluation of an instrument measuring the goals individuals have when seeking health information: the ‘Goals Associated with Health Information Seeking’ (GAINS) questionnaire. Two goal categories (coping focus and regulatory focus) were theoretically derived, based on which four goals (understanding, action planning, hope and reassurance) were classified. The final version of the questionnaire comprised four scales representing the goals, with four items per scale (sixteen items in total). The psychometric properties of the GAINS were analyzed in three independent samples, and the questionnaire was found to be reliable and sufficiently valid as well as suitable for a patient sample. It was concluded that the GAINS makes it possible to evaluate goals of health information seeking (HIS) which are likely to inform intentions about how to organize the search for health information. The third article describes the final development and a first empirical evaluation of a model of motivational and self-regulatory determinants of an intentionally comprehensive search for health information.
Based on the insights and implications of the previous two articles and an additional rigorous theoretical investigation, the model included approach and avoidance motivation, emotion regulation, HIS self-efficacy, problem and emotion focused coping goals and the intention to seek comprehensively (as outcome variable). The model was analyzed via structural equation modeling in a sample of university students. Model fit was good and hypotheses with regard to specific direct and indirect effects were confirmed. Last, the findings of all three articles are synthesized, the final model is presented and discussed with regard to its strengths and weaknesses, and implications for further research are determined.
At present, the coronavirus is spreading and taking its toll all over the world. In spite of having developed into a global pandemic, COVID-19 is oftentimes met with local national(ist) reactions. Many states pursue isolationist politics by closing and enforcing borders and by focusing entirely on their own functioning in this moment of crisis. This nationalist/nationally-oriented rebordering politics goes hand in hand with what might be termed ‘linguistic rebordering,’ i.e. attempts to construct the disease as something foreign-grown and to apportion the blame to ‘the other.’ This paper aims at laying bare the interconnectedness of these geopolitical and linguistic/discursive rebordering politics. It questions their efficacy and makes a plea for cross-border solidarity.
The object of the current Thematic Issue is not to focus on the individuals (the cross-border commuters) but on the organization of the cross-border labor markets. We move from a micro perspective to a macro perspective in order to underline the diversity of the cross-border labor markets (at the French borders, for example) and shed light on the many aspects that impact cross-border supply or demand. Trying to understand the whole system that goes beyond the cross-border flows, the question we address in this thematic issue is about the organization of the labor markets: is the system organized in a cross-border way? Or do the borders still prevent a genuinely integrated cross-border labor market?
Internet interventions have gained popularity and the idea is to use them to increase the availability of psychological treatment. Research suggests that internet interventions are effective for a number of psychological disorders with effect sizes comparable to those found in face-to-face treatment. However, when provided as an add-on to treatment as usual, internet interventions do not seem to provide additional benefit. Furthermore, adherence and dropout rates vary greatly between studies, limiting the generalizability of the findings. This underlines the need to further investigate differences between internet interventions, participating patients, and their usage of interventions. A stronger focus on the processes of change seems necessary to better understand the varying findings regarding outcome, adherence and dropout in internet interventions. Thus, the aim of this dissertation was to investigate change processes in internet interventions and the factors that impact treatment response. This could help to identify important variables that should be considered in research on internet interventions as well as in clinical settings that make use of internet interventions.
Study I (Chapter 5) investigated early change patterns in participants of an internet intervention targeting depression. Data from 409 participants were analyzed using Growth Mixture Modeling. Specifically, a piecewise model was applied to model change from screening to registration (pretreatment) and early change (registration to week four of treatment). Three early change patterns were identified: two were characterized by improvement and one by deterioration. The patterns were predictive of treatment outcome. The results therefore indicated that early change should be closely monitored in internet interventions, as early change may be an important indicator of treatment outcome.
Study II (Chapter 6) picked up on the idea of analyzing change patterns in internet interventions and extended it by using the Muthen-Roy model to identify change-dropout patterns. A slightly larger sample from the dataset of Study I was analyzed (N = 483). Four change-dropout patterns emerged; high risk of dropout was associated with rapid improvement and deterioration. These findings indicate that clinicians should consider how dropout may depend on patient characteristics as well as symptom change, as dropout is associated with both deterioration and a good enough dosage of treatment.
Study III (Chapter 7) compared adherence and outcome in different participant groups and investigated the impact of adherence to treatment components on treatment outcome in an internet intervention targeting anxiety symptoms. 50 outpatient participants waiting for face-to-face treatment and 37 self-referred participants were compared regarding adherence to treatment components and outcome. In addition, outpatient participants were compared to a matched sample of outpatients who had no access to the internet intervention during the waiting period. Adherence to treatment components was investigated as a predictor of treatment outcome. Results suggested that adherence in particular may vary depending on the participant group. Moreover, using specific measures of adherence, such as adherence to treatment components, may be crucial to detect change mechanisms in internet interventions. Fostering adherence to treatment components in participants may increase the effectiveness of internet interventions.
Results of the three studies are discussed and general conclusions are drawn.
Implications for future research as well as their utility for clinical practice and decision-making are presented.