Filter
Year of publication
- 2019 (33)
Document type
- Dissertation (19)
- Scientific article (11)
- Part of a book (chapter) (1)
- Habilitation (1)
- Working paper (1)
Language
- English (33)
Keywords
- Optimierung (4)
- Fernerkundung (3)
- Stichprobe (3)
- Bodenmikrobiologie (2)
- Familienbetrieb (2)
- Finanzierung (2)
- Modellierung (2)
- Neuroendokrines System (2)
- Stressreaktion (2)
- Abwasser (1)
- Aktivierung (1)
- Akzeptanz (1)
- Allokation (1)
- Amtliche Statistik (1)
- Analysis (1)
- Analysis on fractals (1)
- Anorexia nervosa (1)
- Anpassung (1)
- Antarctic (1)
- Antarktis (1)
- Approximation (1)
- Arbeitsgedächtnis (1)
- Assistance System (1)
- Autokorrelation (1)
- Automation of Simulation (1)
- BRDF (1)
- BWL (1)
- Behavioral model (1)
- Biodiversität (1)
- Bodenwasser (1)
- Business Angel (1)
- Business Angels (1)
- Capital structure (1)
- Computational Statistics (1)
- Copositive, Infinite Dimension (1)
- Crop classification (1)
- DNS-Sequenz (1)
- Discrete optimization (1)
- Diskretisierung (1)
- Einzugsgebiet (1)
- Energiepflanzen (1)
- Entrepreneurial Finance (1)
- Enzymes (1)
- Epistemology of Simulation (1)
- Europäische Union / Wasserrahmenrichtlinie (1)
- Evapotranspiration (1)
- Exposure time (1)
- Family business (1)
- Family firm (1)
- Feldforschung (1)
- Feldfrucht (1)
- Firm performance (1)
- Flugkörper (1)
- Forest evapotranspiration (1)
- Fraktal (1)
- Gedächtnis (1)
- Genetische Variabilität (1)
- HPA (1)
- Haushalt (1)
- Hypothalamic-pituitary-adrenal axis (1)
- Hypothesis Testing (1)
- Information Retrieval (1)
- Infrarotthermographie (1)
- Initial Coin Offerings (ICOs) (1)
- Japan (1)
- Japanology (1)
- Kapitalstruktur (1)
- Katabatischer Wind (1)
- Kriging (1)
- Körpererfahrung (1)
- LAP (1)
- Langzeitgedächtnis (1)
- Learning (1)
- Lernen (1)
- Long-term memory (1)
- M&A decision criteria (1)
- M&A process (1)
- MODIS ice surface temperatures (1)
- Maisanbau (1)
- Maschinelles Lernen (1)
- Memory (1)
- Meta-analysis (1)
- MinION (1)
- Mixed-integer optimization (1)
- Modernity (1)
- Multi-Level Modelling (1)
- Multilineare Algebra (1)
- Multispektralfotografie (1)
- Nanopartikel (1)
- Nichtlineare Optimierung (1)
- Nonlocal convection-diffusion (1)
- Norwegen (1)
- Numerische Mathematik (1)
- Oxford Nanopore Technologies (1)
- Patientenorientierte Medizin (1)
- Patientin (1)
- Penalized Maximum Likelihood (1)
- Prediction (1)
- Pseudogley (1)
- Psychobiologie (1)
- Psychometrie (1)
- Psychotherapie (1)
- Rechte Hemisphäre (1)
- Rechtsvergleichung (1)
- Reduktion (1)
- Regression (1)
- Regression Kriging (1)
- Regression estimator, household surveys, calibration, weighting, integrated weighting (1)
- Regressionsanalyse (1)
- Regressionsmodell (1)
- Reihenfolgeproblem (1)
- Revue (1)
- Revuetheater (1)
- Rheinland-Pfalz (1)
- Risikokapital (1)
- Robust Statistics (1)
- Robust optimization (1)
- Satellitenfernerkundung (1)
- Schelfeis (1)
- Schätzfunktion (1)
- Schätzung (1)
- Sequenzanalyse / Chemie (1)
- Silber (1)
- Silver nanoparticles (1)
- Simulation Studies (1)
- Soil microbial community (1)
- Soil parameterization (1)
- Soil texture (1)
- Spatial autocorrelation (1)
- Stadtplanung (1)
- Stagnosols (1)
- Statistical Properties (1)
- Strategische Planung (1)
- Stratified sampling (1)
- Stress (1)
- Stresstest (1)
- Subset Selection (1)
- Survey statistics (1)
- TSST-VR (1)
- Theatre (1)
- Theorie (1)
- Therapeut (1)
- Therapieerfolg (1)
- Thermalluftbild (1)
- Trier Social Stress Test (1)
- UAV (1)
- Unternehmensgründung (1)
- Unternehmenskauf (1)
- Venture Capital (VC) (1)
- Verfassungsgerichtsbarkeit (1)
- Verfassungsrecht (1)
- Verstärkung (1)
- Verzerrung (1)
- Virtual Reality (1)
- Virtuelle Realität (1)
- Wald (1)
- Wasserbilanz (1)
- Wasserstress (1)
- Water Framework Directive (1)
- Water balance simulation (1)
- Weinbau (1)
- Working memory (1)
- Wärmeanomalie (1)
- acquisition (1)
- behavioral genetics (1)
- biodiversity (1)
- choice-based conjoint analysis (1)
- cluster analysis (1)
- competitive analysis (1)
- crop stress (1)
- decision making pattern (1)
- drought (1)
- emissivity (1)
- empirical taxonomy (1)
- eukaryotes (1)
- evapotranspiration (ET) modeling (1)
- family business (1)
- family management (1)
- finite element method (1)
- fractional Poisson equation (1)
- generational stage (1)
- geometric (1)
- ice shelves (1)
- katabatic winds (1)
- local limit (1)
- local wastewater planning (1)
- long DNA barcodes (1)
- metabarcoding (1)
- multilevel Toeplitz (1)
- multilinear algebra (1)
- multispectral (1)
- non-family business (1)
- nonlinear optimization (1)
- numerical analysis (1)
- physiological parameters (1)
- plant adaptation mechanisms (1)
- pre-acquisition phase (1)
- remote sensing (1)
- ribosomal (1)
- shape optimization (1)
- soil microbial activity (1)
- soil microbial biomass (1)
- strategic acquisition (1)
- stress (1)
- target screening and selection (1)
- temperature (1)
- tensor methods (1)
- thermal infrared (TIR) (1)
- transgenerational intention (1)
- urban and rural boundaries (1)
- viticulture (1)
- water stress (1)
- water use (1)
- waterlogging (1)
- weighting (1)
Institute
- Fachbereich 4 (13)
- Spatial and Environmental Sciences (6)
- Fachbereich 1 (4)
- Fachbereich 6 (4)
- Fachbereich 2 (1)
- Fachbereich 5 (1)
- Mathematics (1)
The forward testing effect refers to the finding that retrieval practice of previously studied information enhances learning and retention of other, subsequently studied information. While most of the previous research on the forward testing effect examined group differences, the present study took an individual differences approach to investigate this effect. Experiment 1 examined whether the forward effect has test-retest reliability between two experimental sessions. Experiment 2 investigated whether the effect is related to participants’ working memory capacity. In both experiments (and each session of Experiment 1), participants studied three lists of items in anticipation of a final cumulative recall test. In the testing condition, participants were tested immediately on lists 1 and 2, whereas in the restudy condition, they restudied lists 1 and 2. In both conditions, participants were tested immediately on list 3. On the group level, the results of both experiments demonstrated a forward testing effect, with interim testing of lists 1 and 2 enhancing immediate recall of list 3. On the individual level, the results of Experiment 1 showed that the forward effect on list 3 recall has moderate test-retest reliability between two experimental sessions. In addition, the results of Experiment 2 showed that the forward effect on list 3 recall does not depend on participants’ working memory capacity. These findings suggest that the forward testing effect is reliable at the individual level and similarly affects learners across a wide range of working memory capacities. The theoretical and practical implications of the findings are discussed.
In this thesis, we study the sampling allocation problem of survey statistics under uncertainty. The stratum-specific variances are generally not known precisely, and we have no information about the distribution of the uncertainty. The cost of interviewing each person in a stratum is also a highly uncertain parameter, as people are sometimes unavailable for the interview. We propose robust allocations to deal with the uncertainty in both stratum-specific variances and costs. In practice, however, it may be that only one of the two — variances or costs — is uncertain, so we propose three different robust formulations representing these cases. To the best of our knowledge, robust allocation has not previously been considered for the sampling allocation problem.
The first robust formulation for linear problems was proposed by Soyster (1973); Bertsimas and Sim (2004) proposed a less conservative robust formulation for linear problems. We study these formulations and extend them to the nonlinear sampling allocation problem. Since it is unlikely that all stratum-specific variances and costs are uncertain at once, the robust formulations are designed such that the number of uncertain strata can be selected, which we refer to as the level of uncertainty. We prove that an upper bound on the probability of violating the nonlinear constraints can be calculated before solving the robust optimization problem. We compute robust allocations for various kinds of datasets and perform multiple experiments to assess the quality of the robust allocations and compare them with existing allocation techniques.
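As a rough illustration of why uncertain stratum variances matter for allocation (a generic sketch, not the Soyster- or Bertsimas/Sim-style formulations developed in the thesis), the following compares a classical Neyman allocation under nominal variances with a naive worst-case allocation in which every stratum standard deviation is only known to lie in an interval:

```python
# Sketch: Neyman allocation vs. a conservative worst-case allocation when
# stratum standard deviations are only known up to an interval. The stratum
# sizes and bounds below are invented for illustration.

def neyman_allocation(n, N_h, S_h):
    """Allocate total sample size n across strata proportional to N_h * S_h."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    return [n * w / total for w in weights]

N_h = [5000, 3000, 2000]       # stratum population sizes (hypothetical)
S_lo = [10.0, 20.0, 35.0]      # lower bounds on stratum std. deviations
S_hi = [14.0, 30.0, 45.0]      # upper bounds

nominal = neyman_allocation(1000, N_h, [(a + b) / 2 for a, b in zip(S_lo, S_hi)])
worst_case = neyman_allocation(1000, N_h, S_hi)  # hedge against largest variances

print([round(x, 1) for x in nominal])
print([round(x, 1) for x in worst_case])
```

The naive worst-case rule simply plugs in the upper bounds everywhere; the thesis' robust formulations are less conservative because they limit how many strata may attain their worst case simultaneously.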
This dissertation investigates corporate acquisition decisions that represent important corporate development activities for family and non-family firms. The main research objective of this dissertation is to generate insights into the subjective decision-making behavior of corporate decision-makers from family and non-family firms and their weighting of M&A decision criteria during the early pre-acquisition target screening and selection process. The main methodology chosen for the investigation of M&A decision-making preferences and the weighting of M&A decision criteria is a choice-based conjoint analysis. The overall sample of this dissertation consists of 304 decision-makers from 264 private and public family and non-family firms, mainly from Germany and the DACH region. In the first empirical part of the dissertation, the relative importance of strategic, organizational and financial M&A decision criteria for corporate acquirers in acquisition target screening is investigated. In addition, the author uses a cluster analysis to explore whether distinct decision-making patterns exist in acquisition target screening. In the second empirical part, the dissertation explores whether there are differences in investment preferences in acquisition target screening between family and non-family firms and within the group of family firms. With regard to the heterogeneity of family firms, the dissertation generates insights into how family-firm-specific characteristics such as family management, the generational stage of the firm and non-economic goals such as transgenerational control intention influence the weighting of different M&A decision criteria in acquisition target screening. The dissertation contributes to strategic management research, specifically to the M&A literature, and to family business research.
The results of this dissertation generate insights into the weighting of M&A decision-making criteria and facilitate a better understanding of corporate M&A decisions in family and non-family firms. The findings show that decision-making preferences (hence the weighting of M&A decision criteria) are influenced by characteristics of the individual decision-maker, the firm and the environment in which the firm operates.
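The mechanics of a choice-based conjoint analysis can be sketched on simulated data (a hypothetical illustration only; the attribute names, task counts, and the plain gradient-ascent fitting routine are assumptions, not the dissertation's actual design): respondents repeatedly pick one target profile per choice task, and attribute part-worths are recovered with a conditional logit model.

```python
import math
import random

# Hypothetical choice-based conjoint sketch: simulate choice tasks under
# known part-worths, then recover them via conditional logit.
rng = random.Random(7)
TRUE_PARTWORTHS = [1.5, 0.8, 0.3]  # e.g. strategic fit, cultural fit, price

def gumbel():
    # Standard Gumbel noise makes argmax choices follow a logit model.
    return -math.log(-math.log(rng.random() + 1e-12))

def choose(profiles, beta):
    utils = [sum(b * a for b, a in zip(beta, p)) + gumbel() for p in profiles]
    return max(range(len(profiles)), key=lambda i: utils[i])

tasks = []
for _ in range(500):  # 500 choice tasks with 3 random profiles each
    profiles = [[rng.random() for _ in range(3)] for _ in range(3)]
    tasks.append((profiles, choose(profiles, TRUE_PARTWORTHS)))

# Fit the conditional logit by plain gradient ascent on the log-likelihood.
beta = [0.0, 0.0, 0.0]
for _ in range(300):
    grad = [0.0, 0.0, 0.0]
    for profiles, chosen in tasks:
        expu = [math.exp(sum(b * a for b, a in zip(beta, p))) for p in profiles]
        z = sum(expu)
        for k in range(3):
            grad[k] += profiles[chosen][k] - sum(e * p[k] for e, p in zip(expu, profiles)) / z
    beta = [b + 0.005 * g for b, g in zip(beta, grad)]

print([round(b, 2) for b in beta])  # recovered part-worth estimates
```

The relative sizes of the fitted coefficients play the role of the "weighting of M&A decision criteria" discussed above; in the dissertation, these are estimated from real decision-makers rather than simulated ones.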
Computer simulation has become established in a two-fold way: as a tool for planning, analyzing, and optimizing complex systems, but also as a method for the scientific investigation of theories and thus for the generation of knowledge. Generated results often serve as a basis for investment decisions, e.g., road construction and factory planning, or provide evidence for scientific theory-building processes. To ensure the generation of credible and reproducible results, it is indispensable to conduct systematic and methodologically sound simulation studies. A variety of procedure models exist that structure and predetermine the process of a study. As a result, experimenters are often required to repetitively but thoroughly carry out a large number of experiments. Moreover, the process is not sufficiently specified and many important design decisions still have to be made by the experimenter, which might result in an unintentional bias of the results.
To facilitate the conduct of simulation studies and to improve both replicability and reproducibility of the generated results, this thesis proposes a procedure model for carrying out Hypothesis-Driven Simulation Studies, an approach that assists the experimenter during the design, execution, and analysis of simulation experiments. In contrast to existing approaches, a formally specified hypothesis becomes the key element of the study so that each step of the study can be adapted and executed to directly contribute to the verification of the hypothesis. To this end, the FITS language is presented, which enables the specification of hypotheses as assumptions regarding the influence specific input values have on the observable behavior of the model. The proposed procedure model systematically designs relevant simulation experiments, runs, and iterations that must be executed to provide evidence for the verification of the hypothesis. Generated outputs are then aggregated for each defined performance measure to allow for the application of statistical hypothesis testing approaches. Hence, the proposed assistance only requires the experimenter to provide an executable simulation model and a corresponding hypothesis to conduct a sound simulation study. With respect to the implementation of the proposed assistance system, this thesis presents an abstract architecture and provides formal specifications of all required services.
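The final analysis step described above — aggregating replicated outputs per performance measure and applying a statistical test to the hypothesis — can be sketched generically (a toy model and Welch's t statistic; this is not the FITS language or the actual assistance system):

```python
import random
import statistics

# Toy "simulation model": the performance measure (mean waiting time)
# depends on an input parameter. Hypothesis: a higher service_rate
# lowers the mean waiting time. Model and parameters are invented.
def run_model(service_rate, seed):
    rng = random.Random(seed)
    waits = [max(0.0, rng.gauss(10.0 / service_rate, 1.0)) for _ in range(200)]
    return statistics.mean(waits)  # aggregated measure for one replication

# Execute the designed replications for both input values.
slow = [run_model(service_rate=1.0, seed=s) for s in range(30)]
fast = [run_model(service_rate=2.0, seed=100 + s) for s in range(30)]

# Welch's t statistic on the aggregated performance measure.
m1, m2 = statistics.mean(slow), statistics.mean(fast)
v1, v2 = statistics.variance(slow), statistics.variance(fast)
t = (m1 - m2) / ((v1 / len(slow) + v2 / len(fast)) ** 0.5)
print(f"t = {t:.2f}")  # a large positive t supports the hypothesis
```

In the thesis, the hypothesis, the set of runs, and the aggregation are derived from the formal FITS specification rather than hand-coded as here.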
To evaluate the concept of Hypothesis-Driven Simulation Studies, two case studies are presented from the manufacturing domain. The introduced approach is applied to a NetLogo simulation model of a four-tiered supply chain. Two scenarios as well as corresponding assumptions about the model behavior are presented to investigate conditions for the occurrence of the bullwhip effect. Starting from the formal specification of the hypothesis, each step of a Hypothesis-Driven Simulation Study is presented in detail, with specific design decisions outlined, and generated intermediate data as well as final results illustrated. With respect to the comparability of the results, a conventional simulation study is conducted which serves as reference data. The approach that is proposed in this thesis is beneficial for both practitioners and scientists. The presented assistance system allows for a simpler and less laborious execution of simulation experiments while ensuring the efficient generation of credible results.
Hypothalamic-pituitary-adrenal (HPA) axis-related genetic variants influence the stress response
(2019)
The physiological stress system includes the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic-adrenal-medullary system (SAM). Parameters representing these systems such as cortisol, blood pressure or heart rate define the physiological reaction in response to a stressor. The main objective of the studies described in this thesis was to understand the role of the HPA-related genetic factors in these two systems. Genetic factors represent one of the components causing individual variations in physiological stress parameters. Five genes involved in the functioning of the HPA axis regarding stress responses are examined in this thesis. They are: corticotropin-releasing hormone (CRH), the glucocorticoid receptor (GR), the mineralocorticoid receptor (MR), the 5-hydroxytryptamine-transporter-linked polymorphic region (5-HTTLPR) in the serotonin transporter (5-HTT) and the brain-derived neurotrophic factor (BDNF) gene. Two hundred thirty-two healthy participants were genotyped. The influence of genetic factors on physiological parameters, such as post-awakening cortisol and blood pressure, was assessed, as well as the influence of genetic factors on stress reactivity in response to a socially evaluated cold pressor test (SeCPT). Three studies tested the HPA-related genes, each at a different level. The first study examined the influences of genotypes and haplotypes of these five genes on physiological as well as psychological stress indicators (Chapter 2). The second study examined the effects of GR variants (genotypes and haplotypes) and promoter methylation level on both the SAM system and the HPA axis stress reactivity (Chapter 3). The third study comprised the characterization of CRH promoter haplotypes in an in-vitro study and the association of the CRH promoter with stress indicators in vivo (Chapter 4).
Background
In light of the current biodiversity crisis, DNA barcoding is developing into an essential tool to quantify state shifts in global ecosystems. Current barcoding protocols often rely on short amplicon sequences, which yield accurate identification of biological entities in a community but provide limited phylogenetic resolution across broad taxonomic scales. However, the phylogenetic structure of communities is an essential component of biodiversity. Consequently, a barcoding approach is required that unites robust taxonomic assignment power and high phylogenetic utility. A possible solution is offered by sequencing long ribosomal DNA (rDNA) amplicons on the MinION platform (Oxford Nanopore Technologies).
Findings
Using a dataset of various animal and plant species, with a focus on arthropods, we assemble a pipeline for long rDNA barcode analysis and introduce new software (MiniBar) to demultiplex dual indexed Nanopore reads. We find excellent phylogenetic and taxonomic resolution offered by long rDNA sequences across broad taxonomic scales. We highlight the simplicity of our approach by field barcoding with a miniaturized, mobile laboratory in a remote rainforest. We also test the utility of long rDNA amplicons for the analysis of community diversity through metabarcoding and find that they recover highly skewed diversity estimates.
Conclusions
Sequencing dual indexed, long rDNA amplicons on the MinION platform is a straightforward, cost-effective, portable, and universal approach for eukaryote DNA barcoding. Although bulk community analyses using long-amplicon approaches may introduce biases, the long rDNA amplicon approach represents a powerful tool for enabling the accurate recovery of taxonomic and phylogenetic diversity across biological communities.
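The dual-index demultiplexing step mentioned above — assigning each read to a sample by the index pair at its ends — can be sketched in a few lines (an exact-match toy version with invented index sequences; MiniBar itself additionally tolerates the higher error rates of Nanopore reads):

```python
# Sketch of dual-index demultiplexing: a read is assigned to a sample only
# if the expected forward index appears at the 5' end and the reverse index
# at the 3' end. Index sequences and sample names are hypothetical.

SAMPLES = {  # (forward_index, reverse_index) -> sample name
    ("ACGT", "TTAG"): "sample_A",
    ("GGCA", "CATG"): "sample_B",
}

def demultiplex(read):
    for (fwd, rev), sample in SAMPLES.items():
        if read.startswith(fwd) and read.endswith(rev):
            return sample
    return None  # unassigned

reads = ["ACGT" + "ATGCATGC" * 5 + "TTAG",
         "GGCA" + "TTTTCCCC" * 5 + "CATG",
         "AAAA" + "ATGC" * 5 + "GGGG"]
print([demultiplex(r) for r in reads])
```

A production demultiplexer would search a window near each read end and allow a bounded edit distance instead of exact prefix/suffix matches.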
We consider a linear regression model for which we assume that some of the observed variables are irrelevant for the prediction. Including the wrong variables in the statistical model can either lead to the problem of having too little information to properly estimate the statistic of interest, or having too much information and consequently describing fictitious connections. This thesis considers discrete optimization to conduct a variable selection. In light of this, the subset selection regression method is analyzed. The approach gained a lot of interest in recent years due to its promising predictive performance. A major challenge associated with the subset selection regression is the computational difficulty. In this thesis, we propose several improvements for the efficiency of the method. Novel bounds on the coefficients of the subset selection regression are developed, which help to tighten the relaxation of the associated mixed-integer program, which relies on a Big-M formulation. Moreover, a novel mixed-integer linear formulation for the subset selection regression based on a bilevel optimization reformulation is proposed. Finally, it is shown that the perspective formulation of the subset selection regression is equivalent to a state-of-the-art binary formulation. We use this insight to develop novel bounds for the subset selection regression problem, which prove to be highly effective in combination with the proposed linear formulation.
In the second part of this thesis, we examine the statistical conception of the subset selection regression and conclude that it is misaligned with its intention. The subset selection regression uses the training error to decide which variables to select, i.e., it conducts the validation on the training data, which is often not a good estimate of the prediction error; hence, it requires a predetermined cardinality bound. Instead, we propose to select variables with respect to the cross-validation value. The process is formulated as a mixed-integer program, with the sparsity itself becoming part of the optimization. Usually, cross-validation is used to select the best model out of a few options; with the proposed program, the best model out of all possible models is selected. Since the cross-validation value is a much better estimate of the prediction error, the model can select the best sparsity itself.
The thesis concludes with an extensive simulation study which provides evidence that discrete optimization can be used to produce highly valuable predictive models, with the cross-validation subset selection regression almost always producing the best results.
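The idea of selecting the subset by cross-validation rather than training error can be illustrated on a tiny scale, where every subset can be enumerated instead of solved for via a mixed-integer program (a brute-force sketch on synthetic data, not the thesis' formulation):

```python
import itertools
import random

# Brute-force cross-validated subset selection: enumerate every feature
# subset, score each by K-fold CV error, and keep the best. Exhaustive
# enumeration is only feasible for small p; the thesis handles this at
# scale via mixed-integer optimization.

def ols_fit(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    p = len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(len(X))) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(p)]
    for col in range(p):  # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return beta

def cv_error(X, y, subset, folds=5):
    """Mean squared K-fold cross-validation error for one feature subset."""
    n, err = len(y), 0.0
    for f in range(folds):
        train = [i for i in range(n) if i % folds != f]
        Xt = [[X[i][j] for j in subset] for i in train]
        beta = ols_fit(Xt, [y[i] for i in train])
        for i in range(f, n, folds):  # held-out fold
            pred = sum(beta[k] * X[i][j] for k, j in enumerate(subset))
            err += (y[i] - pred) ** 2
    return err / n

rng = random.Random(0)
n, p = 60, 4
X = [[rng.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * x[0] - 1.0 * x[2] + rng.gauss(0, 0.3) for x in X]  # only 0 and 2 matter

best = min((s for k in range(1, p + 1) for s in itertools.combinations(range(p), k)),
           key=lambda s: cv_error(X, y, s))
print(best)
```

Note that no cardinality bound is imposed: the CV criterion itself decides the sparsity, which is exactly the point argued above.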
The submitted dissertation is titled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for estimating aggregate-specific indicators at high geographic or contextual resolution from small samples, a problem commonly treated in the literature under the heading Small Area Estimation. The core of the thesis is an analysis of the effects of regularized parameter estimation in the regression models commonly used for small area estimation. The analysis is primarily theoretical: the statistical properties of these estimation procedures are mathematically characterized and proven. In addition, the results are illustrated by numerical simulations and critically assessed against the background of empirical applications. The dissertation is organized into three parts. Each part addresses an individual methodological problem in the context of small area estimation that can be solved by regularized estimation. Each problem is briefly introduced below, along with the benefit that regularization provides.
The first problem is small area estimation in the presence of unobserved measurement errors. Regression models typically describe endogenous variables in terms of statistically related exogenous variables. Such a description postulates a functional relationship between the variables, characterized by a set of model parameters that must be estimated from observed realizations of the respective variables. If the observations are contaminated by measurement errors, however, the estimation process yields biased results, and small area estimates derived from them are not reliable. Methodological adjustments for this exist in the literature, but they usually require restrictive assumptions about the measurement error distribution. The dissertation proves that, in this context, regularization is equivalent to an estimation that is robust against measurement errors, regardless of the measurement error distribution. This equivalence is then used to derive robust variants of well-known small area models. For each model, an algorithm for robust parameter estimation is constructed. In addition, a new approach is developed that quantifies the uncertainty of small area estimates in the presence of unobserved measurement errors. It is further shown that this form of robust estimation has the desirable property of statistical consistency.
The second problem is small area estimation from data sets containing auxiliary variables at different levels of resolution. Regression models for small area estimation are usually specified either for person-level observations (unit level) or for aggregate-level observations (area level). Given steadily growing data availability, however, situations in which data are available at both levels are increasingly common. This holds great potential for small area estimation, since new multi-level models with high explanatory power can be constructed. From a methodological point of view, however, linking the levels is complicated: central steps of statistical inference, such as variable selection and parameter estimation, must be carried out on both levels simultaneously, and hardly any generally applicable methods for this exist in the literature. The dissertation shows that using level-specific regularization terms in the modelling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that efficiently exploits the information from all levels under adaptive regularization. In addition, parametric procedures for assessing the uncertainty of estimates produced by this method are presented. Building on this, it is proven that the developed approach, given an adequate regularization term, is consistent in both estimation and variable selection.
The third problem is small area estimation of proportions under strong distributional dependencies within the covariates. Such dependencies are present when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity); the literature also subsumes under this term situations in which several covariates are strongly correlated (quasi-multicollinearity). If a regression model is specified on such data, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting small area estimates are inaccurate. The effect is particularly strong when the quantity to be modelled is non-linear, such as a proportion, because the underlying likelihood function then no longer has a closed form and must be approximated. The dissertation shows that using an L2 regularization significantly stabilizes the estimation process in this context. Using two non-linear small area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for measuring the uncertainty of estimates obtained in this way are described.
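The stabilizing effect of an L2 penalty under (quasi-)multicollinearity can be shown with a generic linear example (this is only an illustration of the underlying phenomenon, not the regularized quasi-likelihood algorithm of the dissertation):

```python
import random

# With two nearly collinear covariates, OLS coefficients are unstable,
# while an L2 (ridge) penalty shrinks them toward a stable solution.
# Data-generating process and penalty value are invented for illustration.

def fit(X, y, lam=0.0):
    """Solve (X'X + lam*I) beta = X'y for two covariates in closed form."""
    a = sum(x[0] * x[0] for x in X) + lam
    b = sum(x[0] * x[1] for x in X)
    d = sum(x[1] * x[1] for x in X) + lam
    r0 = sum(x[0] * yi for x, yi in zip(X, y))
    r1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return [(d * r0 - b * r1) / det, (a * r1 - b * r0) / det]

rng = random.Random(1)
x1 = [rng.gauss(0, 1) for _ in range(100)]
x2 = [v + rng.gauss(0, 1e-4) for v in x1]      # quasi-multicollinear copy
X = list(zip(x1, x2))
y = [a + b + rng.gauss(0, 0.1) for a, b in X]  # true coefficients are 1 and 1

ols = fit(X, y)
ridge = fit(X, y, lam=1.0)
print("OLS:  ", [round(c, 2) for c in ols])    # erratic under near-collinearity
print("ridge:", [round(c, 2) for c in ridge])  # both close to the stable value 1
```

Only the sum of the two OLS coefficients is well determined; their split between the collinear covariates is driven by noise, which the penalty suppresses.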
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the small area estimation literature. The procedures developed here are robust and versatile, which makes them helpful tools for empirical data analysis.
This dissertation deals with consistent estimates in household surveys. Household surveys are often drawn via cluster sampling, with households sampled at the first stage and persons selected at the second stage. The collected data provide information for estimation at both the person and the household level. However, consistent estimates are desirable in the sense that the estimated household-level totals should coincide with the estimated totals obtained at the person-level. Current practice in statistical offices is to use integrated weighting. In this approach consistent estimates are guaranteed by equal weights for all persons within a household and the household itself. However, due to the forced equality of weights, the individual patterns of persons are lost and the heterogeneity within households is not taken into account. In order to avoid the negative consequences of integrated weighting, we propose alternative weighting methods in the first part of this dissertation that ensure both consistent estimates and individual person weights within a household. The underlying idea is to limit the consistency conditions to variables that emerge in both the personal and household data sets. These common variables are included in the person- and household-level estimator as additional auxiliary variables. This achieves consistency more directly and only for the relevant variables, rather than indirectly by forcing equal weights on all persons within a household. Further decisive advantages of the proposed alternative weighting methods are that original individual rather than the constructed aggregated auxiliaries are utilized and that the variable selection process is more flexible because different auxiliary variables can be incorporated in the person-level estimator than in the household-level estimator.
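The common idea behind such calibration-based weighting — adjusting design weights as little as possible so that weighted sample totals of auxiliary variables reproduce known population totals — can be sketched for the simplest case of one auxiliary variable (a generic linear calibration, not the specific person- and household-level estimators proposed here):

```python
# Generic linear (chi-square distance) calibration: adjust design weights
# d_i so that the weighted total of an auxiliary x matches its known
# population total T_x. All numbers below are invented for illustration.

def calibrate(d, x, T_x):
    lam = (T_x - sum(di * xi for di, xi in zip(d, x))) / \
          sum(di * xi * xi for di, xi in zip(d, x))
    return [di * (1 + lam * xi) for di, xi in zip(d, x)]

d = [10.0] * 5                 # design weights: each unit represents 10
x = [1.0, 2.0, 2.0, 3.0, 4.0]  # auxiliary variable (e.g. household size)
T_x = 130.0                    # known population total of x

w = calibrate(d, x, T_x)
# The calibrated weights reproduce the auxiliary total:
print(sum(wi * xi for wi, xi in zip(w, x)))
```

In the dissertation's setting, the auxiliary vector includes the common person/household variables, so that the consistency condition is enforced directly for exactly those variables instead of forcing all weights within a household to be equal.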
In the second part of this dissertation, the variances of a person-level GREG estimator and an integrated estimator are compared in order to quantify the effects of the consistency requirements in the integrated weighting approach. One of the challenges is that the estimators to be compared are of different dimensions. The proposed solution is to decompose the variance of the integrated estimator into the variance of a reduced GREG estimator, whose underlying model is of the same dimensions as the person-level GREG estimator, and add a constructed term that captures the effects disregarded by the reduced model. Subsequently, further fields of application for the derived decomposition are proposed such as the variable selection process in the field of econometrics or survey statistics.
Understanding the mechanisms that shape access to the fisheries ecosystem service in Tsokomey, Accra
(2019)
Questions of access to ecosystem services remain largely unaddressed. Yet, in the coming decades, addressing access to services and securing them for the livelihoods and well-being of people will likely gain importance, especially to guide corresponding policies at the local scale. Through a qualitative approach, this paper addresses the mechanisms that shape access to the fisheries ecosystem service in Accra, Ghana. The analysis uses a framework that focuses on access to land, tools and technology, knowledge and information, capital and credit, as well as labor. This research reveals how access is organized across the different categories of this framework and how people’s well-being is shaped. Moreover, it helps to further our understanding of what regulates access to ecosystem services and how to address future shocks and capacity in terms of the production of ecosystem services.