In this thesis, we study the sampling allocation problem of survey statistics under uncertainty. The stratum-specific variances are generally not known precisely, and we have no information about the distribution of this uncertainty. The cost of interviewing a person in a stratum is also a highly uncertain parameter, since people are sometimes unavailable for the interview. We propose robust allocations to deal with the uncertainty in both the stratum-specific variances and the costs. In practice, however, it may also be the case that only the variances or only the costs are uncertain, so we propose three different robust formulations covering these cases. To the best of our knowledge, robust allocation has not previously been considered for the sampling allocation problem.
The first robust formulation for linear problems was proposed by Soyster (1973); Bertsimas and Sim (2004) proposed a less conservative robust formulation for linear problems. We study these formulations and extend them to the nonlinear sampling allocation problem. Since it is unlikely that all stratum-specific variances and costs are uncertain at once, the robust formulations are constructed so that the number of uncertain strata, which we refer to as the level of uncertainty, can be selected. We prove that an upper bound on the probability of violating the nonlinear constraints can be calculated before solving the robust optimization problem. We consider various kinds of datasets and compute robust allocations, and we perform multiple experiments to assess the quality of the robust allocations and compare them with existing allocation techniques.
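As a rough illustration of the idea (not the formulation developed in the thesis), the following sketch computes a cost-constrained optimal allocation once with nominal parameters and once with a crude Soyster-style worst case in which a chosen number Gamma of strata, the level of uncertainty, take their upper variance and cost bounds; all numbers are made up.

```python
import numpy as np

def allocation(N, S, c, budget):
    # minimise sum N_h^2 S_h^2 / n_h  subject to  sum c_h n_h = budget
    k = N * S / np.sqrt(c)
    return budget * k / np.sum(N * S * np.sqrt(c))

N  = np.array([5000, 3000, 2000])        # stratum sizes
S  = np.array([12.0, 8.0, 20.0])         # nominal stratum standard deviations
dS = np.array([3.0, 2.0, 6.0])           # assumed deviation bounds on S
c  = np.array([4.0, 6.0, 9.0])           # nominal interview costs per unit
dc = np.array([1.0, 2.0, 3.0])           # assumed deviation bounds on c
budget, Gamma = 20000.0, 2               # Gamma = number of uncertain strata

# crude worst case: let the Gamma strata with the largest N_h * dS_h deviate
worst = np.argsort(-N * dS)[:Gamma]
S_w, c_w = S.copy(), c.copy()
S_w[worst] += dS[worst]
c_w[worst] += dc[worst]

print("nominal allocation:", np.round(allocation(N, S, c, budget), 1))
print("robust  allocation:", np.round(allocation(N, S_w, c_w, budget), 1))
```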
This dissertation deals with consistent estimates in household surveys. Household surveys are often drawn via cluster sampling, with households sampled at the first stage and persons selected at the second stage. The collected data provide information for estimation at both the person and the household level. Consistent estimates are desirable in the sense that the estimated household-level totals should coincide with the corresponding totals estimated at the person level. Current practice in statistical offices is to use integrated weighting. In this approach, consistent estimates are guaranteed by assigning equal weights to all persons within a household and to the household itself. However, due to the forced equality of weights, the individual patterns of persons are lost and the heterogeneity within households is not taken into account. In order to avoid these negative consequences of integrated weighting, the first part of this dissertation proposes alternative weighting methods that ensure both consistent estimates and individual person weights within a household. The underlying idea is to limit the consistency conditions to variables that appear in both the person-level and the household-level data sets. These common variables are included in the person- and household-level estimators as additional auxiliary variables. This achieves consistency directly and only for the relevant variables, rather than indirectly by forcing equal weights on all persons within a household. Further decisive advantages of the proposed alternative weighting methods are that the original individual auxiliaries are used rather than constructed aggregated ones, and that the variable selection process is more flexible because different auxiliary variables can be incorporated in the person-level estimator than in the household-level estimator.
In the second part of this dissertation, the variances of a person-level GREG estimator and of an integrated estimator are compared in order to quantify the effects of the consistency requirements in the integrated weighting approach. One of the challenges is that the estimators to be compared are of different dimensions. The proposed solution is to decompose the variance of the integrated estimator into the variance of a reduced GREG estimator, whose underlying model has the same dimension as that of the person-level GREG estimator, plus a constructed term that captures the effects disregarded by the reduced model. Subsequently, further fields of application for the derived decomposition are proposed, such as the variable selection process in econometrics or survey statistics.
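Schematically, with notation assumed here rather than taken from the thesis, the decomposition reads

```latex
\operatorname{Var}\bigl(\hat{t}_{\mathrm{int}}\bigr)
  \;=\; \operatorname{Var}\bigl(\hat{t}^{\,\mathrm{red}}_{\mathrm{GREG}}\bigr) \;+\; \Delta ,
```

where \hat{t}_{\mathrm{int}} denotes the integrated estimator, \hat{t}^{\,\mathrm{red}}_{\mathrm{GREG}} the reduced GREG estimator of matching dimension, and \Delta the constructed term capturing the effects disregarded by the reduced model.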
Estimation, and therefore prediction, in both traditional statistics and machine learning often encounters problems when carried out on survey data, i.e. on data gathered from a random subset of a finite population. In addition to the stochastic generation of the data in the finite population (based on a superpopulation model), the subsetting represents a second randomization process and adds further noise to the estimation. The character and impact of this additional noise on the estimation procedure depend on the specific probability law of the subsetting, i.e. the survey design. Especially when the design is complex or the population data are not generated by a Gaussian distribution, established methods must be rethought. Both phenomena can be found in business surveys, and their combined occurrence poses challenges to the estimation.
This work introduces selected topics linked to relevant use cases of business surveys and discusses the role of the survey design therein. First, consider micro-econometrics using business surveys. Regression analysis under the peculiarities of non-normal data and complex survey designs is discussed. The focus lies on mixed models, which can capture unobserved heterogeneity, e.g. between economic sectors, when the dependent variable is not conditionally normally distributed. An algorithm for survey-weighted model estimation in this setting is provided and applied to business data.
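A heavily simplified sketch of how design weights enter a pseudo-likelihood for a non-normal outcome is given below; it uses a plain Poisson regression rather than the mixed models treated in the thesis, and all data are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])       # intercept + one covariate
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))           # non-normal (count) outcome
w = rng.uniform(1, 10, size=n)                               # made-up survey design weights

def neg_pseudo_loglik(beta):
    eta = X @ beta
    # weighted Poisson log-likelihood (constant terms dropped)
    return -np.sum(w * (y * eta - np.exp(eta)))

beta_hat = minimize(neg_pseudo_loglik, x0=np.zeros(2), method="BFGS").x
print("survey-weighted estimate:", np.round(beta_hat, 3))
```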
Second, in official statistics, the classical sampling randomization and estimators for finite population totals are relevant. Variance estimation for estimators of (finite) population totals plays a major role in this framework in order to judge the reliability of survey data. When the survey design is complex and estimated totals are required for a large number of variables, generalized variance functions are a popular tool for variance estimation. They make it possible to circumvent cumbersome theoretical design-based variance formulae or computer-intensive resampling. A synthesis of the superpopulation-based motivation and the survey framework is elaborated. To the author's knowledge, such a synthesis is studied here for the first time, both theoretically and empirically.
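A common GVF specification of this kind models the relative variance of an estimated total as a simple function of the total itself; the sketch below (with made-up numbers) fits the textbook model relvar = a + b/total by least squares and uses it to predict a standard error.

```python
import numpy as np

totals  = np.array([1.2e5, 4.5e5, 9.8e5, 2.3e6, 7.7e6])    # estimated totals for several items
relvars = np.array([0.031, 0.012, 0.006, 0.0031, 0.0012])  # direct relative variance estimates

# fit relvar = a + b / total across items
A = np.column_stack([np.ones_like(totals), 1.0 / totals])
a, b = np.linalg.lstsq(A, relvars, rcond=None)[0]

# predict the relative variance, and hence a standard error, for a new estimated total
t_new = 5.0e5
relvar_new = a + b / t_new
print("predicted SE:", t_new * np.sqrt(relvar_new))
```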
Third, the self-organizing map, an unsupervised machine learning algorithm for data visualization, clustering and even probability estimation, is introduced. A link to Markov random fields is outlined, which to the author's knowledge has not been established before, and a density estimator is derived. The latter is evaluated in a Monte Carlo simulation and then applied to real-world business data.
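For intuition, a minimal SOM training loop on a two-dimensional grid might look as follows; the link to Markov random fields and the derived density estimator are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 3))                                 # made-up 3-dimensional data
grid = np.array([(i, j) for i in range(10) for j in range(10)])   # 10x10 map coordinates
codebook = rng.normal(size=(100, 3))                              # prototype vectors

for t, x in enumerate(data):
    lr    = 0.5 * np.exp(-t / 500)                        # decaying learning rate
    sigma = 3.0 * np.exp(-t / 500)                        # decaying neighbourhood width
    bmu = np.argmin(np.sum((codebook - x) ** 2, axis=1))  # best matching unit
    dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)       # squared grid distances to the BMU
    h = np.exp(-dist2 / (2 * sigma ** 2))                 # neighbourhood kernel
    codebook += lr * h[:, None] * (x - codebook)          # pull prototypes towards x
```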
The dissertation presented here is entitled Regularization Methods for Statistical Modelling in Small Area Estimation. It studies the use of regularized regression techniques for the estimation of aggregate-specific indicators at high geographical or contextual resolution on the basis of small samples. The latter is frequently discussed in the literature under the term Small Area Estimation. The core of the thesis is to analyse the effects of regularized parameter estimation in regression models that are commonly used for Small Area Estimation. The analysis is carried out primarily on a theoretical level by mathematically characterizing and proving the statistical properties of these estimation procedures. In addition, the results are illustrated by numerical simulations and critically placed in the context of empirical applications. The dissertation is divided into three parts. Each part addresses an individual methodological problem in the context of Small Area Estimation that can be solved by the use of regularized estimation procedures. In the following, each problem is briefly introduced and the benefit of regularization is explained.
The first problem is Small Area Estimation in the presence of unobserved measurement errors. In regression models, endogenous variables are typically described in terms of statistically related exogenous variables. For such a description, a functional relationship between the variables is postulated, characterized by a set of model parameters. This set has to be estimated from observed realizations of the respective variables. If the observations are distorted by measurement errors, however, the estimation process yields biased results. If Small Area Estimation is subsequently performed, the estimated indicators are not reliable. Methodological adjustments for this exist in the literature, but they usually require restrictive assumptions about the measurement error distribution. In this dissertation it is proven that, in this context, regularization is equivalent to an estimation that is robust against measurement errors, regardless of the measurement error distribution. This equivalence is then used to derive robust variants of well-known Small Area models. For each model, an algorithm for robust parameter estimation is constructed. In addition, a new approach is developed that quantifies the uncertainty of Small Area estimates in the presence of unobserved measurement errors. It is further shown that this form of robust estimation has the desirable property of statistical consistency.
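The described equivalence is reminiscent of the classical result for least squares (stated here in generic notation, not necessarily the exact formulation used in the dissertation) that worst-case robustness against bounded perturbations of the design matrix coincides with norm regularization:

```latex
\min_{\beta}\;\max_{\lVert \Delta \rVert \le \rho}\;
  \bigl\lVert y - (X + \Delta)\beta \bigr\rVert_2
\;=\;
\min_{\beta}\;\Bigl(\bigl\lVert y - X\beta \bigr\rVert_2 + \rho\,\lVert \beta \rVert_2\Bigr).
```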
The second problem is Small Area Estimation based on data sets that contain auxiliary variables of different resolutions. Regression models for Small Area Estimation are usually specified either for person-level observations (unit level) or for aggregate-level observations (area level). Against the background of steadily growing data availability, however, situations in which data are available on both levels are becoming more frequent. This holds great potential for Small Area Estimation, since new multi-level models with large explanatory power can be constructed. From a methodological point of view, however, connecting the levels is complicated. Central steps of the inference, such as variable selection and parameter estimation, have to be carried out on both levels simultaneously. Hardly any generally applicable methods exist in the literature for this. In this dissertation it is shown that the use of level-specific regularization terms in the modelling solves these problems. A new stochastic gradient descent algorithm for parameter estimation is developed that uses the information from all levels efficiently under adaptive regularization. In addition, parametric procedures are presented for assessing the uncertainty of estimates produced by this procedure. Building on this, it is proven that the developed approach is consistent in both estimation and variable selection when an adequate regularization term is used.
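A toy version of stochastic gradient descent with level-specific penalties (here, proximal L1 steps with different strengths for unit-level and area-level coefficients; not the algorithm developed in the dissertation) could look like this:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_u, p_a = 2000, 5, 3
Xu = rng.normal(size=(n, p_u))             # unit-level auxiliaries
Xa = rng.normal(size=(n, p_a))             # area-level auxiliaries (repeated per unit)
y = Xu @ np.array([1.0, 0, 0.5, 0, 0]) + Xa @ np.array([0, 2.0, 0]) + rng.normal(size=n)

X = np.column_stack([Xu, Xa])
beta = np.zeros(p_u + p_a)
lam = np.r_[np.full(p_u, 0.02), np.full(p_a, 0.005)]   # level-specific penalty strengths
step = 0.01

for epoch in range(20):
    for i in rng.permutation(n):
        grad = -(y[i] - X[i] @ beta) * X[i]            # stochastic gradient of squared loss
        beta -= step * grad
        # proximal (soft-thresholding) step with level-specific L1 penalties
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)

print(np.round(beta, 2))
```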
The third problem is Small Area Estimation of proportions under strong distributional dependencies within the covariates. Such dependencies are present when one exogenous variable can be represented as a linear transformation of another exogenous variable (multicollinearity). In the literature, this also covers situations in which several covariates are strongly correlated (quasi-multicollinearity). If a regression model is specified on such a data basis, the individual contributions of the exogenous variables to the functional description of the endogenous variable cannot be identified. Parameter estimation is therefore subject to great uncertainty, and the resulting Small Area estimates are imprecise. The effect is particularly pronounced when the quantity to be modelled is non-linear, such as a proportion, because the underlying likelihood function then no longer has a closed form and must be approximated. In this dissertation it is shown that the use of an L2 regularization significantly stabilizes the estimation process in this context. Using two non-linear Small Area models as examples, a new algorithm is developed that extends and improves the well-known quasi-likelihood approach (based on the Laplace approximation) through regularization. In addition, parametric procedures for measuring the uncertainty of estimates obtained in this way are described.
Against the background of the theoretical and numerical results, the dissertation demonstrates that regularization methods are a valuable addition to the Small Area Estimation literature. The procedures developed here are robust and versatile, which makes them helpful tools for empirical data analysis.
The Eurosystem's Household Finance and Consumption Survey (HFCS) collects micro data on private households' balance sheets, income and consumption. It is a stylised fact that wealth is unequally distributed and that the wealthiest own a large share of total wealth. For sample surveys that aim at measuring wealth and its distribution, this is a considerable problem. To overcome it, some of the country surveys under the HFCS umbrella try to sample a disproportionately large share of households that are likely to be wealthy, a technique referred to as oversampling. Ignoring such complex survey designs in the estimation of regression models can lead to severe problems. This thesis first illustrates such problems using data from the first wave of the HFCS and canonical regression models from the field of household finance, and gives a first guideline for HFCS data users on the use of replicate weight sets for variance estimation with a variant of the bootstrap. A further investigation of the issue requires a design-based Monte Carlo simulation study. To this end, the existing large, close-to-reality synthetic simulation population AMELIA is extended with synthetic wealth data. We discuss different approaches to generating synthetic micro data in the context of extending a synthetic simulation population that was originally based on a different data source, and we propose an additional approach that is suitable for generating highly skewed synthetic micro data in such a setting using a multiply imputed survey data set. After a description of the survey designs employed in the first wave of the HFCS, we construct new survey designs for AMELIA that share core features of the HFCS survey designs. A design-based Monte Carlo simulation study shows that while more conservative approaches to oversampling do not pose problems for the estimation of regression models if sampling weights are properly accounted for, the same does not necessarily hold for more extreme oversampling approaches. This issue should be analysed further in future research.
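The replicate-weight logic itself is simple: re-estimate the statistic with each replicate weight vector and measure the variability across replicates. The sketch below uses simulated data and stand-in replicate weights rather than the HFCS files; the exact variance formula depends on the replication method.

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 400, 200
y = np.exp(rng.normal(10, 1.5, size=n))                    # skewed, wealth-like variable
w = rng.uniform(500, 5000, size=n)                         # main design weights
W_rep = w[:, None] * rng.uniform(0.5, 1.5, size=(n, B))    # stand-in replicate weight sets

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)

theta_hat = weighted_mean(y, w)
theta_rep = np.array([weighted_mean(y, W_rep[:, b]) for b in range(B)])
var_hat = np.mean((theta_rep - theta_hat) ** 2)            # simple replicate variance formula
print(theta_hat, np.sqrt(var_hat))
```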
Official business surveys form the basis for national and regional business statistics and are thus of great importance for analysing the state and performance of the economy. However, both the heterogeneity of business data and their high dynamics pose a particular challenge to the feasibility of sampling and the quality of the resulting estimates. A widely used sampling frame for creating the design of an official business survey is an extract from an official business register. If this frame does not accurately represent the target population, frame errors arise. Amplified by the heterogeneity and dynamics of business populations, these errors can significantly affect the estimation quality and lead to inefficiencies and biases. This dissertation therefore deals with design-based methods for optimising business surveys with respect to different types of frame errors.
First, methods for adjusting the sampling design of business surveys are addressed. These approaches integrate auxiliary information about the expected structures of frame errors into the sampling design. The aim is to increase the number of sampled businesses that are subject to frame errors. The element-specific frame error probability is estimated based on auxiliary information about frame errors observed in previous samples. The approaches discussed consider different types of frame errors and can be incorporated into predefined designs with fixed strata.
As the second main pillar of this work, methods for adjusting weights to correct for frame errors during estimation are developed and investigated. As a result of frame errors, the assumptions under which the original design weights were derived from the sampling design no longer hold. The developed methods correct the design weights taking into account the errors identified for sampled elements. Case-number-based reweighting approaches, on the one hand, attempt to reconstruct the unknown sizes of the individual strata in the target population. In weight smoothing methods, on the other hand, design weights are modelled and smoothed as a function of target or auxiliary variables. This serves to avoid inefficiencies in the estimation due to highly dispersed weights or weak correlations between weights and target variables. In addition, possibilities of correcting frame errors by calibration weighting are elaborated. Especially when the sampling frame shows over- and/or undercoverage, the inclusion of external auxiliary information can provide a significant improvement in estimation quality. For those methods whose quality cannot be measured using standard procedures, a variance estimation procedure based on a rescaling bootstrap is proposed. This enables an assessment of the estimation quality when the methods are used in practice.
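As an illustration of the calibration idea in its simplest, linear form (generic GREG-type calibration, not the specific corrections developed in this work), design weights can be adjusted so that weighted auxiliary totals hit known benchmarks:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
X = np.column_stack([np.ones(n), rng.normal(50, 10, size=n)])  # auxiliaries (count + one variable)
d = np.full(n, 100.0)                                          # original design weights
t_x = np.array([35000.0, 1.78e6])                              # known population totals (made up)

# solve the calibration equations for w_i = d_i * (1 + x_i' lambda)
lam = np.linalg.solve(X.T @ (d[:, None] * X), t_x - X.T @ d)
w = d * (1.0 + X @ lam)

print(np.allclose(X.T @ w, t_x))   # weighted auxiliary totals now match the benchmarks
```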
In two extensive simulation studies, the methods presented in this dissertation are evaluated and compared with each other. First, in an experimental simulation environment, it is assessed which approaches are particularly suitable for different data situations. In a second simulation study, which is based on the structural survey in the services sector, the applicability of the methods in practice is evaluated under realistic conditions.
Survey data can be viewed as incomplete or partially missing from a variety of perspectives and there are different ways of dealing with this kind of data in the prediction and the estimation of economic quantities. In this thesis, we present two selected research contexts in which the prediction or estimation of economic quantities is examined under incomplete survey data.
These contexts are, first, the investigation of composite estimators in the German Microcensus (Chapters 3 and 4) and, second, extensions of multivariate Fay-Herriot (MFH) models (Chapters 5 and 6), which are applied to small area problems.
Composite estimators are estimation methods that take into account the sample overlap in rotating panel surveys such as the German Microcensus in order to stabilise the estimation of the statistics of interest (e.g. employment statistics). Due to the partial sample overlaps, information from previous samples is only available for some of the respondents, so the data are partially missing.
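A simple composite estimator of this kind can be written, in generic notation assumed here for illustration, as

```latex
\hat{Y}^{C}_{t} \;=\; (1-K)\,\hat{Y}_{t} \;+\; K\bigl(\hat{Y}^{C}_{t-1} + \hat{\Delta}_{t}\bigr),
\qquad 0 \le K \le 1,
```

where \hat{Y}_{t} is the direct estimate in the current wave, \hat{\Delta}_{t} the change estimated from the overlapping part of the sample, and K a weighting constant.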
MFH models are model-based estimation methods that work with aggregated survey data in order to obtain more precise estimation results for small area problems compared to classical estimation methods. In these models, several variables of interest are modelled simultaneously. The survey estimates of these variables, which are used as input in the MFH models, are often partially missing. If the domains of interest are not explicitly accounted for in a sampling design, the sizes of the samples allocated to them can, by chance, be small. As a result, it can happen that either no estimates can be calculated at all or that the estimated values are not published by statistical offices because their variances are too large.
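In generic notation (assumed here, not taken from the thesis), a multivariate Fay-Herriot model for domain d links the vector of direct survey estimates to the vector of true domain quantities via

```latex
\hat{\boldsymbol{\theta}}_d = \boldsymbol{\theta}_d + \mathbf{e}_d,
\qquad
\boldsymbol{\theta}_d = \mathbf{X}_d\,\boldsymbol{\beta} + \mathbf{u}_d,
\qquad
\mathbf{e}_d \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Psi}_d),\;
\mathbf{u}_d \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_u),
```

where \hat{\boldsymbol{\theta}}_d collects the (possibly partially missing) survey estimates for domain d, \boldsymbol{\Psi}_d is the known sampling covariance matrix and \boldsymbol{\Sigma}_u the covariance matrix of the random effects.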
Non-probability sampling is a topic of growing relevance, especially due to its occurrence in the context of new emerging data sources like web surveys and Big Data.
This thesis addresses statistical challenges arising from non-probability samples, where unknown or uncontrolled sampling mechanisms raise concerns in terms of data quality and representativity.
Various methods to quantify and reduce the potential selectivity and biases of non-probability samples in estimation and inference are discussed. The thesis introduces new forms of prediction and weighting methods, namely
a) semi-parametric artificial neural networks (ANNs) that integrate B-spline layers with optimal knot positioning into the general structure and fitting procedure of artificial neural networks, and
b) calibrated semi-parametric ANNs that determine weights for non-probability samples by integrating an ANN as response model with calibration constraints for totals, covariances and correlations.
Custom-made computational implementations are developed for fitting (calibrated) semi-parametric ANNs by means of stochastic gradient descent, BFGS and sequential quadratic programming algorithms.
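As a heavily simplified stand-in for the response-model weighting in b), the sketch below fits a logistic regression (instead of a calibrated semi-parametric ANN) that separates a simulated non-probability sample from a reference sample and converts the estimated participation propensities into pseudo-weights; no calibration constraints are imposed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
x_np  = rng.normal(0.5, 1.0, size=(1000, 2))   # simulated non-probability sample (selective)
x_ref = rng.normal(0.0, 1.0, size=(1000, 2))   # simulated reference (probability) sample

X = np.vstack([x_np, x_ref])
z = np.r_[np.ones(len(x_np)), np.zeros(len(x_ref))]   # 1 = member of the non-probability sample

model = LogisticRegression().fit(X, z)
p = model.predict_proba(x_np)[:, 1]            # estimated participation propensity
w = (1.0 - p) / p                              # odds-based pseudo-weights (up to scaling)
print(np.round(w[:5], 2))
```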
The performance of all the discussed methods is evaluated and compared for a range of non-probability sampling scenarios in a Monte Carlo simulation study, as well as in an application to a real non-probability sample, the WageIndicator web survey.
Potentials and limitations of the different methods for dealing with the challenges of non-probability sampling under various circumstances are highlighted. It is shown that the best strategy for using non-probability samples heavily depends on the particular selection mechanism, research interest and available auxiliary information.
Nevertheless, the findings show that existing as well as newly proposed methods can be used to ease or even fully counterbalance the issues of non-probability samples and highlight the conditions under which this is possible.
Data fusion is becoming increasingly relevant in official statistics. The aim of a data fusion is to combine two or more data sources using statistical methods in order to analyse characteristics that were not jointly observed in one data source. Record linkage of official data sources via unique identifiers is often not possible due to methodological and legal restrictions. Appropriate data fusion methods are therefore of central importance in order to use the diverse data sources of official statistics more effectively and to analyse different characteristics jointly. However, the literature lacks comprehensive evaluations of which fusion approaches provide promising results for which data constellations. The central aim of this thesis is therefore to evaluate a concrete pool of possible fusion algorithms, comprising classical imputation approaches as well as statistical and machine learning methods, in selected data constellations.
To specify and identify these data contexts, data and imputation-related scenario types of a data fusion are introduced: Explicit scenarios, implicit scenarios and imputation scenarios. From these three scenario types, fusion scenarios that are particularly relevant for official statistics are selected as the basis for the simulations and evaluations. The explicit scenarios are the fulfilment or violation of the Conditional Independence Assumption (CIA) and varying sample sizes of the data to be matched. Both aspects are likely to have a direct, that is, explicit, effect on the performance of different fusion methods. The summed sample size of the data sources to be fused and the scale level of the variable to be imputed are considered as implicit scenarios. Both aspects suggest or exclude the applicability of certain fusion methods due to the nature of the data. The univariate or simultaneous, multivariate imputation solution and the imputation of artificially generated or previously observed values in the case of metric characteristics serve as imputation scenarios.
With regard to the pool of possible fusion algorithms, three classical imputation approaches are considered: Distance Hot Deck (DHD), the Regression Model (RM) and Predictive Mean Matching (PMM). With Decision Trees (DT) and Random Forest (RF), two prominent tree-based methods from the field of statistical learning are discussed in the context of data fusion. However, such prediction methods aim to predict individual values as accurately as possible, which can clash with the primary objective of data fusion, namely the reproduction of joint distributions. In addition, DT and RF only offer univariate imputation solutions and, in the case of metric variables, impute artificially generated values instead of genuinely observed ones. Therefore, Predictive Value Matching (PVM) is introduced as a new, statistical-learning-based nearest neighbour method, which could overcome the distributional disadvantages of DT and RF, offers both a univariate and a multivariate imputation solution and, in addition, imputes real, previously observed values for metric characteristics. Any prediction method can form the basis of the new PVM approach; in this thesis, PVM based on Decision Trees (PVM-DT) and Random Forest (PVM-RF) is considered.
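A simplified, univariate sketch of the PVM idea with a Random Forest (in the spirit of PVM-RF, but not the implementation evaluated in this thesis) is the following: fit the model on the donor data, predict the target for donors and recipients, and impute each recipient with the observed value of the donor whose prediction is closest. All data and sizes below are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
X_donor = rng.normal(size=(1000, 4))                 # common variables, donor study
y_donor = X_donor @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(size=1000)
X_recip = rng.normal(size=(500, 4))                  # common variables, recipient study

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_donor, y_donor)
pred_donor = rf.predict(X_donor)
pred_recip = rf.predict(X_recip)

# nearest-neighbour match on the predicted values, then impute observed donor values
idx = np.abs(pred_recip[:, None] - pred_donor[None, :]).argmin(axis=1)
y_imputed = y_donor[idx]
print(y_imputed[:5])
```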
The underlying fusion methods are investigated in comprehensive simulations and evaluations. The evaluation of the various data fusion techniques focusses on the selected fusion scenarios. The basis for this is formed by two concrete and current use cases of data fusion in official statistics, the fusion of EU-SILC and the Household Budget Survey on the one hand and of the Tax Statistics and the Microcensus on the other. Both use cases show significant differences with regard to different fusion scenarios and thus serve the purpose of covering a variety of data constellations. Simulation designs are developed from both use cases, whereby the explicit scenarios in particular are incorporated into the simulations.
The results show that, under compliance with the CIA, PVM-RF in particular is a promising and universal fusion approach. PVM-RF provides satisfactory results for both categorical and metric variables to be imputed and offers a univariate as well as a multivariate imputation solution, regardless of the scale level. PMM also represents an adequate fusion method, but only for metric characteristics. The results further imply that the application of statistical learning methods is both an opportunity and a risk. In the case of a CIA violation, the potential correlation-related exaggeration effects of DT and RF, and in some cases also of RM, can be useful, whereas the other methods yield poor results. If the CIA is fulfilled, however, there is a risk that the prediction methods RM, DT and RF overestimate correlations. The size ratios of the studies to be fused, in turn, have a rather minor influence on the performance of the fusion methods. This is an important indication that the larger dataset does not necessarily have to serve as the donor study, as has been common practice so far.
The results of the simulations and evaluations provide concrete implications as to which data fusion methods should be used and considered under the selected data and imputation constellations. Science in general and official statistics in particular benefit from these implications. This is because they provide important indications for future data fusion projects in order to assess which specific data fusion method could provide adequate results along the data constellations analysed in this thesis. Furthermore, with PVM this thesis offers a promising methodological innovation for future data fusions and for imputation problems in general.
The publication of statistical databases is subject to legal regulations; for example, national statistical offices are only allowed to publish data if the data cannot be attributed to individuals. Achieving this privacy standard requires anonymizing the data prior to publication. However, data anonymization inevitably leads to a loss of information, which should be kept minimal. In this thesis, we analyze the anonymization method SAFE, which was used in the 2011 German census, and we propose a novel integer-programming-based anonymization method for nominal data.
In the first part of this thesis, we prove that a fundamental variant of the underlying SAFE optimization problem is NP-hard, which justifies the use of heuristic approaches for large data sets. In the second part, we propose a new anonymization method that belongs to the class of microaggregation methods and is specifically designed for nominal data. This microaggregation method replaces rows in a microdata set with representative values to achieve k-anonymity, ensuring that each data row is identical to at least k − 1 other rows. In addition to the overall dissimilarities of the data rows, the method accounts for errors in the resulting frequency tables, which are of high practical interest for nominal data. The method employs a typical two-step structure: the data set is first partitioned into clusters, and all cluster elements are then replaced by representative values to achieve k-anonymity. For the partitioning step, we propose a column generation scheme followed by a heuristic, based on the dual information, to obtain an integer solution. For the aggregation step, we present a mixed-integer problem formulation to find cluster representatives, taking errors in a subset of frequency tables into account. Furthermore, we show a reformulation of the problem as a minimum edge-weighted maximal clique problem in a multipartite graph, which allows for a different perspective on the problem. Moreover, we formulate a mixed-integer program that combines the partitioning and the aggregation step and aims to minimize the sum of chi-squared errors in the frequency tables.
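To make the k-anonymity mechanics concrete, the toy heuristic below greedily groups nominal data rows by Hamming distance into clusters of size at least k and replaces each cluster by its columnwise mode; it is emphatically not the column-generation and mixed-integer approach proposed in this thesis.

```python
from collections import Counter

def microaggregate(rows, k):
    # toy greedy microaggregation of nominal rows into clusters of size >= k
    rows = [tuple(r) for r in rows]
    unassigned = list(range(len(rows)))
    anonymized = [None] * len(rows)
    while len(unassigned) >= k:
        seed = unassigned.pop(0)
        # pick the k-1 remaining rows closest to the seed (Hamming distance)
        nearest = sorted(unassigned,
                         key=lambda j: sum(a != b for a, b in zip(rows[seed], rows[j])))
        cluster = [seed] + nearest[: k - 1]
        unassigned = [j for j in unassigned if j not in cluster]
        # representative value: columnwise mode of the cluster
        rep = tuple(Counter(rows[j][c] for j in cluster).most_common(1)[0][0]
                    for c in range(len(rows[0])))
        for j in cluster:
            anonymized[j] = rep
    for j in unassigned:              # leftover rows join the last cluster's representative
        anonymized[j] = rep
    return anonymized

data = [("m", "A", "urban"), ("m", "A", "rural"), ("f", "B", "urban"),
        ("f", "B", "urban"), ("m", "A", "urban"), ("f", "B", "rural")]
print(microaggregate(data, k=3))
```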
Finally, an experimental study comparing the methods covered or developed in this work shows particularly strong results for the proposed method with respect to relative criteria, while SAFE shows its strength with respect to the maximum absolute error in frequency tables. We conclude that the inclusion of integer programming in the context of data anonymization is a promising direction to reduce the inevitable information loss inherent in anonymization, particularly for nominal data.