48 results for "response propensity"
Search Results
2. Two Sources of Nonsampling Error in Fishing Surveys
- Author
-
Brick, J. Michael, Andrews, William R., Foster, John, and Heitjan, Daniel F., editor
- Published
- 2022
- Full Text
- View/download PDF
3. Data Collection Expert Prior Elicitation in Survey Design: Two Case Studies.
- Author
-
Wu, Shiya, Schouten, Barry, Meijers, Ralph, and Moerbeek, Mirjam
- Subjects
ACQUISITION of data, ELICITATION technique, BAYESIAN analysis
- Abstract
Data collection staff involved in sampling design, monitoring, and analysis of surveys often have a good sense of the response rate that can be expected in a survey, even when the survey is new or conducted at a relatively low frequency. They form expectations of response rates, and subsequently of costs, on an almost continuous basis. Rarely, however, are these expectations formally structured, and they are usually point estimates without any assessment of precision or uncertainty. In recent years, interest in adaptive survey designs has increased. These designs lean heavily on accurate estimates of response rates and costs. In order to account for inaccurate estimates, a Bayesian analysis of survey design parameters is very sensible, and combining the strong intrinsic knowledge of data collection staff with a Bayesian analysis is a natural next step. In this article, prior elicitation is developed for design parameters with the help of data collection staff. The elicitation is applied to two case studies in which surveys underwent a major redesign and direct historic survey data were unavailable. [ABSTRACT FROM AUTHOR]
- Published
- 2022
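The record above does not reproduce the elicitation procedure, but a common minimal sketch of turning an expert's stated response-rate expectation and uncertainty into a prior is to moment-match a Beta distribution. Everything below (the 60% expected rate, the 5-point spread, the function name) is an illustrative assumption, not taken from the case studies:

```python
def beta_from_expert(mean, sd):
    """Moment-match a Beta(a, b) prior to an expert's expected response
    rate (mean) and uncertainty (sd): a + b = mean*(1-mean)/sd**2 - 1."""
    if not 0.0 < mean < 1.0:
        raise ValueError("mean must lie strictly between 0 and 1")
    if sd ** 2 >= mean * (1.0 - mean):
        raise ValueError("sd too large for a valid Beta prior")
    total = mean * (1.0 - mean) / sd ** 2 - 1.0
    return mean * total, (1.0 - mean) * total

# Expert expects a 60% response rate, give or take 5 percentage points.
a, b = beta_from_expert(0.60, 0.05)
print(round(a, 6), round(b, 6))   # elicited shape parameters
print(a / (a + b))                # prior mean recovers the expert's 0.60
```

A prior elicited this way can then be updated with observed response counts, since the Beta family is conjugate to binomial response data.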
4. Missingness, Its Reasons and Treatment
- Author
-
Laaksonen, Seppo
- Published
- 2018
5. Optimizing Response Rates
- Author
-
Brick, J. Michael, Vannette, David L., editor, and Krosnick, Jon A., editor
- Published
- 2018
6. Evidence About the Accuracy of Surveys in the Face of Declining Response Rates
- Author
-
Keeter, Scott, Vannette, David L., editor, and Krosnick, Jon A., editor
- Published
- 2018
7. HOW TO MAKE ESTIMATES WITH COMPENSATION FOR NONRESPONSE IN STATISTICAL ANALYSIS OF CENSUS DATA.
- Author
-
Terek, Milan, Muchova, Eva, and Lesko, Peter
- Subjects
STATISTICS, PROBLEM solving, DATA analysis, CENSUS, STATISTICAL correlation
- Abstract
The paper deals with the problem of nonresponse in a realized census and discusses statistical methods for addressing it. To test the new approach, data from a survey at one university are used. The suggested approach offers more accurate estimates because it compensates for nonresponse, and it makes it possible to formulate broader conclusions based on the census data. The approach is advisable in all surveys in which the cost of realizing a census is practically the same as that of a sample survey and a list of all units of the population is available. [ABSTRACT FROM AUTHOR]
- Published
- 2021
8. The Relationship Between Response Probabilities and Data Quality in Grid Questions
- Author
-
Tobias Gummer, Ruben Bach, Jessica Daikeler, and Stephanie Eckman
- Subjects
response propensity, measurement error, data quality, panel survey, adaptive survey design, Social sciences (General)
- Abstract
Response probabilities are used in adaptive and responsive survey designs to guide data collection efforts, often with the goal of diversifying the sample composition. However, if response probabilities are also correlated with measurement error, this approach could introduce bias into survey data. This study analyzes the relationship between response probabilities and data quality in grid questions. Drawing on data from the probability-based GESIS panel, we found that low-propensity cases produce item nonresponse and nondifferentiated answers more frequently than high-propensity cases. However, this effect was observed only among long-time respondents, not among those who joined more recently. We caution that using adaptive or responsive techniques may increase measurement error while reducing the risk of nonresponse bias.
- Published
- 2021
9. Working with Response Probabilities.
- Author
-
Bethlehem, Jelke
- Subjects
PROBABILITY theory, SURVEYS
- Abstract
Sample surveys are often affected by nonresponse. These surveys have in common that their outcomes depend at least partly on a human decision whether or not to participate. If it were completely clear how this decision mechanism works, estimates could be corrected. An often-used approach is to introduce the concept of the response probability. Of course, these probabilities are a theoretical concept and therefore unknown. The idea is to estimate them using the available data. If good estimates of the response probabilities can be obtained, they can be used to improve estimators of population characteristics. Estimating response probabilities relies heavily on the use of models; an often-used model is the logit model. In the article, this model is compared with the simple linear model. Estimating response-probability models requires the individual values of the auxiliary variables to be available for both the respondents and the nonrespondents of the survey. Unfortunately, this is often not the case. This article explores some approaches for estimating response probabilities that have less heavy data requirements. The estimated response probabilities were also used to measure possible deviations from representativity of the survey response; the indicator used is the coefficient of variation (CV) of the response probabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2020
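As a hedged sketch of the ideas in this abstract (not the article's own code): with a single categorical auxiliary variable the logit model is saturated, so its maximum-likelihood propensity estimates reduce to group response rates, and the coefficient-of-variation indicator can then be computed directly. The groups and response rates below are invented:

```python
def propensities_by_group(records):
    """records: list of (group, responded) pairs. With one categorical
    auxiliary variable the logit model is saturated, so the maximum-
    likelihood response propensities are simply the group response rates."""
    totals, resp = {}, {}
    for g, r in records:
        totals[g] = totals.get(g, 0) + 1
        resp[g] = resp.get(g, 0) + int(r)
    return {g: resp[g] / totals[g] for g in totals}

def cv_of_propensities(records, props):
    """Coefficient of variation of the estimated propensities over the
    full sample: the representativity indicator mentioned in the abstract
    (0 would indicate a perfectly representative response)."""
    p = [props[g] for g, _ in records]
    m = sum(p) / len(p)
    sd = (sum((x - m) ** 2 for x in p) / len(p)) ** 0.5
    return sd / m

# Invented sample: 'urban' units respond at 50%, 'rural' units at 80%.
sample = ([("urban", 1)] * 5 + [("urban", 0)] * 5
          + [("rural", 1)] * 8 + [("rural", 0)] * 2)
props = propensities_by_group(sample)
print(props)                                          # {'urban': 0.5, 'rural': 0.8}
print(round(cv_of_propensities(sample, props), 3))    # 0.231
```

A larger CV signals stronger deviation from representativity and hence a greater risk of nonresponse bias.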
10. Modelling time change in survey response rates: A Bayesian approach with an application to the Dutch Health Survey
- Author
-
Wu, Shiya, Boonstra, Harm-Jan, Moerbeek, Mirjam, and Schouten, Barry
- Abstract
Precise and unbiased estimates of response propensities (RPs) play a decisive role in the monitoring, analysis, and adaptation of data collection. In a fixed survey climate, those parameters are stable and their estimates ultimately converge when sufficient historic data is collected. In survey practice, however, response rates gradually vary in time. Understanding time-dependent variation in predicting response rates is key when adapting survey design. This paper illuminates time-dependent variation in response rates through multi-level time-series models. Reliable predictions can be generated by learning from historic time series and updating with new data in a Bayesian framework. As an illustrative case study, we focus on Web response rates in the Dutch Health Survey from 2014 to 2019.
- Published
- 2023
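The multi-level time-series models in this record are far richer than can be sketched here; as a simplified stand-in, the code below shows the basic Bayesian mechanics of learning a response rate from historic waves while discounting older information so the estimate can track gradual time change. The prior, discount factor, and wave counts are invented assumptions:

```python
def discounted_update(a, b, resp, nonresp, discount=0.8):
    """One conjugate Beta-Binomial update of a Beta(a, b) response-rate
    estimate, after down-weighting older information so the estimate can
    track a response rate that drifts over time."""
    return discount * a + resp, discount * b + nonresp

# Invented monthly web-response waves; weak prior centred on a 40% rate.
a, b = 2.0, 3.0
for resp, nonresp in [(28, 72), (31, 69), (25, 75)]:
    a, b = discounted_update(a, b, resp, nonresp)

posterior_mean = a / (a + b)   # prediction for the next wave's response rate
print(round(posterior_mean, 3))
```

In the paper's setting the time variation is modeled explicitly through multi-level time-series structure rather than a fixed discount, but the update-and-predict loop is the same basic idea.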
11. Improving Predictions of Response Propensities for Effective Adaptive Survey Design (ASD)
- Author
-
Wu, Shiya
- Abstract
Survey practitioners are continually searching for methods to improve the effectiveness of adaptive survey design. Adaptation performance depends heavily on precise estimates of survey parameters such as response propensities. Making precise estimates has recently become increasingly difficult: existing methods often conflict with the scarcity of historic data when a survey is infrequent or new, and they often ignore the timeliness of historic data from an ongoing survey. This dissertation therefore focuses on developing and applying Bayesian methods in adaptive survey design, both to make precise and reliable predictions about survey design parameters and to ensure the timely allocation of scarce survey resources. I discuss the Bayesian framework for its ability to include external data through prior distributions and to learn how responses vary in time, in order to improve prediction precision. I also discuss effective adaptive survey designs that tailor the follow-up strategy for nonrespondents in a timely manner in order to enhance the obtained response. The proposed methods are applied to several case studies.
- Published
- 2023
12. Improving Predictions of Response Propensities for Effective Adaptive Survey Design (ASD)
- Subjects
nonresponse, Bayesian analysis, time change, adaptive survey design, response propensity, expert elicitation, allocation, optimization
- Abstract
Survey practitioners are continually searching for methods to improve the effectiveness of adaptive survey design. Adaptation performance depends heavily on precise estimates of survey parameters such as response propensities. Making precise estimates has recently become increasingly difficult: existing methods often conflict with the scarcity of historic data when a survey is infrequent or new, and they often ignore the timeliness of historic data from an ongoing survey. This dissertation therefore focuses on developing and applying Bayesian methods in adaptive survey design, both to make precise and reliable predictions about survey design parameters and to ensure the timely allocation of scarce survey resources. I discuss the Bayesian framework for its ability to include external data through prior distributions and to learn how responses vary in time, in order to improve prediction precision. I also discuss effective adaptive survey designs that tailor the follow-up strategy for nonrespondents in a timely manner in order to enhance the obtained response. The proposed methods are applied to several case studies.
- Published
- 2023
13. Bayesian Estimation of Latent Class Model for Survey Data Subject to Item Nonresponse.
- Author
-
Zakaria, Samah, Hafez, Mai Sherif, and Gad, Ahmed Mahmoud
- Subjects
MONTE Carlo method, LATENT variables, MARKOV chain Monte Carlo, DEMOGRAPHIC surveys
- Abstract
Latent variable models are widely used in social sciences for measuring constructs (latent variables) such as ability, attitude, behavior, and wellbeing. Those unobserved constructs are measured through a number of observed items (variables). The observed variables are often subject to item nonresponse that may be nonignorable. Incorporating a missingness mechanism within the model used to analyze data with nonresponse is crucial to obtain valid estimates for parameters, especially when the missingness is nonignorable. In this paper, we propose a latent class model (LCM) where a categorical latent variable is used to capture a latent phenomenon, and another categorical latent variable is used to summarize response propensity. The proposed model incorporates a missingness mechanism. Bayesian estimation using Markov chain Monte Carlo (MCMC) methods is used for fitting this LCM. Real data with binary items from the 2014 Egyptian Demographic and Health Survey (EDHS14) are used. Different levels of missingness are artificially created in order to study results of the model under low, moderate, and high levels of missingness. [ABSTRACT FROM AUTHOR]
- Published
- 2019
14. CLASSIFICATION AND REGRESSION TREES AND FORESTS FOR INCOMPLETE DATA FROM SAMPLE SURVEYS.
- Author
-
Wei-Yin Loh, Eltinge, John, Moon Jung Cho, and Yuanzhi Li
- Subjects
REGRESSION trees, CONSUMPTION (Economics), CLASSIFICATION, CONSUMER surveys
- Abstract
Analysis of sample survey data often requires adjustments for missing values in the variables of interest. Standard adjustments based on item imputation or on propensity weighting factors rely on the availability of auxiliary variables for both responding and non-responding units. Their application can be challenging when the auxiliary variables are numerous and are themselves subject to incomplete-data problems. This paper shows how classification and regression trees and forests can overcome these difficulties and compares them with likelihood methods in terms of bias and mean squared error. The development centers on a component of income data from the U.S. Consumer Expenditure Survey, which has a relatively high rate of item missingness. Classification trees and forests are used to model the unit-level propensity for item missingness in the income component. Regression trees and forests are used to model the conditional mean of the income component. The methods are then used to estimate the mean of the income component, adjusted for item nonresponse. Thirteen methods for estimating a population mean are compared in simulation experiments. The results show that if the number of auxiliary variables with missing values is not small, or if they have substantial missingness rates, likelihood methods can be impracticable or inapplicable. Tree and forest methods are always applicable, are relatively fast, and have higher efficiency than likelihood methods under real-data situations with incomplete-data patterns similar to that in the abovementioned survey. Their efficiency loss under parametric conditions most favorable to likelihood methods is observed to be between 10% and 25%. [ABSTRACT FROM AUTHOR]
- Published
- 2019
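As a hedged illustration of the tree approach described above (a toy stump, not the authors' implementation), the code below chooses the single split on one auxiliary variable that minimizes the weighted Gini impurity of an item-missingness indicator; the leaf means are then the estimated unit-level missingness propensities. The ages and missingness pattern are invented:

```python
def gini(ys):
    """Gini impurity of a binary missingness indicator."""
    p = sum(ys) / len(ys)
    return 2 * p * (1 - p)

def best_split(xs, missing):
    """Root split of a classification tree for item-missingness propensity:
    choose the threshold on one auxiliary variable that minimises weighted
    Gini impurity; the leaf means are the estimated propensities."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [m for x, m in zip(xs, missing) if x <= t]
        right = [m for x, m in zip(xs, missing) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(xs)
        if best is None or score < best[0]:
            best = (score, t, sum(left) / len(left), sum(right) / len(right))
    return best[1:]

# Invented data: the income item is missing far more often above age 50.
age     = [25, 30, 35, 40, 45, 55, 60, 65, 70, 75]
missing = [ 0,  0,  1,  0,  0,  1,  1,  1,  0,  1]
t, p_low, p_high = best_split(age, missing)
print(t, p_low, p_high)   # 45 0.2 0.8
```

A forest would average many such trees grown on resampled data; the appeal noted in the abstract is that splits need no model for the auxiliary variables themselves.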
15. Latent variable modelling with non‐ignorable item non‐response: multigroup response propensity models for cross‐national analysis.
- Author
-
Kuha, Jouni, Katsikatsou, Myrsini, and Moustaki, Irini
- Subjects
SURVEYS, QUESTIONNAIRES, RESPONDENTS, DATA analysis
- Abstract
Summary: When missing data are produced by a non‐ignorable non‐response mechanism, analysis of the observed data should include a model for the probabilities of responding. We propose such models for non‐response in survey questions which are treated as measures of latent constructs and analysed by using latent variable models. The non‐response models that we describe include additional latent variables (latent response propensities) which determine the response probabilities. We argue that this model should be specified as flexibly as possible, and we propose models where the response propensity is a categorical variable (a latent response class). This can be combined with any latent variable model for the survey items, and an association between the latent variables measured by the items and the latent response propensities then implies a model with non‐ignorable non‐response. We consider in particular such models for the analysis of data from cross‐national surveys, where the non‐response model may also vary across the countries. The models are applied to data on welfare attitudes in 29 countries in the European Social Survey. [ABSTRACT FROM AUTHOR]
- Published
- 2018
16. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey.
- Author
-
Mirel, Lisa B. and Chowdhury, Sadeq R.
- Subjects
STATISTICAL sampling, MEDICAL care costs, RESPONSE rates, INTERVIEWERS, DATA quality
- Abstract
Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year's NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder to reach population. [ABSTRACT FROM AUTHOR]
- Published
- 2017
17. Downward calibration property of estimated response propensities.
- Author
-
LEPIK, NATALJA and TRAAT, IMBI
- Subjects
CALIBRATION, ESTIMATION theory, PROBABILITY theory
- Abstract
We consider four methods for estimating response propensities: three traditional ones (linear, logistic, probit) and one more recent, a decision-tree method. We show that some, but not all, of the methods produce estimates that calibrate sample totals of auxiliary variables down to the response-set totals. The downward calibration property reveals interesting relationships between estimated propensities, auxiliary variables, and true response probabilities. However, the property itself does not guarantee more accurate propensity estimation. Our simulation study shows that the accuracy of an estimation method depends primarily on the nature of the relationship between the true response probabilities and the auxiliary variables. [ABSTRACT FROM AUTHOR]
- Published
- 2017
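A minimal sketch of the downward calibration property, assuming the simplest setting of group-indicator auxiliary variables (a saturated model, where the linear, logit, and probit fits all coincide with group response rates); the data are invented. Weighting the full-sample total of each group indicator by the estimated propensities reproduces the response-set total exactly:

```python
def group_propensities(groups, responded):
    """Estimated response propensities with group indicators as auxiliary
    variables: the saturated fit is just the per-group response rate."""
    tot, res = {}, {}
    for g, y in zip(groups, responded):
        tot[g] = tot.get(g, 0) + 1
        res[g] = res.get(g, 0) + y
    return {g: res[g] / tot[g] for g in tot}

groups    = ["a"] * 6 + ["b"] * 4
responded = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0]
props = group_propensities(groups, responded)

# Downward calibration: the propensity-weighted full-sample total of each
# group indicator equals its response-set total.
for g in ("a", "b"):
    sample_total = sum(props[gr] for gr in groups if gr == g)
    response_total = sum(y for gr, y in zip(groups, responded) if gr == g)
    print(g, sample_total, response_total)   # the two totals coincide
```

As the abstract notes, this identity alone does not make the propensity estimates accurate; it only constrains them to reproduce observed response-set totals.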
18. EXAMINING CHANGES OF INTERVIEW LENGTH OVER THE COURSE OF THE FIELD PERIOD.
- Author
-
KIRCHNER, ANTJE and OLSON, KRISTEN
- Subjects
ACQUISITION of data, EMPIRICAL research, MEASUREMENT errors, RESPONDENTS, INTERVIEWERS, ATTITUDE (Psychology)
- Abstract
It is well established that interviewers learn behaviors both during training and on the job. How this learning occurs has received surprisingly little empirical attention: Is it driven by the interviewer herself or by the respondents she interviews? There are two competing hypotheses about what happens during field data collection: (1) interviewers learn behaviors from their previous interviews, and thus change their behavior in reaction to the behaviors previously encountered; and (2) interviewers encounter different types of and, especially, less cooperative respondents (i.e., nonresponse propensity affecting the measurement error situation), leading to changes in interview behaviors over the course of the field period. We refer to these hypotheses as the experience and response propensity hypotheses, respectively. This paper examines the relationship between proxy indicators for the experience and response propensity hypotheses on interview length using data and paradata from two telephone surveys. Our results indicate that both interviewer-driven experience and respondent-driven response propensity are associated with the length of interview. While general interviewing experience is nonsignificant, within-study experience decreases interview length significantly, even when accounting for changes in sample composition. Interviewers with higher cooperation rates have significantly shorter interviews in study one; however, this effect is mediated by the number of words spoken by the interviewer. We find that older respondents and male respondents have longer interviews despite controlling for the number of words spoken, as do respondents who complete the survey at first contact. Not surprisingly, interviews are significantly longer the more words interviewers and respondents speak. [ABSTRACT FROM AUTHOR]
- Published
- 2017
19. Interviewers' expectations of response propensity can introduce nonresponse bias in survey data.
- Author
-
Eckman, Stephanie
- Subjects
HOUSEHOLD surveys, PERSONAL finance, PROPENSITY to consume, INTERVIEWERS
- Abstract
The article offers the author's insights on the paper "What's the Chance? Interviewers' Expectations of Response in the 2010 SCF," by economist Arthur B. Kennickell. Topics include the concern of Kennickell regarding the Survey of Consumer Finances (SCF), task of field interviewers, and interviewer judgments of response propensity.
- Published
- 2017
20. New calibration estimation procedure in the presence of unit non response.
- Author
-
Jaiswal, Ashok Kumar, Usman, Mahamood, and Singh, Garib Nath
- Subjects
STATISTICAL sampling, CALIBRATION, SAMPLE size (Statistics)
- Abstract
In the present work, we propose a calibrated estimator of the population mean in the presence of unit nonresponse. The nonresponse is adjusted using a nonresponse adjustment factor depending on different models (inverse linear, logistic, and exponential) based on auxiliary information, incorporated into the calibration estimation. We study the case of unit nonresponse through the calibration technique under three sampling designs, namely simple random sampling (SRS), probability proportional to size (PPS) sampling, and stratified random sampling. A broad simulation study has been conducted to evaluate the performance of the calibrated estimators under each sampling scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2023
21. Working with Response Probabilities
- Author
-
Jelke Bethlehem
- Subjects
Estimation, adjustment weighting, Population, Statistics, response propensity, Estimator, Probability and statistics, Sample (statistics), representativity, Logistic regression, Measure (mathematics), nonresponse, Human decision
- Abstract
Sample surveys are often affected by nonresponse. These surveys have in common that their outcomes depend at least partly on a human decision whether or not to participate. If it were completely clear how this decision mechanism works, estimates could be corrected. An often-used approach is to introduce the concept of the response probability. Of course, these probabilities are a theoretical concept and therefore unknown. The idea is to estimate them using the available data. If good estimates of the response probabilities can be obtained, they can be used to improve estimators of population characteristics. Estimating response probabilities relies heavily on the use of models; an often-used model is the logit model. In the article, this model is compared with the simple linear model. Estimating response-probability models requires the individual values of the auxiliary variables to be available for both the respondents and the nonrespondents of the survey. Unfortunately, this is often not the case. This article explores some approaches for estimating response probabilities that have less heavy data requirements. The estimated response probabilities were also used to measure possible deviations from representativity of the survey response; the indicator used is the coefficient of variation (CV) of the response probabilities.
- Published
- 2020
22. A Comparison of Two Methods to Adjust for Non-Response Bias: Field Substitution and Weighting Non-Response Adjustments Based on Response Propensity
- Author
-
Alejandra Vives, Catterina Ferreccio, and Guillermo Marshall
- Subjects
Non-response bias, Field substitution, Response propensity, Public aspects of medicine
- Abstract
Unit non-response is a growing problem in sample surveys that can bias survey estimates if respondents and non-respondents differ systematically. Objectives: To compare the results of two non-response adjustment methods: field substitution and weighting non-response adjustments based on response propensity. Methods: Field substitution and response-propensity weights are used to adjust for non-response, and their effect on the prevalence of six survey outcomes is compared. Results: Although significant differences are found between respondents and non-respondents, only slight changes in prevalence estimates are observed after adjustment, with both techniques showing similar results. In the sole case of smoking, substitution seems to have further biased survey estimates. Conclusions: Our results suggest that when information is available for both respondents and non-respondents, or if a careful sample-substitution process is performed, weighting adjustments based on response propensity and field substitution produce comparable prevalence estimates.
- Published
- 2009
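The propensity-weighting adjustment compared in this study can be sketched, under assumptions not stated in the abstract (two weighting classes with known counts; all numbers invented), as inverse-propensity weighting within response-homogeneous classes:

```python
def propensity_weighted_prevalence(cells):
    """cells: (n_sampled, n_responded, n_positive_among_respondents) per
    weighting class. Each class's response propensity is estimated as
    n_responded / n_sampled and respondents are weighted by its inverse."""
    num = den = 0.0
    for n, r, pos in cells:
        phat = r / n              # estimated response propensity
        num += pos / phat         # weighted count of positives
        den += r / phat           # weighted respondents (recovers n)
    return num / den

# Invented toy survey: the trait is rarer in the high-propensity class.
cells = [
    (100, 80, 16),   # class 1: propensity 0.8, 20% of respondents positive
    (100, 40, 16),   # class 2: propensity 0.4, 40% of respondents positive
]
unadjusted = (16 + 16) / (80 + 40)
adjusted = propensity_weighted_prevalence(cells)
print(round(unadjusted, 3), round(adjusted, 3))   # 0.267 0.3
```

Because the positive cases are concentrated in the low-propensity class, the adjusted prevalence is higher than the unadjusted one; this is the mechanism by which such weighting can shift estimates like the smoking prevalence discussed in the abstract.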
23. Bayesian Estimation of Latent Class Model for Survey Data Subject to Item Nonresponse
- Author
-
Ahmed M. Gad, Mai Sherif Hafez, and Samah Zakaria
- Subjects
Statistics and Probability, Bayes estimator, latent models, Markov chain Monte Carlo, Latent variable, Management Science and Operations Research, Missing data, Latent class model, Modeling and Simulation, Econometrics, Survey data collection, Bayesian estimation, Nonignorable item nonresponse, Response propensity, Statistics, Probability and Uncertainty, Categorical variable, Mathematics
- Abstract
Latent variable models are widely used in social sciences for measuring constructs (latent variables) such as ability, attitude, behavior, and wellbeing. Those unobserved constructs are measured through a number of observed items (variables). The observed variables are often subject to item nonresponse that may be nonignorable. Incorporating a missingness mechanism within the model used to analyze data with nonresponse is crucial to obtain valid estimates for parameters, especially when the missingness is nonignorable. In this paper, we propose a latent class model (LCM) where a categorical latent variable is used to capture a latent phenomenon, and another categorical latent variable is used to summarize response propensity. The proposed model incorporates a missingness mechanism. Bayesian estimation using Markov chain Monte Carlo (MCMC) methods is used for fitting this LCM. Real data with binary items from the 2014 Egyptian Demographic and Health Survey (EDHS14) are used. Different levels of missingness are artificially created in order to study results of the model under low, moderate, and high levels of missingness.
- Published
- 2019
24. Analysis of Multivariate Longitudinal Data Subject to Nonrandom Dropout.
- Author
-
Hafez, Mai Sherif, Moustaki, Irini, and Kuha, Jouni
- Subjects
MULTIVARIATE analysis, LONGITUDINAL method, LATENT variables, RANDOM data (Statistics), HAZARD function (Statistics)
- Abstract
Longitudinal data are collected for studying changes across time. We consider multivariate longitudinal data where multiple observed variables, measured at each time point, are used as indicators for theoretical constructs (latent variables) of interest. A common problem in longitudinal studies is dropout, where subjects exit the study prematurely. Ignoring the dropout mechanism can lead to biased estimates, especially when the dropout is nonrandom. Our proposed approach uses latent variable models to capture the evolution of the latent phenomenon over time while also accounting for possibly nonrandom dropout. The dropout mechanism is modeled with a hazard function that depends on the latent variables and observed covariates. Different relationships among these variables and the dropout mechanism are studied via 2 model specifications. The proposed models are used to study people’s perceptions of women’s work using 3 questions from 5 waves from the British Household Panel Survey. [ABSTRACT FROM AUTHOR]
- Published
- 2015
25. Risk of Nonresponse Bias and the Length of the Field Period in a Mixed-Mode General Population Panel
- Author
-
Tobias Gummer and Bella Struminskaya
- Subjects
Coefficient of variation, Field duration, Mixed-mode panels, Nonresponse bias, Response propensity, survey research, response behavior, online survey, mail survey, data capture, GESIS Panel—Standard Edition, Version 19.0.0, Release 19 (ZA5665, doi:10.4232/1.12743)
- Abstract
Survey researchers are often confronted with the question of how long to set the length of the field period. Longer fielding time might lead to greater participation yet requires survey managers to devote more of their time to data collection efforts. With the aim of facilitating the decision about the length of the field period, we investigated whether a longer fielding time reduces the risk of nonresponse bias to judge whether field periods can be ended earlier without endangering the performance of the survey. By using data from six waves of a probability-based mixed-mode (online and mail) panel of the German population, we analyzed whether the risk of nonresponse bias decreases over the field period by investigating how day-by-day coefficients of variation develop during the field period. We then determined the optimal cut-off points for each mode after which data collection can be terminated without increasing the risk of nonresponse bias and found that the optimal cut-off points differ by mode. Our study complements prior research by shifting the perspective in the investigation of the risk of nonresponse bias to panel data as well as to mixed-mode surveys, in particular. Our proposed method of using coefficients of variation to assess whether the risk of nonresponse bias decreases significantly with each additional day of fieldwork can aid survey practitioners in finding the optimal field period for their mixed-mode surveys.
- Published
- 2021
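A minimal sketch of the day-by-day monitoring this abstract describes: track the coefficient of variation of subgroup response propensities as fieldwork accumulates and look for the day after which it stops changing materially. The subgroup-based propensity estimate, the variable names, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cv_of_propensities(groups, response_day, max_day):
    """Coefficient of variation of subgroup response propensities,
    computed for each cumulative day of fieldwork."""
    groups = np.asarray(groups)
    response_day = np.asarray(response_day, dtype=float)
    labels, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()        # subgroup shares of the sample
    cvs = []
    for day in range(1, max_day + 1):
        responded = response_day <= day
        rates = np.array([responded[groups == g].mean() for g in labels])
        mean_rate = float(weights @ rates)
        sd = float(np.sqrt(weights @ (rates - mean_rate) ** 2))
        cvs.append(sd / mean_rate if mean_rate > 0 else np.nan)
    return np.array(cvs)

# Toy data: group A responds on day 1, group B only on day 2, so the
# CV drops once day 2 brings the subgroup propensities together.
cvs = cv_of_propensities(["A", "A", "B", "B"], [1, 1, 2, 2], max_day=2)
```

In practice the subgroups would come from frame or register variables, and the mode-specific cut-off would be set where additional fieldwork days no longer significantly reduce the CV.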
26. The Influence of Relationship Quality on the Participation of Secondary Respondents: Results from the German Family Panel
- Author
-
Jette Schröder, Laura Castiglioni, Josef Brüderl, and Ulrich Krieger
- Subjects
Secondary nonresponse ,Nonresponse bias ,Dyadic data analysis ,Response propensity ,pairfam ,Urban groups. The city. Urban sociology ,HT101-395 ,City population. Including children in cities, immigration ,HT201-221 ,Demography. Population. Vital events ,HB848-3697 - Abstract
The pairfam study offers the rare opportunity of conducting dyadic analyses of partner and parent-child relationships. Not only the randomly drawn anchor respondents are surveyed, but also – with the consent of the anchors – their partners, parents and children. However, we must ask whether, or to what extent, the participation of the secondary respondents is selective, thus introducing nonresponse bias into the dyadic data. This article analyses which factors influence the participation of partners and parents of the anchors in the German Family Panel pairfam. We focus on the question of whether the quality of the relationship between anchor and partner, or anchor and parent, influences participation. Among parents, both relationship quality in the narrower sense and the closeness of the relationship in terms of contact and mutual support influence participation. By contrast, relationship quality appears to matter less for the participation of partners, whereas the degree of institutionalisation of the relationship has a major influence. The article aims to sensitise pairfam users to the possibility of nonresponse bias in dyadic analyses and provides guidance on suitable handling of the data.
- Published
- 2013
27. Handling nonignorable nonresponse with respondent modeling and the SIR algorithm.
- Author
-
Paik, Minhui and Larsen, Michael D.
- Subjects
- *
ALGORITHMS , *STATISTICAL sampling , *MISSING data (Statistics) , *MATHEMATICAL variables , *SENSITIVITY analysis , *DISTRIBUTION (Probability theory) , *MATHEMATICAL models - Abstract
Handling missing data based on parametric models typically involves computing the conditional expectation of missing data given observed data for nonrespondents. Under a nonignorable missing data mechanism, the conditional distribution requires joint modeling of the study and response variables. A natural way of factoring the model is to use models for the distribution of the variables under complete response and for the probability of response. Sensitivity to model specification is a serious scientific problem: models cannot be validated from missing data, because, by definition, the information needed for validation is missing. In many cases, under assumed models, a Monte Carlo (MC) method can be used to compute the conditional expectation of missing given observed variables. The issue of model specification then translates into the question of how to generate values from the conditional distribution for nonrespondents. One way to interpret this issue is as the need to specify an imputation method for the missing data. In this paper, we consider a simulation method based on the model for the distribution of respondents together with the Sampling Importance Resampling (SIR) algorithm. The proposed method is shown to be more robust than some current approaches in the sense that the assumed models can be verified from respondents. A linearized variance estimation method is also studied. Results from a limited simulation study are presented.
- Published
- 2014
- Full Text
- View/download PDF
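The Sampling Importance Resampling step the abstract relies on can be sketched generically: draw from a proposal (here standing in for the fitted respondent model), weight each draw by the target-to-proposal density ratio, and resample. The normal proposal and target below are purely illustrative assumptions used to check the mechanics.

```python
import numpy as np

def sir_sample(proposal_draws, log_weight, n_out, rng):
    """Sampling Importance Resampling: resample proposal draws with
    probabilities proportional to their importance weights, so the
    output approximates the target distribution."""
    draws = np.asarray(proposal_draws, dtype=float)
    lw = log_weight(draws)              # log(target / proposal), vectorised
    p = np.exp(lw - lw.max())           # stabilise before exponentiating
    p = p / p.sum()
    idx = rng.choice(draws.size, size=n_out, replace=True, p=p)
    return draws[idx]

rng = np.random.default_rng(0)
# Toy check with known densities: proposal N(0,1), target N(1,1); the
# log density ratio reduces to y - 1/2.
resampled = sir_sample(rng.normal(size=200_000), lambda y: y - 0.5, 20_000, rng)
```

The resampled draws should have mean and standard deviation close to the target's (1 and 1); in the paper's setting the weights would come from the response model rather than known densities.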
28. The utility of auxiliary data for survey response modeling: Evidence from the German Internet Panel
- Author
-
Cornesse, Carina
- Subjects
Erhebungstechniken und Analysetechniken der Sozialwissenschaften ,Sozialwissenschaften, Soziologie ,Datenqualität ,Umfrageforschung ,Federal Republic of Germany ,auxiliary data ,INKAR data ,interviewer observations ,Microm data ,Nonresponse ,online panel recruitment ,response propensity ,German Internet Panel ,GIP ,Antwortverhalten ,Bundesrepublik Deutschland ,data capture ,Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods ,survey research ,ddc:300 ,data quality ,response behavior ,Datengewinnung ,Social sciences, sociology, anthropology - Abstract
Auxiliary data are becoming more important as nonresponse rates increase and new fieldwork monitoring and respondent targeting strategies develop. In many cases, auxiliary data are collected or linked to the gross sample to predict survey response. If the auxiliary data have high predictive power, the response models can meaningfully inform survey operations as well as post-survey adjustment procedures. In this paper, I examine the utility of different sources of auxiliary data (sampling frame data, interviewer observations, and micro-geographic area data) for modeling survey response in a probability-based online panel in Germany. I find that the utility of each of these data sources is challenged by a number of concerns (scarcity, missing data, transparency issues, and high levels of aggregation) and that none of the auxiliary data are associated with survey response to any substantial degree.
- Published
- 2020
29. What Do You Think? Using Expert Opinion to Improve Predictions of Response Propensity Under a Bayesian Framework
- Author
-
Coffey, Stephanie, West, Brady T., Wagner, James, and Elliott, Michael R.
- Subjects
Erhebungstechniken und Analysetechniken der Sozialwissenschaften ,Bayesian Analysis ,Response Propensity ,Expert Opinion ,Elicitation of Priors ,Responsive Survey Design ,Sozialwissenschaften, Soziologie ,Datenqualität ,Umfrageforschung ,Antwortverhalten ,Article ,data capture ,Methods and Techniques of Data Collection and Data Analysis, Statistical Methods, Computer Methods ,survey research ,Bayesian Analysis, Response Propensity, Expert Opinion, Elicitation of Priors, Responsive Survey Design ,ddc:300 ,data quality ,response behavior ,Datengewinnung ,Social sciences, sociology, anthropology - Abstract
Responsive survey designs introduce protocol changes to survey operations based on accumulating paradata. Case-level predictions, including response propensity, can be used to tailor data collection features in pursuit of cost or quality goals. Unfortunately, predictions based only on partial data from the current round of data collection can be biased, leading to ineffective tailoring. Bayesian approaches can provide protection against this bias. Prior beliefs, which are generated from data external to the current survey implementation, contribute information that may be lacking from the partial current data. Those priors are then updated with the accumulating paradata. The elicitation of the prior beliefs, then, is an important characteristic of these approaches. While historical data for the same or a similar survey may be the most natural source for generating priors, eliciting prior beliefs from experienced survey managers may be a reasonable choice for new surveys, or when historical data are not available. Here, we fielded a questionnaire to survey managers, asking about expected attempt-level response rates for different subgroups of cases, and developed prior distributions for attempt-level response propensity model coefficients based on the mean and standard error of their responses. Then, using respondent data from a real survey, we compared the predictions of response propensity when the expert knowledge is incorporated into a prior to those based on a standard method that considers accumulating paradata only, as well as a method that incorporates historical survey data.
- Published
- 2020
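As a simplified illustration of the elicitation idea (not the coefficient-level normal priors the article actually builds), one can moment-match a conjugate Beta prior to a manager's expected attempt-level response rate and its standard error, then update it with accumulating paradata. All numbers below are hypothetical.

```python
import numpy as np

def beta_prior_from_expert(mean, sd):
    """Moment-match a Beta(a, b) prior to an elicited mean and standard
    error for an attempt-level response rate."""
    v = mean * (1.0 - mean) / sd**2 - 1.0   # implied 'prior sample size' a + b
    if v <= 0:
        raise ValueError("elicited sd is too large for a Beta prior")
    return mean * v, (1.0 - mean) * v

def update(a, b, successes, attempts):
    """Conjugate update of the Beta prior with accumulating outcomes."""
    return a + successes, b + (attempts - successes)

# Hypothetical elicitation: a manager expects a 20% per-attempt response
# rate, with a standard error of 5 percentage points.
a, b = beta_prior_from_expert(0.20, 0.05)
a, b = update(a, b, successes=5, attempts=50)  # early paradata: 10% observed
posterior_mean = a / (a + b)                   # pulled between 0.20 and 0.10
```

A tighter elicited standard error yields a larger implied prior sample size, so the expert opinion dominates the early, unstable paradata for longer.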
30. Reduction of Nonresponse Bias through Case Prioritization
- Author
-
Andy Peytchev, Sarah Riley, Jeff Rosen, Joe Murphy, and Mark Lindblad
- Subjects
Nonresponse bias ,Response propensity ,Paradata ,Case prioritization ,Social sciences (General) ,H1-99 - Abstract
How response rates are increased can determine the remaining nonresponse bias in estimates. Studies often target sample members that are most likely to be interviewed to maximize response rates. Instead, we suggest targeting likely nonrespondents from the onset of a study with a different protocol to minimize nonresponse bias. To inform the targeting of sample members, various sources of information can be utilized: paradata collected by interviewers, demographic and substantive survey data from prior waves, and administrative data. Using these data, the likelihood of any sample member becoming a nonrespondent is estimated and on those sample cases least likely to respond, a more effective, often more costly, survey protocol can be employed to gain respondent cooperation. This paper describes the two components of this approach to reducing nonresponse bias. We demonstrate assignment of case priority based on response propensity models, and present empirical results from the use of a different protocol for prioritized cases. In a field data collection, a random half of cases with low response propensity received higher priority and increased resources. Resources for high-priority cases were allocated as interviewer incentives. We find that we were relatively successful in predicting response outcome prior to the survey and stress the need to test interventions in order to benefit from case prioritization.
- Published
- 2010
- Full Text
- View/download PDF
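A compact sketch of the two components described above: estimate response propensities from data available before fieldwork, then flag the low-propensity portion of the sample for the more intensive protocol. The logistic model, the Newton fitting routine, and the 50% cut-off are illustrative assumptions rather than the authors' specification.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic response propensity model fitted by Newton-Raphson."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        beta = beta + np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

def priority_flags(X, beta, share=0.5):
    """Flag the `share` of cases with the lowest predicted propensity
    for the more intensive (higher-cost) protocol."""
    X = np.column_stack([np.ones(len(X)), X])
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return p <= np.quantile(p, share)
```

In the paper's design, X would hold paradata, prior-wave, and administrative variables; here a single synthetic covariate suffices to demonstrate that the flagged cases are those least likely to respond.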
31. Explaining Rising Nonresponse Rates in Cross-Sectional Surveys.
- Author
-
Brick, J. Michael and Williams, Douglas
- Abstract
This review of nonresponse in cross-sectional household surveys in the United States shows trends in nonresponse rates, the main reasons for nonresponse, and changes in the components of nonresponse. It shows that nonresponse is increasing but that existing methods for modeling response mechanisms do not adequately explain these changes. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
32. Estimation of an indicator of the representativeness of survey response
- Author
-
Shlomo, Natalie, Skinner, Chris, and Schouten, Barry
- Subjects
- *
ESTIMATION theory , *MEASUREMENT errors , *STATISTICAL sampling , *NONRESPONSE (Statistics) , *POPULATION statistics , *SIMULATION methods & models , *ANALYSIS of variance - Abstract
Nonresponse is a major source of estimation error in sample surveys. The response rate is widely used to measure survey quality associated with nonresponse, but is inadequate as an indicator because of its limited relation with nonresponse bias. Schouten et al. (2009) proposed an alternative indicator, which they refer to as an indicator of representativeness or R-indicator. This indicator measures the variability of the probabilities of response for units in the population. This paper develops methods for the estimation of this R-indicator assuming that values of a set of auxiliary variables are observed for both respondents and nonrespondents. We propose bias adjustments to the point estimator proposed by Schouten et al. (2009) and demonstrate the effectiveness of this adjustment in a simulation study, where it is shown that the method is valid, especially for smaller sample sizes. We also propose linearization variance estimators which avoid the need for computer-intensive replication methods and show good coverage in the simulation study even when models are not fully specified. The use of the proposed procedures is also illustrated in an application to two business surveys at Statistics Netherlands.
- Published
- 2012
- Full Text
- View/download PDF
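The R-indicator discussed above is defined as R(ρ) = 1 − 2S(ρ), where S(ρ) is the standard deviation of the response propensities. A naive plug-in estimate (without the paper's small-sample bias adjustment, which is its actual contribution) can be sketched as:

```python
import numpy as np

def r_indicator(propensities, weights=None):
    """Plug-in estimate of the R-indicator R = 1 - 2*S(rho), where S(rho)
    is the (design-weighted) standard deviation of response propensities."""
    rho = np.asarray(propensities, dtype=float)
    w = np.ones_like(rho) if weights is None else np.asarray(weights, dtype=float)
    mean = np.average(rho, weights=w)
    sd = np.sqrt(np.average((rho - mean) ** 2, weights=w))
    return 1.0 - 2.0 * sd
```

Constant propensities give R = 1 (maximally representative response), while maximal spread drives R toward 0; in practice the propensities themselves must be estimated from auxiliary variables, which is what motivates the bias adjustments in the paper.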
33. EFFECTS OF VARIABLES IN A RESPONSE PROPENSITY SCORE MODEL FOR SURVEY DATA ADJUSTMENT: A SIMULATION STUDY.
- Author
-
Fukuda, Masafumi
- Abstract
In building a model for estimating response propensity scores for survey data adjustment, we carried out computer simulations to examine the effects of seven types of variables, each having a different association with the sample inclusion probability, response probability, and study variable, by comparing the cases in which each variable is included in and excluded from the model. The following main results were obtained. The most important variable for the model is one that is simultaneously associated with the study variable, the sample inclusion probability, and the response probability. Variables that have no association with the study variable should not be included in the response propensity model. These results support the conclusions of Brookhart et al. (2006), who examined propensity score models in their study on estimating treatment effects. Additionally, a small difference was found when comparing the effects of the variable associated with the sample inclusion probability and the study variable to those of the variable associated with the response probability and the study variable. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
34. Robust Model-Based Inference for Incomplete Data via Penalized Spline Propensity Prediction.
- Author
-
An, Hyonggin and Little, Roderick J. A.
- Subjects
- *
PARAMETRIC devices , *REGRESSION analysis , *STATISTICAL matching , *BIOMETRY , *ESTIMATION theory , *SAMPLE size (Statistics) - Abstract
Parametric model-based regression imputation is commonly applied to missing-data problems, but is sensitive to misspecification of the imputation model. Little and An (2004) proposed a semiparametric approach called penalized spline propensity prediction (PSPP), where the variable with missing values is modeled by a penalized spline (P-spline) of the response propensity score, which is the logit of the estimated probability of being missing given the observed variables. Variables other than the response propensity are included parametrically in the imputation model. However, they considered only point estimation based on single imputation with PSPP. We consider here three approaches to standard error estimation that incorporate the uncertainty due to nonresponse: (a) standard errors based on the asymptotic variance of the PSPP estimator, ignoring sampling error in estimating the response propensity; (b) standard errors based on the bootstrap method; and (c) multiple imputation-based standard errors using draws from the joint posterior predictive distribution of missing values under the PSPP model. Simulation studies suggest that the bootstrap and multiple imputation approaches yield good inferences under a range of simulation conditions, with multiple imputation showing some evidence of closer to nominal confidence interval coverage when the sample size is small. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
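A bare-bones sketch of the PSPP imputation model described above, using a truncated-linear P-spline basis in the logit response propensity with a ridge penalty on the knot terms only. The basis choice, knot placement, and penalty value are illustrative assumptions; the paper's focus is on the standard errors built on top of such a fit.

```python
import numpy as np

def pspp_impute(y, observed, logit_p, n_knots=5, penalty=1.0):
    """Penalized spline propensity prediction (sketch): regress y among
    respondents on a truncated-linear P-spline basis in the logit
    response propensity, then predict y for the nonrespondents."""
    s = np.asarray(logit_p, dtype=float)
    y = np.asarray(y, dtype=float)
    knots = np.quantile(s[observed], np.linspace(0.1, 0.9, n_knots))

    def basis(t):
        return np.column_stack(
            [np.ones_like(t), t, np.maximum(t[:, None] - knots, 0.0)]
        )

    B = basis(s[observed])
    # Ridge penalty on the knot coefficients only; the linear part is free.
    D = np.diag([0.0, 0.0] + [penalty] * n_knots)
    coef = np.linalg.solve(B.T @ B + D, B.T @ y[observed])
    y_imp = y.copy()
    y_imp[~observed] = basis(s[~observed]) @ coef
    return y_imp
```

Because the linear part of the basis is unpenalized, a truly linear relationship between y and the propensity score is recovered exactly; other covariates would enter parametrically alongside the spline in the full PSPP model.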
35. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey
- Author
-
Sadeq R. Chowdhury and Lisa B. Mirel
- Subjects
sampling ,education.field_of_study ,Statistics ,05 social sciences ,Population ,response propensity ,national health interview survey ,Sampling (statistics) ,Survey sampling ,050801 communication & media studies ,paradata ,Paradata ,HA1-4737 ,0506 political science ,Survey methodology ,0508 media and communications ,interviewer observations ,Sampling design ,050602 political science & public administration ,Econometrics ,National Health Interview Survey ,Medical Expenditure Panel Survey ,education ,medical expenditure panel survey - Abstract
Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder to reach population.
- Published
- 2017
36. Evaluation of adjustments for partial non-response bias in the US National Immunization Survey.
- Author
-
Smith, Philip J., Hoaglin, David C., Rao, J. N. K., Battaglia, Michael P., and Daniels, Danni
- Subjects
IMMUNIZATION ,HEALTH surveys ,STATISTICS ,MEDICAL care - Abstract
Many health surveys conduct an initial household interview to obtain demographic information and then request permission to obtain detailed information on health outcomes from the respondent's health care providers. A ‘complete response’ results when both the demographic information and the detailed health outcome data are obtained. A ‘partial response’ results when the initial interview is complete but, for one reason or another, the detailed health outcome information is not obtained. If ‘complete responders’ differ from ‘partial responders’ and the proportion of partial responders in the sample is at least moderately large, statistics that use only data from complete responders may be severely biased. We refer to bias that is attributable to these differences as ‘partial non-response’ bias. In health surveys it is customary to adjust survey estimates to account for potential differences by employing adjustment cells and weighting to reduce bias from partial response. Before making these adjustments, it is important to ask whether an adjustment is expected to increase or decrease bias from partial non-response. After making these adjustments, an equally important question is ‘How well does the method of adjustment work to reduce partial non-response bias?’. The paper describes methods for answering these questions. Data from the US National Immunization Survey are used to illustrate the methods. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. Unit Nonresponse and Weighting Adjustments: A Critical Review
- Author
-
J. Michael Brick
- Subjects
Estimation ,bias ,data collection ,Data collection ,Statistics ,response propensity ,Econometrics ,Non-response bias ,calibration ,HA1-4737 ,Weighting ,Unit (housing) - Abstract
This article reviews unit nonresponse in cross-sectional household surveys, the consequences of the nonresponse on the bias of the estimates, and methods of adjusting for it. We describe the development of models for nonresponse bias and their utility, with particular emphasis on the role of response propensity modeling and its assumptions. The article explores the close connection between data collection protocols, estimation strategies, and the resulting nonresponse bias in the estimates. We conclude with some comments on the current state of the art and the need for future developments that expand our understanding of the response phenomenon.
- Published
- 2013
38. Unit non-response in household wealth surveys: Experience from the Eurosystem's Household Finance and Consumption Survey
- Author
-
Osier, Guillaume
- Subjects
C83 ,ddc:330 ,response propensity ,calibration ,unit non-response ,sampling weights - Abstract
The Household Finance and Consumption Survey (HFCS) is a recent initiative from the Eurosystem to collect comparable micro-data on household wealth and indebtedness in the euro area countries. The Household Finance and Consumption Network (HFCN), which comprises the European Central Bank (ECB), national central banks (NCBs), and national statistical institutes (NSIs), is in charge of the development and implementation of the HFCS. The first round of the survey was successfully conducted between 2008 and 2011, and the results were published in April 2013. The second round is now under way and will cover all the euro area countries. This paper is a joint effort by several members of the HFCN to further investigate the issue of unit non-response in the HFCS, better describe and understand its patterns, measure its effects on the overall quality of the survey and, ultimately, propose strategies to mitigate them. Following the introduction, the second section draws up a list of the main possible sources of auxiliary information that can be relied on in order to analyse non-response patterns in the HFCS. It also presents summary indicators that can be used to quantify unit non-response. In the third section, based on the experience from the first wave of the HFCS, the report elaborates on good survey practices (e.g. interviewer training and compensation, use of incentives, persuasive contact strategies, etc.) to prevent unit non-response from occurring. The fourth section compares several reweighting strategies for coping with unit non-response a posteriori, in particular simple and generalised calibration methods. These methods are assessed with respect to their impact on the main HFCS-based estimates. Finally, based on the outcome of this empirical analysis, recommendations are made with regard to post-survey weighting adjustment in the HFCS.
- Published
- 2016
39. Systematic Non-Response in Stated Preference Choice Experiments: Implications for the Valuation of Climate Risk Reductions
- Author
-
Abdulrahman, Abdulallah S and Johnston, Robert J
- Subjects
Response bias ,Response propensity ,Coastal adaptation ,Sea level rise ,Flood risk ,Environmental Economics and Policy - Abstract
Discrete choice experiments (DCEs) addressing adaptation to climate-related risks may be subject to response biases associated with variations in risk exposure across sampled populations. Systematic adjustments for such biases are hindered by the absence of rigorous, standardized selection-correction models for multinomial DCEs, together with a lack of information on non-respondents. This paper illustrates a systematic approach to accommodate risk-related non-response bias in DCEs, where variations in risk exposure may be linked to observable landscape characteristics. The approach adapts reduced form response-propensity models to correct for survey non-response, capitalizing on the fact that indicators of risk exposure may be linked to the geocoded locations of respondents and non-respondents. An application to coastal flood adaptation in Connecticut, USA illustrates implications for welfare estimation. Results demonstrate that the proposed approach can reveal otherwise invisible, systematic effects of survey response patterns on estimated WTP.
- Published
- 2016
- Full Text
- View/download PDF
40. What Do You Think? Using Expert Opinion to Improve Predictions of Response Propensity Under a Bayesian Framework.
- Author
-
Coffey S, West BT, Wagner J, and Elliott MR
- Abstract
Responsive survey designs introduce protocol changes to survey operations based on accumulating paradata. Case-level predictions, including response propensity, can be used to tailor data collection features in pursuit of cost or quality goals. Unfortunately, predictions based only on partial data from the current round of data collection can be biased, leading to ineffective tailoring. Bayesian approaches can provide protection against this bias. Prior beliefs, which are generated from data external to the current survey implementation, contribute information that may be lacking from the partial current data. Those priors are then updated with the accumulating paradata. The elicitation of the prior beliefs, then, is an important characteristic of these approaches. While historical data for the same or a similar survey may be the most natural source for generating priors, eliciting prior beliefs from experienced survey managers may be a reasonable choice for new surveys, or when historical data are not available. Here, we fielded a questionnaire to survey managers, asking about expected attempt-level response rates for different subgroups of cases, and developed prior distributions for attempt-level response propensity model coefficients based on the mean and standard error of their responses. Then, using respondent data from a real survey, we compared the predictions of response propensity when the expert knowledge is incorporated into a prior to those based on a standard method that considers accumulating paradata only, as well as a method that incorporates historical survey data.
- Published
- 2020
- Full Text
- View/download PDF
41. Estimation of an indicator of the representativeness of survey response
- Author
-
Barry Schouten, Natalie Shlomo, and Chris J. Skinner
- Subjects
Statistics and Probability ,education.field_of_study ,Applied Mathematics ,Population ,Survey sampling ,Estimator ,Sample (statistics) ,Representativeness heuristic ,nonresponse ,quality ,representative ,response propensity ,sample survey ,Sample size determination ,Statistics ,Econometrics ,jel:C1 ,Non-response bias ,HA Statistics ,Point estimation ,Statistics, Probability and Uncertainty ,education ,Mathematics - Abstract
Nonresponse is a major source of estimation error in sample surveys. The response rate is widely used to measure survey quality associated with nonresponse, but is inadequate as an indicator because of its limited relation with nonresponse bias. Schouten et al. (2009) proposed an alternative indicator, which they refer to as an indicator of representativeness or R-indicator. This indicator measures the variability of the probabilities of response for units in the population. This paper develops methods for the estimation of this R-indicator assuming that values of a set of auxiliary variables are observed for both respondents and nonrespondents. We propose bias adjustments to the point estimator proposed by Schouten et al. (2009) and demonstrate the effectiveness of this adjustment in a simulation study where it is shown that the method is valid, especially for smaller sample sizes. We also propose linearization variance estimators which avoid the need for computer-intensive replication methods and show good coverage in the simulation study even when models are not fully specified. The use of the proposed procedures is also illustrated in an application to two business surveys at Statistics Netherlands.
- Published
- 2012
42. Indicators for monitoring and improving representativeness of response
- Author
-
Schouten, Barry, Shlomo, Natalie, and Skinner, Chris J.
- Subjects
H Social Sciences (General) ,jel:C1 ,HA Statistics ,auxiliary variable ,business survey ,nonresponse ,response propensity - Abstract
The increasing efforts and costs required to achieve survey response have led to a stronger focus on survey data collection monitoring by means of paradata and to the rise of adaptive and responsive survey designs. Indicators that support data collection monitoring, targeting and prioritising in such designs are not yet available. Subgroup response rates come closest but do not account for subgroup size, are univariate and are not available at the variable level. We present and investigate indicators that support data collection monitoring and effective decisions in adaptive and responsive survey designs. As they are natural extensions of R-indicators, they are termed partial R-indicators. We make a distinction between unconditional and conditional partial R-indicators. Unconditional partial R-indicators provide a univariate assessment of the impact of register data and paradata variables on representativeness of response. Conditional partial R-indicators offer a multivariate assessment. We propose methods for estimating partial indicators and investigate their sampling properties in a simulation study. The use of partial indicators for monitoring and targeting nonresponse is illustrated for both a household and a business survey. Guidelines for the use of the indicators are given.
- Published
- 2011
43. Inverse probability weighting for clustered nonresponse
- Author
-
Chris J. Skinner and Julia D'Arrigo
- Subjects
Statistics::Applications ,ISI ,conditional logistic regression ,nonresponse ,response propensity ,survey weight ,jel:C1 - Abstract
Correlated nonresponse within clusters arises in certain survey settings. It is often represented by a random effects model and assumed to be cluster-specific nonignorable, in the sense that survey and nonresponse outcomes are conditionally independent given cluster-level random effects. Two basic forms of inverse probability weights are considered: response propensity weights based on a marginal model, and weights based on predicted random effects. It is shown that both approaches can lead to biased estimation under cluster-specific nonignorable nonresponse, when the cluster sample sizes are small. We propose a new form of weighted estimator based upon conditional logistic regression, which can avoid this bias. An associated estimator of variance and an extension to observational studies with clustered treatment assignment are also described. Properties of the alternative estimators are illustrated in a small simulation study.
- Published
- 2011
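The basic inverse probability weighting estimator underlying the weight types discussed above can be sketched as a Hájek-type weighted mean. The two-stratum toy data below are an illustration (not from the paper) of how weighting removes nonresponse bias when the propensities are known.

```python
import numpy as np

def ipw_mean(y_resp, p_resp):
    """Hajek-type inverse-probability-weighted mean: respondents are
    weighted by the reciprocal of their response propensity."""
    w = 1.0 / np.asarray(p_resp, dtype=float)
    return float(np.sum(w * np.asarray(y_resp, dtype=float)) / np.sum(w))

# Toy population: two equally sized strata with y = 10 and y = 0 and
# response propensities 0.8 and 0.2; the respondents are then 800 and
# 200 cases, respectively.
y = np.array([10.0] * 800 + [0.0] * 200)
p = np.array([0.8] * 800 + [0.2] * 200)
naive = y.mean()           # 8.0: biased toward the high-propensity stratum
adjusted = ipw_mean(y, p)  # 5.0: the true population mean is recovered
```

The paper's point is that with clustered, cluster-specific nonignorable nonresponse and small cluster sizes, neither marginal-model weights nor predicted-random-effect weights of this form remain unbiased, which motivates its conditional-logistic alternative.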
44. A comparison of two methods to adjust for non-response bias: field substitution and weighting non-response adjustments based on response propensity
- Author
-
Vives, Alejandra, Ferreccio, Catterina, and Marshall, Guillermo
- Subjects
Response propensity ,Sesgo de no respuesta ,Field substitution ,Propensión a responder ,Sustitución muestral ,Non-response bias - Abstract
La no respuesta es un problema creciente en encuestas poblacionales que puede ser causa de sesgo de no-respuesta cuando respondedores y no respondedores difieren sistemáticamente. Objetivo: Comparar los resultados obtenidos mediante dos técnicas de corrección para el sesgo de no-respuesta: sustitución muestral y pesos de no respuesta obtenidos mediante propensión a responder. Métodos: Se comparan los efectos de la sustitución muestral semicontrolada y el uso de pesos de ajuste obtenidos mediante la propensión a responder sobre seis resultados de una encuesta de salud. Resultados: A pesar de las diferencias significativas entre respondedores y no respondedores, mediante la corrección las prevalencias estimadas sólo cambian levemente, dando ambas técnicas de ajuste resultados similares. Sólo en el caso del tabaquismo, la sustitución muestral parece haber aumentado el sesgo de la estimación. Conclusiones: Nuestros resultados sugieren que tanto mediante un procedimiento de sustitución muestral semicontrolada, como a través del ajuste estadístico de la no respuesta mediante la propensión a responder, se obtienen estimaciones de prevalencias corregidas similares. Unit non-response is a growing problem in sample surveys that can bias survey estimates if respondents and non-respondents differ systematically. Objectives: To compare the results of two nonresponse adjustment methods: field substitution and weighting nonresponse adjustment based on response propensity. Methods: Field substitution and response propensity weights are used to adjust for non-response and their effect on the prevalence of six survey outcomes is compared. Results: Although significant differences are found between respondents and non-respondents, only slight changes in prevalence estimates are observed after adjustment, with both techniques showing similar results. In the sole case of smoking, substitution seems to have further biased survey estimates.
Conclusions: Our results suggest that when there is information available for both respondents and non-respondents, or if a careful sample substitution process is performed, weighting adjustments based on response propensity and field substitution produce comparable results on prevalence estimates.
- Published
- 2009
45. Comparación de dos métodos para corregir el sesgo de no respuesta a una encuesta: sustitución muestral y ajuste según propensión a responder
- Author
-
Catterina Ferreccio, Guillermo Marshall, and Alejandra Vives
- Subjects
Field substitution, Public Health, Environmental and Occupational Health, Propensión a responder, Sustitución muestral, Non-response bias, Weighting, Response propensity, Sesgo de no respuesta
- Abstract
Unit non-response is a growing problem in sample surveys that can bias survey estimates if respondents and non-respondents differ systematically. Objectives: To compare the results of two non-response adjustment methods: field substitution and non-response weighting adjustment based on response propensity. Methods: Field substitution and response propensity weights are used to adjust for non-response, and their effect on the prevalence of six survey outcomes is compared. Results: Although significant differences are found between respondents and non-respondents, only slight changes in prevalence estimates are observed after adjustment, with both techniques showing similar results. In the sole case of smoking, substitution seems to have further biased survey estimates. Conclusions: Our results suggest that when information is available for both respondents and non-respondents, or if a careful sample substitution process is performed, weighting adjustments based on response propensity and field substitution produce comparable prevalence estimates.
- Published
- 2009
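The response-propensity weighting compared in the record above can be illustrated with a minimal sketch. This is not code from the article: it assumes a simple weighting-class form of the adjustment, in which units are grouped into cells, the response rate within each cell estimates the response propensity, and each respondent's weight is inflated by the inverse of that propensity so respondents stand in for similar non-respondents. All names and data below are hypothetical.

```python
from collections import defaultdict

def propensity_weights(units):
    """units: list of (cell, responded_flag, outcome_or_None).

    Returns (outcome, weight) pairs for respondents, where the weight
    is the inverse of the estimated response propensity of the cell.
    """
    total = defaultdict(int)
    responded = defaultdict(int)
    for cell, r, _ in units:
        total[cell] += 1
        responded[cell] += r
    # estimated response propensity per weighting class
    phat = {c: responded[c] / total[c] for c in total}
    # inverse-propensity weight for each respondent
    return [(y, 1.0 / phat[c]) for c, r, y in units if r]

# toy data: the "young" cell responds at rate 0.5, the "old" cell at 1.0
units = [("young", 1, 1), ("young", 0, None),
         ("old", 1, 0), ("old", 1, 1)]
pairs = propensity_weights(units)
# weighted prevalence estimate over respondents
prev = sum(w * y for y, w in pairs) / sum(w for _, w in pairs)
```

Here the one "young" respondent gets weight 2, representing the non-respondent in the same cell, which shifts the prevalence estimate toward that cell's outcome.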
46. An Investigation of the Nonresponse - Measurement Error Nexus.
- Author
-
Olson, Kristen M.
- Subjects
- Survey Nonresponse, Measurement Error, Survey Methodology, Response Propensity
- Abstract
This dissertation examines the nexus between nonresponse and measurement errors in sample surveys. Recent research has shown no strong relationship between nonresponse rates and nonresponse bias. Nonetheless, best practices argue that researchers should attempt to maximize response rates. One voiced concern about practices involving nonresponse reduction is that reluctant sample persons, successfully brought into the respondent data set through persuasive efforts, may provide data filled with measurement error. This research addresses two questions. First, under what circumstances is nonresponse propensity related to the survey variables of interest? Is noncontact or refusal nonresponse more likely to induce nonresponse bias? Second, what is the relationship between nonresponse propensity, nonresponse bias and measurement error? In particular, how do properties of questions and characteristics of respondents affect the nexus between nonresponse bias and measurement error? This dissertation has four main findings. First, nonresponse bias due to noncontact nonresponse is different from that due to noncooperation nonresponse. The difference lies in how the multiple competing influences on response propensity interact to produce the final respondent data set. Second, expert reviewers can provide empirical tests of the cognitive response process, despite their unreliability in identifying its stages. Third, conceptual models linking nonresponse propensity and measurement error are supported. Fourth, these models challenge the common belief that reluctant respondents are the most likely to give answers filled with measurement error.
- Published
- 2007
47. CLASSIFICATION AND REGRESSION TREES AND FORESTS FOR INCOMPLETE DATA FROM SAMPLE SURVEYS
- Author
-
Loh, Wei-Yin, Eltinge, John, Cho, Moon Jung, and Li, Yuanzhi
- Published
- 2019
48. Dealing with non-ignorable nonresponse in survey sampling: A latent modeling approach
- Author
-
Matei, Alina and Ranalli, Maria Giovanna
- Subjects
Unit nonresponse, Item nonresponse, Latent trait models, Response propensity, Rasch models