Working papers
Selection in Surveys: Using Randomized Incentives to Detect and Account for Nonresponse Bias [pdf] [NBER link] Revise and Resubmit at the Review of Economic Studies
with Ingrid Huitfeldt, Santiago Lacouture, Magne Mogstad, Alex Torgovitsky, and Winnie van Dijk
October 2022
We show how to use randomized participation incentives to test and account for nonresponse bias in surveys. We first use data from a survey about labor market conditions, linked to full-population administrative data, to provide evidence of large differences in labor market outcomes between participants and nonparticipants, differences which would not be observable to an analyst who only has access to the survey data. These differences persist even after correcting for observable characteristics, raising concerns about nonresponse bias in survey responses. We then use the randomized incentives in our survey to directly test for nonresponse bias, and find strong evidence that the bias is substantial. Next, we apply a range of existing methods that account for nonresponse bias and find they produce bounds (or point estimates) that are either wide or far from the ground truth. We investigate the failure of these methods by taking a closer look at the determinants of participation, finding that the composition of participants changes in opposite directions in response to incentives and reminder emails. We develop a model of participation that allows for two dimensions of unobserved heterogeneity in the participation decision. Applying the model to our data produces bounds (or point estimates) that are narrower and closer to the ground truth than the other methods. Our results highlight the benefits of including randomized participation incentives in surveys. Both the testing procedure and the methods for bias adjustment may be attractive tools for researchers who are able to embed randomized incentives into their survey.
Publications
Eliciting Willingness-to-Pay to Decompose Beliefs and Preferences that Determine Selection into Competition in Lab Experiments [pdf] [publisher link] Forthcoming, Journal of Econometrics
with Yvonne Jie Chen, Li Li, Sarah Moon, Edward Vytlacil, and Songfa Zhong
December 2023
This paper develops a partial-identification methodology for analyzing self-selection into alternative compensation schemes in a laboratory environment. We formulate an economic model of self-selection in which individuals select the compensation scheme with the largest expected valuation, which depends on individual- and scheme-specific beliefs and non-monetary preferences. We characterize the resulting sharp identified sets for individual-specific willingness-to-pay, subjective beliefs, and preferences, and develop conditions on the experimental design under which these identified sets are informative. We apply our methods to examine gender differences in preference for winner-take-all compensation schemes. We find that what has commonly been attributed to a gender difference in non-monetary preference for performing in a competition is instead explained by men being far more confident than women in their probability of winning a future (though not necessarily a past) competition.
Non-Representativeness in Population Health Research: Evidence from a COVID-19 Antibody Study [pdf] [publisher link] Forthcoming, AER: Insights
with Michael Greenstone, Ali Hortacsu, Santiago Lacouture, Magne Mogstad, Azeem Shaikh, Alex Torgovitsky, and Winnie van Dijk
September 2023
We examine data from a serological study that randomized participation incentives ($0, $100, $500). Minority and poor households are underrepresented at lower incentives. We develop a framework that uses randomized incentives to disentangle non-contact and hesitancy and find that underrepresentation occurs because minority and poor households are more hesitant to participate, not because they are harder to contact. In particular, reservation payments for contacted households in minority and poor neighborhoods are substantially more likely to exceed $100. The $500 incentive closes hesitancy gaps and restores representativeness on observable dimensions including hospitalization and insurance rates, and a COVID-19 risk index.
Selection Bias in Voluntary Random Testing: Evidence from a COVID-19 Antibody Study [pdf] [publisher link] AEA Papers and Proceedings
with Michael Greenstone, Ali Hortacsu, Santiago Lacouture, Magne Mogstad, Danae Roumis, Azeem Shaikh, Alex Torgovitsky, and Winnie van Dijk
May 2023
We use data from a serological study that experimentally varied financial incentives for participation to detect and characterize selection bias. Participants disproportionately come from neighborhoods with substantially lower COVID-19 risk. Existing methods to account for the resulting selection bias produce wide bounds or estimates that are inconsistent with the population. One explanation for these inconsistent estimates is that the underlying methods presume a single dimension of unobserved heterogeneity. The data suggest there are two types of nonparticipants with opposing selection patterns. Allowing for these different types may lead to better accounting for selection bias.
Work in progress
Allocating Short-Term Rental Assistance by Targeting Temporary Shocks
with John Eric Humphries, Santiago Lacouture, Stephen Stapleton, and Winnie van Dijk
Eviction and Children's Wellbeing and Educational Attainment
with Rob Collinson, John Eric Humphries, Nick Mader, Daniel Tannenbaum, and Winnie van Dijk