How Well Do My Results Generalize Now? The External Validity of Online Privacy and Security Surveys

02/28/2022
by Jenny Tang et al.

Security and privacy researchers often rely on data collected through online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) and Prolific. Prior work, which used data collected in the United States between 2013 and 2017, found that MTurk responses regarding security and privacy were generally representative for people under 50 or with some college education. However, the landscape of online crowdsourcing has changed significantly over the last five years, with the rise of Prolific as a major platform and the increasing presence of bots. This work attempts to replicate the prior results about the external validity of online privacy and security surveys. We conduct an online survey on MTurk (n=800), a gender-balanced survey on Prolific (n=800), and a representative survey on Prolific (n=800) and compare the responses to a probabilistic survey conducted by the Pew Research Center (n=4272). We find that MTurk responses are no longer representative of the U.S. population, even when responses that fail attention check questions or CAPTCHAs are excluded. Data collected through Prolific is generally representative for questions about user perceptions and experience, but not for questions about security and privacy knowledge. We also evaluate the impact of Prolific settings (e.g., gender-balanced sample vs. representative sample), various attention check questions, and statistical methods on the external validity of surveys conducted through Prolific, and we develop recommendations about best practices for conducting online privacy and security surveys.
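As a minimal sketch of the kind of comparison the abstract describes, the snippet below contrasts a crowdsourced sample's answer distribution for one survey question against a probabilistic benchmark using a chi-squared goodness-of-fit test. All counts, proportions, and the significance threshold here are hypothetical placeholders, and this is one plausible test for representativeness rather than the paper's actual analysis.

```python
# Illustrative comparison of a crowdsourced answer distribution against a
# probabilistic benchmark (e.g., a Pew survey) for a single question.
# Counts and proportions below are hypothetical, not taken from the paper.
from scipy.stats import chisquare

# Hypothetical answer counts for one knowledge question, MTurk sample (n=800):
# e.g., "correct", "incorrect", "not sure"
mturk_counts = [312, 288, 200]

# Hypothetical benchmark proportions for the same answer options
benchmark_props = [0.48, 0.30, 0.22]

# Scale benchmark proportions to the crowdsourced sample size to obtain
# expected counts under the null hypothesis that the two samples match.
n = sum(mturk_counts)
expected = [p * n for p in benchmark_props]

stat, p_value = chisquare(f_obs=mturk_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # hypothetical alpha
    print("Distributions differ: crowdsourced responses may not generalize.")
```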
