Your survey data is flawed. Here's what to do about it

Science magazine just came out with a bombshell: senior researchers at SurveyMonkey have detected that as many as one in five surveys contain fraudulent data. And we're not talking about a handful of surveys; they analyzed more than 1,000 public data sets from multiple surveys to reach this conclusion.

“…Among 1008 surveys, their test flagged 17% as likely to contain a significant portion of fabricated data. For surveys conducted in wealthy westernized nations, that figure drops to 5%, whereas for those done in the developing world it shoots up to 26%.”

This is a major problem. The use of quantitative data by product teams has exploded in the past 10 years, and if you can't trust your data, you can't trust the decisions that rely on it.

Fortunately, the researchers, Michael Robbins and Noble Kuriakose, point to the likely culprit:

“The basis of the test is the likelihood, by chance alone, that two respondents will give highly similar answers to questions on a survey. How similar is too similar? After running a simulation of data fabrication scenarios, they settled on 85% as the cutoff. In a 100-question survey of 100 people, for example, fewer than five people would be expected to have identical answers on 85 of the questions.

…one of the inevitable problems, Robbins says, is “curbstoning” where an interviewer sits on the curb and invents survey responses—often duplicating answers—in order to avoid risk or save time.”

Essentially, some interviewers fabricate data, especially when the survey population puts them at personal risk. It makes sense, and it's also why you don't see a ton of polling data or opinion surveys on ISIS: very few researchers are going to go there (and very few people would be willing to pay for that). In a long-form research survey, there's a fairly low likelihood that multiple responses will be identical or nearly identical, so a dataset with lots of identical data could be a flag for falsification. To be fair, Pew Research takes issue with Robbins and Kuriakose's research and is pushing back publicly on its website. This fight will continue for a while, because the answer is not black and white; it's a gradient.
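To make the idea concrete, here is a minimal sketch of the percent-match check in Python. This is not Robbins and Kuriakose's actual implementation; the function names and toy data are illustrative, and only the 85% cutoff comes from the article quoted above.

```python
# A minimal sketch of the percent-match idea, not the published test.
# Assumes every respondent answered every question (equal-length rows).
from itertools import combinations

def max_percent_match(responses):
    """For each respondent, find the highest share of identical answers
    they have with any other respondent in the dataset."""
    best = [0.0] * len(responses)
    for i, j in combinations(range(len(responses)), 2):
        a, b = responses[i], responses[j]
        match = sum(x == y for x, y in zip(a, b)) / len(a)
        best[i] = max(best[i], match)
        best[j] = max(best[j], match)
    return best

def flag_suspect_respondents(responses, cutoff=0.85):
    """Flag respondents whose answers are >= cutoff identical to someone
    else's -- the 85% default comes from Robbins and Kuriakose's simulations."""
    return [i for i, m in enumerate(max_percent_match(responses)) if m >= cutoff]

# Toy example: respondents 0 and 2 share 9 of 10 answers (90% match).
surveys = [
    [1, 2, 3, 1, 2, 3, 1, 2, 3, 1],
    [3, 1, 2, 2, 1, 3, 2, 1, 1, 2],
    [1, 2, 3, 1, 2, 3, 1, 2, 3, 2],
]
print(flag_suspect_respondents(surveys))  # [0, 2]
```

With real data you would also need to handle missing answers and questions where near-unanimous agreement is legitimate, which is part of what Pew's pushback is about.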

So what is the impact on product teams? Probably minimal, for most. Most product teams get validation data from a variety of sources that are less susceptible to this type of manipulation, such as A/B testing and experimentation. Even if you are running surveys as part of your data collection (and you should be), most product teams aren't building products that target the developing world or high-risk populations. If you are, work with a reputable research team or train your own team to collect that data responsibly, and check the dataset against Robbins and Kuriakose's test.

If you aren't building products for market segments with these problems, take this as a reminder that data is just data: before we draw conclusions, there is a set of sanity checks we need to run first. One of those checks may be "is everyone answering the questions in the same way?"

Paul Young

Paul Young oversees the strategic development of Pragmatic Institute’s portfolio of products and leads the executive team in the evaluation of new product opportunities. He also manages the instructor team. Paul began his career as a software developer and has worked in startups and large companies across B2B and B2C industries, including telecommunications and networking, IT and professional services, consumer electronics and enterprise software. He has managed P&L lines for products with hundreds of millions in revenue, and faced difficult choices about which products in the portfolio to retain and which to kill. Reach him at pyoung@pragmaticmarketing.com.

