How We Ensure Your Insights Are Grounded in Good Data
By Ryan Palmer and Taryn Akiyoshi
Any market research firm with a quantitative research program will tell you that data quality is paramount – we can’t deliver sound, actionable insights if the data underpinning our analysis is bad. As online respondents have grown savvier about the incentive game, people who don’t qualify for a survey have found ways to maximize their incentive earnings, deploying bots that answer at random or simply lying to researchers. The result is “bad data” based on the opinions of people who are not relevant to the research design.
That poses a serious challenge for market researchers and sample providers. We all have an interest in making sure that the end-client has insights based on reliable data, even if it takes more time, effort, and money to get that.
So we get creative. Our surveys include trap questions that trip up bots or inattentive respondents, we monitor the speed at which surveys are completed, and we watch for suspect answering patterns. All of that adds work, and we have to fit it into the ever-tightening timelines our clients must meet to stay competitive in a fast-paced economy. It’s hard but necessary – it may take longer and cost more, but knowing for certain that the insights we deliver to clients are real is invaluable.
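To make the idea concrete, here is a minimal sketch of what automated checks like these can look like. The field names, thresholds, and the trap-question answer are illustrative assumptions for the sketch, not our production criteria:

```python
import statistics

def flag_respondent(record, median_seconds):
    """Return a list of quality flags for one survey response (illustrative rules)."""
    flags = []
    # Speeders: finishing far faster than the median suggests inattention or a bot.
    if record["seconds"] < 0.4 * median_seconds:
        flags.append("speeder")
    # Straightlining: choosing the same option for every item in a grid question.
    if len(set(record["grid_answers"])) == 1:
        flags.append("straightliner")
    # Trap question: e.g. an instruction like "select 'Strongly disagree' here".
    if record["trap_answer"] != "strongly_disagree":
        flags.append("failed_trap")
    return flags

def clean(records):
    """Keep only respondents with no quality flags."""
    median_seconds = statistics.median(r["seconds"] for r in records)
    return [r for r in records if not flag_respondent(r, median_seconds)]
```

In practice checks like these are a first pass that routes suspect responses to human review rather than a fully automatic filter.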
Stricter Quality Controls Save Projects
We ran into sampling issues on two recent projects that really exemplified the issue and reinvigorated our internal conversation about our data integrity strategy.
The first was a study for a brand looking to improve its content marketing program. We needed a sample of several hundred industry-specific professionals, which was going to be difficult to find and expensive to incentivize. In our first fielding period, 30-40% of respondents were clearly not taking the survey in good faith – and that’s after the first line of quality control! So we began analyzing open-ended responses to profession-specific questions to catch the larger issue, and we took that issue back to our providers. The cycle repeated on us: we’d hit our sample target, junk two out of every five respondents for bad data, and go back into field until we hit it again. We disqualified so many bad respondents in the effort to hit our target that it added two weeks of fielding time.
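Open-end analysis like the step above can be partly automated with a simple first-pass screen. This is a sketch under assumed inputs – the vocabulary list and threshold are hypothetical, and answers it flags would still go to a human reviewer:

```python
import re

# Hypothetical profession-specific vocabulary for a screening question;
# a real list would be built with subject-matter experts for the study at hand.
PROFESSION_TERMS = {"procurement", "vendor", "rfp", "supplier", "sourcing"}

def looks_on_topic(open_end: str, min_hits: int = 1) -> bool:
    """Return True if the open-ended answer uses profession-specific vocabulary."""
    words = set(re.findall(r"[a-z]+", open_end.lower()))
    return len(words & PROFESSION_TERMS) >= min_hits
```

Answers that fail the screen aren’t automatically discarded; they’re prioritized for manual review.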
The second project had similar goals, so we tried a new sample provider that came recommended – and this time we went in armed with the checks we had developed: profession-specific quiz questions, trick questions, open-end response analysis, and data-cleaning protocols. Even with the new provider, every one of those checks proved necessary. For a sample target of a few hundred respondents, the extra checks cleared out nearly as many respondents as the target itself. The stricter protocols saved us from a sample in which over 80% of respondents weren’t qualified to answer our questions!
Partnerships for Quality
We got some pushback from our sample providers on the number of respondents that we were terming out, but we held firm. Our clients need the data that drives their business decisions to be reliable – that’s a bright red line that we will not cross. We have to think of this as a partnership where both parties need to be committed, transparent, and working toward the common goal. We do that in the following ways:
- Build an understanding of our common goal and make sure it’s clear to all sample providers and their partners: data quality is paramount
- Have open conversations with sample providers and ask for their recommendations for getting the best quality data
- Monitor data streams by panel and provide candid feedback to each partner so we can identify where problematic respondents are coming from
- Be transparent with sample providers about which of their panel partners are underperforming expectations so we can address issues together
- Constantly review cleaning criteria so all teams can identify problem areas
- Add in Alter Agents’ additional checks when fielding isn’t going well – rescreening, dummy questions, quiz questions, and even an automated LinkedIn check to verify B2B respondents
Candor is one of our core values as a company. When we find a data quality issue, we’re committed to addressing it openly and plainly with our clients and partners. That’s the first step to presenting solutions and being upfront regarding their impacts on cost, timing, and quality. It doesn’t matter if the shortcoming was a sample provider or ours – we will fix it because our clients deserve actionable insights.
If you’re interested in working with our team, contact us today!