In other words, how to consume surveys and data with pinches of salt to remain critical thinkers in the misinformation age.
Every day, at work and at play, we are bombarded by sweeping statements in advertisements and the mass media.
“XX% of women spend Y% more than men on Item Z”.
“Indian smartphone users are A% more active than their [name another country] counterparts”.
“Three-in-four APJ workers demand better hybrid-working support or they will quit”.
“Global CFOs concerned about [name a trend]: but expectation gaps exist”.
The striking headlines and blurbs can be so convincing, so interesting … and so contextual. Yes, contextual. Read further into an article reporting on survey findings and you may see how the data has been interpreted in one of many possible alternative ways. Or the data may really apply only to the sample group polled, but the report generalizes the trend across the board—sometimes even when the confidence level or sample size is not enumerated or is statistically dubious due to special caveats listed in the report.
Since not every one of us is a statistics expert, and even those who are can be misled by subtle biases in statistical reports, here are some tips for reading survey reports with a critical mind. That way, even as digitalization empowers more organizations to generate ever more data analyses and cherry-pick the observations and trends that favor their agendas, consumers of such reports can remain objective and make better decisions.
10 pinches of salt when consuming survey data
Whether we admit it or not, statistics and survey processes can be tweaked in countless ways to prop up weak arguments, throw doubt on opposing studies, or simply sow doubt and cognitive dissonance among the masses.
With that said, the onus is on the critical consumer to take every survey-backed claim with a heavy pinch of salt, following the checklist of 10 discernments below.
- Full disclosure: Expect any believable and professional report or survey to disclose full data on the period of study, the sample populations and all selection criteria, and the methodologies used (especially if rewards, incentives or limited time to participate in the survey are involved), as well as enough detail for readers to judge the confidence level and margin of error of the data.
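To see why sample size disclosure matters so much, here is a rough back-of-envelope sketch in Python. The formula is the standard worst-case margin of error for a reported proportion; it assumes simple random sampling and 95% confidence, and the sample sizes are hypothetical:

```python
import math

def margin_of_error(sample_size: int, confidence_z: float = 1.96) -> float:
    """Worst-case margin of error for a reported proportion,
    assuming simple random sampling (p = 0.5, 95% confidence)."""
    return confidence_z * math.sqrt(0.25 / sample_size)

# A sweeping headline backed by only 150 respondents vs. a larger poll:
print(f"n=150:  +/-{margin_of_error(150):.1%}")   # roughly +/-8 points
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # roughly +/-3 points
```

A claim of "54% of workers" from 150 respondents could plausibly be anywhere from 46% to 62%, which is exactly why a report that hides its sample size deserves extra salt.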
- Conflict of interest: Know the exact stated agenda of the survey and the provenance of the report. Will you be more confident of the credibility of a report on trends in a certain technology when the firm that commissioned the survey is a stakeholder in that field? What if the commissioning firm is a neutral party or a government think-tank? Has the study been properly peer reviewed for technical competency? In an age where even biased peer review processes and reviewers can be roped in to endorse skewed studies, full disclosure and background checks on reviewers are increasingly needed.
- Keep an open mind but expect tempered claims: Understand that truly comprehensive, above-board global surveys demand huge budgets and resources to be commissioned. We get that. However, for every survey that is not large enough, or is performed under constraints that strain credibility, demand that the authors of the report scale down their generalizations and be upfront about the limitations of the data.
- Read between the lines: The way data is collected, filtered and interpreted can be skewed, even inadvertently. If you are looking for specific trends in the report, head straight for the numbers, but keep your lips salted against editorialized statements interpreting those numbers. For example, do not be influenced by subtle manipulations such as “under X% said they liked Product A over Product B”, or, when a percentage Y works against a report’s agenda, “Only Y% agreed that they found this technology useful”, when in fact Y is the highest percentage relative to other viewpoints. This kind of circumstantial ‘false causality’ can be subtly injected into a survey report to plant subliminal beliefs.
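The “Only Y% agreed” trick is easy to demonstrate. The response breakdown below is entirely hypothetical, but it shows how a figure framed as disappointingly small can actually be the most common answer in the poll:

```python
# Hypothetical breakdown for "Do you find this technology useful?" (percent)
responses = {"agree": 38, "neutral": 33, "disagree": 29}

# The single most common response in the breakdown
top_view, top_share = max(responses.items(), key=lambda kv: kv[1])

print(f'Report framing: "Only {responses["agree"]}% agreed..."')
print(f'Reality check:  "{top_view}" is the most common response, at {top_share}%')
```

Same data, opposite impressions: “only 38% agreed” and “agreement was the plurality view” are both technically true.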
- Beware of finding what you seek: Just as surveys and reports can be misleading, consumers of such information can also be biased in trying to find data to support what they want to prove. In this case, the general rule of thumb is to give full consideration to data that goes against your own agenda.
- Signs of hyperbole: Notice how some survey reports play up observations with high percentages (“92% of CIOs are drowning in pandemic-induced cyber threat surges”) for the newsiness effect, only for a closer reading of the methodology disclosures to reveal that those 92% were from SMEs, not large enterprises? Or that the exact definition of “drowning” was just a 6% increase month on month? You get the drift. Watch out for hyperbolic terms that exaggerate high-percentage metrics; sometimes it is the smaller numbers that hold the real clues about certain trends. Related red flags include:
  - Vague comparators such as “more than”, “less than” or “over X%” offered in place of exact figures
  - Overgeneralizations such as “Women tend to…” when in actuality only a sample group of women of specific demographics is being referenced
  - Playing up small numbers as big trends simply because the year-on-year percentage change looks large, or ignoring certain mitigating or unpredictable factors
- Overgeneralizations involving respondents: Do you really believe that the views of a small or self-selected group of respondents can speak for an entire industry, region or demographic? Check how representative the sample is before accepting claims projected onto a wider population.
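The “surge that was really a 6% increase” pattern is worth checking for yourself whenever a headline leads with a dramatic change. With hypothetical incident counts:

```python
# Hypothetical monthly incident counts behind a "surge" headline
before, after = 50, 53

absolute_change = after - before             # the extra incidents themselves
relative_change = (after - before) / before  # the change as a fraction of the base

print(f"Absolute increase: {absolute_change} incidents")
print(f"Relative increase: {relative_change:.0%}")  # a modest 6%
```

Three extra incidents on a base of 50 is the same data as “a 6% month-on-month rise”; whether that is “drowning” is editorializing, not statistics.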
- Time extrapolations: Surveys are about testing hypotheses and searching for trends at a point in time, so watch out for wording that declares a trend existed back then and therefore still exists six months later. Confidence levels and statistical methods aside, if a group of respondents participated at a certain time in the past, the assumption that their sentiments remain valid a year later must itself be taken with a pinch of salt.
- Loaded questions, prompted views: Ideally, any report consumed for serious decision-making should include the survey questions used and the participation/attrition rates of respondents. This is a failsafe that lets consumers of the report screen for questions, or a line of questioning, whose calculated phrasing and word play may lead respondents to express their views in a certain way.
Watch out for signs of this in statements such as “R% agreed that Product A has all the desired qualities of a superior product, compared to Products B, C and D”. Should a good survey prompt respondents with pre-defined choices? Since most surveys are multiple-choice questionnaires, this is a built-in limitation that reports should not exploit without clear disclosure.
- Deferring to statistical authority: Does commissioning a highly established firm to conduct a survey make the resulting report more authoritative and therefore unquestionably neutral? We leave readers to decide the answer for themselves, but the safest mindset is not to assume so.
- Lack of evidence proves a point is true, and many more fallacies: “The most recent data shows that [name your assertion] is true, and there is no evidence to prove [your assertion here] is not true.” During the two-plus years of the COVID-19 pandemic, many so-called scientific agencies resorted to this method of argument, thereby violating the very foundation of scientific inquiry. Now, more than two years later, empirical evidence is emerging to disprove the earlier assumed proofs, showing that fear and ignorance can derail scientific neutrality. Enough said: if the topic of statistical manipulation is of interest, you can read up on the minefield of statistical manipulation here.
In our busy lives, we have limited time to think critically when consuming information, let alone disinformation/misinformation, innuendo and psychological manipulation (psyops). With practice, however, we can learn to take things with a pinch of salt intuitively, and that muscle memory will save precious time and spare us misguided decisions at work and in our personal lives.