This is the first part of a two-part essay. The second part is available here.
Data-driven. Evidence-based. No matter what field you’re in, these buzzwords are ubiquitous. Two recent reads — Candice Lanius on statistics and racism and Matt LeMay on the trouble with “data” — have gotten me thinking once more about how insidious they can be.
“Insidious?” you may ask. “But shouldn’t decisions be based on evidence? Shouldn’t they be based on data? That’s just common sense.”
Then again, anthropology teaches us that “common sense” should always be interrogated. Common sense has a way of making some very good questions un-askable.
Here’s one of them: in the data-driven world, what counts as “data”? What doesn’t?
Most proponents of “evidence-based” and “data-driven” decision-making mean something very specific by “data”. They mean statistics. But numbers aren’t the only kind of data; qualitative (or descriptive) data is important too.
Here’s an example (for which I thank Alejandro Alves): Twitter broke the internet when it changed its star icon to a heart. A quantitative analysis might show that users are more likely to use a heart button than a star button, or vice versa. But a qualitative analysis could show how we use them differently. Maybe users are only comfortable using the heart button for something they feel positively about, while a star could also denote something important, for instance.
Quantitative and Qualitative Data
The truth is, qualitative and quantitative methods have different purposes, and they’re best used in sync.
Quantitative data allows generalization. Statistical methods were designed to allow researchers to draw inferences from a segment of the population to the entire population. They’re great at giving a sense of just how common something is, and whether two things are linked or not.
Qualitative data, on the other hand, is more in-depth. It’s particularly useful in the exploratory phase of a project, when researchers may not know what factors to focus on. It’s also helpful for understanding processes, and thus for establishing causation rather than correlation.
The best research tacks back and forth between both approaches. So why is qualitative data so often marginalized or left out of the conversation entirely?
There are lots of reasons why quantitative data is king. I suspect a general statistical illiteracy is part of the picture; numbers become authoritative because so few people feel confident in interpreting them. And there’s another piece too: numbers, the thinking goes, don’t require interpretation. Numbers are objective.
There’s only one problem with this logic: it’s simply not true.
Few of the things we want to study are purely quantitative in nature. To quantify them, we first have to operationalize them. That is, we have to find something measurable and quantifiable that we think is a pretty good proxy for the concept we actually care about.
Let’s say we want to know how religious people are. We can think about this qualitatively: we can have in-depth conversations with them about their religious beliefs and practices and find a way to code this information and make determinations from there. Or we can think about it quantitatively. Do they believe in God? How often do they attend religious meetings? How important is religion to them, on a 5- or 7-point Likert scale? All of these approaches will result in different rankings.
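The point can be made concrete with a small sketch. Everything below is invented for illustration: three hypothetical respondents and two equally “quantitative” operationalizations of religiosity. Each proxy is a perfectly respectable number, yet the two rank the same people in different orders.

```python
# Hypothetical survey responses; names and numbers are invented for illustration.
respondents = {
    "A": {"believes_in_god": True,  "attendance_per_month": 0, "importance_1_to_7": 6},
    "B": {"believes_in_god": True,  "attendance_per_month": 8, "importance_1_to_7": 3},
    "C": {"believes_in_god": False, "attendance_per_month": 4, "importance_1_to_7": 5},
}

# Two equally "quantitative" proxies for religiosity:
rank_by_attendance = sorted(
    respondents, key=lambda r: respondents[r]["attendance_per_month"], reverse=True
)
rank_by_importance = sorted(
    respondents, key=lambda r: respondents[r]["importance_1_to_7"], reverse=True
)

print(rank_by_attendance)  # ['B', 'C', 'A'] -- B attends most often
print(rank_by_importance)  # ['A', 'C', 'B'] -- A rates religion most important
```

The same three people, measured two ways, come out in opposite orders: which ranking is the “objective” one depends entirely on which proxy we chose to measure.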
Surely if our data were objective, every approach would return the same results. We’re beginning to home in on the problem.
Let’s think about another example, the use of standardized tests in education. Let’s say everyone agrees that seventh-graders should know the name of the President of the United States. Consider the following two questions:
Who is the President of the United States?
a) Barack Obama
b) George W. Bush
c) Mitt Romney
d) Donald Trump
____ is the current President of the United States.
Both questions, arguably, test students on the current President of the United States. But there are some pretty major differences. One is much easier than the other, for starters. They test different kinds of thinking. And they’re probably going to give you a different distribution of answers.
They both test the same knowledge … but they give a very different picture of how many students know who the President is. What happened to objectivity?
The fact that something can be quantified doesn’t mean it’s objective. (In fact, I’ll argue in Part 2 of this essay that objectivity is impossible.) But since the chief and often unspoken argument for using quantitative data to the exclusion of qualitative data is this alleged objectivity, we now have a problem. What kind of data can, and should, we be using?