Imagine that you’ve just returned home after an arduous day at the office. Using what little energy remains, you hobble to the living room and slump your limp frame on the sofa. Then, in an attempt to drown out the haunting echoes of your obnoxious coworkers, you switch on the television and dial in to the local news station. Thankfully, you tuned in just in time for a very special report.
“A new study shows that our city has the third worst child poverty rate in the state.”
“Third worst,” you wonder, “what’s going on in this town?”
Of course you would wonder such a thing. After all, a city ranked third worst in child poverty must be doing very poorly at dealing with the issue, right? Although this is a possibility, the study itself actually says almost nothing about the state of child poverty in the city. Let’s find out why this is the case.
First of all, it’s important to recognize that having no position on an issue is itself a position: one of neutrality, reflecting a lack of emotional investment. Second, although most studies are conducted with the intent to produce unbiased information, they are often funded or even conducted by organizations with a vested interest in the results, which can lead to selective publication or manipulation of data. Third, and most importantly, we should be aware that studies alone are largely meaningless, for without context and implication, information does little to inform. Raw data, as collected, means little to the average person; it must be made relevant through interpretation and presentation, such as a selective ranking used to imply poor performance.
If we were to explore the data in the example mentioned earlier, we might find that the city ranked third worst in child poverty trailed the worst-ranked city by only a narrow margin, or we might learn that the city showed great improvement over the last year, perhaps more improvement than any other city. It’s even possible that the worst-ranked city is declining while the third worst is improving significantly, in which case we should probably show more concern for the declining city. We also don’t know the history of these cities, which may heavily influence their child poverty rates. There are so many missing pieces of information that could change our reaction to the study that such a report is hardly worth our attention. Also, as we learned before, there can only be one winner, so ranked results will always produce disappointment: if the third worst city were to improve to fourth worst, some other city would simply inherit the rank of third worst.
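To make the ranking objection concrete, here is a minimal sketch with invented numbers (the city names and poverty rates are entirely hypothetical) showing how a “third worst” label can hide both a narrow margin and a strong improvement trend:

```python
# Hypothetical child poverty rates (percent) for five cities,
# as (last year, this year). All numbers are invented for illustration.
rates = {
    "Ashford":  (22.0, 23.5),  # worst this year, and getting worse
    "Brookton": (23.0, 23.4),
    "Caldwell": (25.0, 23.3),  # "third worst", yet most improved
    "Dunmore":  (18.0, 18.2),
    "Eastvale": (15.0, 15.1),
}

# Rank cities from worst (highest rate) to best for the current year.
ranking = sorted(rates, key=lambda c: rates[c][1], reverse=True)
print(ranking)  # Caldwell lands in third-worst place

# Yet the gap separating it from the worst city is tiny.
worst, third = ranking[0], ranking[2]
margin = rates[worst][1] - rates[third][1]
print(f"margin behind the worst city: {margin:.1f} points")

# And year-over-year change (negative = improvement) tells another story.
changes = {c: this - last for c, (last, this) in rates.items()}
most_improved = min(changes, key=changes.get)
print(f"most improved city: {most_improved}")
```

The headline “third worst” is true of these numbers, but so is “most improved” and “within 0.2 points of the leader” — the ranking alone picks the most alarming frame.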
News reporters, politicians and talk show guests regularly cite statistics in order to persuade listeners, but this is often done while ignoring contrary data. An example of this would be the Summer of the Shark in 2001. Sensationalist coverage of shark attacks during that period eventually resulted in calls to pass legislation addressing what seemed like a growing number of incidents. However, the number of attacks in 2001 was actually 76, down from 85 the previous year.
Sometimes statistics are misused not by ignoring data, but by drawing connections between unrelated statistics. It is commonly claimed that a person is more likely to die from a coconut dropping on their head than from a shark attack. However, this comparison fails to account for many factors, such as geography and recreational preference. For example, a person who regularly surfs on the coast of South Africa, where there are many great white sharks and no coconuts, should not feel safe because of the statistic. Another example would be a female swimmer taking comfort in knowing that 80% of drowning victims in the United States are male. This statistic doesn’t necessarily mean that women are better swimmers; if it did, then white people should also feel safe, since their drowning rates are significantly lower than those of other races. These are yet more examples of how a population-wide probability can be mistaken for a measure of an individual’s actual risk. It’s also strange that people are quick to preach the dangers of certain behaviors, like skateboarding and combat sports, yet feel comfortable with far more deadly activities, like eating and swimming.
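The coconut-versus-shark comparison is really a base-rate problem, and the mechanics can be sketched with invented figures (every number below is hypothetical; only the arithmetic matters). A population-wide probability says little about a subgroup whose exposure is very different:

```python
# All figures are invented for illustration purposes.
population = 10_000_000
surfers = 50_000              # small subgroup with high shark exposure

shark_deaths_total = 5        # nationwide, per year
shark_deaths_surfer = 4       # nearly all victims come from the subgroup
coconut_deaths = 20           # spread across the whole population

# Population-wide rates: coconuts look four times deadlier than sharks.
p_shark_any = shark_deaths_total / population
p_coconut_any = coconut_deaths / population
print(p_coconut_any > p_shark_any)        # True, for the average person

# Conditional on being a surfer, the ordering reverses dramatically.
p_shark_surfer = shark_deaths_surfer / surfers
print(p_shark_surfer / p_coconut_any)     # sharks are far riskier for surfers
```

The headline statistic is true for the population as a whole and simultaneously useless for the surfer, because it averages over people with wildly different exposures.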
Some studies don’t just provide data, but attempt to identify a correlation between two variables. Unfortunately, the conclusions drawn from correlational studies can be highly subjective and even dangerous. Take, for example, a survey finding that people who regularly consume popcorn are less likely to experience heart attacks. Although the finding may be accurate, the correlation does not, in itself, identify popcorn as a reliable heart attack prevention agent. The deduction most people make is that consuming popcorn prevents heart attacks, but the study offers no explanation as to why those who consume popcorn have fewer heart attacks – we must draw that conclusion ourselves.
One such conclusion is made by those who note that popcorn, among other snack foods, contains antioxidants – molecules that are thought to prevent diseases such as cancer. This conclusion seems to explain the correlation, but it’s just as likely that those who eat popcorn also exercise more, or that they eat less of other, unhealthier snacks. Maybe popcorn does cause heart attacks, just fewer than ice cream does. The study doesn’t tell us how or why the results occurred, which leaves the door open to interpretation and bias. It’s possible that the study was funded by a popcorn company that selected, or even paid, scientists who favor an antioxidant-rich diet to share their opinion. Perhaps the publication of such a study would actually result in a greater number of heart attacks, owing to a massive increase in popcorn consumption by misguided people attempting to evade the very fate they incur.
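The popcorn scenario can be simulated directly. In the sketch below (a toy model with invented probabilities), a hidden confounder — exercise — drives both popcorn eating and heart health, while popcorn itself has zero causal effect. The correlation still appears:

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Simulate people whose exercise habit (the confounder) influences
    both popcorn eating and heart attack risk. Popcorn has no effect."""
    eats_popcorn, heart_attack = [], []
    for _ in range(n):
        exercises = random.random() < 0.5
        popcorn = random.random() < (0.7 if exercises else 0.3)
        attack = random.random() < (0.02 if exercises else 0.10)
        eats_popcorn.append(popcorn)
        heart_attack.append(attack)
    return eats_popcorn, heart_attack

popcorn, attacks = simulate()

def attack_rate(is_eater):
    outcomes = [a for p, a in zip(popcorn, attacks) if p == is_eater]
    return sum(outcomes) / len(outcomes)

print(f"heart attack rate, popcorn eaters: {attack_rate(True):.3f}")
print(f"heart attack rate, non-eaters:     {attack_rate(False):.3f}")
# Eaters show a markedly lower rate, yet popcorn did nothing: exercise did.
```

The simulation reproduces the survey’s finding without popcorn having any protective effect at all, which is exactly why a correlation alone cannot license the antioxidant conclusion.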
The job of researchers is to collect and publish data. The job of writers and publishers is to decide what it means. So the next time you hear a study or statistic cited, question the conclusion that follows. It’s entirely possible that it’s worth ignoring.