If you think numbers can’t kill, think again.
Mark Twain popularized the phrase "lies, damned lies, and statistics" in 1907 to describe the persuasive power of statistics to bolster weak arguments. Thirty years later, in his treatise World Brain, H.G. Wells predicted that educated citizens in modern democracies would need a healthy familiarity with statistics to competently navigate emerging technological advances. Understanding concepts of risk and probability was deemed as important as learning to read and write. While widespread statistical literacy continues to elude us, Wells' prediction, if heeded, could have averted several modern-day medical tragedies.
Most recently, a pandemic-sized overestimation of the statistical risk of Covid-19 vaccination, fueled largely by a concerted disinformation push, has resulted in a persistent hesitancy in segments of society to "take the jab." In turn, this has directly led to increases in infection and death rates while prolonging the pandemic. The financial and emotional burdens are staggering: already an estimated tens of billions of dollars in preventable hospitalizations and hundreds of thousands of preventable deaths. Sadly, these numbers are still growing. But statistical illiteracy has been costing lives throughout history.
In 1995, the Committee on Safety of Medicines in the U.K. reported on a slate of studies concluding that third-generation oral birth control pills doubled the risk of life-threatening blood clots forming in the legs or lungs. The threat was disseminated widely as a warning, not only to physicians but to the lay public in the form of emergency announcements in mass media outlets. The resulting panic undermined confidence in oral contraceptives and put women off birth control, leading to sharp rises in pregnancies and abortions, which had previously been declining, and, ironically, in blood clots, which are associated with both pregnancy and abortion. For every additional abortion, there was also an additional birth. The most pronounced increases occurred in girls under the age of 16, who incurred an extra 800 conceptions in the first year. The monetary cost to the National Health Service for the increased abortions alone was an estimated £46 million, or nearly £85 million today.
The catastrophe could have been avoided, or greatly minimized, had a less confusing, more realistic risk estimate been reported. Since no risk is ever zero, what exactly does "double the risk" mean? The answer lies in the difference between relative and absolute risk.
Relative risk is the ratio of the risks for an event between compared groups. Absolute risk measures the probability of an event happening. In the studies reported above, the data showed that 1 out of 7,000 second-generation birth control users developed a blood clot, while 2 out of 7,000 third-generation users developed one. In this dataset, the increase in absolute risk was 1 in 7,000. The relative risk, on the other hand, did indeed "double": a 100% increase, from 1 to 2 cases out of 7,000. Note that the underlying numbers are identical in both cases; the difference lies in which aspect of the increase is emphasized. Absolute risk reports the change in probability; relative risk reports only the ratio between the two probabilities, no matter how small each probability is.
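The two statistics can be computed directly from the clot counts above, in a few lines (a minimal sketch):

```python
# Relative vs. absolute risk, using the counts from the pill studies:
# 1 clot per 7,000 second-generation users, 2 per 7,000 third-generation users.
users = 7000
second_gen_clots = 1
third_gen_clots = 2

# Absolute risk: the change in probability of the event.
absolute_increase = third_gen_clots / users - second_gen_clots / users

# Relative risk: the ratio of the two probabilities.
relative_risk = (third_gen_clots / users) / (second_gen_clots / users)

print(f"absolute increase: 1 in {round(1 / absolute_increase)}")
print(f"relative risk: {relative_risk:.1f}x (a 100% increase)")
```

The same data, honestly reported two ways: "1 extra case in 7,000" versus "double the risk."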
Reporting relative risk creates an inflated sense of the danger or, when used to describe a medicine's effect on health outcomes, an exaggerated sense of risk reduction. Absolute risk more clearly describes the probability of outcomes and is the recommended statistic for communicating health information to the public. The next time a pharmaceutical ad pops onto your screen, if a number is reported, odds are it will be the misleading relative risk.
A different kind of statistical issue that has also led directly to deaths occurs when the accuracy of an individual test result is improperly reported or not provided at all. To understand this problem, first know that tests generate two kinds of errors: False positives and false negatives.
A false positive (or false alarm) occurs when a test reads positive in people who do not have the disease. The false-positive rate is the proportion of positive tests among clients without the condition. A false negative (miss) occurs when a test reads negative in someone who has the disease. The false-negative rate (miss rate) is the proportion of negative tests among clients with the condition. The specificity of a test is the proportion of negative tests among clients without the condition; the sensitivity is the proportion of positive tests among clients with the condition.
Note that no test is ever 100% certain; every test generates error. A rare, worst-case scenario was documented in a referral involving a 36-year-old American construction worker who tested negative on ELISA tests 35 times before it was finally established that he was infected with HIV. Screening tests like mammograms generate false positives (false alarms) surprisingly often, and when women participate in a 10-year program of annual mammography, the chances of a false alarm multiply: fully half of women without cancer can expect one or more false-positive mammogram results. Anyone participating in screening should be informed that most suspicious results are false alarms. Harm and even death can result when false-positive results lead to overdiagnosis and overtreatment, and most treatments carry side effects of their own.
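Why false alarms "multiply" over repeated screenings follows from a short calculation. The 7% per-screen false-positive rate below is an assumption chosen for illustration; actual rates vary by program:

```python
# Probability that a cancer-free woman receives at least one false-positive
# result over a 10-year annual screening program, assuming independent
# screens with a 7% false-positive rate each (an illustrative figure).
per_screen_fpr = 0.07
screens = 10

p_at_least_one = 1 - (1 - per_screen_fpr) ** screens
print(f"chance of at least one false alarm: {p_at_least_one:.0%}")
```

Even a modest per-screen error rate compounds to roughly a coin flip's chance of a false alarm over a decade of screening.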
Across the Atlantic, and a few years before the pill scare in England, the HIV epidemic was well underway in the United States. At an AIDS conference it was reported that out of 22 blood donors in Florida notified that they had tested positive for HIV with the ELISA test, 7 committed suicide. Years later, a medical textbook describing the tragedy pointed out that "even if the results of both AIDS tests, the ELISA and Western blot, are positive, the chances are only 50-50 that the individual is infected." For individuals from low-risk groups, there is a coin flip's chance that the test is wrong. The implication is that the actual odds of having HIV, given the positive test result, were not communicated or were miscommunicated to the low-risk population of blood donors. Later, direct study of this possibility produced results nothing short of shocking.
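How an accurate test can still be a coin flip for a low-risk group follows from Bayes' rule: when the condition is very rare, even a tiny false-positive rate produces as many false alarms as true positives. The prevalence, sensitivity, and specificity below are assumed values chosen to illustrate the effect, not the actual figures for the 1980s tests:

```python
# Positive predictive value (PPV) for a low-risk population, via Bayes' rule.
# All three inputs are illustrative assumptions.
prevalence = 0.0001     # assume 1 in 10,000 low-risk donors is infected
sensitivity = 0.999     # P(test positive | infected)
specificity = 0.9999    # P(test negative | not infected)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
ppv = prevalence * sensitivity / p_positive

print(f"P(infected | positive test) = {ppv:.0%}")
```

With these numbers, a positive result means roughly even odds of infection: the handful of true positives is matched by an equal handful of false alarms from the vast uninfected majority.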
Investigators examined the quality of HIV counseling provided to heterosexual men with low-risk behavior by sending an undercover "client" (a research confederate) to take HIV tests at 20 public health centers in Germany. The client made clear up front that he belonged to no risk group, like most people who take HIV tests. In the mandatory pretest counseling sessions with physicians, the client asked the following questions: "Could I possibly test positive if I do not have the virus? And if so, how often does this happen? Could I test negative even if I have the virus?" The following are the actual, transcribed responses:
1. "No, certainly not"
2. "Absolutely impossible"
3. "With absolute certainty, no"
4. "No, absolutely not"
5. "Never"
6. "Absolutely impossible"
7. "Absolutely impossible"
8. "With absolute certainty... no"
9. "The test is absolutely certain"
10. "No, only in France, not here"
11. "False positives never happen"
12. "With absolute certainty, no"
13. "With absolute certainty"
14. "Definitely not... extremely rare"
15. "Absolutely not" ... "99.7% specificity"
16. "Absolutely not" ... "99.9% specificity"
17. "More than 99% specificity"
18. "More than 99.9% specificity"
19. "99.9% specificity"
20. "Don't worry, trust me"
Seventeen of the twenty counselors (85%) provided statistical misinformation to the client, with the first sixteen contending that the test was incapable of error. Although counselors 14 to 16 initially claimed that no false-positive results could occur, they changed their minds when the client challenged the claim (unlike the others, who stood by their initial answers). Only three counselors (17-19) conveyed that false positives can and do occur, since the specificity is not perfect. Counselor 20 provided no information whatsoever, instead asking for blind trust.
We would do well to revisit H.G. Wells' call for collective statistical literacy. My high school math teacher once joked, "There are three types of people in this world: those who are good at math, and those who are not." So, are you the type that is good at statistics? Test your understanding of probability by tackling the problem below, crafted by renowned psychologist and risk-communication researcher Gerd Gigerenzer of the Max Planck Institute for Human Development:
Mammography screenings are conducted in a certain region. You know the following information about the women in this region:
- The probability that a woman has breast cancer is 1% (prevalence)
- If a woman has breast cancer, the probability that she tests positive is 90% (sensitivity)
- If a woman does not have breast cancer, the probability that she nevertheless tests positive is 9% (false-positive rate)
Using the above information, answer the following question: if a woman from this region tests positive, what is the probability that she actually has breast cancer? Good luck!
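The answer can be checked with a short calculation in natural frequencies, the representation Gigerenzer himself advocates for making such problems intuitive (the script below is a sketch; the variable names are illustrative):

```python
# Gigerenzer's mammography problem, worked in natural frequencies
# per 1,000 women, using the three figures stated above.
women = 1000
with_cancer = women * 0.01                       # prevalence: 10 women
true_positives = with_cancer * 0.90              # sensitivity: 9 test positive
false_positives = (women - with_cancer) * 0.09   # 990 x 9%: ~89 test positive

# Of all women who test positive, what fraction actually have cancer?
p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(f"P(cancer | positive test) = {p_cancer_given_positive:.1%}")
```

Most people, physicians included, guess far too high; the false alarms from the 990 cancer-free women swamp the handful of true positives.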