If you’re anything like me, the prospect of doing mental math (especially when school is not in session!) is less than exciting. But let me ask you a simple question. What number lies in the middle of 1 and 9?
Your answer? Probably 5. A newborn’s answer? 3.
A deeper look into these conflicting responses reveals an interesting dichotomy. To find 5 for an answer, you might have taken the average of 1 and 9 ((1 + 9) / 2 = 10/2 = 5). This method makes sense if you regard the distance between each unit value as a discrete and equal chunk. On the other hand, if you consider numbers on a logarithmic scale — that is, in terms of ratios — your answer will be very different. Since 3⁰ is 1 and 3² is 9, it follows that the middle of 1 and 9 is 3¹ = 3.
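The two notions of "middle" are easy to make concrete: the linear midpoint is the familiar average, while the logarithmic midpoint is the geometric mean. A minimal Python sketch (the function names are just illustrative):

```python
import math

def arithmetic_midpoint(a, b):
    # "Middle" on a linear scale: equal additive steps on either side.
    return (a + b) / 2

def logarithmic_midpoint(a, b):
    # "Middle" on a logarithmic scale: equal ratios on either side,
    # i.e. the geometric mean sqrt(a * b).
    return math.sqrt(a * b)

print(arithmetic_midpoint(1, 9))   # 5.0 -- the schooled answer
print(logarithmic_midpoint(1, 9))  # 3.0 -- the newborn's answer
```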
In 2012, neuroscientist Stanislas Dehaene of the Collège de France found that newborns may have an innate sense of quantity, perhaps before they even learn their own names (1). When the newborns were presented with pictures of objects that varied in type, color, and number, the areas of their brains excited by changes in quantity differed from those excited by changes in type or color. Through further testing, Dehaene found that the newborns could distinguish 1 from 2 and 8 from 16, but not 15 from 16. Perhaps, he theorized, the children perceived the distance between 1 and 2 differently from the distance between 15 and 16, even though the absolute difference is the same. In terms of ratios on a logarithmic scale, however, 16 is only about 1.07 times 15, while 2 is double 1.
Logarithmic relationships can be found across countless disciplines: population models in ecology, pH values in chemistry, earthquake intensity in geology, sound intensity in music, celestial navigation in astrophysics, and sensory mechanisms in biology. This pattern raises a looming question: does math model nature, or does nature conform to math? Most of us were taught in school that each number has a distinct quantity and a fixed place among other numbers. However, math has not always been a part of human history, and some languages do not even have numbers that extend beyond three (2). Arithmetic, proofs, calculus, and formulas were all invented by humans. Through his studies, Dehaene proposed that logarithms may be more natural to humans (no pun intended) than the math learned in school after infancy.
One advantage of thinking about quantities on a logarithmic scale is that these values can be stored more easily in our memories and with a lower risk of mistakes (3). For example, if you were a hunter-gatherer living in the wild, you would care far more about distinguishing one wolf from two wolves than sixty wolves from sixty-five wolves. Despite the actual difference in values, your perceived risk of injury would likely not change drastically with five more wolves if you were already being approached by a pack of sixty.
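The wolf intuition can be sketched with logarithms: on a log scale, the "distance" between two quantities depends only on their ratio, so going from one wolf to two is a far bigger jump than going from sixty to sixty-five. This log-distance measure is an illustrative assumption for this article's examples, not a model from the cited studies:

```python
import math

def log_distance(a, b):
    # Perceived difference as the gap on a logarithmic scale;
    # this depends only on the ratio b / a, not on b - a.
    return abs(math.log(b) - math.log(a))

print(log_distance(1, 2))    # ~0.693: one wolf vs. two is a big jump
print(log_distance(60, 65))  # ~0.080: sixty vs. sixty-five barely registers
print(log_distance(15, 16))  # ~0.065: akin to the 15-vs-16 pairs newborns couldn't separate
```

Note that 60 vs. 65 and 15 vs. 16 land close together on this scale, even though one pair differs by five and the other by one.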
Risk perception and number comprehension take on an entirely different meaning in the world of social economics. Since people tend to infer large quantities from bigger numbers, advertisers, speakers, and writers alike can take advantage of this misperception (4). For example, a two-week clinical trial seems much shorter than a 14-day trial. Sometimes we even anchor our estimates to the first visible number in a string of numbers, so that the sum of 5 + 6 + 7 + 8 seems smaller than the sum of 15 + 5 + 4 + 2, even though both equal 26. Arguably the most common, yet potentially the most impactful, misperceptions are improper probability judgments.
In the past, studies found that most people overestimate the likelihood of low-probability outcomes yet underestimate the damage the risk will cause if it actually occurs (5). This mistake emerges most often when people think about tail events: high-impact events, such as natural disasters or pandemics, that have a low probability of taking place (on a statistical curve, these events would be represented on the tails of the graph). Imagine a person swimming in a pool who learns that she has a 0.02% chance of being struck by lightning. If she perceives this risk to be higher than it actually is, she may act in accordance with the danger and exit the pool. But if she also knows that her lifetime odds of drowning in a pool are 12%, why would she enter the pool at all?
The more researchers look into the subjectivity of risk perception, the more complicated their findings become. Some studies counter previous research and show that most people underestimate the likelihood of rare events and may put themselves at risk as a result, such as by refusing to be vaccinated against a disease that one has only a 1% chance of contracting. Today, researchers continue to debate whether these opposite trends can coexist (6).
People form risk beliefs based on a number of variables, including their personal statistics (age, weight, build, medical history), their own fears (if the media extensively covers footage of terrorism, such acts may begin to seem more common than they actually are), and even the consequences of the low-probability risk taking place. For example, if there is a 3% chance of rain today and a 3% chance your plane will crash due to a malfunction, perhaps your perceived view of the damage the risk would cause (getting wet vs. being in a plane crash) determines whether you leave the umbrella at home (underestimation) or refuse to board the plane (overestimation).
In the end, data interpretation is extremely complex. We still need doctors for accurate medical diagnoses; otherwise, we would ask people to report their symptoms to a computer and let the machine diagnose them. Studies on risk perceptions and innate numbers show that even our most objective decisions are not made entirely without bias. Although math differs from our original sense of quantity, it also allows humans to manipulate patterns and proofs that can be just as beautiful as they are complicated. And that makes all the difference.