Mind Control

Robotic prosthetic limbs are truly amazing. These devices allow those who have suffered catastrophic, life-altering illnesses and injuries to regain lost mobility or dexterity and to experience aspects of life that were once inaccessible. In addition to improving the quality of life for millions of people, robotic prostheses have also inspired us to imagine new, exciting ways that the technology could be used.

While modern electronics continue to offer more immersive and intuitive interfaces, these inventions are still bound by the limitation of physical interaction. Touchscreens, mice and keyboards are very useful devices, but they still require us to translate our thoughts into actions before accepting input. This might lead us to wonder, “wouldn’t it be great if we could control devices with our minds?”

Although telepathic communication is often depicted as a lightning-quick, visceral technology in science fiction, there’s actually no convincing evidence that it would increase the speed or ease of communication, and the details of such interfaces appear anything but simple and intuitive. Let’s find out why this is the case.

Most of us believe that we can think faster than we can move or speak. This is based on the false assumption that the physical body merely limits the conscious mind. In actuality, the opposite is often true.

Our bodies not only optimize our cognitive abilities by sending important data to the brain and prioritizing things that require our immediate attention; they also carry out critical, complex tasks without conscious direction, often without our knowledge or consent, which frees our minds to spend attention in other ways. Examples include everything from walking and talking to biking and typing. By restricting our interface to accept only conscious thought, we are actually forcing ourselves to take manual control of automated systems, which impedes our mental capacity. It forces us to think about every individual instruction rather than the function as a whole. Imagine trying to think out a sentence one word or even one character at a time.

Another problem is that we don’t actually know how fast our brains can think. We also don’t know how fast they think whatever type of thoughts a telepathic device would accept. What we do know is that a comfortable rate of speech is about 150 words per minute and that skilled typists can reach upward of 120 words per minute. It’s likely that an improved keyboard configuration would allow for even faster speeds, so the disparity between speech and typing is actually very small. We also know that the average person reads at about 250 words per minute, which is about as fast as an auctioneer speaks. Using speed reading techniques, it’s possible to achieve a pace exceeding 500 words per minute. While comprehension at these rates usually falls between 60 and 70 per cent, these figures reveal that our brains can actually accept data far faster than they can generate it.

Of course, it’s possible that our rates of comprehension and speech are limited by our senses, so we might accept and transmit data faster if we removed our mouths, fingers, ears, eyes and other body parts from the equation. But as it stands, the average person can type and speak at a rate similar to how they read and listen, which means that the bottleneck that telepathy would supposedly overcome has not yet been observed.

If we take a look at how modern robotic prostheses work, we observe yet another hurdle. Robotic limbs pick up signals sent from the brain, but they do not sense them within the brain. They merely intercept instructions in the nervous system on their way to the muscles. The difference is significant because the type of thought that moves a limb is very different from other thoughts that occur in the brain. And this is precisely the problem: there are many different types of thoughts.

Our brains carry out a variety of tasks, both conscious and otherwise. They are constantly sending and receiving signals to and from various locations in the body, monitoring and controlling our cardiovascular, nervous, endocrine, muscular, respiratory, lymphatic, urinary, excretory, reproductive, digestive and immune systems. Our brains are also remembering the past, perceiving the present and predicting the future, and they’re usually doing more than one of these at any given moment.

The thought protocol for each function is unique and must be distinguished if we plan to implement telepathic technology, for even the way we ponder a simple idea can vary significantly. We don’t just think in complete, coherent sentences, as mind-readers would have us believe. Our brains process emotions, sensations, opinions, images, music and ideas, and we’re thinking all these things while both consciously and unconsciously controlling our bodies. To illustrate the variation in thought, let’s examine all of the ways that we can think about a dog.

First of all, thinking the word dog is different from thinking about the word dog. It’s different from saying the word dog, thinking about saying the word dog and reading the word dog. It’s also different from thinking about a dog, some dogs or dogs in general, and it’s different from imagining or remembering a dog. It’s different from wanting a dog, missing a dog and loving or hating a dog. It’s different from looking at a dog, looking at a picture of a dog, imagining a picture of a dog, and it’s different from imagining a picture of the word dog. It’s different from thinking about what it’s like to be a dog, from wanting to be a dog, and it’s different from actually being a dog. And thinking the word dog is different from thinking about thinking the word dog, and it’s different from considering thinking the word dog. Of course, considering thinking about the word dog is impossible, which brings us to our next point.

There’s a big difference between thinking something and doing or saying it, but if devices are controlled by thoughts directly from the brain, how would they know the difference? If someone with a prosthetic arm imagines punching another person in the face, the arm doesn’t do it, because it is merely sensing signals in the nervous system. But if we were controlling a device purely through thoughts in the brain, how would it distinguish which thoughts to obey and which to ignore?

If we consider taking an action, our body does not execute that action until we have made the decision to send the signal to our muscles. But in the brain there is no such confirmation through action. If we were to try and write an e-mail using a telepathically-controlled computer, how would we separate the words we wanted to send from the words we were considering? And how would we control punctuation, spacing, format and other details? Would we have to construct each sentence using individual words, or would we simply send raw thoughts, ideas or emotions? Would we be able to mute incoming signals or control their priority or storage? Would we be able to transmit images, music and other media? How about emotions? How would our brain receive emotional signals? And how would we deal with distractions and multitasking?

One possible solution to a few of these issues could be to use the nervous system to communicate. This would involve training our minds to control imaginary limbs that we pretend are part of our bodies. The instructions could then be translated into other signals that could be interpreted by a device. We can prove that this is possible by simply imagining that we have another set of arms beneath our normal arms and then imagining moving them around. This produces an eerie sensation that is likely similar to phantom limb syndrome.

Another less efficient option would be to confirm thoughts by thinking the words out loud (or speaking internally). It’s much harder to articulate how this type of thought works, but it’s probably best described as strongly imagining saying a word. Unfortunately, this solution would mean that we could only transmit words and only do so one at a time.

One of the more serious problems with telepathic technology would be deciding exactly what would be transmitted. Thoughts would have to be converted into electrical signals, but our thoughts are usually very abstract, and the brain hides the complexities of most of its functions from our consciousness. In addition, the brain also prioritizes, categorizes and filters incoming information, so sending mere words would not only be incredibly difficult, but also incredibly incomplete when compared to the advanced level of thought that normally occurs in the mind.

In addition to all these barriers, there is also the issue that our brains, while absolutely amazing, are quite terrible. We are constantly overlooking, misinterpreting and forgetting things, and we get distracted easily and often. Just stop and think about your thoughts for a moment. Are they ordered, logical, focused and useful? Are they even coherent? The brain is a complex, damaged, dysfunctional machine, so if we want mind control, we must control our minds and do so in a much different way than we do now.

There’s also the serious and inescapable problem of connecting a human brain, which both controls our bodies and defines who we are, to an electronic device that can be forged, faulty or even compromised. That’s right, hackers could potentially gain access to our minds and monitor, steal, copy, corrupt or destroy our thoughts and memories. They could also take control of our bodies, forcing us to obey their instructions, or even tell our hearts to stop beating.

Finally, though this is more of an indirect and ethical issue, it is interesting to note that even as society is beginning to recognize and prioritize the importance of regular physical activity, technology continues to relieve us of physical duties. Standing desks, for example, have recently become a trendy way to improve our health. But with telepathically-controlled devices, we certainly don’t need a desk, and we may not even need to get out of bed. In fact, we may not need to wake up or even have bodies.

Thoughts aren’t what you thought. Think about it.

In Memoriam

We make them always, all the time,
and out of every thing.

We make them out of friends and facts,
and out of songs we sing.

Locked inside the cells within,
they’re with us all the time.

Shaping and defining us,
both awful and sublime.

They can’t be killed or stolen,
but most are changed or gone.

And some are hidden deep,
so we can carry on.

They follow, charm and haunt us,
no matter what we do.

Until one day they slip away,
and we become one too.


Uncertainty about the future is something that plagues us all. Indeed, in every civilization throughout history there have been those who claim to have special insight into events that have yet to unfold. In our eagerness to rid ourselves of our fears, we often reward these people handsomely for their supposed knowledge.

Although astrology and mediumship are still popular in Western culture, a more modern profession of precognition has recently arisen: the futurist. Rather than observing heavenly bodies or communicating with spirits to obtain special knowledge, a futurist uses their knowledge and expertise in a given field to predict the future. One of the most well-known of this new breed of fortune teller is Ray Kurzweil, and he’s made a number of shocking predictions about the future of technology and the ascension of artificial intelligence. While his claims have brought him great fame and profit, he’s certainly no prophet.

The fact that he doesn’t deserve the label of prophet is not a commentary on the accuracy of Kurzweil’s predictions, though they are certainly worthy of suspicion, but rather an important distinction about the method by which his knowledge is obtained. A prophet is someone who communicates a message on behalf of a divine or supernatural source. While Kurzweil’s predictions are certainly otherworldly, they are not the result of divine inspiration or supernatural powers (as far as we know).

Another difference is that prophets usually do not benefit from their insight, but freely offer their knowledge to help others, usually in response to a divine command. Without getting too deep into the issue of discerning the authenticity of a prophecy, we can probably assume that the presence of praise and reward is a strong indication of a false prophet. Now let’s take a closer look at prophecies and what it means to be a prophet.

Nostradamus is one of the best known of those who claimed to predict the future. Born in 16th-century France, he produced thousands of predictions. Some of these were accurate, some inaccurate, but most are too vague to determine their meaning. Consider Nostradamus’ quatrain #1-35:

The young lion, shall overcome the older

On the field of battle, by singular duel;

Through armor of gold, his eye will be pierced,

Two wounds in one, then to die a cruel death.

Those who believe in Nostradamus’ abilities claim that this verse describes the accidental death of King Henry II of France during a jousting tournament. Followers also claim that Nostradamus predicted the rule of Hitler, the September 11 attacks, the conquest of Napoleon, the Kennedy assassinations and many other major events. However, even if we can determine with certainty that the predictions are referring to these historical events, of what use are they? After all, the events still happened. And if we’re only able to associate a prediction with an event after the event unfolds, then it serves no purpose other than to astonish.

Nostradamus admitted that he wasn’t a prophet, and he based his predictions on a number of sources, including astrology, Biblical text and other works of prognostication. He also didn’t meet the requirement that he help others with his clairvoyance. This doesn’t mean that Nostradamus meant harm or didn’t care about others. Perhaps he meant well, but without specific instruction on how to respond to a prediction, it is impossible to help. And if he actually could predict the future, Nostradamus would have known that his predictions wouldn’t change anything. A true prophecy is meant to be understood before it unfolds, and it includes a response. This leads us to our next point: the purpose of a prophecy.

Aside from their divine or supernatural origin, prophecies tend to function as warnings, usually warnings of catastrophe. Astrologers, mediums and futurists tend to focus on the positive. After all, who would pay for a prediction that makes them upset? On the other hand, just because a prophecy warns us of a threat does not guarantee that it is authentic. There have been plenty of false prophecies about the end of the world, including the famous Mayan apocalypse, which was supposed to occur in 2012. With all of these predictions of the end of the world, it seems that we can be certain of one thing:

“No one knows the day or hour when these things will happen…” (Matthew 24:36)

Another trait of a prophecy is that it is specific. As we mentioned earlier, a vague prediction is of no use to anyone. While it’s easy to make ambiguous statements about lions, wounds and golden armor, it also draws much less attention than a claim about the day the world will end. The specificity of a prediction is usually also inversely proportional to the number of predictions made, offering a choice between quality and quantity. In addition to being extremely specific, a prophecy usually contains only a single prediction.

In summary, here are the defining characteristics of a prophecy:

  • Its origin is divine or supernatural, not based on mere observations.
  • It doesn’t benefit the prophet.
  • It is understood before it unfolds.
  • It warns of a catastrophe and includes a call to respond.
  • It’s very specific and usually contains a single prediction.

Alright, so now we can distinguish a prophecy from a prediction, but predictions actually come in many shapes and sizes. Every day we ask meteorologists to make a prediction about the weather, and it’s a relatively easy one to make. This is because of the high availability of information about coming trends and the fact that there are only a limited number of possibilities. It’s also easier to predict something that’s closer to the present, which is why a weather forecast rarely reaches beyond a week or two.

The biggest tool for predicting the future is examining the past, or more accurately, comparing the past to the present. If we can find a recurring trend in the past and identify where we are in the trend, then we can theoretically predict what will happen next. Economists use this tactic to predict booms and busts.

From our position in the present, we can use a number of different strategies to make our prediction. We can look back at a recent significant event and use that to make our guess, or we can look at a recent or ancient measurement.

There are still other strategies to employ. We could take a number of measurements and average them, or we could try to find an algorithm that describes the occurrence of booms and busts. We could even ignore all past information and consider only current trends. Part of the reason why the predictions of futurists like Kurzweil are so extreme is that there isn’t a lot of past information on modern technology. There’s only been one computer age, one Internet and one Google, so it’s hard to say where we’re headed.
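To make one of these strategies concrete, here’s a toy sketch of the interval-averaging approach, using a handful of well-known US downturn years purely for illustration (this is not a real economic model):

```python
# A few well-known US downturn years, used only as illustrative inputs.
past_busts = [1929, 1973, 1987, 2000, 2008]

# Measure the gaps between consecutive busts and average them.
intervals = [b - a for a, b in zip(past_busts, past_busts[1:])]
avg_interval = sum(intervals) / len(intervals)

# Extrapolate from the most recent bust to "predict" the next one.
predicted_next = past_busts[-1] + avg_interval
print(round(predicted_next))  # -> 2028
```

Averaging smooths out the wildly uneven gaps (44 years, then 14, 13 and 8), which is precisely why the result looks authoritative while telling us very little.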

One of the great predictions of the 20th century was made by Gordon Moore in 1975. He observed that transistor density on integrated circuits was increasing at a steady rate, so he predicted that the transistor count would continue to double every two years. This prediction has largely proven accurate, though some say this is merely because the semiconductor industry uses it to set research and development targets.
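The doubling rule is simple enough to project in a single expression. As a sketch (the starting count below is illustrative, not any particular chip’s):

```python
def moores_law(base_count, years):
    """Project a transistor count forward, assuming it doubles
    every two years (Moore's 1975 formulation)."""
    return base_count * 2 ** (years / 2)

# A chip with 1 million transistors, projected 10 years out:
print(moores_law(1_000_000, 10))  # five doublings -> 32,000,000.0
```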

However, even a prediction heralded for its longevity and accuracy hasn’t yet outlived its maker and may soon be proven irrelevant. The capacity to build smaller transistors will eventually end as they approach the atomic scale. In addition, quantum technology threatens to shatter the entire framework of computing architecture. Overall, the rapid change in technology makes it an exceedingly difficult subject to forecast.

If we take a look back at films that depict the future (our present), we scoff at their interpretations. Decades later, the flying cars, jet packs, food capsules, laser guns, tiny cellphones and robot dogs we saw on the big screen are nowhere to be found. Part of the reason for this folly is the inclination to imagine improved versions of things we already use, essentially futurizing things found in the present. This is because our imagination is largely limited by what we’ve already seen.

Visionaries like Gene Roddenberry were able to imagine a future quite different from our own, with inventions that weren’t just improvements, but whole new concepts never before seen or imagined. Yet even Roddenberry could not predict some of the advances in technology we see today. Although we may not have yet conquered poverty, disease or intergalactic travel, things we do every day on our smartphones were beyond his comprehension.

Another factor that makes prediction difficult is the chance of catastrophe. It’s possible that we establish a colony on Mars in the next 50 years, but it’s also possible that a massive earthquake hits Washington, D.C. and levels NASA headquarters. The problem with catastrophes is that we can’t be sure when, where or how they will happen, just that they will happen.

By contrast, there are also trends that come and go, cultural shifts brought on by subtle and gradual changes to our collective psyche. If we could turn back the clock to July 21, 1969, we likely couldn’t find anyone who would predict that we wouldn’t visit any other planets in the next 50 years, yet here we are still stuck on this boring old dust bowl. The lunar landing ignited imaginations and had a huge impact on television and film, but our interest in outer space fizzled. Likewise, virtual reality was a huge trend in the 1980s, but is only now seeing a resurgence 30 years later.

We also make a huge error when we consider the advances that humans have made in recent years and assume that everyone has experienced these improvements and that there have been no consequences. Billions of people across the globe have yet to experience sanitation and safe drinking water, which makes the whole futurist thing seem kind of vain and meaningless. We also tend to think of technology as the solution to the world’s problems, ignoring the huge environmental cost and health hazard of manufacturing integrated circuits and other electronic components. We like to think that we’re on a trajectory toward perfection, but in many ways we’re worse off than we’ve ever been. Here are just a few examples:

  • Obesity is a global health crisis.
  • Voter turnout continues to fall.
  • Education is failing our youth.
  • Social media, video game and pornography addiction are increasing.
  • We’re consuming and corrupting the Earth’s resources at an unsustainable rate.
  • The gap between the rich and poor is growing.
  • Depression and anxiety are on the rise.
  • More children are being raised without both parents.
  • Large sections of the population have no legal protection.
  • The government has alarming levels of surveillance on its citizens.
  • Sexual promiscuity among adolescents is increasing.

Of course, it’s not all doom and gloom. War, poverty, and disease are on the decline (in most places). The point here is merely that the world is a big, complicated place, and that it’s easy to believe that everyone’s lives are as good as ours and get caught up in the excitement for what’s ahead. The world is getting better, but not for everyone and not in every way.

So what’s the future going to be like? Well, it’s probably going to be similar to the present. Despite all of the advances in technology and the social, political and economic change, we still get up, go to work, pay our rent, cook our food and so on. But predicting that things will generally stay the same is not interesting or provocative. If we want people to heed our warning or buy our book, then we have to predict something that grabs headlines. Let’s take a stab at creating our own prediction. Here’s how we do it:

First we need to make sure that people pay attention to us, so we should probably choose a subject that is interesting, relevant (or seemingly so), and a little controversial. How about the Internet?

Next we need to choose a timeline. We need to pick a date that is close enough to seem meaningful, but distant enough that we can’t be proven wrong anytime soon. Choosing a point too far in the future also makes our prediction seem less credible, since we have to provide a basis for our forecast. We should also choose a number that seems significant. Let’s go with 20 years.

As for the prediction itself, we merely need to spot a trend, then choose a point in recent history and extrapolate a seemingly reasonable projection into the future. Access to the Internet is on the rise, so let’s predict that in the next 20 years everyone on the planet will be online. Now we just need to find some numbers to back it up.

Year   Population (m)   Users (m)   Users (%)   Increase (m)   Increase (pp)
2005   6514             1024        15.72       —              —
2006   6593             1151        17.46       127            1.74
2007   6673             1365        20.46       214            3.00
2008   6753             1561        23.12       196            2.66
2009   6834             1751        25.62       190            2.51
2010   6916             2019        29.19       268            3.57
2011   6997             2224        31.79       205            2.59
2012   7080             2494        35.23       270            3.44
2013   7162             2705        37.77       211            2.54
2014   7243             2937        40.55       232            2.78
2015   7324             3174        43.34       237            2.79

Pulling up some figures on global population and Internet access, we can clearly see that the number of people online is rapidly increasing. In the past 10 years, we’ve seen it jump from around 15 per cent of the world’s population to over 43 per cent, so it’s definitely believable.
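We can even make the extrapolation look rigorous. A quick sketch using the endpoints of the table above (a simple linear projection, which is exactly the kind of naive straight-line reasoning being described):

```python
# Endpoints from the table: share of the world online.
year0, pct0 = 2005, 15.72
year1, pct1 = 2015, 43.34

# Average gain in percentage points per year over the decade.
rate = (pct1 - pct0) / (year1 - year0)

# Year at which the straight line crosses 100 per cent.
year_full = year1 + (100 - pct1) / rate
print(round(year_full))  # -> 2036
```

The line crosses 100 per cent around 2036, conveniently close to a 20-year horizon. Of course, adoption curves flatten as the hardest-to-reach populations remain, so a straight line is the least defensible and most quotable model available.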

Now we should think about the specificity of our prediction. It shouldn’t be so vague that it is indecipherable, but it also shouldn’t be so specific that people can agree on its meaning and critique it. We also want to leave room for reinterpretation, should things take an unexpected turn. For these reasons it’s important to choose our phrasing very carefully.

Let’s go with this: all people will have access to an Internet connection in the next 20 years.

We use all people instead of everyone because it could refer to groups as well as individuals. We also say have access to an Internet connection, not be connected or have Internet access, because it’s broader. After all, anyone with the potential to have electricity and a satellite dish is already included. We also say in the next 20 years instead of by 2035 because the year 2035 seems like it’s further away.

And there we have our prediction. Now we just need to give ourselves a fancy title that doesn’t require any credentials, like futurist or technology expert, and we’re on our way to fame and fortune.

The Book Is Always Better

The arrival of a blockbuster film is the perfect occasion to spend some time with friends. And while there is little opportunity to comment on the movie as it unfolds, lively discussion is sure to follow soon after exiting the theater.

We speculate wildly about the meaning of events in the film and ask each other all kinds of questions, including, “who was your favorite character?” and, “what did you think of the ending?” But no matter the direction of the conversation, someone will inevitably make the comment, “the book was better.”

The insignificance and idiocy of such a statement is only exceeded by its preposterousness and pretentiousness, for the idea that a story told in film could somehow surpass the original literature is as unlikely as it is unreasonable.

The first reason why this statement sucks is that it says more about the person making the statement than the film or book. The kind of person who says, “the book is better,” is someone who is so insecure that they need to let other people know that they read. And in addition to making sure that everyone knows they’re literate, they compound the comment by implying that they are part of a special club with the right to judge the film from a higher vantage. By stating that the book is superior to the movie, they imply that those who enjoyed the movie are naive and uninformed.

Another frustrating thing about this comment is not that it isn’t true, it’s that it can’t be not true – the book is always better. There are a number of reasons why this is the case, first among them being that the movie only exists because the book is awesome. Think about it: of course the book is great, why else would they make a movie about it? And even if by some miracle the book and movie are similar in greatness, the book always wins out because it’s longer, and thus more elaborate and detailed, and the book came first, so it carries a certain nostalgia that makes it more attractive.

The third reason why this statement should never be uttered is that it compares two forms of media which are fundamentally very different. While literature and film can be aimed at the same objective, like telling a story, they go about it in completely different ways. Books don’t have special effects or actors, but they also aren’t limited by concrete incarnations of scenery or characters. By contrast, movies don’t have detailed, poetic descriptions, but they can use breathtaking visuals and powerful music.

Comparing a book to a film is like comparing a picture to a song, a bed to a couch, a river to an ocean, a dessert to an appetizer or a guitar to a piano. They can sometimes be used to achieve similar goals, but they go about it in very different ways.

Stop telling people that the book is better.

Logical Link Control

While humans are much smarter than other creatures, our intelligence varies greatly from person to person. Some of us are geniuses, most are average and some are living with a mental disability. While most of us identify academic or general intelligence as the primary indicator of mental aptitude, there are other ways we can be smart. According to developmental psychologist Howard Gardner, there are actually nine ways of measuring intelligence:

  1. Musical
  2. Visual
  3. Verbal
  4. Logical
  5. Bodily
  6. Interpersonal
  7. Intrapersonal
  8. Naturalistic
  9. Existential

Of course, expanding the term to include these categories is highly subjective, and it erodes the traditional understanding of intelligence as the capacity for reasoning and understanding. And if intelligence can be narrowed to such specific abilities as music or physical skill, then why not include additional classifications for programmers, powerlifters, comedians, cashiers, magicians, memorizers, sharpshooters and competitive eaters? Just as redefining art makes everyone an artist, a more inclusive understanding of intelligence means everyone is highly intelligent. This would also mean that many animals, and even machines, are more intelligent than humans, which is something that we know isn’t true. While general intelligence may be difficult to define, opening the door to alternate definitions only weakens its meaning.

One of the abilities associated with general intelligence is logical comprehension, which is the ability to analyze, comprehend, abstract and navigate the layers of a causal system. A wonderful example of this occurs in the classic 1987 film The Princess Bride, when the then-masked Westley challenges a Sicilian named Vizzini to a battle of wits. In a lively and captivating back-and-forth between the two men, Vizzini attempts to discover, by pure reason, which of two goblets of wine has been poisoned by Westley with a fictitious toxin known as iocane powder. Let’s see if we can follow the layers of reasoning.

  1. The first line of reasoning is easy to comprehend. In it, Vizzini asserts that “only a great fool would reach for what he was given.”
  2. He then claims that Westley must know that he isn’t a fool, stating that he “can clearly not choose the wine in front of [Westley].”
  3. Vizzini goes on to accuse his opponent of knowing that he isn’t a fool, which means that he shouldn’t drink his own wine.
  4. He then deduces that because the poison originates in Australia, a land “entirely populated by criminals,” Westley would anticipate not being trusted, meaning that Vizzini should not drink the wine in front of Westley.
  5. Aware that Westley must have predicted his ability to determine the poison’s origin, Vizzini reverses his position again.
  6. Vizzini then accuses his opponent of poisoning his own goblet and planning to trust his physical strength to withstand the poison.
  7. He then points to Westley’s education, and therefore knowledge of his own mortality, as the reason why Westley would “put the poison as far from [himself] as possible.”
  8. In a final attempt to dupe his enemy, Vizzini distracts Westley and switches the goblets before both men drink.

While this depth of reasoning is impressive, the layers don’t actually require placement in a precise order. After all, Vizzini could have pointed to the poison’s Australian origin at the beginning, and arrived at the same conclusion. Logical comprehension isn’t just about understanding complexities, but also following a path of thought. Here’s a more structured example that becomes increasingly complex as layers are added. With each statement, a negation is added, inverting the meaning of the sentence.

  1. I will be going to the party.
  2. I won’t be going to the party.
  3. I won’t be not going to the party.
  4. It’s a lie that I won’t be not going to the party.
  5. It’s not a lie that I won’t be not going to the party.
  6. It’s false that it’s not a lie that I won’t be not going to the party.
  7. It’s not false that it’s not a lie that I won’t be not going to the party.

Most people lose the ability to comprehend the meaning of these sentences somewhere between levels 2 and 4. However, many realize that it’s not necessary to understand the entire sentence at all. If we merely count the number of negatives, we can determine that the person will be attending the party if the number is even, and they won’t be attending if the number is odd. Now let’s consider another example that illustrates a more complex logical thought process.
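The counting shortcut can be sketched in code. The set of negation markers below (“not”, the contraction “n’t”, “lie” and “false”) and the regular expression are my own choices for these particular sentences, not a general solution:

```python
import re

# Negation markers that appear in the party sentences above.
NEGATION = re.compile(r"\bnot\b|n't|\blie\b|\bfalse\b", re.IGNORECASE)

def will_attend(sentence):
    """An even count of negatives cancels out; an odd count doesn't."""
    return len(NEGATION.findall(sentence)) % 2 == 0

sentences = [
    "I will be going to the party.",
    "I won't be going to the party.",
    "I won't be not going to the party.",
    "It's a lie that I won't be not going to the party.",
    "It's not a lie that I won't be not going to the party.",
    "It's false that it's not a lie that I won't be not going to the party.",
    "It's not false that it's not a lie that I won't be not going to the party.",
]

for s in sentences:
    print(will_attend(s))  # alternates True, False, True, ...
```

Note that the parity test tells us the speaker’s plans without our ever comprehending the full sentence, which is exactly the point.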

Imagine you’re engaging in a game of rock-paper-scissors with someone. But before you begin, your opponent tells you that he will choose rock. Is he telling the truth? How would you deduce what to choose next? Well, you might think something like this (see how many layers you can follow):

  1. I know that he won’t actually choose rock, like he said, so I’ll choose scissors to cut his paper or tie with his scissors.
  2. He knows that I know he won’t choose rock, so he’ll choose rock to crush my scissors, since he knows that scissors are the best counter to someone not choosing rock.
  3. I know that he knows that I know he won’t choose rock, so I’ll choose paper to cover the rock he chose to crush the scissors I chose in response to his rockless strategy.
  4. He knows that I know that he knows that I know he won’t choose rock, so he’ll choose scissors to cut the paper I chose to cover the rock he chose to counter my scissors, which are perfect against an opponent without a rock.
  5. I know that he knows that I know that he knows that I know he won’t choose rock, so I’ll choose rock to crush the scissors he chose to cut the paper I chose to cover the rock he chose to counter the scissors I used against his non-rock.
  6. He knows that I know that he knows that I know that he knows that I know he won’t choose rock, so he’ll choose paper…

At this point the cycle would continue indefinitely, assuming that either of us could actually fathom such strategies. It is possible, however, to simply count the occurrences of the word “know” in order to formulate the solution (just as we do with multiple negatives), but again, this circumvents actual understanding. Let’s take a look at an example without a pattern that can easily explain its meaning.
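The know-counting shortcut works the same way. Here’s a sketch; the move names and the three-step cycle are read directly off the numbered list above:

```python
def next_move(statement):
    """Pick a move by counting occurrences of 'know' in the reasoning.

    From the list above: 1 'know' -> scissors, 2 -> rock, 3 -> paper,
    and the cycle repeats every three layers.
    """
    k = statement.lower().count("know")  # 'knows' also contains 'know'
    return ("scissors", "rock", "paper")[(k - 1) % 3]

layer_one = "I know that he won't actually choose rock."
layer_two = "He knows that I know he won't choose rock."

print(next_move(layer_one))  # scissors
print(next_move(layer_two))  # rock
```

Again, the counter never actually understands the strategy; it just rides the pattern.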

Imagine a sprinter who only changes speed in increments of 1 m/s. After completing each one-second interval, he will travel a distance equal to his velocity at the last interval, and his velocity will increase by his acceleration at the previous interval. Let’s also imagine that our sprinter continues to accelerate his acceleration during the experiment.

Time (s)              0    1    2    3    4    5    6    7    8    9   10
Position (m)          0    0    1    3    7   14   26   46   79  133  220
Velocity (m/s)        0    1    2    4    7   12   20   33   54   87  137
Acceleration (m/s^2)  1    1    2    3    5    8   13   21   33   50   73
m/s^3                 0    1    1    2    3    5    8   12   17   23   30
m/s^4                 0    0    1    1    2    3    4    5    6    7    8
m/s^5                 0    0    0    1    1    1    1    1    1    1    1

It’s a little difficult to determine what’s happening here, but our sprinter is basically adding another layer of acceleration after each interval. Let’s describe each of the characteristics of the sprinter.

  1. Velocity (m/s) : change in position per second
  2. Acceleration (m/s^2): change in velocity per second
  3. m/s^3: change in the rate of acceleration per second or changing how fast we accelerate
  4. m/s^4: change in the speed of the rate of acceleration per second or changing how fast we’re changing our acceleration
  5. m/s^5: change in degree of the speed of the rate of acceleration per second or changing how fast we’re changing the rate at which we change our acceleration
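Each row of the table is just the sequence of second-by-second differences of the row above it. Here’s a sketch that rebuilds the lower rows from the Position row alone (the function and variable names are mine; physicists call the m/s^3 quantity “jerk”). The differences match the table exactly down to the m/s^3 row; the table’s top two rows drift slightly from the exact differences:

```python
def differences(row):
    """Forward differences: how much the value changes each second."""
    return [b - a for a, b in zip(row, row[1:])]

# The Position row from the table above.
position = [0, 0, 1, 3, 7, 14, 26, 46, 79, 133, 220]

velocity = differences(position)      # change in position per second
acceleration = differences(velocity)  # change in velocity per second
jerk = differences(acceleration)      # m/s^3: change in acceleration

print(velocity)      # [0, 1, 2, 4, 7, 12, 20, 33, 54, 87]
print(acceleration)  # [1, 1, 2, 3, 5, 8, 13, 21, 33]
print(jerk)          # [0, 1, 1, 2, 3, 5, 8, 12]
```

Each derived row is one entry shorter than its source, since a difference needs two neighboring values.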

If it isn’t already clear by now, things tend to get complicated really quickly after the third step. Part of the problem is that our brains are less able to recognize patterns among similar symbols. But even when we vary the terminology, the process is extremely difficult to follow. Here are four more examples of logical processes that can be hard to follow:

  1. Nested loops
  2. Layered arguments
  3. Recursion
  4. Paradoxes

It’s difficult to pinpoint the exact issue, but it seems to be related to the limits of our mind’s working memory described by cognitive psychologist and Mad Max director, George Miller. Miller observed that humans can only keep track of 3 to 7 things at once. It seems that keeping track of a logical process is more difficult than keeping track of independent parts. The difference here is that layers seem to compound the complexities as they’re added. Let’s imagine a system for using our working memory to keep track of both independent and layered thoughts. Here’s how it might work:

  1. We start with 3 to 7 memory units to spend on thoughts.
  2. Each independent thought costs 1 unit.
  3. Each layered thought after the first costs 2 units.

If our first layered thought counts as an independent thought, with a cost of 1, and each following thought costs 2, then our results confirm what we observed:

  1. We can track 3 to 7 independent thoughts.
  2. We can track 2 to 4 layered thoughts.
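The accounting above is simple enough to check mechanically. A sketch, with function names of my own choosing:

```python
def max_independent(budget):
    """Each independent thought costs 1 memory unit."""
    return budget

def max_layered(budget):
    """The first layered thought costs 1 unit; each one after costs 2."""
    count = 0
    while budget >= (1 if count == 0 else 2):
        budget -= 1 if count == 0 else 2
        count += 1
    return count

# 3 units -> 3 independent thoughts or 2 layered thoughts
# 7 units -> 7 independent thoughts or 4 layered thoughts
for units in (3, 7):
    print(units, max_independent(units), max_layered(units))
```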

So what can we learn from all of this? For starters, we can confirm that measuring intelligence should be done in terms of reasoning and comprehension, not words or emotions. We also know that the complexities of some processes can be circumvented by simply counting the occurrence of a word or phrase. In addition, we know that tracking a logical process has roughly twice the mental cost of tracking independent thoughts. So next time you encounter a logical process, do the following:

  1. See how many layers deep you can go.

School, Murder and Pride

What do school, murder and pride have in common? The answer is that they are all names for groups of animals.

As the Childlike Empress taught us, a new name can hold incredible power. An opportunity to name something new provides the person ascribing the name with the opportunity to achieve immortality. This is one of the reasons why product names are so inconsistent, and perhaps this is also the reason why scientists exercise such extreme creativity when it comes to classifying and categorizing things in nature. Scientists are vain, selfish creatures, viewing discovery as merely a path to greatness.

While it’s clear why each species of creature requires its own name, the reason for each having a unique term to describe a group is not as obvious. Likewise, it’s unclear why different animals have been given distinct names for their young as well as their male and female incarnations.

Here’s a table listing just a few examples of the frivolous, ridiculous terms associated with various species:

Animal      Young            Female           Male           Group
Alligator   Hatchling        Cow              Bull           Congregation
Ant         Larva            Queen, Soldier   Drone          Army
Cat         Kitten           Molly            Tom            Cluster, Kindle
Chicken     Chick            Hen              Cock           Flock, Peep
Chimpanzee  Infant           Empress          Blackback      Troop
Crab        Larva            Jenny            Jimmy          Cast, Dose
Crow        Chick            Hen              Cock           Murder
Deer        Fawn             Doe              Buck, Stag     Herd
Donkey      Foal             Jenny            Jack           Herd
Dragonfly   Nymph            Queen            King           Cluster
Duck        Duckling         Hen              Drake          Flock, Raft
Fox         Kit              Vixen            Tod            Leash, Skulk
Goat        Kid              Nanny            Billy          Herd
Goose       Gosling          Goose            Gander         Gaggle
Hawk        Eyas             Hen              Tiercel        Cast, Aerie
Horse       Colt, Foal       Mare             Stallion       Band
Jellyfish   Planula, Polyp   Sow              Boar           Bloom, Fluther
Kangaroo    Joey             Flyer, Jill      Boomer, Jack   Court, Troop
Lion        Cub              Lioness          Lion           Pride
Pig         Piglet           Sow              Boar           Drove
Salmon      Fry              Hen              Buck           School
Seahorse    Seafoal          Seamare          Seastallion    Shoal
Sheep       Lamb             Ewe              Ram            Flock
Swan        Cygnet           Pen              Cob            Flock
Turkey      Poult            Hen              Tom            Rafter
Whale       Calf             Cow              Bull           Pod
Wolf        Pup              Bitch            Dog            Pack
Wolverine   Whelp            Angeline         Wolverine      Pack

There are many more ridiculous and useless terms not mentioned here – enough to ensure that a general naming strategy can never be constructed.

Now some of these terms are actually useful. After all, it would be helpful to use different words to distinguish male animals from female, mature from young. We also need a term, a single term, to describe a collection of animals. In addition, some of these words actually describe form or function. A larva, for example, isn’t just a newborn insect or crustacean, but a stage of development with unique characteristics. Likewise, describing a group of bees as a colony says a great deal about the behavior and organization of the organisms.

But a vast majority of these names just cause confusion and misunderstanding, for we all know that a cow is a bovine, not an alligator, and that a hen is a chicken, not a salmon. And what comedian chose the terms belonging to seahorses?

Trying to create and remember unique terms for each species or genus is incredibly impractical, and many of these terms have alternate meanings, like school, murder and pride. Also, everyone can understand what the phrase a herd of horses implies, just as we can clearly comprehend what is meant by the phrase a baby goat – we don’t need to use band or kid.

In addition to naming groups of animals, there are also names for groups of people based on their vocation: a faculty of academics, a team of athletes, a slate of candidates or a college of cardinals. As with animals, this is useful when it describes something practical, but this is rarely the case.

We should just stick to using terms that we all understand: baby and herd, man and woman.

Low Flying Aircraft

Most advertisements are irritating, fraudulent and meaningless. This includes sales promotions, public advisories and road signs. Behold the most useless notification ever presented: the “LOW FLYING AIRCRAFT” sign.

Signs like this are often posted on the side of the road near airports in order to notify drivers that planes may be landing nearby. But what, exactly, is the purpose of doing this? What are drivers expected to do? Stop and wait for planes to land? Some say that the purpose of the sign is simply to inform drivers so that they are not startled when they see an aircraft approaching, but there are a few problems with this interpretation.

The first issue is the fact that these signs are usually posted near airports, and airports are far more visible than the signs themselves. Second, some versions of the sign include “CAUTION”, “WARNING” or “DANGER” or even an image of an airplane bouncing off the roof of a car, which implies that the purpose of the sign is to inform drivers of a threat. The final problem is that it is not common practice to use signs to notify drivers of things they can do nothing about. Drivers are constantly distracted by the beauty of nature, the peculiarity of pedestrians and the hideousness of modern exterior home design, but we don’t put up signs with “CAUTION BEAUTIFUL FOREST”, “DANGER WEIRD PEOPLE” or “WARNING UGLY HOUSE” on them.

We don’t need to know what’s above us when we’re driving, especially when we can’t do anything about it.

No Check, Please

What is this?

This is a check, and it used to be a perfectly reasonable way to pay for things. Now it’s just a pain in the neck.

Before direct deposit, online banking and debit cards, the only way to get money in or out of a bank account was to go to the bank or an ATM. And if you wanted to transform a check into money, then you had to go into the bank and wait in line to speak with a teller.

Money can be used to buy things because money is a representation of wealth – a medium used to exchange goods and services. Money allows us to turn bacon into blueberries and garments into gold. Without money, you’d have to find someone who had what you wanted and wanted what you had.

A check isn’t a representation of wealth, it’s a representation of money. It’s a certified IOU – a piece of paper that promises the future exchange of money. You can’t spend a check or trade it for something you want. The only thing you can do with a check is take it to the bank. And this is the real problem with checks: if you give one to someone, it means they have to go to the bank.

Thanks to the technologies mentioned earlier, we rarely have to go to the bank anymore. This is great news because going to the bank costs time and is really boring. But when someone writes us a check, they make us go to the bank, waste our time and make us bored.

So why do we write checks? The answer is what we just mentioned: going to the bank sucks. We don’t want to go to the bank and withdraw the money to pay someone, so we write them a check and make them go to the bank. So when you write someone a check, what you’re really doing is sending them on an errand to the bank. You’re telling them, “I didn’t want to go to the bank, so now you have to go.”

Until the person goes to the bank, they can’t get the money they’re owed. They can only have it on the condition that they go waste a bunch of time doing something that you don’t want to do. So when you’re sitting there writing their name on that little rectangular piece of paper, you’re actually subjecting them to a twisted treasure hunt.

A check is a treasure map – a treasure map that leads to the bank.

Volume Control

When you move the little virtual slider on your music player to adjust the volume, do you know what you’re changing? Obviously you’re adjusting the volume, but what characteristics are you manipulating in order to modify the sound that comes out of the device? And why is sound still clearly audible when the volume control is set to 5 percent? It turns out that sound is actually really complicated, and there’s probably no chance of understanding the science behind it without a great deal of studying. But studying is boring and hard, and it’s way more fun and way easier to read an article that simplifies a concept so that we can pretend to understand it and feel smart. Here we go.

Before we try to understand something like a volume control, we first need to know how sound actually works and how we interpret and measure it. Sound is air vibrating. That’s right, sound is the vibration of air pressure. The frequency of a sound is the number of oscillations per second, measured in hertz (Hz); the wavelength is the distance the wave travels during one oscillation. Frequency doesn’t determine loudness, but the pitch of the sound.
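For a concrete sense of scale, here’s a small sketch relating frequency to wavelength, assuming the textbook speed of sound in air of roughly 343 m/s at room temperature:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def wavelength(frequency_hz):
    """Wavelength in meters, from speed = frequency * wavelength."""
    return SPEED_OF_SOUND / frequency_hz

print(round(wavelength(20), 2))     # 17.15 -- a deep 20 Hz rumble
print(round(wavelength(440), 2))    # 0.78  -- concert A
print(round(wavelength(20000), 3))  # 0.017 -- the top of human hearing
```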

The amplitude of the sound wave is the distance between the wave’s peaks and troughs, and it indicates sound pressure. It is the pressure of the wave as it enters our ear which determines the loudness of a sound. But the perceived loudness of a sound is also influenced by frequency, distance, duration, interference, atmospheric pressure and particle velocity.

Loudness is the ear’s subjective measurement of sound, similar to how our skin senses the temperature of a surface. As with temperature, our body doesn’t say anything concrete about the sound in our ears; it just tells us how loud it is and what it sounds like. The intensity of a sound is probably the closest measurement to objective loudness, and it’s an expression of the sound power per unit area (power is proportional to amplitude squared).

Interestingly, our ears interpret sound on a logarithmic scale, so a sound with twice the intensity of another does not seem twice as loud to us. Similarly, a tone with twice the frequency of another doesn’t seem twice as high. This means that it’s more difficult to differentiate between sounds at higher frequencies and amplitudes.

Our logarithmic hearing also causes a problem when attempting to plot sound intensity. The minimum intensity detectable by humans is somewhere around 10^-12 watts/m^2, and an intensity that can begin to cause pain is about 1 watt/m^2. This means our maximum intensity level is one trillion times as intense as our minimum. Working with such huge values makes plotting extremely difficult, and it doesn’t convey our perception of sound very well.

For example, let’s imagine that we’re at a rock concert with the sound blasting at 1 watt/m^2. Our friend leans over and says something in a normal speaking voice of 10^-6 watts/m^2. Would we say that the rock concert is a million times louder than our friend? Probably not (at least not literally).

In order to correct this issue, we use a logarithmic scale to measure sound. The most common unit of measure used to represent the loudness of a sound is the decibel. However, the decibel isn’t really a unit, but a logarithmic relationship between two quantities of the same unit (usually some form of power). When used in the realm of sound measurement, decibels often represent sound intensity (dB IL) or sound pressure level (dB SPL). Intensity is proportional to pressure squared, so the conversion between intensity level and sound pressure level is simple. The intensity decibel ratio is defined as

IdB = 10 log10(I1/I0)

where I1 is the sound intensity and I0 is the reference level (the threshold for human hearing). Plotting intensity using the decibel gives us a logarithmic graph that more closely represents the ear’s perception of sound.
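A minimal sketch of this formula, using the 10^-12 watts/m^2 threshold of hearing as the reference level (the function name is my own):

```python
import math

I0 = 1e-12  # reference intensity in W/m^2: the threshold of hearing

def intensity_to_db(intensity):
    """Sound intensity level: IdB = 10 * log10(I1 / I0)."""
    return 10 * math.log10(intensity / I0)

print(round(intensity_to_db(1e-12)))  # 0   threshold of hearing
print(round(intensity_to_db(1e-6)))   # 60  normal conversation
print(round(intensity_to_db(1.0)))    # 120 threshold of pain
```

This compresses a trillion-to-one range of intensities into a tidy 0 to 120 scale.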

So instead of measuring a massive range of values of sound intensity in watts/m^2, we can simply present the same range of values as a scale between 0 dB and 120 dB. This is similar to another logarithmic scale, the Richter scale, which was created by American writer, actor and comedian, Andy Richter, and measures the magnitude of earthquakes.

While the decibel allows us to more easily understand and convey the range of human hearing, it doesn’t really solve the problem of comparing loudness using numbers. A 10 dB increase will approximately double the perceived loudness of a sound. This means that the 120 dB rock concert is roughly 64 times louder than our friend speaking at 60 dB. But if the rock concert isn’t a million times louder than our friend, and it isn’t 64 times louder, how much louder is it? Andy Richter understood the dangers of using scales like the decibel, even stating that, “…logarithmic plots are a device of the devil.”

Since hearing is subjective, there isn’t really a precise formula for comparing loudness. However, we should at least use a scale that intuitively makes sense to us. To accomplish this, we can use weighting techniques to apply curves to the decibel scale. This produces measurements that correspond more closely to the way we perceive loudness. In addition to correcting the scale, we must also deal with other factors like duration and frequency, because humans interpret loudness differently over time and at different frequencies. Weighting isn’t perfect, but it makes relating sound levels much easier to understand.

Okay, so now that we know a little bit about sound, let’s try to solve our original problem: what is our media player volume control actually controlling? Is it manipulating the power, pressure, intensity, a logarithmic ratio or an entire equalizer full of values? The answer is that it depends on the software.

Most media player volume controls are represented by a little slider, and they come in a variety of shapes and colors. Almost all of these controls are graphically linear, which means that they appear to adjust volume in a linear way. Many of them even display a percent symbol, which clearly indicates that we are setting the volume to a portion of the maximum corresponding to the distance from the left side of the slider.

Some controls implement another dimension, such as color, giving the perception of a logarithmic control. These are usually semi-linear in actuality because the slider contradicts the logarithmic dimension by indicating linearity in another way (like a percent symbol). A logarithmic control is one that represents the change in perceived loudness, and these are rare.

The problem here isn’t that everyone’s using linear controls. On the contrary, a linear control is precisely what a user should expect. After all, users don’t really care about what’s going on behind the scenes – the implementation – they care about the interface, and the interface should be simple and easy to use. Users don’t need or want to know about decibels and sound intensity, they just want to set the volume level to a desired level based on what their intuition tells them about how the control should work. In this matter, right and wrong are a matter of preference. If the control behaves the way users expect, then it is designed correctly.

Simple volume controls adjust loudness in two basic ways: by using a decibel scale, and by using a weighted scale. With the first method, we’re controlling a logarithmic scale using a linear control. This means that the perceived loudness increases at a greater rate as the level approaches its maximum, and it also means that the sound is still audible at very low settings (often under 5 percent).

The second method uses weighting to give a more authentic feel. If the maximum setting is a reasonable listening volume (70 to 80 dB), then we should expect 50 percent of that to be half as loud, not half of the sound intensity or half of the decibels. We should also expect very low settings to be barely, if at all, audible. And this is roughly what we get with this type of control.
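Here’s a sketch of both methods. The 80 dB maximum and the function names are illustrative assumptions, and perceived loudness uses the doubling-per-10-dB rule of thumb mentioned earlier:

```python
import math

MAX_DB = 80.0  # assume the slider's maximum is a comfortable 80 dB

def db_linear(percent):
    """Method 1: slider position maps linearly onto the decibel scale."""
    return MAX_DB * percent / 100.0

def db_weighted(percent):
    """Method 2: slider position maps linearly onto perceived loudness,
    so half volume sits 10 dB below the maximum:
        dB = MAX_DB + 10 * log2(percent / 100)
    """
    if percent <= 0:
        return float("-inf")  # silence
    return MAX_DB + 10 * math.log2(percent / 100.0)

def perceived_fraction(db):
    """Loudness relative to the maximum (doubles every 10 dB)."""
    return 2 ** ((db - MAX_DB) / 10.0)

for pct in (5, 50, 100):
    print(pct,
          round(db_linear(pct), 1),
          round(perceived_fraction(db_linear(pct)), 3),
          round(db_weighted(pct), 1))
```

With these assumptions, the weighted control at 50 percent sits exactly 10 dB below the maximum (half as loud), and at 5 percent it drops to about 37 dB, while the first method at 5 percent still produces a level above the threshold of hearing.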

Of course, there are many other volume controls – both internal and external – manipulating the loudness of the sound, and each one does it differently.

Next time you’re listening to music or watching a video, check what type of volume control the software uses.