The Brain: Part I

What makes us who we are?

This question seems intriguing, but it’s actually far too vague to have any real meaning. This is also the case when people ask, “what is the meaning of life?” They think they’re being insightful, but without specifying what they’re trying to discover, they render the answer indiscernible and the question useless. To illustrate this problem, try to determine the meaning of the subject in any of the following questions:

  • What is the meaning of broccoli?
  • What is the meaning of basketball?
  • What is the meaning of five dollars?
  • What is the meaning of a question that asks about the meaning of life?

As we can see, asking such poorly-phrased questions leaves far too much room for interpretation. It’s likely that they meant to ask something like, “for what reason was life created?” or “what is the purpose of human existence?” Now let’s return to our original inquiry, improving its structure in order to allow for a meaningful answer.

What properties possessed by an individual human distinguish them from other humans?

Even this more pointed question still retains many different avenues of response. After all, a fingerprint is unique and distinguishes each human from all others. But the question seems philosophical in nature, so it probably doesn’t aim to address the mere physical. Its phrasing also suggests that there’s more than one correct answer, though we’re probably just looking for the most interesting and insightful one. Let’s try again.

What feature of a person contains the most crucial and meaningful components that make them a unique individual?

Now we’re on to something. We all know that everyone is unique, and we can easily distinguish one person from others, so let’s start our quest for a solution by examining the ways that we tell each other apart and see if one of them satisfies our inquiry.

The first and most obvious way that we recognize each other is by our appearance. It’s true that we’re covered by clothing and makeup much of the time, and it’s also true that each of us has our own fashion sense that we pretend is unique, and yet we’re still able to recognize each other in a swimsuit or bizarre costume. This is because there’s a special part of the brain responsible for facial recognition, and it helps us tell others apart. However, few would agree that our bodies or our faces make us who we are. In the 1997 action movie Face/Off, FBI agent Sean Archer and criminal mastermind Castor Troy (played by John Travolta and Nicolas Cage, respectively) have their faces switched. While the premise and execution of this film are obviously bad, it teaches us that we aren’t defined by our appearance. There are also those who tragically suffer amputations or facial disfigurement, and while they may ask serious questions about their own identity and purpose, others certainly identify them as the same person.

Those who subscribe to a materialistic view would likely argue that it’s our genetics that make us who we are. According to them, since everything can be explained by natural processes, then everything about us is derived from our genes: our appearance, ideas and abilities. On top of that, each of us has our own unique genetic code, or do we? Identical twins actually share the same DNA and, while irritatingly similar, they aren’t the same person. If two people with identical genetics can be distinct, then this can’t be what defines us.

The world is full of those whose quest in life is fame and wealth. In many ways their identity is tied to their notoriety and possessions, but even pop culture recognizes that money doesn’t define who we are. In her autobiographical 2002 hit single Jenny From The Block, Jennifer Lopez pleads with audiences, “don’t be fooled by the rocks that I got,” and goes on to claim that despite her wealth and status, “[she’s] still Jenny from the block.” This implies that her identity remains static despite the fact that she “used to have a little, now [she] has a lot.”

If it isn’t her wealth and fame, maybe it’s Ms. Lopez’s talents and accomplishments as a dancer, singer, songwriter, author, actress, fashion designer and producer that define her. After all, each of us possesses unique skills and abilities that make us special (at least that’s what our mothers told us). It’s true that our skills, abilities, achievements, vocations and interests define us to a degree. An example of the value we place on our job is the fact that the first question we ask a new acquaintance is often, “What do you do?” Many of us derive our identity primarily from our profession. However, when we encounter failure, disability or retirement, we’re still us.

So clothes don’t make the man and neither does the body. Our genes don’t make us unique individuals. On top of that, wealth and fame don’t define us and neither do our abilities or achievements. So what could it be? Perhaps we can find the solution by examining cases of people who are no longer identified as the person they once were. Unfortunately there are millions of examples of such cases.

Dementia comes in many forms, the most common being Alzheimer’s disease. Those suffering from dementia experience a number of symptoms including memory loss, memory distortion, speech and language problems, agitation, loss of balance and fine motor skills, delusions, depression and anxiety. These symptoms are caused by changes in the brain brought on by nerve cell damage, protein deposits and other complications. In advanced cases, the person may become unrecognizable to loved ones. Visiting family members may be shocked to find their relative or friend using abusive language, exhibiting violent aggression or making inappropriate sexual comments.

Brain damage can also produce equally drastic changes in people. In her article After Brain Injury: Learning to Love a Stranger, Janet Cromer details the story of her husband, who suffered anoxic brain injury. She discusses the impact of brain injury on her husband’s memory, communication, behavior and personality. She notes that the experience is like getting to know him all over again, summarizing it this way: “Imagine feeling like you’re on a first date even though you’ve been married to this person for… 30 years?”

It’s clear that our identities are largely defined by our personalities. The things we love and hate, the ways we think and act, even our way of standing perfectly still – they all define who we are. When these things change, we change. But there’s more to us than simply what we think, do and say.

The other way that we can observe changes in identity is through memory loss. In addition to the aforementioned cases of dementia, retrograde amnesia can also impair or rewrite personal identity. While most of us have no experience with amnesia, it’s obvious that a loss of knowledge of identity is a loss of identity. After all, how can you be someone you’ve never heard of? But memories don’t just allow us to recognize our own identity, they also define us, for we are obviously and seriously affected by our experiences. Brain scans reveal that those who have been traumatized, especially at a young age, actually show clear physical changes in the brain.

Even in pure fiction, there are cases in which we accept that a person’s identity has changed. Hollywood provides us with many examples of instantaneous change of identity due to mind transfer. In the 1976 film Freaky Friday, a mother and daughter miraculously have their memories switched (as well as their personalities). While the story may not be the most plausible, we clearly understand that the two characters are no longer themselves. 2001’s K-PAX tells the story of an alien being called Prot who inhabits the body of a human named Robert Porter. At the end of the film, the alien abandons its human form, leaving behind a catatonic Porter. Upon his departure, Prot’s former body is no longer recognizable to his friends, one of whom remarks, “That’s not Prot.”

These examples also illustrate how important memory is to our identity. Without the transference of memory, the characters would retain the knowledge of their past, including their own identity. And this is precisely why the existence of reincarnation is largely inconsequential. If we possess only the memories of ourselves, then it doesn’t matter if our life is the continuation of another. If a person experienced reincarnation or a mind transfer, but did not retain any memory, then they would be unable to identify as anyone but their current self and would therefore possess a unique identity. So don’t do good for the sake of your reincarnated self, for the being you will be will not be you.

And so we have our answer: it is our personality and memory that make us who we are. And although there is great uncertainty about how it actually works, these features are produced and stored in the brain, which somehow projects consciousness (also known as the mind). The mind allows us to perceive, think and imagine, and while its existence is arguably metaphysical, it gives rise to identity. So identity is actually stored in and generated by the brain.

Now we can rest in the knowledge that our identity is safely locked inside a squishy mass hidden behind a quarter inch of bone. Unfortunately the brain remains a very mysterious and peculiar thing. In part II we will explore some of the curiosities and limits of this mighty organ that defines us.

Two-legged Friends

Dogs are often called our four-legged friends, but this label is inaccurate in more ways than one. First of all, many dogs are not friendly. Each year, nearly 4.5 million Americans are bitten by dogs, and half of those are children. That means a child is bitten by a dog roughly every 16 seconds in the United States alone. The other inaccuracy in this title has to do with the physical anatomy of Canis lupus familiaris (the domestic dog).
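For anyone who wants to sanity-check that rate, here is a rough back-of-the-envelope sketch using only the figures quoted above (the exact per-second number depends on how the “half are children” share is rounded):

```python
# Rough check of the dog-bite rate cited above.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # about 31.5 million seconds

bites_per_year = 4_500_000                   # "nearly 4.5 million Americans"
child_bites_per_year = bites_per_year / 2    # "half of those are children"

seconds_per_child_bite = SECONDS_PER_YEAR / child_bites_per_year
print(f"A child is bitten roughly every {seconds_per_child_bite:.0f} seconds")
# Prints about 14 seconds; rounding the child figure down to roughly
# 2 million bites per year gives the commonly quoted "every 16 seconds".
```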

According to the dictionary, a leg is defined as a limb used for support or mobility. Naturally this would imply that humans have two legs and dogs have four. But our understanding of what constitutes a leg varies depending on the species. Gorillas, for example, walk using all four limbs, yet most would agree that they have only two legs. Most contend that the front two limbs of a gorilla should be considered arms because they are used in much the same way that we use ours: to forage for food, to use tools and to scratch ourselves. However, dogs use their front limbs for digging, climbing and adorably covering their faces, yet these appendages are somehow not awarded the title of arms. But if a dog’s front limbs aren’t arms, what are they and why?

A common understanding of the distinction between arms and legs is the idea that arms have hands. Proponents of this view would argue that a gorilla’s front limbs should be considered arms because they have hands with opposable thumbs, but there are many other creatures with hands that have thumbs, including the giant panda, the chameleon, the opossum and some species of reptiles, rats and frogs. So not only does the hand-arm theory imply that rats and frogs have arms, it also would mean that a gorilla has no legs at all because its feet have opposable toes as well. In addition to these complications, this understanding fails to address the fact that many animals, including the dog, have significant anatomical differences between the front and rear appendages.

There are yet others who subscribe to the if-it’s-not-a-leg-it’s-an-arm movement (IINALIAA), which implies just that: any limb not used for mobility is an arm. While this idea perfectly explains the anatomy of bipeds such as humans and kangaroos, it also implies that gorillas don’t have arms and that birds might actually have arms. Since this argument specifically tackles the issue of identifying arms among legs, it doesn’t effectively address limbs such as wings, which, while they aren’t legs, are used for mobility. In addition, it’s obvious to everyone that a gorilla’s front limbs are much more armish than a bird’s wings.

Each of these explanations falls short of satisfying our understanding of the difference between arms and legs, and so we have a problem. Both gorillas and dogs use all four limbs for mobility, have different front and rear appendages and use the front two for special functions, and yet we deny dogs arms. What’s not in dispute here is the nature and function of a leg – any child can tell that legs are used for walking. What is in dispute is what makes some legs arms.

To take a brief break from animals with controversial limbs, let’s take a look at a creature with an anatomy that we can all agree upon: the centipede. Centipedes are totally disgusting and possess anywhere between 20 and 300 identical limbs, each used solely for mobility and freaking people out. There’s no debate about whether any of these legs are actually arms because the only purpose of each limb is movement and all of them are the same – and that’s where the difference lies. When we inspect the anatomy of humans, gorillas and dogs, the one feature that they all share is an obvious design difference between the front and rear limbs. And not only is the form and function of each limb set unique, the structure of the joints that connect the limbs to the body is also different.

As illustrated above, dogs possess both a set of hips and a set of shoulders, and everyone knows that shoulders connect to arms. Also, if the front and rear limbs of these animals are so different, why should we give them the same name? If gorillas have arms, then arms can be used for mobility. And if kangaroos have arms, then arms don’t need hands with opposable thumbs. So if dogs have shoulders and if arms can be used for mobility and don’t require hands, then we’re left with only one conclusion: dogs must have arms.

The error lies in the false belief that an arm is defined by what it doesn’t do instead of what it does. A leg does not become an arm when it stops being used for mobility; a leg becomes an arm when it starts being used for more than mobility. Just think of a panda lying on its back eating bamboo. Is it really using its legs to grab hold of the shoots and bring them up to its mouth? Of course not!

This new understanding of limbs is sure to make some people uncomfortable. After all, what about horses, hamsters, llamas and lemurs, seals, skunks, tigers and turtles? Surely the entire animal kingdom must be reexamined in order for their limbs to be properly classified. But just because a proposition implies a difficult solution, it doesn’t mean it’s incorrect. In fact, it’s likely evidence that the opposite is true.

Addicted

We’ve all heard someone say that they don’t have an addictive personality. This arrogant statement is usually followed by a citation of all the most common addictions to which they are not enslaved. But their claim isn’t just pompous and irritating, it’s also inaccurate.

When someone claims that they don’t have an addictive personality, what they mean to say is that they aren’t susceptible to the most common forms of addiction. This statement is also false, since they would likely become addicted if they were forced to undertake the addictive behaviors to which they believe they are immune.

If we were to give such a person a hit of crystal meth, for example, the outcome would likely be just as grim for them as for anyone else. What they should be saying, if anything at all, is that they have no desire to engage in common addictive behaviors or that the limited behaviors they have engaged in have not yet produced an addiction (that they’re aware of).

Sometimes people will make claims about the addictive properties of a substance or behavior in order to further their argument against it. For example, those who believe that humans should not consume wheat gluten will point to the fact that it contains opioid peptides, which are from the same family as opium. Obviously something that is related to opium must be bad, right?

Well, opioid peptides are actually produced naturally in the body and are found in other foods such as soybeans and spinach. Apart from that, the fact that something is addictive doesn’t necessarily mean that it should be avoided. People suffer a wide variety of addictions, including addictions to exercise, reading, whistling and social media, but that isn’t reason enough to conclude that no one should engage in these behaviors. We all know that addiction can be dangerous, but our understanding of this issue is often limited to a narrow group of common afflictions. Most of us would define addiction in a dictionary-like manner, and it would look something like this:

uh-dik-shuhn -noun

1. the state of having a strong compulsion to repeatedly consume something or perform an action. Every night Danny goes to the bar and gets drunk; I think he might have an addiction.

Although this definition is certainly accurate in many cases, it’s far too vague to be used to determine whether or not someone has an addiction, much less what should be done about the thing to which they are addicted. In order to illustrate this, please indicate which of the following subjects are and are not addictive:

  • Adrenaline
  • Alcohol
  • Approval
  • Caffeine
  • Carbohydrates
  • Chatting Online
  • Chocolate
  • Cigarettes
  • Cocaine
  • Collecting Things
  • Computers
  • Driving
  • Exercising
  • Fame
  • Fashion
  • Fat
  • Food
  • Friendship
  • Gambling
  • Heroin
  • Humming
  • Lip Balm
  • Love
  • Marijuana
  • Money
  • Monosodium Glutamate (MSG)
  • Music
  • Pharmaceuticals
  • Piercings
  • Pleasure
  • Pornography
  • Procrastination
  • Reading
  • Relationships
  • Relaxing
  • Risk
  • Salt
  • Sex
  • Sitting
  • Shopping
  • Sleeping
  • Social Media
  • Spirituality
  • Stealing
  • Studying
  • Sugar
  • Talking
  • Tanning
  • Tattoos
  • Television
  • Thinking
  • Travel
  • Vacations
  • Video Games
  • Vitamins
  • Whistling
  • Work
  • Writing

Many of our definitions, opinions and interpretations are easily shattered by the introduction of subjects that lie on the fringe, and this is also true of our understanding of addiction. Everyone knows that cigarettes and gambling are addictive, but what about things that are less obviously bad, like lip balm, music and talking? If an addiction is simply a powerful compulsion, then aren’t we all addicted to everything we crave? Perhaps a more thorough understanding of addiction is necessary. The American Society of Addiction Medicine uses a more comprehensive definition:

“Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations. This is reflected in an individual pathologically pursuing reward and/or relief by substance use and other behaviors.

Addiction is characterized by inability to consistently abstain, impairment in behavioral control, craving, diminished recognition of significant problems with one’s behaviors and interpersonal relationships, and a dysfunctional emotional response. Like other chronic diseases, addiction often involves cycles of relapse and remission. Without treatment or engagement in recovery activities, addiction is progressive and can result in disability or premature death.”

But even this nuanced understanding fails to differentiate between an addiction and a simple urge or craving. Let’s analyze each key component of the definition.

“Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations.”

This section merely describes addiction as a disease of the brain that manifests in a multitude of dimensions and doesn’t explain what it looks like.

“…an individual pathologically pursuing reward and/or relief…”

The term pathologically is used to imply that the subject is suffering from a mental disorder, but this must be established beforehand in order to determine whether or not the subject is addicted, so it doesn’t really help us here. Also, this description applies to pretty much every behavior, since we are constantly eating to relieve hunger, entertaining ourselves to relieve boredom and so on.

“Addiction is characterized by inability to consistently abstain, impairment in behavioral control, craving, diminished recognition of significant problems with one’s behaviors and interpersonal relationships, and a dysfunctional emotional response. Like other chronic diseases, addiction often involves cycles of relapse and remission. Without treatment or engagement in recovery activities, addiction is progressive…”

Again, abstaining from most healthy behaviors will also produce these symptoms. In addition, sometimes people can regularly engage in what many would consider an addictive behavior and suffer no ill effects. There are also people we might consider addicted who may suddenly abandon the behavior and immediately experience permanent freedom from the craving. Finally, the requirement that addiction must continue to progress excludes those who can maintain a constant level of craving even if it completely disrupts their life, so this should not be necessary for the definition.

“Without treatment or engagement in recovery activities… [addiction] can result in disability or premature death.”

Now here’s where it gets interesting. Most definitions of mental disorders include a similar requirement that the condition negatively impacts the subject’s life in some way. Many mental disorders are merely extreme forms of a natural, healthy behavior. We all feel some anxiety and paranoia, we all feel depressed at times, we’re all traumatized by our past and we all have mood swings. For most of us, of course, these symptoms are within manageable levels, so they are not classified as a mental illness. However, addiction is different. Powerful craving is something that we all naturally feel toward certain things. Hunger, for example, is a very powerful craving that can lead to any and all of the symptoms listed above, and yet we all know that hunger is not an addiction.

So there appears to be a hidden ingredient in addiction that these definitions have failed to include, and it’s something that we all understand. We all know people who suffer from addictions – we know what addiction looks like – yet we can’t articulate it. We know that the man who drinks his paychecks away is addicted, we know that the woman who keeps taking painkillers long after her surgery is addicted and we know that the teenager who stays up till 3 A.M. every night playing World of Warcraft is addicted. We know this regardless of any consequences they suffer, the growth of their addiction or the presence of a relapse. There’s just something fundamentally different between their cravings and something natural like hunger.

Here’s a new definition of addiction that attempts to grasp what makes addiction recognizable:

uh-dik-shuhn -noun

1. a powerful synthetic craving brought on by the introduction of unnecessary stimulus, or a natural craving magnified to an extreme degree.

This definition obviously doesn’t include the aforementioned requirement of a negative impact on the subject’s life. Although this is an important part of diagnosing a mental illness, this is not a medical definition; it is meant to help everyday people understand and recognize addiction.

Another issue that we’ve avoided until now is the question of which substances and behaviors are addictive. As we’ve already discussed, people can become addicted to a huge variety of substances and activities, but that doesn’t mean that these things are addictive, does it?

To answer this question, we could take a scientific approach and discuss substance dependencies, how habits are formed and chemical reactions in the brain, but we’ve already learned that such subjects are not relevant to our everyday experience. In other words, who cares whether or not we can prove that a substance or behavior has innate addictive traits?

One obvious answer to this question would be to inform people about which substances or behaviors to avoid, but we already know that people can become addicted to almost anything and that just because something is addictive doesn’t mean that it’s bad for us. In addition, the fact that something is both highly addictive and extremely bad for us doesn’t mean that society will reject it. After all, 87.6 percent of Americans consume alcohol, 15 percent of those people abuse it and around 90,000 Americans will die this year as a direct result of alcohol consumption. And yet we still promote drinking in films and television, advertise it in magazines and on billboards and sell it at grocery stores and at sporting events.
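To put those percentages into rough absolute numbers, here is a minimal sketch, assuming a hypothetical adult population of about 250 million (that population figure is an assumption for illustration only; the percentages and death toll are the ones quoted above):

```python
# Rough illustration of the alcohol figures cited above.
adult_population = 250_000_000   # assumed round figure, for illustration only
drinker_share = 0.876            # "87.6 percent of Americans consume alcohol"
abuse_share = 0.15               # "15 percent of those people abuse it"

drinkers = adult_population * drinker_share
abusers = drinkers * abuse_share

print(f"Drinkers:             ~{drinkers / 1e6:.0f} million")
print(f"People who abuse it:  ~{abusers / 1e6:.0f} million")
print("Alcohol-related deaths cited above: ~90,000 per year")
```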

If you’re concerned about an addiction, don’t rely on the government or the Internet to help you make a decision. Listen to those who love you.

Proximity 7

“If you would have seen what I’ve seen, then you’d understand.”

His gaze leaves the room as memory engulfs him. Hell is not a fantasy, but history. Visions of severed limbs and hollow faces wash over his fragile heart like lifeless bodies breaking against distant shores.

Our memories are as much a part of us as our DNA. Both define who we are, and just as we cannot edit our genetic code, we cannot rewrite our history. It’s impossible to escape that which we have endured, and so we shoulder burdens of fear, anger and remorse. We like to imagine that we choose who we are, what we think and how we feel, but the reality is that we are blessed, scarred creatures, refined and haunted by our past.

We dare not speak of war to the veteran or of love to the widow. We dare not grumble about bright lights to the blind or stagnant wages to the unemployed. We would never fret over a pimple in the presence of those who are disfigured, and we would not make light of a disability in the presence of someone who suffers its reality every day.

Those who have experienced intense, life-changing events or conditions often develop an increased sensitivity to such things, and when we’re in the presence of someone who has been through a traumatic experience, we also become sensitive to that experience.

This is why ads that depict starving, suffering children have little effect on healthy, first-world citizens. We certainly don’t doubt that those children are suffering or that they need help, but they are outside of our proximity. While televisions may transmit images and sounds effectively, they do not transmit experience, and they do not put us in the presence of those suffering children.

The problem is not that we have been desensitized to these things, since that would imply that we were once sensitive to them. Neither is the problem that we are insensitive, for we can certainly feel compassion, empathy and other emotions. The truth is that we aren’t sensitive to these issues because we have never been sensitized to them. Without living in poverty or experiencing starvation, we can’t help but underreact. We all know and agree that poverty and starvation are bad, but we know it in a theoretical, moral sense; we don’t know how it feels to be poor and starving.

The simple fact that something is true usually isn’t enough to evoke a reaction. This is why advertisers use music, drama, sex, controversy and comedy to provoke us. They’re trying to make us understand, make us imagine enjoying the ice-cold soda on a hot day or driving the elegant, powerful sports car down winding rural roads.

The impotence of mere truth is also why we find it much easier to hurt people we can’t see or don’t know. We’re much more likely to berate others on the Internet, steal from a faceless corporation, get angry at other drivers or even crush the dreams of unknown opponents in a competition. It’s not that we believe that these actions are any less wrong in such situations, but we don’t have to look someone we know in the eye when we commit them.

Although we might believe that an action is wrong, the severity and impact of that belief changes depending on our company. We never swear in front of our grandmother, we don’t say “retard” near those with mental disabilities and we wouldn’t complain about our big feet to someone with no legs. Our sensitivities are constantly fluctuating as we engage with different people. This basically means that each of us suffers from a form of dissociative identity disorder, since we are constantly changing from one person into another.

Some would argue that the problem is the increased sensitivity of those affected by traumatic experiences. They might say that it’s unfair and unreasonable to expect others to be sensitive when they only became sensitive through personal experience. This is a sensible conclusion, but it ignores a simple, yet important question: which level of sensitivity is correct?

Consider the story of a man who tragically lost his home and family in a fire. To him, having smoke detectors, a fire extinguisher and an emergency escape plan now seems extremely important. After dealing with his loss, he chooses to begin a crusade to educate others on fire safety. Because of his experience, the man’s concern is heavily weighted toward this issue. This doesn’t mean that his warnings are invalid or exaggerated, only that the truth of the danger of fire was revealed to him through his proximity to such events. He is now subject to increased sensitivity and awareness of the issue. He no longer finds certain jokes or comments funny and reacts to imagery of fire and smoke differently than another person might. He is now subject to what are called trauma triggers – experiences that trigger a response from someone who has been traumatized.

Critics of this man’s position might point out his lack of concern for healthy living or earthquake safety. After all, while it’s true that fire safety is important, there are a multitude of threats endangering our families. These people would argue that the man’s perception has been tainted and that he now possesses an intense bias toward fire safety. In this civilized, scientific era, empirical knowledge is supreme, and those who allow bias, emotion and experience to influence their decisions are considered foolish, since we can easily show the inconsistencies in their positions.

Others would maintain that the man’s crusade for fire safety, though extreme, is an important contribution. They would agree that his reaction is emotional and possibly unreasonable, but they would never discourage or criticize him for focusing on only one issue. This is probably the most acceptable and popular position, but it ignores the profound possibility that this man’s vision is not distorted by his experience, but clarified. What if the man’s crusade to promote fire safety is merely a reasonable reaction from someone surrounded by people who don’t understand what he now sees so clearly? It may be that he is still blind to other dangers, but this only means that we are even more blind.

So what is the proper reaction from unsensitized folk, given what we’ve learned? Here are some options:

  1. Ignore the issue and continue adapting to the sensitivities of others without feeling them ourselves.
  2. Force ourselves into acts of charity regardless of our sensitivity to the issue.
  3. Immerse ourselves in suffering to gain sensitivity.

The first solution is obviously wrong because it’s boring and has the word ignore in it. Aside from that, it evades the question of whether or not suffering enlightens or misguides us.

The second option is a good choice if you’re interested in resuming your normal life while appeasing any sense of guilt or obligation to others. It’s true that acts of love done without love still make a difference, but they ignore the systemic problem of a lack of concern for others, which will eventually lead to increased suffering.

The final choice seems frightening and overwhelming, but we may be able to sensitize ourselves without bringing suffering on ourselves. As we mentioned earlier, simply being near someone is enough to temporarily grant us sensitivity, so perhaps we can permanently sensitize ourselves by allowing them to share their lives with us. This is possible because there are more ways to experience something than personally living through it.


The image above shows that there are actually four levels of experience. Those in the first, outer-most level have absolutely no experience with the subject. They’re usually fearful, curious and skeptical of the new experience, since they haven’t even heard of it before.

The second level is the one that most of us would identify with, and it’s one of the two we’ve been discussing. All of us have heard of cancer, poverty, AIDS and political persecution, but we’ve never experienced these things, and we aren’t close with anyone who has. We may have casual acquaintances or distant relatives who have lived through these experiences, but we don’t know them well enough to understand their situations.

Let’s skip ahead to the fourth and inner-most level, which is another one we’ve been talking about. Those in this level have personally experienced suffering and exhibit the increased sensitivity we’ve already discussed.

The third level of experience is the path to sensitivity we’re looking for. People with second-hand experience have shared a close relationship with someone who has first-hand experience. An example might be a person whose immediate family member suffered a serious illness. Some might argue that the line between first- and second-hand experience is blurry, and that’s precisely the point. By sharing in the suffering of others, we can permanently adopt their experience and sensitivity. We have done more than witness the suffering of another, we have endured it with them.

Now let’s return to the question of whether suffering enlightens or misguides us. First, by asking such a question, we’re making an assumption that there is a default perception. But just as there is no correct price of fuel, there is no correct way to perceive the world. Again, some would say that the correct understanding is one of unbiased empiricism, but a purely empirical worldview would remove love, honesty, morality and human value from the equation. Without these things, we cannot discern whether or not suffering is bad or human extinction is good. On top of that, empirical knowledge is merely something that some of us prefer, so someone who promotes empirical knowledge is actually revealing their bias toward empiricism.

To say that there is no correct way to perceive the world seems a bit extreme, but we’re not talking about moral relativism, religious pluralism or the rejection of scientific theory. We’re not here to debate the existence of reality; we’re talking about the lens of experience through which we view the world. To put it scientifically, we’re talking about the combination of chemicals and electrical impulses in the brain that make up our attitude, outlook, emotions, values and overall state of mind.

In contrast to the more extreme examples of suffering we’ve used, here are some minor influences that are constantly altering our perception:

  • Confusion
  • Pain or discomfort
  • Lack of sleep
  • Hunger or thirst
  • Drugs and alcohol
  • Caffeine
  • Exercise
  • Sexual arousal
  • Joy
  • Anger
  • Boredom
  • Solitude
  • Social awkwardness
  • Confidence
  • Uncertainty
  • Greed
  • Guilt
  • Obligation
  • Competition
  • Inspiration

As we can see, our outlook on life is changing every moment. Even drinking a glass of water or taking a few seconds to look out the window can make us feel better, and a simple compliment from a stranger can change what we think of ourselves. Likewise, staying up too late or encountering an irritating person can put us off, and receiving a bill in the mail can change our attitude toward finances.

A person may try to argue that our perception is correct when we are free from all of these influences, but not only is it extremely unlikely that we would be able to attain such a state, a position of perfect balance would also be the result of external influences. Even the air we breathe makes us who we are, for variations in the oxygen content of our environment affect our physiology. Perhaps oxygen is actually a hallucinogen that causes us to imagine that our existence is meaningful. Whatever the case, our perception is obviously unstable.

In addition to our normal fluctuations, intense experiences induce brief states of extreme emotion. Having valuable property stolen can affect us for life, but it also affects us in a more powerful, temporary way when it initially occurs. Learning that we have lost important, irreplaceable items will cause us to realize the insignificance of our other problems. But then learning that a family member has been diagnosed with a life-threatening disease makes us immediately forget our stolen property.

So not only is our perception vulnerable to minor temporary changes and powerful permanent changes, it’s also subject to even more powerful temporary changes brought on by the same experiences that affect us for life. And in these moments of extreme emotion, are we really experiencing clarity? And if we are, how can we say that this is the correct perception, since it’s impossible to maintain such a view? Well, if there’s no correct way to perceive the world, and if our perception is constantly and sometimes drastically changing, it would seem impossible to determine whether traumatic experiences enlighten or misguide us. But just because we can’t be the best, it doesn’t mean we can’t be better. Perhaps instead of attempting to identify a perfect state of understanding, we should merely be seeking to learn what we can from those who suffer.

Since determining a complete and sound moral code is beyond the scope of our discussion, we can temporarily suspend philosophical inquiry and appeal to the general understanding that whatever results in the greatest good for the most people is right and that we should treat others the way we want to be treated. Now if no lens of perception is correct, then perhaps we should view the world in whatever manner results in the greatest good and increases our ability to treat others the way we would want to be treated. In other words, our perception should prioritize the greater good and cause us to feel empathy for others.

So how do we do this? A good strategy would be to look at the world through the eyes of as many people as possible. By sharing in their suffering through second-hand experience, we can align our priorities with theirs and become genuinely sensitive to their situation. In doing so, we generate empathy, which causes us to treat them, and others in similar situations, with the same sensitivity that we now share. And in addition to all this, we should never forget to be sensitive to the unsensitized, for they are merely seeds that have not yet sprouted.

Stop changing who you are based on who’s around you, and stop trying to do what’s right when you don’t care. Get sensitized.

Wild

There has been a recent shift in favor of things considered natural. People are choosing clothing, food, cleansers and building materials that come from simple, natural sources. The supposed purposes for this trend are the preservation of the environment through the use of renewable resources and the promotion of our health and well-being.

An example of this way of thinking is the Paleolithic diet. This eating regimen is built on the premise that we humans, like other creatures, should eat what is natural for our species to eat, which apparently is whatever our ancestors evolved to eat during the Paleolithic era. There are at least three problems with this idea.

The first is that we are only presuming to know what our ancestors were eating at the time of their most recent evolutionary dietary transition. The second issue is that what is natural is not always the superior choice (most medicines are not natural). The third problem is that the evolution of humans was drastically altered when we became self-aware. We are no longer wild, for we took control of our evolutionary destiny and, along with it, the destiny of the creatures we domesticated.

We’ve already discussed the natural state of humans and how it is largely determined by the presence of human society, but what is the natural state of an animal? More specifically, what is the natural state of a tamed animal, if there is such a thing? There are four general responses to this question, each embodied by a group of people.

The first group believes that animals, or at least the more important animals, must remain wild. They would define a wild animal as one living in its natural habitat without human interference. Those from this camp would argue that what is best for the animal is what is natural, even if that means a high risk of starvation, predation, disease, isolation, etc. To them, the concept of owning and using animals for our benefit is comparable to slavery.

The second group has no concern for the animal’s nature, seeking only to cater to the whims of their captive critter. These are pet owners who will purchase their pets extravagant toys, food and even clothing in an effort to appease them. Rather than the animal functioning as companion or slave, it is essentially elevated to the level of a human child in a demented effort to satisfy lingering or neglected parental instincts.

The third group tolerates the captivity and ownership of animals, but also believes that animals were not meant to exist in the human world. Because of this, their creatures are given ample room to roam and are often fed a diet that resembles what they would eat in the wild. These people attempt to respect animals even as they profit from and consume them.

The fourth and final group sees animals as a commodity and cares nothing for their natural state or desires. To them, animals are merely a resource to be harvested, like plants. And much like plants, they are often packed closely together and only given what is necessary to grow.

So who is right? Well the answer depends not only on how we value animals, but also our understanding of what it means to be wild. The first group would argue that animals are wild by nature, meaning that their natural and therefore best state is one of freedom from human intervention. This sounds like a wonderful idea, but we know from examining the nature of humans that we share a similar state of natural wildness, yet few would argue that feral humans are our finest incarnation. The second and third group both acknowledge that animals have a natural wild state, but also believe that their lives are improved through taming. The final group has no interest in what it means to be wild apart from how it can benefit their ability to cultivate their creatures and maximize profit. Few would argue that this last approach is the most beneficial for the animal.

So the crux of the disagreement is whether or not animals benefit from being tamed. But since most animals are unable to express their emotions in ways we can understand, especially wild ones, the answer is largely left to our interpretation. However, there are some who argue that it’s okay to tame some species but not others. Let’s explore this claim.

We often hear stories of pets (usually exotic ones) who turn on their masters, attacking them for no apparent reason. This sparks comments like, “that’s what happens when you keep a wild animal in your house,” implying that some animals are wild and others are not. In a historical sense, this is somewhat accurate, since there are certain species that are traditionally tamed or domesticated (bred by humans for certain purposes). However, to assert that some animals remain wild after taming is both a semantic and logical error.

Animals, like humans, have two basic behavioral states: wild and tame. Since we described a wild animal as one that is free from human intervention, then a tame animal must be one that has integrated with humans. Here are some simple statements that may help us understand the situation:

  • A creature cannot be both wild and tame.
  • All creatures are inherently wild.
  • A wild creature, when properly tamed, loses its wildness.
  • A poorly or partially tamed creature may retain a degree of wildness.
  • Some creatures are more difficult to tame than others.

Now that we share an understanding of the situation, we can dissect the definition of tame. Taming is traditionally defined as the process by which humans integrate animals into their own society, but this does not explain what’s really going on. When we tame an animal, we raise its social compatibility. But this raises some interesting questions: is the process of elevating a human to be compatible with human society not a form of taming as well? If so, is a wolf teaching its pups to behave like wolves also taming them? What about when a human is raised by wolves to integrate with wolf society? A more holistic definition of taming would be the process by which a creature of one species is attuned to the society of another species, but this merely confuses the matter.

Since humans are the highest form of creature and the only species capable of understanding the concept of taming, we perceive a tamed animal as one that is attuned to our society. However, a wolf might consider an adopted squirrel tamed, if it were able to contemplate such things, while we would not. And if a wild wolf is one raised by wolves, then a wild human must be one raised by humans. This is illogical, however, because we traditionally define wildness as an inherent quality of untamed creatures and because we consider ourselves tame; both of these things can’t be true. If taming is the attunement of one species to another species, then humans can’t be tame.

We must use the traditional definition of taming as the process by which a creature of any species is attuned to human society. But that raises the question of how a higher form of intelligence, such as an advanced alien civilization or a race of genetically-enhanced humans, might perceive us. To them, we would be wild beasts in need of taming. That brings up another interesting question: if taming is the attunement of a creature to human society, can we tame each other? Indeed, it was common knowledge in colonial times that native tribes were primitive, lower races in need of taming. The rejection of this idea may be tied to our growing affection for natural things, since it’s easy to argue that these tribes could have benefited from Western medicine and technology.

In any case, taming animals causes enough debate. Just remember that a pet wolf is not a wild animal.

thINK

One of the most popular and accessible forms of art involves the creation of two-dimensional imagery on a surface. This is called drawing. Drawing can be done professionally or casually, for profit or personal satisfaction. It can also involve a number of different mediums, including the more traditional pen and paper or oil and canvas, modern instruments like the computer or Magna Doodle and even human skin.

Tattooing dates back thousands of years and spans many cultures across the globe. Each society’s tattoos are visually distinct, employing unique color, content, size, location and pattern. In addition, these designs can serve many different purposes. For the Māori, the indigenous Polynesian people of New Zealand, tattoos were an indication of higher rank or status and also signified a rite of passage into adulthood. During the Kofun period in Japan, tattoos were placed on criminals as a form of punishment. Many cultures use tattoos for religious or spiritual purposes, to honor the dead, to intimidate enemies, to commemorate marriage or simply to appear more attractive.

In modern Western culture, the design and purpose of tattoos is not standardized, but rather determined by the individual. And as with many of our traditions, including naming our children, we tend to borrow from other cultures in an attempt to find purpose and appear unique and sophisticated. In our quest for meaning, we blend Polynesian tribal patterns with Japanese kanji, dragons with crosses, yin-yangs with Bible verses, skulls with Gothic lettering as well as a myriad of other sacred symbols, producing an amalgamation of ancient art that would likely offend and confuse each culture from which we borrowed. Because our population is multicultural and our society prioritizes the individual, each of us is permitted to create our own reality, religion and tattoo design. But in our apparent quest for meaningful markings, we have forgotten one important fact, the true reason why we actually get tattoos: because we want to.

Whenever we ask someone about the meaning of one of their tattoos, the explanation we get usually seems valid. They’ll explain to us how a rose symbolizes their grandmother, who loved roses, or how the number 27 is lucky because all of their children were born on the 27th day of the month, or how a bloody reaper-skull wearing a crown of thorns reminds them not to fear death but to live life to the fullest every day. These explanations may all be rooted in truth, but there are many ways to commemorate important people or dates or to remind ourselves of mantras, so why did they choose to draw on themselves? Why not just get a picture framed or a piece of jewelry made?

Many would answer that the nearly inescapable permanence of tattoos adds a dimension of commitment to the expression, and that’s true, but let’s think about the primary factor motivating someone to get a tattoo. If the true cause is the death of a loved one, then the person would think something like, “How can I most effectively express my sorrow? Perhaps I should get a tattoo,” but this is inconsistent with the massive number of people who get multiple tattoos and the growing number of those who identify themselves as tattoo addicts. The truth is that every single person who walks into a tattoo parlor wants a tattoo. They may want to commemorate their dead grandmother or immortalize their mantra, but much like the way we choose names for our children, they ultimately decided on the tattoo medium because they liked it. After all, no one ever reluctantly got a tattoo simply because they figured it was the most effective medium.

This isn’t meant to discredit or insult those who have tattoos, since those who choose other mediums also do so because they prefer them. But it’s important to be honest with ourselves and to understand our true intentions, especially when we’re doing something that cannot be easily undone. There’s always a chance that our tattoo will come out wrong because of mistranslation or poor quality artwork, that the tattoo will degrade over time, that the shape of our body will change, that we’ll change our opinion of the subject or that our taste in art will simply evolve.

The idea of sewing a pair of pants to our legs is ridiculous, but even if it were safe to do so, it would seem absurd to imagine that we would always enjoy wearing the same pair of pants. And yet we somehow convince ourselves that we will always love our tattoos, that we won’t be ashamed of them and that the issues that our future selves will face are somehow detached from the choices we make now. Deep down we all know that permanently marking our bodies for aesthetic purposes is foolish. But those who want tattoos are still going to get them, so in light of what we’ve learned, let’s set a few rules in order to minimize the risk of regret and avoid a design that offends another culture or doesn’t make any sense.

  1. Don’t choose someone’s name or face. You might end up feeling differently about them.
  2. Don’t choose another language. There’s always a chance of mistranslation.
  3. Don’t choose a slogan or mantra that may become unpopular.
  4. Do a spell check.
  5. Choose an area of the body that ages well.
  6. Choose an area of the body that can be easily hidden by clothing.
  7. Get a temporary tattoo and see if you like it.
  8. Finish the tattoo.

In other words, don’t get this:

No one who didn’t want a tattoo ever got one.

One of

One of the worst mistakes a promoter, reporter or commentator can make is understating their subject’s significance. Indeed, an important part of their duty is to convince their audience that they are witnessing something amazing and special, and in order to do so, they often use lofty praise and exaltation. A common tactic we see is the placement of the phrase “one of” before a declaration of supremacy. Here are some examples:

  • One of the most dynamic athletes in the division.
  • One of the winningest coaches in the league.
  • One of the biggest events in history.
  • One of the most beautiful women on the planet.
  • One of the greatest players of all time.
  • One of the world’s wealthiest businessmen.

This strategy seems to perfectly allow for elevation of a subject without stating its dominance and thereby diminishing others. For example, if a commentator were to claim that a player was the best in the sport, he would also be implying that every other player is worse. This creates a problem for those who frequently cite the greatness of people or events, since logic permits them to ascribe supremacy to only one target.

However, if the speaker prefaces the statement of supremacy with “one of”, then they are free to make this claim about anyone or anything that is arguably nearly supreme. This simple and often unnoticed modification makes the statement slightly more ambiguous and less meaningful, but increases its versatility substantially. Instead of being restricted to having only one player who is the greatest, we can have ten, twenty or even hundreds of players who are all one of the greatest.

In addition to increasing inclusivity by using this preface, the descriptors and conditions can be made more specific in order to allow anyone or anything to be described as “one of” in some category. The phrase makes the statement more flexible, but if we then narrow the scope by reducing the geographic area or window of time and use highly-specialized areas, we can create a near-infinite number of categories in which to crown something or someone one of the greatest. After all, there’s no denying that Michael Jordan is the greatest basketball player of all time, but the title of one of the greatest high school basketball players in Michigan state is much more inclusive. Here are some other examples:

  • One of the most delicious organic fruit smoothies around.
  • One of the funniest talk show hosts in daytime television.
  • One of the most gripping action films of the summer.
  • One of the best pastry chefs in the city.
  • One of the most effective exercise routines for pregnant women.
  • One of the most reliable trucks in its class.

By avoiding a declaration of supremacy and identifying specific qualities in certain situations or locations, we can make anyone or anything one of the best something somewhere. It’s worth noting that these tactics are not only employed by professionals, for we often use them in everyday conversation to generate hype for our cool new phone or favorite musician. Even if the phone isn’t the fastest, it could easily be one of the fastest. And perhaps if it isn’t one of the fastest, it’s certainly one of the fastest in its price range. And maybe we can’t prove that our favorite musician is the most successful or talented, but no one would argue with us if we say that they are one of the most stylish and innovative of a particular genre.

Unfortunately there’s a problem with this method of promotion. By expanding the statement to include anyone or anything that could arguably be the greatest something somewhere, we’re essentially reducing the statement’s meaning to, “this thing is pretty good,” which isn’t much of a compliment. This issue is worsened by our inclination to avoid concrete statements in favor of ambiguous positivity. Let’s look at three reasons why we do this.

First, our desire to avoid negativity means that we will often abstain from making statements that could hurt others. For example, choosing a best man or maid of honor might make others in the wedding party feel inferior. Second, our aversion to commitment often forbids us from saying things we may later recant or regret. Third, our general aversion to difficulty means that we will take the path of least resistance. In this case it means defaulting to the “one of” phrase, even if it’s fairly obvious that the thing is the best.

The matter may even be trivial, but we’d rather not waste the energy to determine whether it is inconsequential enough for us to safely issue a declaration of supremacy. This becomes extremely obvious when we refrain from stating a simple opinion by refusing to select a favorite. When asked to identify our favorite food, movie or music, we are often unable to answer definitively and instead respond with something like, “I’m not sure, but blank is definitely up there.”

In summary, we don’t call things the best because it’s offensive, because we’re scared to commit and because it’s easier to qualify a statement than to actually figure out whether it’s true. This leaves us in a world full of people and things that are all one of the best in some category – a world where everyone is great. And in a world where everyone is great, no one is great. If we praise every athlete, child, product, artist or event, then true greatness becomes unrecognizable, lost in a sea of ambiguous and inclusive praise.

Commit. Say things that matter. Call something the best.

Miracles

Countless clusters of frost and crystal gracefully descend to rest, dressing the trees in white to celebrate the winter holiday. With a warm cup of hot chocolate in hand, a young mother watches her son gleefully shred paper and toss ribbons she placed with care. But as each impractical, meaningless toy is revealed, her grin fades.

Images of blood and gunfire march through her mind as she ponders the fate of her love. It has been two hundred and sixty-four days, and this morning, since last they savored each other’s presence. Hope and fear have spared her no sleep since she heard that the helicopter did not return from its mission.

Then a knock at the door.

Hatred boils in her blood. She is certain that stoic men, hats in hand, stand solemnly on her porch to deliver the news that she already knows. Before she can even set down her cup, her cheeks drip and her heart sinks into her stomach.

“Who’s at the door?” the boy asks.

“Nobody,” she responds, “just keep opening your presents.”

She makes her way to the door, clutching her chest and covering her mouth, ready to hear the words that can set her free from fear and allow her to accept his death. Tearful and trembling, she pauses to look in the mirror before greeting the figures who have haunted her.

As the wooden slab swings open, the fear evaporates and she crumbles to the floor. Every horrid vision melts away as despair becomes a memory. Bliss descends and renders every thought and word insufficient. Her love has come home.

This story is truly a miracle. Or is it?

Most are quick to declare an event a miracle, but before we can agree or disagree with such assessments, it would be wise to define the conditions of a miracle.

Although miracles are considered supernatural by many, it is much more difficult to discern the intent of heavenly beings than to examine the qualities of a typical miracle. For this reason, we will ignore the metaphysical nature of miracles and instead focus on the measurable and observable ingredients required in order for an event to be defined as miraculous.

Based on the common understanding of miracles, there are four possible criteria for the event:

  1. It is beneficial to the recipient.
  2. It is requested by or meaningful to the recipient.
  3. It is extremely unlikely or timely.
  4. It seems impossible or its origin is unknown.

Not all of these criteria carry equal weight. To be accepted by most as a miracle, an event must meet the first requirement. It may seem obvious that miracles must benefit those who experience them, but the point is worth discussing because miraculous events often affect people other than the recipient in benign or negative ways.

When someone wins the lottery, it means that many others did not win. When we find money on the street, it means that someone lost it. There’s even a movie called Miracle which documents the triumph of the United States men’s Olympic hockey team in 1980. But this tale is as much about the American team overcoming incredible odds as it is about the Soviet team suffering disappointment and defeat – an inevitable consequence of competition. In this story, the Soviet team’s experience met only the last three criteria. Because their experience was detrimental, they likely reached the opposite conclusion from the Americans, declaring the event a curse or punishment. In order for us to experience a miracle, we must be the beneficiary of the event.

There are times, however, when miraculous events don’t benefit us but instead spare us from harm. Narrowly escaping a calamity, especially a rare or unusual one, or surviving a terminal illness will cause us to feel blessed even though we didn’t gain from the event.

It’s also possible for events to be reevaluated weeks, months or years after they occur. A seemingly meaningless or even negative experience can later be perceived as miraculous. Likewise, a miracle may one day be invalidated in light of new information.

The second criterion isn’t quite as important as the first, though some would argue that it is also an essential component. This added importance likely reflects the belief that miracles are a direct result of prayer or divine intervention. To believers, the fact that the event was requested or meaningful reflects their deity’s care and benevolence toward them. To those who do not subscribe to this notion, it is not enough to solidify a positive event as a miracle, though it may enhance the significance of an event already considered miraculous.

The final two criteria are what many identify as the primary reason they interpret an event as a miracle. When we explain why we believe our experience was miraculous, the fact that the event was beneficial is obvious and therefore goes unmentioned. The request or special meaning is cited only among those who share a supernatural view of miracles, and even then it can be considered less credible due to its emotional and subjective nature. For these reasons, the last two criteria are much more likely to be offered as proof of a miracle.

There is a good chance that a positive event that is extremely unlikely or timely will be considered a miracle. Winning the lottery, finding money on the street and overcoming incredible odds to achieve victory all illustrate the significance of an event being unlikely or timely. We do not consider winning two dollars on a scratchcard a miracle because the chance of winning a small amount is high. Likewise, finding money on the street is only considered a miracle in a time of need. And there’s nothing miraculous about a heavy favorite winning a sporting match. Miracles have to surprise us – make us stop and ponder the significance of the event – which leads us to our final criterion.

Similar to the feeling of astonishment brought on by experiencing an extremely unlikely or timely beneficial event, the impossible or unknown origin of such an event also stirs in us the sense that something miraculous has happened. Just as our inability to predict an outcome causes us to declare it random, when we cannot understand or explain how something happened, the natural conclusion for many is that the laws of nature were temporarily suspended by some benevolent force in order to bestow on us a special blessing. Though this idea is more likely to be accepted by those who believe that miracles are divine, the sense of mystery alone retains the power to convince us that we have experienced a miracle.

For those who do not attribute miracles to divine intervention, the source of a seemingly impossible or unknowable event need not be identified in order for them to be amazed by it. However, as science continues to unravel the mysteries of the universe, this criterion becomes less credible. The steady refinement of our understanding of the physical world makes it less likely that we will declare the origin of an event impossible or unknowable, which means that we are also less likely to consider such an event a miracle.

The last three criteria we talked about are distinct from one another, but they all perform the same function: the creation of a sense of significance. In order for us to believe that an event was a miracle, supernatural or not, we must be astonished by it. Though some frivolously brand childbirth, sunrises and friendship as miracles, we must know that what we experienced was truly special in order to feel that it was a miracle.

So now we know what factors influence our belief in miracles. We’ve seen that miracles must be beneficial (at least to the one evaluating the event). In addition, we know that an event that is requested or particularly meaningful is more likely to be considered a miracle, especially by those who believe that miracles are supernatural. We also looked at the two most common causes of events being interpreted as miracles: unlikeliness or timeliness and an impossible or unknowable origin. Now let’s try evaluating some events to determine which criteria they meet and whether or not they deserve the title of miracle.

  • A woman is the sole survivor of a plane crash.
  • A man is struck by lightning on his way to work.
  • A child prays for rain and the following day it rains.
  • A man receives an unmarked envelope containing the exact amount of money needed to pay his debt.
  • A woman who is thought unable to conceive becomes pregnant.
  • A child born with a severe disability is abandoned but later adopted into a loving family.

It is certainly possible to evaluate these events, but what if we don’t see the big picture? What about the other passengers who died in the plane crash? What if the man struck by lightning was a serial killer? What if the child who prayed for rain did so without knowing that the weather report already predicted rain? Questions like these raise concern that our interpretations may be oversimplified.

The declaration of a miracle reflects confidence in our understanding (or lack of understanding) of the cause and purpose of an event. Even if we claim that the origin of the event is unknowable, that in itself is a claim of knowledge about the origin of the event. By labeling an event miraculous, we are saying that we know why it happened. Of course, it’s not impossible that we are correct, but the assertion that we understand the cause and purpose of events may say more about us than it does about the event.

There can be miracles when you believe.

Forever Young

What would the world be like if we lived forever?

Before we ponder this intricate hypothetical, let’s answer another question: what is it about old people that makes them old?

Obviously if we’re defining age as the gradual deterioration of the physical form, then the answer is some complex explanation involving DNA and telomeres, but that’s not what we’re discussing today. What we want to know is what biological, experiential and environmental factors cause an old person to feel and act like an old person. Let’s take a look at four traits commonly associated with the elderly.

The first example we’ll look at is speed. Older people tend to walk, talk and drive slower than younger folk. This is likely due to the fact that as the body ages, ligaments tighten, joints stiffen and reaction time slows. This decrease in speed is clearly caused by the aging of the physical body.

The second trait is memory. Seniors are known to have trouble recollecting the past. While it’s true that older people have more memories to scan through, dementia and the natural deterioration of the brain are the culprits here.

The third example we’ll examine is the inability or unwillingness to adapt. It is well known that the elderly are not generally fond of change and that they struggle to understand modern morality, fashion and technology. This could be caused by a decrease in brain function, but a more likely source is the increase in nostalgia over time. As we age, we tend to view the past more favorably and become increasingly frustrated with modern conventions and inventions.

The final trait is political disposition. Older citizens tend to vote differently, and with a different frequency, than other segments of the population. This is caused by what is known as the cohort effect – the observation that populations with shared experiences tend to have certain traits in common. So, for example, those who endured the depression of the 1930s in their youth are more likely to finish the food on their plate, enjoy farming and vote for politicians who are fiscally conservative later in life.

So we clearly have some traits that are caused by physical aging and others that, while rooted in the transition from youth to adulthood, come from merely existing for a long period of time. Before we return to the original question, let’s answer one more: what does living forever look like?

When we attempt to answer this question, many of us imagine a world full of young, healthy and happy individuals, but the cure for death can come in many forms. Here are five possible immortality scenarios:

  1. Mr. Freeze: we find a way to stop the physical effects of aging, but not reverse them. Our bodies no longer age, but we retain the years that we have already accrued.
  2. Death Becomes Her: we discover a way to prevent our bodies from shutting down due to old age, but we cannot prevent our bodies from degenerating.
  3. The Fountain of Youth: we find a way to restore our bodies to their youthful form, and no one ages beyond their prime.
  4. Robocop:  we use technology to modify our minds and bodies to an extent that age becomes irrelevant.
  5. Ghost in the Shell: we discover a way to transport consciousness into machines, shedding our useless organic forms.

Of these five possibilities, the first three are likely the most relevant and feasible. At first they may appear very similar, but whether we are returned to our youth or our aging is merely halted will affect the world in significant ways.

Earlier we discussed some of the differences between the elderly and other demographics. Now imagine that the entire world is composed of individuals of the same age. In order to progress, we must have a better understanding of immortality’s effects.

First, we must acknowledge that our perception of age is largely based on physical appearance. Sure, there are those traits that we mentioned earlier, but it’s likely that if a 90-year-old woman were transported into the body of a 20-year-old woman, she would be treated much differently. The fact that romances such as the one between Bella and Edward from the Twilight series do not arouse suspicion reveals just how poor our understanding of age actually is.

In the series, Bella is a 17-year-old high school student who falls in love with Edward, a 106-year-old vampire. Edward became a vampire when he was 17, which means that he appears to be the same age as Bella. Now in the real world, a relationship between a 17-year-old and a 106-year-old is not only illegal, it’s downright unimaginable. Despite this, the relationship initially seems believable because Edward appears to be 17. In actuality, Edward may have more luck finding a suitable wife at a retirement home than at a high school.

Now for a moment let’s imagine that there is a group of immortal people who have been alive for 1,000 years. What would they think of modern morals and fashion? What kind of grasp would they have on modern technology? How would they vote? The answers to these questions depend on whether we believe that youthful or elderly traits derive from the age of the physical body we inhabit or from the length of time we have existed.

Another mystery of immortality is that we don’t know what happens to people when they live beyond a century or so. As we discussed, we tend to become more conservative as we age, unwilling or unable to accept new ideas. If this trend were to continue indefinitely, an ever-aging population may threaten to stamp out political change completely.

It’s also important to remember that we don’t know how the cohort effect would play out in a population that doesn’t physically age. It’s possible that without maturing past our youth, the window in which shared experiences may affect us is extended indefinitely. Many of us, especially seniors, look back on our youth with nostalgia, but if we’re eternally young, is it even possible to do so?

A final point to consider before we move on: if people lived forever, we would likely need a limit on breeding in order to control the population. This means that there would be few, if any, new humans. This lack of new life may slow the rate of change drastically, for there would be no inquisitive and rebellious youngsters to challenge the establishment. Another potential side effect could be a lack of inspiration and concern for others, since there would be no future generation to which we might pass on a better world.

Now let’s explore some possible outcomes if these situations became reality. Here’s a breakdown of what we will experience based on the immortality scenario and whether or not aging continues to affect us once our bodies have stopped maturing.

If aging continues to affect us:

  • We Stay How We Are: we appear as we are now, but we feel and act increasingly old.
  • We Get Physically Old: we appear old, and we feel and act increasingly old.
  • We Stay Young: we appear young, but we feel and act increasingly old.

If aging doesn’t affect us:

  • We Stay How We Are: we appear, feel and act as we are now.
  • We Get Physically Old: we appear, feel and act old, but aging stops.
  • We Stay Young: we appear, feel and act young.

So what would the world be like if we lived forever? Well, that depends not only on the means by which we cheat death and whether or not aging continues to affect us, but also on our ideas of what it means to be young or old. Based on these factors, we may be excited or frightened by the concept of immortality.

If, for example, we are all made young, never age and aging doesn’t continue to affect us, then the world’s entire population will be composed of people who look, feel and act young. For those who are wary of the rebellious and reckless trappings of youth, this might sound like a recipe for disaster. If, however, aging continues to affect us, then we may end up with a whole world that is increasingly fond of the past and skeptical of change. This may cause moral and political shifts to halt and technological innovation to cease.

Now some may argue that the world is not so easily divided into the old and young, pointing to studies that reveal how our views do not become more conservative as we age. However, what is considered a conservative view does change over time, and so time makes conservatives of us all.

In the immortal words of Hartwig Schierbaum, do you really want to live forever?