Thanks to food safety programs, most of us are well-schooled in the preparation of safe food. We know that we should wash our hands before cooking, that meat must be cooked to a safe temperature and that we should, according to the FDA, “wash cutting boards, dishes, and utensils (including knives), and countertops with soap and hot water after [emphasis added] they come in contact with raw meat, poultry or seafood.”

Although such food safety steps are largely based on sound science and reason, it’s concerning that the federal government does not consider poultry or seafood to be meat. Anyway, we’ve built a structure of regulations to protect ourselves from contamination. But, like the sanitation triangle, it all comes crashing down as soon as one of the supports is knocked out.

Let’s test our familiarity with food safety by taking a look at an example of a chef cooking a simple dish. See if you can spot any instances of cross contamination.

  1. The chef washes his hands thoroughly.
  2. He grabs a fresh, uncooked chicken breast from the package and places it on a clean cutting board.
  3. Using a clean, sharp knife, the chef fillets the chicken boob.
  4. The chef washes his hands thoroughly.
  5. Using a clean pair of tongs, he picks up the fillets and places them in a clean, preheated frying pan.
  6. He then washes his hands, the cutting board, the knife and the countertop.
  7. Using the tongs, the chef flips the meat over to ensure even cooking.
  8. After measuring the internal temperature of the meat using a food thermometer, the chef concludes that the meat is safe to consume.
  9. He removes the chicken from the frying pan with the tongs and places the cooked meat on a clean plate.
  10. The chef washes his hands, the frying pan and the tongs, then serves the dish.

If you said that the contamination occurs at steps 8 and 10, when the chef failed to wash the thermometer, then you’re not correct. Since the thermometer measured a safe cooking temperature, we know that there’s no way that it could be contaminated. Of course, it’s not the most hygienic decision to leave the thermometer uncleaned, as it may eventually pose a problem, but there is no immediate threat of contamination.

The thermometer, though it came into contact with raw meat, endured the cooking process and was raised to a temperature that rendered it safe to consume (if it weren’t a thermometer). However, between transferring the raw chicken to the frying pan, flipping the meat and placing it on the plate, the tongs underwent no such process.

The infraction occurred when our chef, like nearly every cook across the globe, used the same utensil to handle the meat throughout the cooking process in steps 5, 7 and 9. As mentioned earlier, food safety regulations make it clear that we should wash our hands, surfaces and utensils after making contact with uncooked meat, but for some reason we fail to apply this to the tool we use to move our meat.
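The bookkeeping behind this argument can be sketched as a toy simulation, assuming a simple rule: contact with raw meat contaminates an item, and only washing or enduring the cooking heat clears it (all item names and steps here are illustrative, not part of any regulation):

```python
# Toy model of the contamination logic: items become contaminated on contact
# with raw meat, and are cleared only by washing or by being cooked through.
contaminated = set()

def touch_raw_meat(item):
    contaminated.add(item)

def wash(item):
    contaminated.discard(item)

def cook(item):
    # Cooking to a safe temperature sanitizes whatever endured the heat.
    contaminated.discard(item)

touch_raw_meat("tongs")        # step 5: tongs move the raw fillets to the pan
wash("cutting board")          # step 6: chef washes surfaces and knife
wash("knife")
wash("countertop")
# step 7: tongs touch the still-undercooked meat again (already flagged)
touch_raw_meat("thermometer")  # step 8: probe enters the meat...
cook("thermometer")            # ...but is heated to a safe temperature
wash("frying pan")             # step 10
print("still contaminated:", contaminated)  # the tongs were never washed
```

Running the steps through this model leaves exactly one item flagged: the tongs.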

It’s not clear why we don’t seem concerned about our tongs and spatulas. Perhaps we do secretly recognize the threat, but we subconsciously concede that it would be far too annoying to wash the tool after each time we transfer, rotate, flip, poke, prod and peer underneath our meat. Or maybe, as with bathroom sanitation, we just know that we’re probably going to miss a step somewhere along the way.


We’ve all been there. One moment we’re minding our own business, travelling down the highway on our way to satisfy some temporary urge. Suddenly we notice a vehicle appear to our right, and its driver wants to merge into our lane. He has limited time and space to decide exactly how to proceed. Now we’re in a precarious situation with a number of potentially fatal possibilities. The series of images below depicts a common outcome:

Figure 1. The first image shows a dark sedan reaching the merge lane, while a white pickup truck is proceeding in the right lane.

Figure 2. Noticing the sedan, the driver of the pickup begins to move into the left lane.

Figure 3. Realizing that the pickup is moving into the far lane, the driver of the sedan begins to merge.

Figure 4. The sedan continues its merge uninhibited now that the pickup is out of the way.

Figure 5. The sedan’s merge is nearly complete.

Figure 6. The merge is now complete. Both vehicles continue travelling safely down the highway.

Most find the events detailed above fairly unremarkable. In fact, they might even say that it was a great example of driver etiquette and caution on the part of the pickup driver, but the truth is nothing of the sort. Noticing that the sedan was travelling alongside him, the driver of the pickup was faced with three basic options:

  1. Slow down and let the sedan merge in front of him.
  2. Accelerate in order to allow the sedan to merge behind him.
  3. Continue at the same speed.

The driver of the sedan also had three similar options:

  1. Slow down and attempt to merge behind the pickup.
  2. Accelerate in order to merge ahead of the pickup.
  3. Continue to merge at the same speed.

It’s kind of like playing a game of rock-paper-scissors, with each driver trying to anticipate the other’s next move. Here’s a table showing the possible results:

Pickup Choice \ Sedan Choice | Slow Down | Accelerate | Continue at Same Speed
Slow Down                    | Crash     | No crash   | No crash
Accelerate                   | No crash  | Crash      | No crash
Continue at Same Speed       | No crash  | No crash   | Crash

As we can see, 3 of the 9 possibilities result in a crash, which is alarming. This is why drivers often choose the secret fourth option: moving into another lane. Of course, this is only possible when there’s another lane in which to move, but doing so circumvents the merging problem entirely. However, it has a serious drawback.
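The table’s logic can be sketched in a few lines of code. This is a toy model: the three moves and the crash-on-matching-moves rule come straight from the table above, while the function names are illustrative:

```python
# Enumerate every combination of driver moves and count the crashes.
# Per the table, a crash occurs when both drivers mirror each other's move.
from itertools import product

MOVES = ["slow down", "accelerate", "continue at same speed"]

def outcome(pickup_move, sedan_move):
    """Return 'crash' when the drivers pick the same move, else 'no crash'."""
    return "crash" if pickup_move == sedan_move else "no crash"

crashes = sum(outcome(p, s) == "crash" for p, s in product(MOVES, MOVES))
print(f"{crashes} of {len(MOVES) ** 2} combinations end in a crash")
```

Enumerating all nine combinations confirms the count: three end in a crash.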

When the pickup driver decides to move over and allow the sedan to merge, he is entering into a lane that could already have traffic in it. This creates another table of possibilities based on the number and speed of vehicles in the left lane. If there is a vehicle in the left lane, then this vehicle and the pickup are now in a nearly identical situation as were the pickup and the sedan. And of course, many highways have more than two lanes, which means that should the driver in the left lane choose to move over, it could result in a third encounter.

The problem here is not that moving out of the right lane to accommodate merging traffic is inherently unsafe, but that it attempts to resolve a crisis by merely shifting the problem to another party. According to traffic regulations, it is the duty of the merging vehicle to match the speed of highway traffic and select a position in which to merge. In the aforementioned case, the pickup driver relieves the driver of the sedan of this responsibility by taking it upon himself to merge into the left lane. While this act may seem prudent and selfless, it actually has some serious consequences. Here are a few things wrong with this behavior:

  1. It forces additional vehicles into a merging scenario, each of which could result in a crash.
  2. It slows down traffic in the passing lane.
  3. It fosters an expectation for highway traffic to yield to merging drivers.

The third point is more difficult to measure, but its effects are likely the most serious. When highway traffic yields to merging drivers, the rules of the road are obscured, and merging drivers come to mistakenly believe that they have the right-of-way. This, in turn, results in a greater number of dangerous merging scenarios, which means more crashes. The whole point of traffic laws like those governing highway merging is to clearly indicate which party has priority, so that we don’t feel like we’re playing rock-paper-scissors.

Now changing lanes to make room for merging traffic is not always unwise. If done with caution and well in advance of the merging lane, it’s a great way to avoid a potentially hazardous situation. Just make sure you’re not merging to avoid merging.

Backward Time Travel Is Impossible

Everyone knows that time travel is real – it happens every day. Every time an astronaut launches into orbit, he or she experiences very slight time dilation, experiencing time at a slower rate than people on Earth. There are also creatures that freeze themselves during the winter, entering a state of suspended animation. Each of these techniques could theoretically allow a being to transport itself far into the future. Unfortunately, without the ability to return to the present, this isn’t very useful.
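To see just how slight that dilation is, here’s a rough special-relativity-only sketch. The orbital speed is an assumed round figure, and gravitational time dilation (which partially offsets the effect) is deliberately ignored:

```python
import math

# Rough estimate of velocity time dilation for an astronaut in low orbit.
# Assumptions: ~7,660 m/s orbital speed; gravitational effects ignored.
C = 299_792_458.0   # speed of light, m/s
V_ORBIT = 7_660.0   # approximate orbital speed, m/s (assumed value)

gamma = 1.0 / math.sqrt(1.0 - (V_ORBIT / C) ** 2)
six_months = 182.5 * 24 * 3600      # seconds in six months
lag = (gamma - 1.0) * six_months    # time "lost" relative to Earth

print(f"After six months in orbit, the astronaut is about {lag * 1000:.1f} ms younger")
```

Under these assumptions, six months in orbit buys an astronaut only a few milliseconds of forward time travel.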

Have you ever wondered what it would be like to go back in time? Well if you have any imagination at all, chances are you have. After all, who wouldn’t want to go back and walk with the dinosaurs, hang out with Abraham Lincoln or invest in Microsoft? The possibilities are endless – at least they would be if backward time travel were possible.

Before we can discuss its impossibility, we must first discuss the different theories about how backward time travel could potentially work. Of course, the technological requirements remain unknown, but the theories regarding how backward time travel would affect our world can be divided into three general categories, summarized in the table below:

Characteristic | Fixed History                                 | Flexible History                                     | Alternate History
Description    | There is only a single, unchangeable timeline | There’s only a single timeline, but we can change it | Each action in the past produces a new timeline
In Film        | 12 Monkeys, The Terminator                    | Back to the Future, Looper                           | Terminator 2: Judgment Day, Star Trek (2009)
Problems       | No free will                                  | Paradoxes                                            | Can’t return to original timeline

The difference between the three interpretations can be illustrated using the story of a man who travels back in time to assassinate an infant Adolf Hitler in order to prevent World War II.

If history is fixed, then the man will not be able to prevent Hitler from coming to power. In fact, his actions will actually prove necessary for this to happen. Perhaps the man fails in his attempt to murder the child, and by doing so, scars the child and plants the seed necessary for Hitler’s evil actions. Or maybe he successfully dispatches the infant, but the family later adopts another child, naming it after their lost son, and the adopted child becomes the Hitler from history. Whatever the case, the man’s actions can’t affect the future because, from the vantage of the future, they’re already a part of the past.

With a flexible history, the man may successfully kill Hitler and prevent World War II. However, by preventing such a monumental historical event, the entire future is changed. Perhaps his grandparents never immigrate to the United States, so his parents never meet and, consequently, he’s never born. And if he’s never born, then how can he go back in time and kill Hitler? Of course, if the man doesn’t exist, then World War II happens, and the man now exists. This is known as a grandfather paradox, and it is inescapable, which is one reason why the third theory seems so attractive.

Perhaps the man does travel back in time and successfully assassinates Hitler, but instead of changing the future, an entirely new future is created. This understanding of backward time travel allows the man to change the past while preventing paradoxes from occurring. However, because the man has created a new timeline – a new universe – he is now unable to return to his own. After all, if he really has changed the past, then the future he knew no longer exists, or it exists in an alternate reality.

Another reason why this view seems to make sense is the growing belief that there are an infinite number of parallel universes. In a feeble attempt to understand the implications of quantum physics, popular culture has sifted out a few concepts, including the many-worlds interpretation, which implies that all possible timelines exist in alternate realities (or universes). Although this idea would seem to support the possibility of backward time travel, we’ll see that no matter which theory we subscribe to, backward time travel is impossible.

Let’s begin by addressing the first option: fixed history theory. Imagine that you have possession of a working time travel machine, so you decide to travel back in time ten minutes to give yourself a high five. Well if history is immutable, then you won’t be able to high five yourself, or even see yourself, because you don’t remember seeing yourself ten minutes ago. See, this theory only works if we assume that the term history implies grand, complex events that no living person remembers (or of which they deny memory). If we try to change something simple and knowable, then it becomes obvious that we really should be able to change the past. So we’re left with two options: either backward time travel never ends up occurring, or history is not fixed.

Before we move on, let’s stop to discuss what we mean when we say that backward time travel is impossible. This doesn’t necessarily require that the technology is never invented; it could be that it’s just never implemented. After all, in order for us to do something, we need both the opportunity and the desire to do it. Perhaps backward time travel never happens because we decide that it shouldn’t.

Anyway, what if our history is flexible and allows us to go back and change things? Well, aside from the previously mentioned grandfather paradox (and others), there’s also no record in our history of anyone going back in time and messing around. This could be due to the skill and secrecy of the travelers, but it’s difficult to imagine that no one in all of time was accidentally discovered or decided to reveal their secret. Of course, if a hidden organization tightly controlled the technology, then it might be safe. However, this would require that the secret would never be revealed throughout all of history. It would also mean that no one else ever invents time travel; otherwise they would both be editing each other’s pasts, producing competing time travelers. Basically, backward time travel can’t happen with a flexible history because it produces paradoxes, and it won’t happen because anyone who invents it will be assassinated by an opposing group. This is because backward time travel is power – the power to make the world the way you want it to be – and it’s pretty likely that no one would be comfortable with anyone else holding such power.

And so we come to alternate history theory (also known as parallel universe theory). This concept seems plausible. After all, it allows us to change history, it doesn’t produce any paradoxes and it also seems to be supported by science. However, a problem occurs when we imagine the ramifications of an infinite number of alternate realities combined with backward time travel. If there are an infinite number of universes, then all possibilities have occurred an infinite number of times. So if backward time travel is possible, then there are an infinite number of realities in which it exists. That means there’s a universe where a person from the future decided to visit you right now. But no one is visiting you from the future, so this can’t be the case.

Some would argue that the number of universes only increases (via branching) when a backward time traveler changes something, but this implies that there’s a specific number of universes, which violates the zero, one or infinity rule. It would also mean that there’s only one original universe, which means that there’s only one chance for time travel to be invented and implemented. In addition, if traveling back in time produces a new and separate future, then there really isn’t any reason to go back in time. Think about it. If you go back in time to assassinate Hitler and succeed, you didn’t really kill him; you only created a new universe in which he’s dead. In this way, backward time travel is actually completely useless, since we’re unable to affect anything in the present – we can only create a new present where things are different. It’s like trying to save your dog from cancer by getting a new dog.

And just in case you still think time travel might happen, there’s a secret society whose members have sworn an oath, passed down through generations, that should backward time travel be invented, they will go back and stop it before it starts.

The Brain: Part I

What makes us who we are?

This question seems intriguing, but it’s actually far too vague to have any real meaning. This is also the case when people ask, “What is the meaning of life?” They think they’re being insightful, but without specifying what they’re trying to discover, the answer is made indiscernible, and the question becomes useless. To illustrate this problem, try to determine the meaning of the subject in any of the following questions:

  • What is the meaning of broccoli?
  • What is the meaning of basketball?
  • What is the meaning of five dollars?
  • What is the meaning of a question that asks about the meaning of life?

As we can see, asking such poorly-phrased questions leaves far too much room for interpretation. It’s likely that they meant to ask something like, “For what reason was life created?” or “What is the purpose of human existence?” Now let’s return to our original inquiry, improving its structure in order to allow for a meaningful answer.

What properties possessed by an individual human distinguish them from other humans?

Even this more pointed question still retains many different avenues of response. After all, a fingerprint is unique and distinguishes each human from all others. But the question seems philosophical in nature, so it probably doesn’t aim to address the mere physical. Its phrasing also suggests that there’s more than one correct answer, though we’re probably just looking for the most interesting and insightful one. Let’s try again.

What feature of a person contains the most crucial and meaningful components that make them a unique individual?

Now we’re onto something. We all know that everyone is unique, and we can easily distinguish one person from others, so let’s start our quest for a solution by examining the ways that we tell each other apart and see if one of them satisfies our inquiry.

The first and most obvious way that we recognize each other is by our appearance. It’s true that we’re covered by clothing and makeup much of the time, and it’s also true that each of us has our own fashion sense that we pretend is unique, and yet we’re still able to recognize each other in a swimsuit or bizarre costume. This is because there’s a special part of the brain responsible for facial recognition, and it helps us tell others apart. However, few would agree that our bodies or our faces make us who we are. In the 1997 action movie Face/Off, FBI agent Sean Archer and criminal mastermind Castor Troy (played by John Travolta and Nicolas Cage irrespectively) have their faces switched. While the premise and execution of this film are obviously bad, it teaches us that we aren’t defined by our appearance. There are also those who tragically suffer amputations or facial deformation, and while they may ask serious questions about their own identity and purpose, others certainly identify them as the same person.

Those who subscribe to a materialistic view would likely argue that it’s our genetics that make us who we are. According to them, since everything can be explained by natural processes, then everything about us is derived from our genes: our appearance, ideas and abilities. On top of that, each of us has our own unique genetic code, or do we? Identical twins actually share the same DNA and, while irritatingly similar, they aren’t the same person. If two people with identical genetics can be distinct, then this can’t be what defines us.

The world is full of those whose quest in life is fame and wealth. In many ways their identity is tied to their notoriety and possessions, but even pop culture recognizes that money doesn’t define who we are. In her autobiographical 2002 hit single Jenny from the Block, Jennifer Lopez pleads with audiences, “don’t be fooled by the rocks that I got,” and goes on to claim that despite her wealth and status, “[she’s] still Jenny from the block.” This implies that her identity remains static despite the fact that she “used to have a little, now [she] has a lot.”

If it isn’t her wealth and fame, maybe it’s Ms. Lopez’s talents and accomplishments as a dancer, singer, songwriter, author, actress, fashion designer and producer that define her. After all, each of us possesses unique skills and abilities that make us special (at least that’s what our mothers told us). It’s true that our skills, abilities, achievements, vocations and interests define us to a degree. An example of the value we place on our job is the fact that the first question we ask a new acquaintance is often “What do you do?” Many of us derive our identity primarily from our profession. However, when we encounter failure, disability or retirement, we’re still us.

So clothes don’t make the man and neither does the body. Our genes don’t make us unique individuals. On top of that, wealth and fame don’t define us and neither do our abilities or achievements. So what could it be? Perhaps we can find the solution by examining cases of people who are no longer identified as the person they once were. Unfortunately there are millions of examples of such cases.

Dementia comes in many forms, the most common being Alzheimer’s disease. Those suffering from dementia experience a number of symptoms including memory loss, memory distortion, speech and language problems, agitation, loss of balance and fine motor skills, delusions, depression and anxiety. These symptoms are caused by changes in the brain brought on by nerve cell damage, protein deposits and other complications. In advanced cases, the person may become unrecognizable to loved ones. Visiting family members may be shocked to find their relative or friend using abusive language, exhibiting violent aggression or making inappropriate sexual comments.

Brain damage can also produce equally drastic changes in people. In her article After Brain Injury: Learning to Love a Stranger, Janet Cromer details the story of her husband, who suffered anoxic brain injury. She discusses the impact of brain injury on her husband’s memory, communication, behavior and personality. She notes that the experience is like getting to know him all over again, summarizing it this way: “Imagine feeling like you’re on a first date even though you’ve been married to this person for… 30 years?”

It’s clear that our identities are largely defined by our personalities. The things we love and hate, the ways we think and act, even our way of standing perfectly still – they all define who we are. When these things change, we change. But there’s more to us than simply what we think, do and say.

The other way that we can observe changes in identity is through memory loss. In addition to the aforementioned cases of dementia, retrograde amnesia can also impair or rewrite personal identity. While most of us have no experience with amnesia, it’s obvious that a loss of knowledge of identity is a loss of identity. After all, how can you be someone you’ve never heard of? But memories don’t just allow us to recognize our own identity, they also define us, for we are obviously and seriously affected by our experiences. Brain scans reveal that those who have been traumatized, especially at a young age, actually show clear physical changes in the brain.

There are also cases, though purely fictional, in which we accept that a person’s identity has changed. Hollywood provides us with many examples of instantaneous change of identity due to mind transfer. In the 1976 film Freaky Friday, a mother and daughter miraculously have their memories switched (as well as their personalities). While the story may not be the most plausible, we clearly understand that the two characters are no longer themselves. 2001’s K-PAX tells the story of an alien being called Prot who inhabits the body of a human named Robert Porter. At the end of the film, the alien abandons its human form, leaving behind a catatonic Porter. Upon his departure, Prot’s former body is no longer recognizable by his friends, one of whom remarks, “That’s not Prot.”

These examples also illustrate how important memory is to our identity. Without the transference of memory, the characters would retain the knowledge of their past, including their own identity. And this is precisely why the existence of reincarnation is largely inconsequential. If we possess only the memories of ourselves, then it doesn’t matter if our life is the continuation of another. If a person experienced reincarnation or a mind transfer, but did not retain any memory, then they would be unable to identify as anyone but their current self and would therefore possess a unique identity. So don’t do good for the sake of your reincarnated self, for the being you will be will not be you.

And so we have our answer: it is our personality and memory that make us who we are. And although there is great uncertainty about how it actually works, these features are produced and stored in the brain, which somehow projects consciousness (also known as the mind). Our minds allow us to perceive, think and imagine, and while its existence is arguably metaphysical, the mind gives rise to identity. So identity is actually stored in and generated by the brain.

Now we can rest in the knowledge that our identity is safely locked inside a squishy mass hidden behind a quarter inch of bone. Unfortunately the brain remains a very mysterious and peculiar thing. In part II we will explore some of the curiosities and limits of this mighty organ that defines us.

Two-legged Friends

Dogs are often called our four-legged friends, but this label is inaccurate in more ways than one. First of all, many dogs are not friendly. Each year, nearly 4.5 million Americans are bitten by dogs, and half of those are children. That means a child is bitten by a dog roughly every 14 seconds in the United States alone. The other inaccuracy in this title has to do with the physical anatomy of Canis lupus familiaris (the domestic dog).
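As a quick back-of-the-envelope check, that interval follows directly from the yearly totals (the figures are the only inputs; this is arithmetic, not epidemiology):

```python
# Back-of-the-envelope check of the dog-bite interval implied by the figures:
# ~4.5 million Americans bitten per year, about half of them children.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60     # 31,536,000 seconds
BITES_PER_YEAR = 4_500_000
CHILD_BITES_PER_YEAR = BITES_PER_YEAR // 2

interval = SECONDS_PER_YEAR / CHILD_BITES_PER_YEAR
print(f"A child is bitten roughly every {interval:.0f} seconds")
```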

According to the dictionary, a leg is defined as a limb used for support or mobility. Naturally this would imply that humans have two legs and dogs have four. But our understanding of what constitutes a leg varies depending on the species. Gorillas, for example, walk using all four limbs, yet most would agree that they have only two legs. Most contend that the front two limbs of a gorilla should be considered arms because they are used in much the same way that we use them: to forage for food, to use tools and to scratch ourselves. However, dogs use their front limbs for digging, climbing and adorably covering their faces, yet these appendages are somehow not awarded the title of arms. But if a dog’s front limbs aren’t arms, what are they and why?

A common understanding of the distinction between arms and legs is the idea that arms have hands. Proponents of this view would argue that a gorilla’s front limbs should be considered arms because they have hands with opposable thumbs, but there are many other creatures with hands that have thumbs, including the giant panda, the chameleon, the opossum and some species of reptiles, rats and frogs. So not only does the hand-arm theory imply that rats and frogs have arms, it also would mean that a gorilla has no legs at all because its feet have opposable toes as well. In addition to these complications, this understanding fails to address the fact that many animals, including the dog, have significant anatomical differences between the front and rear appendages.

There are yet others who subscribe to the if-it’s-not-a-leg-it’s-an-arm movement (IINALIAA), which implies just that: any limb not used for mobility is an arm. While this idea perfectly explains the anatomy of bipeds such as humans and kangaroos, it also implies that gorillas don’t have arms and that birds might actually have arms. Since this argument specifically tackles the issue of identifying arms among legs, it doesn’t effectively address limbs such as wings, which, while they aren’t legs, are used for mobility. In addition, it’s obvious to everyone that a gorilla’s front limbs are much more armish than a bird’s wings.

Each of these explanations falls short of satisfying our understanding of the difference between arms and legs, and so we have a problem. Both gorillas and dogs use all four limbs for mobility, have different front and rear appendages and use the front two for special functions, and yet we deny dogs arms. What’s not in dispute here is the nature and function of a leg – any child can tell that legs are used for walking. What is in dispute is what makes some legs arms.

To take a brief break from animals with controversial limbs, let’s take a look at a creature with an anatomy that we can all agree upon: the centipede. Centipedes are totally disgusting and possess anywhere between 20 and 300 identical limbs, each used solely for mobility and freaking people out. There’s no debate about whether any of these legs are actually arms because the only purpose of each limb is movement and all of them are the same – and that’s where the difference lies. When we inspect the anatomy of humans, gorillas and dogs, the one feature that they all share is an obvious design difference between the front and rear limbs. And not only is the form and function of each limb set unique, the structure of the joints that connect the limbs to the body is also different.

As illustrated above, dogs possess both a set of hips and a set of shoulders, and everyone knows that shoulders connect to arms. Also, if the front and rear limbs of these animals are so different, why should we give them the same name? If gorillas have arms, then arms can be used for mobility. And if kangaroos have arms, then arms don’t need hands with opposable thumbs. So if dogs have shoulders and if arms can be used for mobility and don’t require hands, then we’re left with only one conclusion: dogs must have arms.

The error lies in the false belief that an arm is defined by what it doesn’t do instead of what it does. A leg does not become an arm when it stops being used for mobility; a leg becomes an arm when it starts being used for more than mobility. Just think of a panda lying on its back eating bamboo. Is it really using its legs to grab hold of the shoots and bring them up to its mouth? Of course not!

This new understanding of limbs is sure to make some people uncomfortable. After all, what about horses, hamsters, llamas and lemurs, seals, skunks, tigers and turtles? Surely the entire animal kingdom must be reexamined in order for their limbs to be properly classified. But just because a proposition implies a difficult solution, it doesn’t mean it’s incorrect. In fact, it’s likely evidence that the opposite is true.


We’ve all heard someone say that they don’t have an addictive personality. This arrogant statement is usually followed by a citation of all the most common addictions to which they are not enslaved. But their claim isn’t just pompous and irritating, it’s also inaccurate.

When someone claims that they don’t have an addictive personality, what they mean to say is that they aren’t susceptible to the most common forms of addiction. This statement is also false, since they would likely become addicted if they were forced to undertake the addictive behaviors to which they believe they are immune.

If we were to give such a person a hit of crystal meth, for example, the outcome would likely be as grim for them as for any other person. What they should be saying, if anything at all, is that they have no desire to engage in common addictive behaviors or that the limited behaviors they have engaged in have not yet produced an addiction (that they’re aware of).

Sometimes people will make claims about the addictive properties of a substance or behavior in order to further their argument against it. For example, those who believe that humans should not consume wheat gluten will point to the fact that it contains opioid peptides, which are from the same family as opium. Obviously something that is related to opium must be bad, right?

Well, opioid peptides are actually produced naturally in the body and are found in other foods such as soybeans and spinach. Apart from that, the fact that something is addictive doesn’t necessarily mean that it should be avoided. People suffer a wide variety of addictions, including addictions to exercise, reading, whistling and social media, but that isn’t reason enough to conclude that no one should engage in these behaviors. We all know that addiction can be dangerous, but our understanding of this issue is often limited to a narrow group of common afflictions. Most of us would define addiction in a dictionary-like manner, and it would look something like this:

uh-dik-shuhn -noun

1. the state of having a strong compulsion to repeatedly consume something or perform an action. Every night Danny goes to the bar and gets drunk; I think he might have an addiction.

Although this definition is certainly accurate in many cases, it’s far too vague to be used to determine whether or not someone has an addiction, much less what should be done about the thing to which they are addicted. In order to illustrate this, please indicate which of the following subjects are and are not addictive:

  • Adrenaline
  • Alcohol
  • Approval
  • Caffeine
  • Carbohydrates
  • Chatting Online
  • Chocolate
  • Cigarettes
  • Cocaine
  • Collecting Things
  • Computers
  • Driving
  • Exercising
  • Fame
  • Fashion
  • Fat
  • Food
  • Friendship
  • Gambling
  • Heroin
  • Humming
  • Lip Balm
  • Love
  • Marijuana
  • Money
  • Monosodium Glutamate (MSG)
  • Music
  • Pharmaceuticals
  • Piercings
  • Pleasure
  • Pornography
  • Procrastination
  • Reading
  • Relationships
  • Relaxing
  • Risk
  • Salt
  • Sex
  • Sitting
  • Shopping
  • Sleeping
  • Social Media
  • Spirituality
  • Stealing
  • Studying
  • Sugar
  • Talking
  • Tanning
  • Tattoos
  • Television
  • Thinking
  • Travel
  • Vacations
  • Video Games
  • Vitamins
  • Whistling
  • Work
  • Writing

Many of our definitions, opinions and interpretations are easily shattered by the introduction of subjects that lie on the fringe, and this is also true of our understanding of addiction. Everyone knows that cigarettes and gambling are addictive, but what about things that are less obviously bad, like lip balm, music and talking? If an addiction is simply a powerful compulsion, then aren’t we all addicted to everything we crave? Perhaps a more thorough understanding of addiction is necessary. The American Society of Addiction Medicine uses a more comprehensive definition:

“Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations. This is reflected in an individual pathologically pursuing reward and/or relief by substance use and other behaviors.

Addiction is characterized by inability to consistently abstain, impairment in behavioral control, craving, diminished recognition of significant problems with one’s behaviors and interpersonal relationships, and a dysfunctional emotional response. Like other chronic diseases, addiction often involves cycles of relapse and remission. Without treatment or engagement in recovery activities, addiction is progressive and can result in disability or premature death.”

But even this nuanced understanding fails to differentiate between an addiction and a simple urge or craving. Let’s analyze each key component of the definition.

“Addiction is a primary, chronic disease of brain reward, motivation, memory and related circuitry. Dysfunction in these circuits leads to characteristic biological, psychological, social and spiritual manifestations.”

This section merely describes addiction as a disease of the brain that manifests in a multitude of dimensions and doesn’t explain what it looks like.

“…an individual pathologically pursuing reward and/or relief…”

The term pathologically is used to imply that the subject is suffering from a mental disorder, but this must be established beforehand in order to determine whether or not the subject is addicted, so it doesn’t really help us here. Also, this description defines pretty much every behavior, since we are constantly eating to relieve hunger, entertaining ourselves to relieve boredom and so on.

“Addiction is characterized by inability to consistently abstain, impairment in behavioral control, craving, diminished recognition of significant problems with one’s behaviors and interpersonal relationships, and a dysfunctional emotional response. Like other chronic diseases, addiction often involves cycles of relapse and remission. Without treatment or engagement in recovery activities, addiction is progressive…”

Again, abstaining from most healthy behaviors will also produce these symptoms. In addition, sometimes people can regularly engage in what many would consider an addictive behavior and suffer no ill effects. There are also people we might consider addicted who may suddenly abandon the behavior and immediately experience permanent freedom from the craving. Finally, the requirement that addiction must continue to progress excludes those who can maintain a constant level of craving even if it completely disrupts their life, so this should not be necessary for the definition.

“Without treatment or engagement in recovery activities… [addiction] can result in disability or premature death.”

Now here’s where it gets interesting. Most definitions of mental disorders include a similar requirement that the condition negatively impacts the subject’s life in some way. Many mental disorders are merely extreme forms of a natural, healthy behavior. We all feel some anxiety and paranoia, we all feel depressed at times, we’re all traumatized by our past and we all have mood swings. For most of us, of course, these symptoms are within manageable levels, so they are not classified as a mental illness. However, addiction is different. Addiction is something that we all naturally feel toward certain things. Hunger, for example, is a very powerful craving that can lead to any and all of the symptoms listed above, and yet we all know that hunger is not an addiction.

So there appears to be a hidden ingredient in addiction that these definitions have failed to include, and it’s something that we all understand. We all know people who suffer from addictions – we know what addiction looks like – yet we can’t articulate it. We know that the man who drinks his paychecks away is addicted, we know that the woman who keeps taking painkillers long after her surgery is addicted and we know that the teenager who stays up till 3 A.M. every night playing World of Warcraft is addicted. We know this regardless of any consequences they suffer, the growth of their addiction or the presence of a relapse. There’s just something fundamentally different between their cravings and something natural like hunger.

Here’s a new definition of addiction that attempts to grasp what makes addiction recognizable:

uh-dik-shuhn -noun

1. a powerful synthetic craving brought on by the introduction of unnecessary stimulus, or a natural craving magnified to an extreme degree.

This definition obviously doesn’t include the aforementioned requirement of a negative impact on the subject’s life. Although this is an important part of diagnosing a mental illness, this is not a medical definition; it is meant to help everyday people understand and recognize addiction.

Another issue that we’ve avoided until now is the question of which substances and behaviors are addictive. As we’ve already discussed, people can become addicted to a huge variety of substances and activities, but that doesn’t mean that these things are addictive, does it?

To answer this question, we could take a scientific approach and discuss substance dependencies, how habits are formed and chemical reactions in the brain, but we’ve already learned that such subjects are not relevant to our everyday experience. In other words, who cares whether or not we can prove that a substance or behavior has innate addictive traits?

One obvious answer to this question would be to inform people about which substances or behaviors to avoid, but we already know that people can become addicted to almost anything and that just because something is addictive doesn’t mean that it’s bad for us. In addition, the fact that something is both highly addictive and extremely bad for us doesn’t mean that society will reject it. After all, 87.6 percent of Americans consume alcohol, 15 percent of those people abuse it and around 90,000 Americans will die this year as a direct result of alcohol consumption. And yet we still promote drinking in films and television, advertise it in magazines and on billboards and sell it at grocery stores and at sporting events.

If you’re concerned about an addiction, don’t rely on the government or the Internet to help you make a decision. Listen to those who love you.

Proximity 7

“If you had seen what I’ve seen, then you’d understand.”

His gaze leaves the room as memory engulfs him. Hell is not a fantasy, but history. Visions of severed limbs and hollow faces wash over his fragile heart like lifeless bodies breaking against distant shores.

Our memories are as much a part of us as our DNA. Both define who we are, and just as we cannot edit our genetic code, we cannot rewrite our history. It’s impossible to escape that which we have endured, and so we shoulder burdens of fear, anger and remorse. We like to imagine that we choose who we are, what we think and how we feel, but the reality is that we are blessed, scarred creatures, refined and haunted by our past.

We dare not speak of war to the veteran or of love to the widow. We dare not grumble about bright lights to the blind or stagnant wages to the unemployed. We would never fret over a pimple in the presence of those who are disfigured, and we would not make light of a disability in the presence of someone who suffers its reality every day.

Those who have experienced intense, life-changing events or conditions often develop an increased sensitivity to such things, and when we’re in the presence of someone who has been through a traumatic experience, we also become sensitive to that experience.

This is why ads that depict starving, suffering children have little effect on healthy, first-world citizens. We certainly don’t doubt that those children are suffering or that they need help, but they are outside of our proximity. While televisions may transmit images and sounds effectively, they do not transmit experience, and they do not put us in the presence of those suffering children.

The problem is not that we have been desensitized to these things, since that would imply that we were once sensitive to them. Neither is the problem that we are insensitive, for we can certainly feel compassion, empathy and other emotions. The truth is that we aren’t sensitive to these issues because we have never been sensitized to them. Without living in poverty or experiencing starvation, we can’t help but underreact. We all know and agree that poverty and starvation are bad, but we know it in a theoretical, moral sense; we don’t know how it feels to be poor and starving.

The simple fact that something is true usually isn’t enough to invoke a reaction. This is why advertisers use music, drama, sex, controversy and comedy to provoke us. They’re trying to make us understand, make us imagine enjoying the ice-cool soda on a hot day or driving the elegant, powerful sports car down winding rural roads.

The impotence of mere truth is also why we find it much easier to hurt people we can’t see or don’t know. We’re much more likely to berate others on the Internet, steal from a faceless corporation, get angry at other drivers or even crush the dreams of unknown opponents in a competition. It’s not that we believe that these actions are any less wrong in such situations, but we don’t have to look someone we know in the eye when we commit them.

Although we might believe that an action is wrong, the severity and impact of that belief changes depending on our company. We never swear in front of our grandmother, we don’t say retard near those with mental disabilities and we wouldn’t complain about our big feet to someone with no legs. Our sensitivities are constantly fluctuating as we engage with different people. This basically means that each of us suffers from a form of dissociative identity disorder, since we are constantly changing from one person into another.

Some would argue that the problem is the increased sensitivity of those affected by traumatic experiences. They might say that it’s unfair and unreasonable to expect others to be sensitive when they only became sensitive through personal experience. This is a sensible conclusion, but it ignores a simple, yet important question: which level of sensitivity is correct?

Consider the story of a man who tragically lost his home and family in a fire. To him, having smoke detectors, a fire extinguisher and an emergency escape plan now seems extremely important. After dealing with his loss, he chooses to begin a crusade to educate others on fire safety. Because of his experience, the man’s concern is heavily weighted toward this issue. This doesn’t mean that his warnings are invalid or exaggerated, only that his ability to perceive the truth of the danger of fire was revealed by his proximity to such events. He is now subject to increased sensitivity and awareness of the issue. He no longer finds certain jokes or comments funny and reacts to imagery of fire and smoke differently than another person might. He is now subject to what are called trauma triggers – experiences that trigger a response from someone who has been traumatized.

Critics of this man’s position might point out his lack of concern for healthy living or earthquake safety. After all, while it’s true that fire safety is important, there are a multitude of threats endangering our families. These people would argue that the man’s perception has been tainted and that he now possesses an intense bias toward fire safety. In this civilized, scientific era, empirical knowledge is supreme, and those who allow bias, emotion and experience to influence their decisions are considered foolish, since we can easily show the inconsistencies in their positions.

Others would maintain that the man’s crusade for fire safety, though extreme, is an important contribution. They would agree that his reaction is emotional and possibly unreasonable, but they would never discourage or criticize him for focusing on only one issue. This is probably the most acceptable and popular position, but it ignores the profound possibility that this man’s vision is not distorted by his experience, but clarified. What if the man’s crusade to promote fire safety is merely a reasonable reaction from someone surrounded by people who don’t understand what he now sees so clearly? It may be that he is still blind to other dangers, but this only means that we are even more blind.

So what is the proper reaction from unsensitized folk, given what we’ve learned? Here are some options:

  1. Ignore the issue and continue adapting to the sensitivities of others without feeling them ourselves.
  2. Force ourselves into acts of charity regardless of our sensitivity to the issue.
  3. Immerse ourselves in suffering to gain sensitivity.

The first solution is obviously wrong because it’s boring and has the word ignore in it. Aside from that, it evades the question of whether or not suffering enlightens or misguides us.

The second option is a good choice if you’re interested in resuming your normal life while appeasing any sense of guilt or obligation to others. It’s true that acts of love done without love still make a difference, but they ignore the systemic problem of a lack of concern for others, which will eventually lead to increased suffering.

The final choice seems frightening and overwhelming, but we may be able to sensitize ourselves without bringing suffering on ourselves. As we mentioned earlier, simply being near someone is enough to temporarily grant us sensitivity, so perhaps we can permanently sensitize ourselves by allowing them to share their lives with us. This is possible because there are more ways to experience something than personally living through it.

The image above shows that there are actually four levels of experience. Those in the first, outermost level have absolutely no experience with the subject. They’re usually fearful, curious and skeptical of the new experience, since they haven’t even heard of it before.

The second level is the one that most of us would identify with, and it’s one of the two we’ve been discussing. All of us have heard of cancer, poverty, AIDS and political persecution, but we’ve never experienced these things, and we aren’t close with anyone who has. We may have casual acquaintances or distant relatives who have lived through these experiences, but we don’t know them well enough to understand their situations.

Let’s skip ahead to the fourth and innermost level, which is another one we’ve been talking about. Those in this level have personally experienced suffering and exhibit the increased sensitivity we’ve already discussed.

The third level of experience is the path to sensitivity we’re looking for. People with second-hand experience have shared a close relationship with someone in the fourth level. An example might be a person whose immediate family member suffered a serious illness. Some might argue that the line between first- and second-hand experience is blurry, and that’s precisely the point. By sharing in the suffering of others, we can permanently adopt their experience and sensitivity. We have done more than witness the suffering of another; we have endured it with them.

Now let’s return to the question of whether suffering enlightens or misguides us. First, by asking such a question, we’re making an assumption that there is a default perception. But just as there is no correct price of fuel, there is no correct way to perceive the world. Again, some would say that the correct understanding is one of unbiased empiricism, but a purely empirical worldview would remove love, honesty, morality and human value from the equation. Without these things, we cannot discern whether or not suffering is bad or human extinction is good. On top of that, empirical knowledge is merely something that some of us prefer, so someone who promotes empirical knowledge is actually revealing their bias toward empiricism.

To say that there is no correct way to perceive the world seems a bit extreme, but we’re not talking about moral relativism, religious pluralism or the rejection of scientific theory. We’re not here to debate the existence of reality; we’re talking about the lens of experience through which we view the world. To put it scientifically, we’re talking about the combination of chemicals and electrical impulses in the brain that make up our attitude, outlook, emotions, values and overall state of mind.

In contrast to the more extreme examples of suffering we’ve used, here are some minor influences that are constantly altering our perception:

  • Confusion
  • Pain or discomfort
  • Lack of sleep
  • Hunger or thirst
  • Drugs and alcohol
  • Caffeine
  • Exercise
  • Sexual arousal
  • Joy
  • Anger
  • Boredom
  • Solitude
  • Social awkwardness
  • Confidence
  • Uncertainty
  • Greed
  • Guilt
  • Obligation
  • Competition
  • Inspiration

As we can see, our outlook on life is changing every moment. Even drinking a glass of water or taking a few seconds to look out the window can make us feel better, and a simple compliment from a stranger can change what we think of ourselves. Likewise, staying up too late or encountering an irritating person can put us off, and receiving a bill in the mail can change our attitude toward finances.

A person may try to argue that our perception is correct when we are free from all of these influences, but not only is it extremely unlikely that we would be able to attain such a state, a position of perfect balance would also be the result of external influences. Even the air we breathe makes us who we are, for variations in the oxygen content of our environment affect our physiology. Perhaps oxygen is actually a hallucinogen that causes us to imagine that our existence is meaningful. Whatever the case, our perception is obviously unstable.

In addition to our normal fluctuation, intense experiences induce brief states of extreme emotion. Having valuable property stolen can affect us for life, but it also affects us in a more powerful, temporary way when it initially occurs. Learning that we have lost important, irreplaceable items will cause us to realize the insignificance of our other problems. But then learning that a family member has been diagnosed with a life-threatening disease makes us immediately forget our stolen property.

So not only is our perception vulnerable to minor temporary changes and powerful permanent changes, it’s also subject to even more powerful temporary changes brought on by the same experiences that affect us for life. And in these moments of extreme emotion, are we really experiencing clarity? And if we are, how can we say that this is the correct perception, since it’s impossible to maintain such a view? Well, if there’s no correct way to perceive the world, and if our perception is constantly and sometimes drastically changing, it would seem impossible to determine whether traumatic experiences enlighten or misguide us. But just because we can’t be the best, it doesn’t mean we can’t be better. Perhaps instead of attempting to identify a perfect state of understanding, we should merely be seeking to learn what we can from those who suffer.

Since determining a complete and sound moral code is beyond the scope of our discussion, we can temporarily suspend philosophical inquiry and appeal to the general understanding that whatever results in the greatest good for the most people is right and that we should treat others the way we want to be treated. Now if no lens of perception is correct, then perhaps we should view the world in whatever manner results in the greatest good and increases our ability to treat others the way we would want to be treated. In other words, our perception should prioritize the greater good and cause us to feel empathy for others.

So how do we do this? A good strategy would be to look at the world through the eyes of as many people as possible. By sharing in their suffering through second-hand experience, we can align our priorities with theirs and become genuinely sensitive to their situation. In doing so, we generate empathy, which causes us to treat them, and others in similar situations, with the same sensitivity that we now share. And in addition to all this, we should never forget to be sensitive to the unsensitized, for they are merely seeds that have not yet sprouted.

Stop changing who you are based on who’s around you, and stop trying to do what’s right when you don’t care. Get sensitized.


There has been a recent shift in favor of things considered natural. People are choosing clothing, food, cleansers and building materials that come from simple, natural sources. The supposed purposes for this trend are the preservation of the environment through the use of renewable resources and the promotion of our health and well-being.

An example of this way of thinking is the Paleolithic diet. This eating regimen is built on the premise that we humans, like other creatures, should eat what is natural for our species to eat, which apparently is whatever our ancestors evolved to eat during the Paleolithic era. There are at least three problems with this idea.

The first is that we are only presuming to know what our ancestors were eating at the time of their most recent evolutionary dietary transition. The second issue is that what is natural is not always the superior choice (most medicines are not natural). The third problem is that the evolution of humans was drastically altered when we became self-aware. We are no longer wild, for we took control of our evolutionary destiny and, along with it, the destiny of the creatures we domesticated.

We’ve already discussed the natural state of humans and how it is largely determined by the presence of human society, but what is the natural state of an animal? More specifically, what is the natural state of a tamed animal, if there is such a thing? There are four general responses to this question, each embodied by a group of people.

The first group believes that animals, or at least the more important animals, must remain wild. They would define a wild animal as one living in its natural habitat without human interference. Those from this camp would argue that what is best for the animal is what is natural, even if that means a high risk of starvation, predation, disease, isolation, etc. To them, the concept of owning and using animals for our benefit is comparable to slavery.

The second group has no concern for the animal’s nature, seeking only to cater to the whims of their captive critter. These are pet owners who will purchase their pets extravagant toys, food and even clothing in an effort to appease them. Rather than the animal functioning as companion or slave, it is essentially elevated to the level of a human child in a demented effort to satisfy lingering or neglected parental instincts.

The third group tolerates the captivity and ownership of animals, but also believes that animals were not meant to exist in the human world. Because of this, their creatures are given ample room to roam and are often fed a diet that resembles what they would eat in the wild. These people attempt to respect animals even as they profit from and consume them.

The fourth and final group sees animals as a commodity and cares nothing for their natural state or desires. To them, animals are merely a resource to be harvested, like plants. And much like plants, they are often packed closely together and only given what is necessary to grow.

So who is right? Well, the answer depends not only on how we value animals, but also on our understanding of what it means to be wild. The first group would argue that animals are wild by nature, meaning that their natural and therefore best state is one of freedom from human intervention. This sounds like a wonderful idea, but we know from examining the nature of humans that we share a similar state of natural wildness, yet few would argue that feral humans are our finest incarnation. The second and third groups both acknowledge that animals have a natural wild state, but also believe that their lives are improved through taming. The final group has no interest in what it means to be wild apart from how it can benefit their ability to cultivate their creatures and maximize profit. Few would argue that this last approach is the most beneficial for the animal.

So the crux of the disagreement is whether or not animals benefit from being tamed. But since most animals are unable to express their emotions in ways we can understand, especially wild ones, the answer is largely left to our interpretation. However, there are some who argue that it’s okay to tame some species but not others. Let’s explore this claim.

We often hear stories of pets (usually exotic ones) who turn on their masters, attacking them for no apparent reason. This sparks comments like, “that’s what happens when you keep a wild animal in your house,” implying that some animals are wild and others are not. In a historical sense, this is somewhat accurate, since there are certain species that are traditionally tamed or domesticated (bred by humans for certain purposes). However, to assert that some animals remain wild after taming is both a semantic and logical error.

Animals, like humans, have two basic behavioral states: wild and tame. Since we described a wild animal as one that is free from human intervention, then a tame animal must be one that has integrated with humans. Here are some simple statements that may help us understand the situation:

  • A creature cannot be both wild and tame.
  • All creatures are inherently wild.
  • A wild creature, when properly tamed, loses its wildness.
  • A poorly or partially tamed creature may retain a degree of wildness.
  • Some creatures are more difficult to tame than others.

Now that we share an understanding of the situation, we can dissect the definition of tame. Taming is traditionally defined as the process by which humans integrate animals into their own society, but this does not explain what’s really going on. When we tame an animal, we raise its social compatibility. But this raises some interesting questions: is the process of elevating a human to be compatible with human society not a form of taming as well? If so, is a wolf teaching its pups to behave like wolves also taming them? What about when a human is raised by wolves to integrate with wolf society? A more holistic definition of taming would be the process by which a creature of one species is attuned to the society of another species, but this merely confuses the matter.

Since humans are the highest form of creature and the only species capable of understanding the concept of taming, we perceive a tamed animal as one that is attuned to our society. However, a wolf might consider an adopted squirrel tamed, if it were able to contemplate such things, while we would not. And if a wild wolf is one raised by wolves, then a wild human must be one raised by humans. This is illogical, however, because we traditionally define wildness as an inherent quality of untamed creatures and because we consider ourselves tame; both of these things can’t be true. If taming is the attunement of one species to another species, then humans can’t be tame.

We must use the traditional definition of taming as the process by which a creature of any species is attuned to human society. But that raises the question of how a higher form of intelligence, such as an advanced alien civilization or a race of genetically-enhanced humans, might perceive us. To them, we would be wild beasts in need of taming. That brings up another interesting question: if taming is the attunement of a creature to human society, can we tame each other? Indeed, it was common knowledge in colonial times that native tribes were primitive, lower races in need of taming. The rejection of this idea may be tied to our growing affection for natural things, since it’s easy to argue that these tribes could have benefited from Western medicine and technology.

In any case, taming animals causes enough debate. Just remember that a pet wolf is not a wild animal.


One of the most popular and accessible forms of art involves the creation of two-dimensional imagery on a surface. This is called drawing. Drawing can be done professionally or casually, for profit or personal satisfaction. It can also involve a number of different mediums, including the more traditional pen and paper or oil and canvas, modern instruments like the computer or Magna Doodle and even human skin.

Tattooing dates back thousands of years and spans many cultures across the globe. Each society’s tattoos are visually distinct, employing unique color, content, size, location and pattern. In addition, these designs can serve many different purposes. For the indigenous Polynesians of New Zealand, tattoos were an indication of higher rank or status and also signified a rite of passage into adulthood. During the Kofun period in Japan, tattoos were placed on criminals as a form of punishment. Many cultures use tattoos for religious or spiritual purposes, to honor the dead, to intimidate enemies, to commemorate marriage or simply to appear more attractive.

In modern Western culture, the design and purpose of tattoos are not standardized, but rather determined by the individual. And as with many of our traditions, including naming our children, we tend to borrow from other cultures in an attempt to find purpose and appear unique and sophisticated. In our quest for meaning, we blend Polynesian tribal patterns with Japanese kanji, dragons with crosses, yin-yangs with Bible verses, skulls with Gothic lettering as well as a myriad of other sacred symbols, producing an amalgamation of ancient art that would likely offend and confuse each culture from which we borrowed. Because our population is multicultural and our society prioritizes the individual, each of us is permitted to create our own reality, religion and tattoo design. But in our apparent quest for meaningful markings, we have forgotten one important fact, the true reason we actually get tattoos: because we want to.

Whenever we ask someone about the meaning of one of their tattoos, the explanation we get usually seems valid. They’ll explain to us how a rose symbolizes their grandmother, who loved roses, or how the number 27 is lucky because all of their children were born on the 27th day of the month, or how a bloody reaper-skull wearing a crown of thorns reminds them not to fear death but to live life to the fullest every day. These explanations may all be rooted in truth, but there are many ways to commemorate important people or dates or to remind ourselves of mantras, so why did they choose to draw on themselves? Why not just get a picture framed or a piece of jewelry made?

Many would answer that the nearly inescapable permanence of tattoos adds a dimension of commitment to the expression, and that’s true, but let’s consider what actually motivates someone to get a tattoo. If the true cause were the death of a loved one, then the person would think something like, “How can I most effectively express my sorrow? Perhaps I should get a tattoo,” but this is inconsistent with the massive number of people who get multiple tattoos and the growing number of those who identify themselves as tattoo addicts. The truth is that every single person who walks into a tattoo parlor wants a tattoo. They may want to commemorate their dead grandmother or immortalize their mantra, but much like the way we choose names for our children, they ultimately decide on the tattoo medium because they like it. After all, no one ever reluctantly got a tattoo simply because they figured it was the most effective medium.

This isn’t meant to discredit or insult those who have tattoos, since those who choose other mediums also do so because they prefer them. But it’s important to be honest with ourselves and to understand our true intentions, especially when we’re doing something that cannot be easily undone. There’s always a chance that our tattoo will come out wrong because of mistranslation or poor-quality artwork, that the tattoo will degrade over time, that the shape of our body will change, that we’ll change our opinion of the subject or that our taste in art will simply evolve.

The idea of sewing a pair of pants to our legs is ridiculous, but even if it were safe to do so, it would seem absurd to imagine that we would always enjoy wearing the same pair of pants. And yet we somehow convince ourselves that we will always love our tattoos, that we won’t be ashamed of them and that the issues that our future selves will face are somehow detached from the choices we make now. Deep down we all know that permanently marking our bodies for aesthetic purposes is foolish. But those who want tattoos are still going to get them, so in light of what we’ve learned, let’s set a few rules in order to minimize the risk of regret and avoid a design that offends another culture or doesn’t make any sense.

  1. Don’t choose someone’s name or face. You might end up feeling differently about them.
  2. Don’t choose another language. There’s always a chance of mistranslation.
  3. Don’t choose a slogan or mantra that may become unpopular.
  4. Do a spell check.
  5. Choose an area of the body that ages well.
  6. Choose an area of the body that can be easily hidden by clothing.
  7. Get a temporary tattoo and see if you like it.
  8. Finish the tattoo.

In other words, don’t get this:

No one who didn’t want a tattoo ever got one.