The Subtle Art of Not Giving a F*ck by Mark Manson

Most of our unhappiness comes from the attention we give to unimportant problems. The mistake many self-help books make is trying to convince us that we can overcome or avoid problems entirely. Manson retells the story of the Buddha’s realization that riches and poverty alike can be sources of unhappiness and of one’s problems. One of the book’s central points is that in order to be happy we sometimes have to be sad, angry, or miserable, and to face rejection. In other words, happiness comes from solving our problems, not avoiding them. Since having some problems in life is inevitable, we should give our attention to better problems.

We need to decide which problems are worth our time and effort, and rejection and failure are a necessary part of solving them. Suppose your problem is that you want to develop the confidence to ask out more women or men. Why do most people find this to be a problem? They may say they’re shy or don’t know where to begin, but the real issue is that they’re afraid of rejection and failure. And why are they afraid of rejection? Because many of them will take it as a message that they aren’t good enough as people.

People have different ideas of what counts as a problem, and as success, in the first place, depending on their values. Our values form our metrics: how we judge our successes and failures. Some people might feel successful if they achieve a good family life at home, while others might feel successful only if they’ve made millions of dollars and own a yacht. These two definitions of success reflect different values and thus different metrics for measuring success. To select better problems and realize what really matters, we have to recognize what we actually value; if it turns out our values are bad and harmful, we should try to realign ourselves with better ones. Only then can we choose better problems for ourselves.


Thinking, Fast and Slow by Daniel Kahneman

During the 1970s, most social scientists followed the path of the ancient Greek philosophers in believing that man is a rational being who only occasionally slips due to emotion. The psychologists Daniel Kahneman (who would later win the Nobel Prize for this work) and Amos Tversky challenged this view by documenting mistakes in people’s thinking that arise from everyday mental processes rather than from emotional factors. In their famous article, “Judgment Under Uncertainty: Heuristics and Biases,” which originally appeared in Science, they describe three mental shortcuts our brains naturally take that lead to poor assessments of probability.

1) The representativeness heuristic leads a person to judge a connection as more likely because of an imagined essential characteristic or categorical association. For example, if Steve is described as shy and people are asked to rank how likely it is that he belongs to each profession on a list, studies show most people will rank librarian as his most likely occupation, even though farmers are far more frequent in the overall population. People ignore the base rate, which suggests that Steve, a person selected at random from the population, is more likely to be a farmer. Instead they rely on an irrelevant personal characteristic and draw on the stereotype that librarians are shy (a rough Bayes sketch after this list shows why the base rate matters).

2) Availability bias leads us to judge an event as more or less likely based on how easily we can recall an example of it. If my uncle won the lottery, I am more likely to overestimate my own chances of winning the lottery. If no women in my family have had breast cancer, I am likely to underestimate how common breast cancer is for the average person. If I see a house down the block burn down with my own eyes, I am more likely to believe my own house could burn down than if I had merely read about a fire in the newspaper.

3) Anchoring and adjustment bias describes the way a starting estimate or value “anchors” us when we are later given the chance to adjust it. In experiments, one group is given a low starting value and another a much higher one, and both are then asked to adjust toward what they think is the correct answer. The group that started high ends up with much higher estimates and the group that started low ends up with much lower ones, suggesting that an initial value assigned at random affects how far the adjustments go. The first value “anchors” the adjustments. Basically, we rely too heavily on the first piece of information we receive.
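Coming back to the first shortcut: a quick, purely illustrative Bayes calculation helps show why ignoring the base rate is a mistake. The numbers below are assumptions invented for this sketch, not figures from Kahneman and Tversky’s studies; the point is only that when farmers greatly outnumber librarians, a shy person drawn at random is still more likely to be a farmer.

```python
# A minimal, hypothetical Bayes sketch for the "shy Steve" example.
# All numbers are invented for illustration, not data from the original studies.

p_librarian = 0.01             # assumed share of librarians in the population
p_farmer = 0.20                # assumed share of farmers (far more numerous)
p_shy_given_librarian = 0.60   # assumed chance a librarian fits the "shy" description
p_shy_given_farmer = 0.10      # assumed chance a farmer fits it

# Joint probability of being shy AND having each profession
shy_librarian = p_librarian * p_shy_given_librarian   # 0.006
shy_farmer = p_farmer * p_shy_given_farmer            # 0.020

# The two posteriors share the same denominator, so the joint terms are enough to compare.
print(f"P(librarian | shy) is proportional to {shy_librarian:.3f}")
print(f"P(farmer | shy) is proportional to {shy_farmer:.3f}")
print(f"Shy Steve is roughly {shy_farmer / shy_librarian:.1f}x more likely to be a farmer.")
```

In other words, the stereotype tells us how likely a librarian is to be shy, but the question asks how likely a shy person is to be a librarian, and that answer depends heavily on how many librarians there are in the first place.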

Other researchers, such as Norbert Schwarz, also explored the availability bias. In one of his experiments, people ranked how assertive they thought they were after listing either six or twelve particular instances in which they had been assertive. Paradoxically, those who listed only six ranked themselves as more assertive. When you have to list a larger number of instances, they become harder to retrieve from memory, so you feel less assertive, despite having produced more instances and thus more evidence of your assertiveness.
Another example of the availability bias in action comes from a survey by Slovic and Lichtenstein in which participants compared two potential causes of death and judged which was more likely. They found that people badly misjudged the relative probabilities. For example, participants ranked dying in an accident as more likely than dying from a stroke, when in reality you are about twice as likely to die from a stroke as from an accident. The culprit seems to be media coverage, which feeds the availability bias: because automobile accidents are reported far more often than strokes, they give the false impression of occurring more often.

In Thinking, Fast and Slow, Kahneman expands on this earlier research to explain what causes us to make all these mental errors. According to the dual-process theory expounded in the book, we have two ways of mentally processing the world, which he calls System 1 and System 2. System 1 is associated with intuition. It is fast and at times unconscious, and it deals with the thoughts, impressions, and judgements that occur automatically. It is responsible for noticing simple relations such as one person being taller than another, recognizing that 17 × 24 is a multiplication problem, or navigating from your upstairs bathroom down to your kitchen. The key characteristic is that you don’t need to deliberately think about any of these things: if I see a green shirt or the symbol 4, my brain registers the concepts “green” and “four” whether I want it to or not. System 2, meanwhile, is deliberate and slow. It is associated with rationality, self-control, attention, careful decision-making, and effortful mental activity. It can follow rules (such as learning the rules of a board game you haven’t played before), compare advanced characteristics between objects (such as listing the pros and cons of a new political policy against an old one), and make deliberate choices (such as choosing a healthy salad instead of a donut).

This might make the two mental systems sound opposed, but in reality they work together. System 1 monitors your daily situation and handles most everyday problems fairly efficiently; it calls on System 2 only when greater mental effort is needed. System 2, in turn, can reject the impressions and judgements System 1 forms quickly, but in most cases it endorses those initial impressions, and this is how we form beliefs. If you have ever met someone whose ideas seemed out there and obviously wrong to you, yet who could still offer long-winded, elaborate rationalizations when asked to justify them, you have witnessed System 2 endorsing the impressions of System 1. Before you criticize such a person, though, remember that they are probably thinking the same thing about you and your crazy ideas; you are just as prone to these same biases.

The problem with System 1 is that it is prone to biases and mistakes. When a situation does not provide enough information, System 1 jumps to conclusions and constructs a coherent narrative where none exists. Indeed, rather than weighing the quality and quantity of the evidence, System 1 places more weight on how coherent a narrative it can form. If we cannot answer a hard question, we substitute an easier, similar question and answer that instead. Kahneman coins a term he repeats often in the book as a defining feature of System 1: WYSIATI (What You See Is All There Is). System 1 is terrible at considering ideas, interpretations, or perspectives outside its limited view. To demonstrate this, Kahneman recalls a study done by his friend and collaborator, Amos Tversky. Participants were given background material about an arrest that occurred in a store after a confrontation between a union organizer and the store manager. In addition to this background, which contained only the facts of the events, one group heard a presentation by only the union’s lawyer, another by only the store’s lawyer, and a third by both. The union’s lawyer depicted the arrest as an intimidation tactic against the union, while the store’s lawyer argued the organizer’s talk was disruptive and the manager was within his rights to have him arrested. Despite knowing they had heard only one side of the story, the one-sided groups were more confident in their judgements about the situation than those who heard both sides. Hearing one side without a conflicting interpretation of the same events makes the information more coherent, and coherent information is more easily accepted by System 1. System 1 does not like ambiguity because it interferes with coherence, and even though the participants knew there was another side and could easily have imagined its arguments, the data suggest that is not what we naturally do. Our minds want to take the easy way out.

Our natural mental state is one of cognitive ease; we want to solve problems with the least energy and effort. This is why we tend to adopt whatever is familiar: it is easier. Research by Larry Jacoby and others has shown that you can induce mental illusions and false beliefs (such as believing made-up celebrity names are real) simply by creating an impression of familiarity. Repetition, even of false ideas, creates a sense of familiarity that System 1 tends to believe uncritically: if something feels familiar, we tend to believe it is true. Robert Zajonc studied this mere exposure effect by placing random Turkish words in a student newspaper and then sending questionnaires to students who read the paper. Words that had appeared more frequently carried more positive connotations for the students, even though they did not speak Turkish and had no idea what the words meant. Merely being exposed to random words more often increased their positive feelings toward those words: mere exposure increases familiarity, which in turn increases how positively we feel about something.

Experiments by Roy Baumeister suggest that we have a limited pool of willpower: if we use System 2 to exert self-control at one moment, we are less able to resist the next temptation (although some newer research has called this idea of ego depletion into question). Likewise, cognitive overload can interfere with System 2. It occurs when we try to perform too many complex tasks at once (like solving a tricky math problem while switching lanes in heavy traffic); we simply cannot give the necessary mental attention to all of them simultaneously.