The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach

The authors tackle three central ideas: why people are ignorant, why they think they are knowledgeable when they are actually ignorant, and why it matters to recognize that knowledge is communal. They use a combination of scientific studies, philosophy, and real-world examples to address these questions.

The modern world is extremely complex, filled with technologies that make life very convenient. However, how much does the average person really understand about these technologies? Oh sure, you know how to push the button on your Keurig machine and brew a quick cup of coffee. But if you opened up the Keurig and dissected it, would you know what each and every part does? Do you really understand how it works? For that matter, do you know how your toilet works? Yes, you can flush the toilet and use it for your practical purposes. But what happens once you push that lever? The average person probably could not give a detailed causal explanation of what happens underneath the toilet once you flush, how each component works, and the scientific principles behind it.

People often overestimate their knowledge of how the world works. They think they know how the toilet works, how the Keurig works, or even all the nuances behind complex political policies and social problems. As many studies have shown, people experience what psychologists call an illusion of explanatory depth. One way psychologists have studied this phenomenon is by having people rate on a scale from 1 to 7 how well they think they understand how something works (like a zipper, a toilet, or a computer), then asking these same people to describe in detail all the steps of how the object works, and finally asking them to rate their knowledge a second time on the same scale. Typically, people rate their knowledge lower after they are forced to explain how something works and realize they can't.

The illusion of explanatory depth goes beyond technical knowledge. Similar studies have been done involving people's positions on controversial policy issues. People rated how well they thought they understood a particular political policy. Then they were asked to generate causal explanations and explain, step by step, how each of these policies actually worked. When many people realized they couldn't do this, their positions became less extreme. The authors also mention that some studies included a control condition with the same basic procedure, except that instead of being asked to explain in detail how a policy worked, people were simply asked to give their reasons for holding their position on the policy issue. In those instances, there was no shift in attitude.

Humans are causal reasoners. As outlined in dual-processing theory, we evolved two types of reasoning: fast versus slow, intuitive versus reflective, shallow versus deep. The two types of reasoning can lead to different conclusions. Asking people to reflect on how something works by giving detailed causal explanations seems to activate the more reflective type of thinking and forces them to confront their lack of knowledge. Tests such as the Cognitive Reflection Test have shown that some people are naturally more deliberative, reflective thinkers and thus less prone to illusions of explanatory depth.

Often it's the very complexity of our knowledge and technology that fuels the problem. The internet is a wonderful resource, giving us access to a great deal of the world's knowledge, but it can also give us the illusion that we know that information ourselves. People often confuse the knowledge in their heads with the knowledge outside of themselves; they mistake knowing where to find information for actually possessing it.

Experts are less prone to mistakes in knowledge, especially within their own field. After all, that is why they are experts: they have invested a huge amount of time in learning the information in a field and its methods of knowing, and they specialize in particular sub-areas. However, the authors note that even experts and scientists are prone to illusions of knowledge, which can lead to catastrophic blunders. One example is the Castle Bravo detonation, where scientists underestimated the power of the nuclear reactions involved. Likewise, many individual academics, and sometimes entire groups, fail to accept new ideas that don't conform to previous conceptions. It takes long periods of time for those new ideas, if they have validity and evidence behind them, to replace the old.

It is also important to realize that the financial security of many scientists, archaeologists, historians, theologians, and other academics depends on convincing the larger world and their fields of their importance. In other words, we always need to keep in mind that even academics of various stripes aren't completely unbiased, neutral individuals. They, too, get a paycheck, usually tied to their expertise. That engineer has reasons to convince you the bridge needs repair; that chemist working for the pharmaceutical company has additional reasons for you to buy that new medicine. At the same time, while acknowledging these facts, the expert is still our best bet for getting accurate knowledge. Experts belong to an entire field, an entire community of other experts who will challenge and test their ideas, sift through their results, and perform their own studies to compare with previous results. Likewise, individual experts are not only masters of their particular sub-domains, but also understand what they don't yet know about their direct area of study, or even about other areas within their field.


This leads to the most important idea of the book: knowledge is inherently communal. We can't be experts in everything; there is simply too much to learn in a modern society. Few people can master even one field, let alone multiple fields. Therefore, we need to accept the inevitability of ignorance, not only the ignorance of other people, but our own. As the authors point out, the issue isn't ignorance itself, which can't be avoided; it's our failure and unwillingness to recognize our ignorance. However, we don't have to know everything, since we belong to a community of knowledge. There are other people, experts, who can answer questions when they arise, and there are areas where we ourselves are the experts. We each have a part to play in the community of knowledge. This has important implications for education. Most people conceptualize the purpose of education as creating independent thinkers or as teaching key facts from various important fields of knowledge. Instead, we should become experts in areas that interest us, develop general critical thinking skills to evaluate areas we don't know much about, and learn how to locate information when we do need it.


The Subtle Art of Not Giving a F*ck by Mark Manson

Most of what causes our unhappiness is the attention we give to unimportant problems. The mistake of a lot of self-help books is that they try to convince us we can overcome or avoid problems completely. Manson retells the story of the Buddha's revelation that both riches and poverty can be sources of unhappiness and problems. One of the most important points of the book is that in order to be happy we sometimes have to be sad, angry, or miserable and face rejection. In other words, happiness comes from solving our problems, not avoiding them. Since having some problems in our life is inevitable, we should give our attention to better problems.

We need to decide which problems are worth our time and effort. Rejection and failure are a necessary component of solving problems. Suppose your problem is that you want to develop the confidence to ask out more women or men. Well, why do most people find this to be a problem? They may say they're shy or don't know where to begin. The real issue is that they're afraid of rejection and failure. But why are they afraid of rejection? Many of these people will take rejection as a message that they aren't good enough as people.

People have different ideas of what constitutes a problem, and success, in the first place, based on their metrics and values. Our values form our metrics: how we judge our successes and failures. Some people might feel successful if they achieve a good family life at home, while others might feel successful only if they've made millions of dollars and own a yacht. These two definitions of success reflect different values and thus different metrics for judging what counts as success. In order to select better problems and come to realize what really matters, we have to recognize what we really value, and if it turns out our values are bad and harmful, we should try to realign them with better ones. Only then can we choose better problems for ourselves.

Thinking, Fast and Slow by Daniel Kahneman

During the 1970s, most social scientists followed in the path of the ancient Greek philosophers in their belief that man is a rational being who occasionally slips from time to time due to emotions. The Nobel Prize-winning psychologist Daniel Kahneman and his longtime collaborator Amos Tversky challenged this view by documenting mistakes in people's thinking that arose from everyday mental processes rather than emotional factors. In their famous article, "Judgment Under Uncertainty: Heuristics and Biases," which originally appeared in Science, they describe three mental shortcuts our brains naturally take that lead to poor assessments of probability.

1) Representativeness is when a person judges that some connection is more likely because of an imagined essential characteristic or categorical association in their mind. For example, if Steve is described as shy and people are asked to rank the probability that he belongs to each profession on a list, studies show most people will rank librarian as his most likely occupation, even though farmers are far more frequent in the overall population. People ignore the base rate suggesting that Steve, a random person selected from the population, is more likely to be a farmer. Instead they rely on the personal description and draw on the stereotype that librarians are shy to reach their conclusion.
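To make the base-rate point concrete, here is a minimal sketch in Python using made-up illustrative numbers (the population counts and shyness rates below are assumptions for illustration, not figures from the actual studies). Even if shyness is assumed to be far more common among librarians than among farmers, a randomly chosen shy person is still more likely to be a farmer simply because farmers vastly outnumber librarians:

    # Made-up illustrative numbers: assume farmers vastly outnumber librarians,
    # and assume shyness is much more common among librarians than among farmers.
    farmers = 2_000_000              # hypothetical number of farmers
    librarians = 100_000             # hypothetical number of librarians
    p_shy_given_farmer = 0.10        # assumed share of farmers who are shy
    p_shy_given_librarian = 0.50     # assumed share of librarians who are shy

    shy_farmers = farmers * p_shy_given_farmer            # 200,000 shy farmers
    shy_librarians = librarians * p_shy_given_librarian   # 50,000 shy librarians

    # Among all shy people in these two groups, what fraction are librarians?
    p_librarian_given_shy = shy_librarians / (shy_librarians + shy_farmers)
    print(f"P(librarian | shy) = {p_librarian_given_shy:.2f}")      # 0.20
    print(f"P(farmer | shy)    = {1 - p_librarian_given_shy:.2f}")  # 0.80

Even with shyness assumed to be five times more common among librarians, the sheer number of farmers means a shy Steve is still four times more likely to be a farmer under these assumptions; the stereotype swamps the base rate only in our heads, not in the arithmetic.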

2) Availability bias leads us to judge an event as more or less likely based on how easily we can recall an example. If my uncle won the lottery, I am more likely to overestimate my chances of winning the lottery. If none of the female members of my family have had breast cancer, I'm likely to underestimate how common breast cancer is. If I see a house down the block burn down with my own eyes, I am more likely to believe there is a greater chance my house will burn down than if I had read about a house fire in the newspaper.

3) Anchoring and adjustment bias involves the way a starting estimate or value "anchors" us when we are given a chance to adjust it. In experiments, one group is given a low starting value for an estimate and another group a much higher one, and both are then asked to adjust to what they think is the correct value. The group given the higher starting value ends up with much higher estimates and the group given the lower starting value ends up with much lower ones, suggesting that an initial value assigned at random affects how far the adjustments go. The first value "anchors" the adjustments. Basically, we rely too much on the first piece of information we receive.

Other researchers, such as Norbert Schwarz, have also explored the availability bias. In his experiment, people ranked how assertive they thought they were after listing either six or twelve particular instances when they had been assertive. Paradoxically, those who listed only six ranked themselves as more assertive. When you have to list a larger number of instances, they become harder to retrieve from memory, so it feels like you're less assertive, even though you have produced more instances and thus more evidence of your assertiveness.
Another example of the availability bias in action can be seen in a survey by Slovic and Lichtenstein in which participants had to compare two potential causes of death and judge which was more likely. They found that people misjudged the probability of dying from one cause compared to another. For example, people ranked dying in an accident as more likely than dying from a stroke. In reality, you're about twice as likely to die from a stroke as from an accident. The culprit seems to be media coverage, which creates an availability bias: by covering automobile accidents far more frequently than deaths from strokes, the media gives the false impression that accidents occur more often.

In Thinking, Fast and Slow, Kahneman expands on this earlier research to explain what causes us to make all these mental errors. According to the dual-processing theory expounded in the book, we have two ways of mentally processing the world, which he calls System 1 and System 2. System 1 is associated with intuition. It is fast and at times unconscious. It deals with thoughts, impressions, and judgements that occur automatically. It is responsible for noticing simple relations, such as one person being taller than another, recognizing that 17 x 24 is a multiplication problem, or navigating from your upstairs bathroom down to your kitchen. The key characteristic is that you don't need to deliberately think about any of these things. If I see a green shirt or the symbol 4, my brain will register the concepts green and four whether I want it to or not. System 2, meanwhile, is deliberate and slow. It is often associated with rationality, self-control, attention, careful decision-making, and effortful mental activities. It is capable of following rules (such as learning the rules of a new board game you haven't played before), comparing advanced characteristics between objects (such as making a list of the pros and cons of a new political policy compared to an old one), and making deliberate choices (such as choosing to eat a healthy salad instead of a donut).

This might sound like the two mental systems are opposed, but in reality they work together. System 1 monitors your daily situation and can solve most of your everyday problems relatively efficiently; it only calls on System 2 when greater mental effort is needed. Likewise, System 2 can reject impressions and judgements formed quickly by System 1. In most cases, however, it endorses those initial impressions, and this is how we form beliefs. If you've ever met someone whose ideas seemed out there and obviously incorrect to you, yet who, when asked to justify those beliefs, was still able to offer long-winded and complicated rationalizations, you've witnessed an example of System 2 endorsing impressions from System 1. But before you criticize such a person, don't forget they're probably thinking the same thing about you and your crazy ideas! You're just as prone to these same biases.

The problem with System 1 is that it is prone to biases and mistakes. When a situation doesn't provide enough information, System 1 will jump to conclusions and attempt to construct a coherent narrative where none exists. Indeed, instead of judging by the quality and quantity of evidence, System 1 places more weight on how coherent a narrative can be formed. If we fail to find the answer to a hard question, we substitute an easier question that is similar and answer that instead. Kahneman coins a term that he repeats often in the book as a defining feature of System 1: WYSIATI (What You See Is All There Is). System 1 is terrible at considering ideas, interpretations, or perspectives outside of its limited view.

To demonstrate this, Kahneman recalls a study done by his friend and collaborator, Amos Tversky. In the study, participants were each given background material about an arrest that occurred in a store after a confrontation between a union organizer and the store manager. In addition to the background material that all participants received, which contained only the facts of the events, one group heard a presentation by only the union's lawyer, one group heard a presentation by only the store's lawyer, and another group heard both. The union's lawyer depicted the arrest as an intimidation tactic against the union, while the store's lawyer argued the organizer's talk was disruptive and the manager was within his rights to have him arrested. Despite knowing they had only heard one side of the story, the participants who heard a single presentation trusted their judgements about the situation more than those who heard both sides. Hearing only one side, without a conflicting interpretation of the same events, makes the information more coherent and more easily accepted by System 1. System 1 doesn't like ambiguity because it interferes with coherence, and even though the participants knew there was another side and could easily have imagined that side's arguments, the data suggest that is not what we naturally do. Our minds want to take the easy way out.

Our natural mental state is one of cognitive ease; we want to use the least amount of energy and effort to solve a problem. This is why we tend to adopt what's familiar; it's easier. Research by Larry Jacoby and others has shown that you can induce mental illusions and false beliefs in people (like getting them to accept made-up celebrity names as real) simply by creating an impression of familiarity. Repetition, even of false ideas, creates a sense of familiarity that System 1 tends to believe uncritically. If something feels familiar, we tend to believe it's true. Robert Zajonc studied this mere exposure effect by placing random Turkish words in a student newspaper and then sending out questionnaires to the students who read the paper. He found that the words that had appeared more frequently carried more positive connotations for those students, even though the students didn't know their meaning and didn't speak Turkish. Just being exposed to random words more often increased the students' positive feelings toward those words. Mere exposure increases familiarity, which in turn increases how positively we feel about something.

Experiments by Roy Baumeister suggest that we have a limited pool of willpower. If we use System 2 to exert self-control at one moment, we are less likely to control ourselves during the next temptation, although some newer research has called this idea of ego depletion into question. Likewise, cognitive overload can also interfere with System 2. Cognitive overload occurs when we try to do too many complex tasks at the same time (like solving a tricky math problem while switching lanes in heavy traffic). You simply can't give the necessary mental attention to all these tasks simultaneously.