What is the tendency to search for information that supports our preconceptions and to ignore or distort contradictory evidence?

Confirmation bias is a well-characterized phenomenon: the tendency to search for or interpret information in a way that confirms one’s preconceptions.

From: Misleading DNA Evidence, 2014

Arson

Rachel Dioso-Villa, John J. Lentini, in Forensic Science Reform, 2017

Confirmation Bias

Confirmation bias can occur when an analyst knowingly or unknowingly seeks or interprets information in a way that supports their beliefs, hypotheses, and expectations (Nickerson, 1998). For example, investigative facts, such as knowing that the suspect confessed or that the suspect has a criminal record of similar offenses, may affect how an analyst interprets findings (Dror et al., 2006). In Willingham’s case, we can draw inferences that suggest confirmation bias led to his conviction. From Vasquez’s testimony at trial, he appeared to see Willingham as a physically abusive husband who had reacted to the deaths of his children in unexpected ways. These impressions about Willingham’s character, informed by eyewitness statements and criminal record, certainly could have influenced the way in which Vasquez interpreted the physical evidence in his investigations. The likelihood of bias increases when analyses are made and conclusions are drawn based on “art” and not on empirical analysis. Thus, it is telling that Vasquez believes, “the fire tells a story” and he is just the fire’s “interpreter,” and Fogg believes, “the fire talks to you…[t]he structure talks to you…[y]ou call that years of experience” (Mills and Possley, 2004).


URL: https://www.sciencedirect.com/science/article/pii/B9780128027196000030

Confirmation Bias in Forensic Science

Wendy J. Koen, Jeff Kukucka, in The Psychology and Sociology of Wrongful Convictions, 2018

The Psychology of Confirmation Bias

Confirmation bias is a ubiquitous phenomenon, the effects of which have been traced as far back as Pythagoras’ studies of harmonic relationships in the 6th century B.C. (Nickerson, 1998), and is referenced in the writings of William Shakespeare and Francis Bacon (Risinger, Saks, Thompson, & Rosenthal, 2002). It is also a problematic phenomenon, having been implicated in “a significant fraction of the disputes, altercations, and misunderstandings that occur among individuals, groups, and nations” throughout human history, including the witch trials of Western Europe and New England, and the perpetuation of inaccurate medical diagnoses, ineffective medical treatments, and erroneous scientific theories (Nickerson, 1998, p. 175).

For over a century, psychologists have observed that people naturally favor information that is consistent with their beliefs or desires, and ignore or discount evidence to the contrary. In an article titled “The Mind’s Eye,” Jastrow (1899) was among the first to explain how the mind plays an active role in information processing, such that two individuals with different mindsets might interpret the same information in entirely different ways (see also Boring, 1930). Since then, a wealth of empirical research has demonstrated that confirmation bias affects how we perceive visual stimuli (e.g., Bruner & Potter, 1964; Leeper, 1935), how we gather and evaluate evidence (e.g., Lord, Ross, & Lepper, 1979; Wason, 1960), and how we judge—and behave toward—other people (e.g., Asch, 1946; Rosenthal & Jacobson, 1966; Snyder & Swann, 1978).

The scientific study of this phenomenon grew after World War II, led by Jerome Bruner and the “New Look” theorists, who described visual perception as an active process which “reflects the predispositions, goals, and strivings of the organism at the moment of perceiving” (Bruner & Postman, 1948, p. 203). They argued that perception has both objective and subjective components: A person’s interpretation of a stimulus is shaped not only by the physical properties of the stimulus (i.e., bottom-up processing), but also by the perceiver’s idiosyncratic expectations, desires, and experiences (i.e., top-down processing; see Gregory, 1970). In an early test of this hypothesis, Bruner and Goodman (1947) asked children to estimate the size of US coins from memory, and found that less affluent children (who presumably valued the coins more highly) overestimated the size of the coins to a greater degree than their more affluent peers.

In the decades since, a plethora of studies have shown that our expectations have a powerful influence on how we perceive visual stimuli. To offer one example, Bressan and Dal Martello (2002) asked people to rate the degree of resemblance between an adult and a child who were shown in a photo together. Although they all saw the same photos, people saw greater resemblance between the adult and child if they were told in advance that the adult and child were genetically related—even if this was not actually true. In other words, perceptions of the photos were shaped more by the perceiver’s beliefs about their relatedness than by their actual relatedness.

Although confirmation bias often serves to validate our preexisting beliefs, it can also be driven by our goals and desires. Along these lines, Kunda (1990) drew a distinction between accuracy goals (where the perceiver’s goal is to arrive at an accurate judgment) and directional goals (where the perceiver’s goal is to arrive at a desirable judgment). In the latter case, perceivers maintain an “illusion of objectivity,” which prevents them from recognizing that they have been biased by their desires. To illustrate, Balcetis and Dunning (2006) showed people a drawing that could be interpreted as either of two animals, and attached a positive consequence to one interpretation and a negative consequence to the other. In a series of studies, they showed that people were unconsciously inclined to “see” whichever animal led to a desirable outcome.

Mechanisms of confirmation bias: To explain its existence, some have argued that confirmation bias is a byproduct of a “positive test strategy” that is intrinsic to human cognition; that is to say, people naturally test their beliefs by seeking out feedback that is likely to support their beliefs, rather than feedback that may refute their beliefs (Klayman & Ha, 1987; Wason, 1960). The automaticity of this strategy is consistent with other research showing that confirmation bias operates outside of conscious awareness, such that people are largely unaware of the sources and effects of their own biases (Nisbett & Wilson, 1977). To make matters worse, people typically see themselves as being less biased than others, and often fail to recognize the same biases in themselves that they readily notice in others (Pronin, 2007; Pronin, Lin, & Ross, 2002).
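To make the positive test strategy concrete, here is a minimal, hypothetical Python sketch (not drawn from any of the studies cited) of a Wason-style rule-discovery task: the hidden rule is simply "any ascending triple," while the tester holds the narrower hypothesis "numbers increasing by two." Because every triple generated to fit the hypothesis also fits the hidden rule, positive tests keep returning "yes," and the too-narrow hypothesis is never challenged.

    # Hypothetical sketch of positive testing in a Wason-style rule-discovery task.
    # The rules below are illustrative assumptions, not materials from the source.

    def hidden_rule(triple):
        """Experimenter's secret rule: any strictly ascending triple."""
        a, b, c = triple
        return a < b < c

    def my_hypothesis(triple):
        """Tester's hypothesis: numbers increasing by exactly two."""
        a, b, c = triple
        return b - a == 2 and c - b == 2

    # Positive test strategy: only propose triples that fit the hypothesis.
    positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5), (100, 102, 104)]
    for t in positive_tests:
        print(t, "fits the secret rule:", hidden_rule(t))  # always True, so it feels confirming

    # A triple the strategy never generates, because it violates the hypothesis:
    print((1, 2, 3), "fits the secret rule:", hidden_rule((1, 2, 3)))  # True: the hypothesis is too narrow

Only by testing cases that the hypothesis forbids would the feedback reveal that the rule is broader than assumed.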

Given that confirmation bias appears to be both innate and unconscious, one might ask whether and how it can be avoided. Wilson and colleagues (Wilson & Brekke, 1994; Wilson, Houston, Etling, & Brekke, 1996) have outlined four necessary conditions for the correction of bias: First, a person must be aware of the bias. Second, they must be motivated to correct it. Third, they must be aware of its direction and magnitude, so that an effort to correct it can be properly calibrated. Fourth, they must have enough control over their own cognition to allow for its correction. Their model thus presents an admittedly pessimistic view, insofar as being aware of bias and motivated to correct it are not sufficient conditions for eliminating bias.

It should be noted, however, that confirmation bias is not limitlessly powerful. Speaking in terms of directional goals, Kunda (1990) explained that “people do not seem to be at liberty to conclude whatever they want to conclude merely because they want to” (p. 482). Instead, people are bound by reality constraints, such that even a strongly held belief or a strongly desired outcome cannot be justified in the face of irrefutable bottom-up evidence to the contrary.

Along these same lines, Darley and Gross (1983) proposed a two-stage model of confirmation bias: First, a person forms an expectation, which acts as a tentative hypothesis. Then, s/he tests this hypothesis against the available evidence in a biased manner, which serves to confirm their expectation. Confirmation bias is thus described as an active process in which perceivers use bottom-up evidence to validate their top-down beliefs—but if the bottom-up evidence is inadequate, bias does not occur. To test their model, Darley and Gross had people evaluate the academic ability of a girl whose ability they were led to believe was either high or low. Those who later watched a video of the girl taking a test (in which her true ability was ambiguous) judged her in line with their expectations. However, those who did not see the video—and thus had no evidence to validate their expectation—made unbiased judgments of her ability. In short, confirmation bias appears to manifest only when there is at least some evidence in favor of the expected or desired conclusion, even if that evidence is weak or ambiguous.


URL: https://www.sciencedirect.com/science/article/pii/B9780128026557000071

Rigor in Forensic Science

Tania Simoncelli, in Blinding as a Solution to Bias, 2017

Confirmation bias—like other forms of cognitive bias—is not unique to latent print examination, but is rather a fixture of human experience. The role of bias in human decision making was generally well understood at the time of the Madrid bombing investigation. It was also understood that analyses that rely on subjective methods and depend on a high degree of human judgment are especially vulnerable to cognitive bias. Many, if not most, forensic methods are of this nature, yet the forensic science community generally had shown little regard for the dangers posed by these sources of error. As the Mayfield case made painfully clear, even the FBI’s laboratory at Quantico—arguably the most sophisticated forensic science laboratory in the country—had failed to implement rigorous practices to mitigate bias in procedures the lab was performing every day.


URL: https://www.sciencedirect.com/science/article/pii/B9780128024607020040

The expert witness

Chris Monturo, in Forensic Firearm Examination, 2019

Confirmation bias

Confirmation bias is present if, for example, the examiner forms the hypothesis that a bullet was fired from a specific firearm and then, when evaluating the evidence, gives more weight to markings on the bullet that agree with test-fired bullets from that firearm while selectively ignoring significantly differing marks indicating that an identification may not be present. Charman, in his article “The forensic confirmation bias: A problem of integration, not just evidence evaluation,” writes: “A cognitive coherence approach has at least two stages. First, it emphasizes the interplay not just between a preexisting belief and the subsequent evaluation of evidence, but also between the evaluation of evidence and the emerging conclusion that is being formed (e.g., whether the suspect is guilty or not guilty). Second, it emphasizes the bidirectionality of effects: Not only does the evaluation of a piece of evidence affect the emerging conclusion, but the emerging conclusion feeds back to affect the evaluation of other evidence” (Charman, 2013).

When firearm-related evidence is evaluated, both fired bullets and fired cartridge cases are frequently submitted for examination. If the examiner first examines the fired cartridge cases and determines that they have characteristics of having been fired from the same firearm, the examiner may then hold a preconceived, confirmation-biased expectation that the associated bullets in the case automatically came from the same firearm as well. That preconceived notion, if left unchecked, could easily lead to an incorrect identification.

However, this potential bias can be removed from the analyst’s decision-making process if the proper approach to the evidence is taken. In addition to research on the causes of confirmation bias, Charman describes countermeasures that reduce or eliminate the occurrence of bias. He states, “And we may be given hints how to eliminate, or at least reduce, this bias. For instance, if coherence effects occur due to the constraints imposed by the emerging conclusion on the evaluation of a piece of evidence, then manipulations that delay the formation of an emerging conclusion should mitigate subsequent tendencies toward coherence. Although some attempts to do just this via an explicit instruction to delay one’s conclusion have failed to eliminate coherence effects, there are other avenues to be explored. For instance, the belief perseverance literature has shown that having people think of reasons why a ‘fact’ might be wrong at the time they receive it tends to reduce people’s tendency to stubbornly persist in that belief despite it later being discredited” (Charman, 2013).


URL: https://www.sciencedirect.com/science/article/pii/B9780128145395000113

Emerging Issues and Future Directions

Caleb W. Lack, Jacques Rousseau, in Comprehensive Clinical Psychology (Second Edition), 2022

11.04.4.1.1 Confirmation Bias

Confirmation bias is one of the most commonly encountered, frustrating, and yet understandable biases (Nickerson, 1998). It is the tendency of individuals to favor information that confirms their beliefs or ideas and to discount that which does not. This means that, when confronted with new information, we tend to do one of two things. If the information confirms what we already believe, our natural instinct is to accept it as true, accurate, and unbiased. We accept it unreservedly and are happy to have been shown it. Even if it has some problems, we forgive and forget those and quickly incorporate the new information into our beliefs and schemas. We are also more likely to recall this information later, to help buttress our belief during an argument. On the other hand, if the newly encountered information contradicts what we already believe, we have an equally natural but very different response. We immediately become highly critical and defensive, nitpicking any possible flaw in the information, even though the same flaw would be ignored if the information confirmed our beliefs. It also fades quickly from our minds, so that in the future we cannot even recall being exposed to it.

As an example, consider that you believe that corporal punishment, such as spanking, is an effective way to discipline a child who is acting out. When you see a family member spank a child when they aren't listening to what they are told, and then the child goes and does what they were told, your brain latches onto that, and you say to yourself “I knew it works!” But later you are scrolling through your preferred social media feed, and you see a friend has shared a meta-analysis spanning five decades of research that comes to the conclusion that the more children are spanked, the more likely they are to be defiant toward their parents, as well as have increases in antisocial and aggressive behavior, mental health problems, and cognitive difficulties (Gershoff and Grogan-Kaylor, 2016). Since that doesn't fit with your already formed belief, you are likely to discount it in some way (e.g., “I was hit and I turned out just fine!” or “They must have ignored all the studies that support spanking in their meta-analysis!”).

In many ways, the confirmation bias undergirds the entire reason why scientific methodology needed to be developed in the first place. We naturally try to find information that supports and proves our beliefs, which can, in turn, lead to the wholesale discounting or ignoring of contradictory evidence. Science, in contrast, actively tries to disprove ideas. The scientific method allows for increased confidence in our findings and makes scientists less prone to the confirmation bias (at least, theoretically speaking and in their scientific work). But humans do not naturally think in a scientific manner, which helps make pop and pseudo-psychology so much easier to understand and absorb. And, once believed, it can be very difficult to shift someone's ideas (Ahluwalia, 2000; Nyhan and Reifler, 2010). But how do we get to that belief in the first place?


URL: https://www.sciencedirect.com/science/article/pii/B9780128186978000522

The Use of Standardized Rating Scales in Clinical Practice

R. MICHAEL BAGBY, ... FIONA S.M. SCHULTE, in Psychiatric Clinical Skills, 2006

Clinical Judgment Biases and Heuristic Errors

The confirmatory bias is the tendency of clinicians to search for information to confirm existing beliefs or hypotheses that have been formed. Once a diagnostic decision has been made, therefore, you engage in confirmatory hypothesis testing. As such, subsequent probing throughout the assessment and the resulting information provided by the patient tend to be carefully assimilated in ways that only seek to confirm the initial impression. For example, if you have concluded that a patient is suffering from anxiety, the confirmatory bias posits that you will formulate your pattern of questioning to elicit responses in accordance with your hypothesis, while simultaneously construing the client's responses to align with this hypothesis. Clearly, some measure of this is absolutely necessary in the fleshing out of a clinical history from a patient based on presenting complaints and clinical hypotheses; however, the risks of this approach on its own should be evident to you as well.

Another bias that has been recognized to influence clinical judgment is the hindsight bias, which refers to the way in which an impression or perception can change after one learns the actual outcome of an event.5 In other words, it is the tendency for people with outcome knowledge to believe falsely that they would have predicted the reported outcome of an event. In clinical practice, the hindsight bias can interfere when a patient has been referred to you with a speculative diagnosis already reported: clinicians exaggerate the extent to which they had foreseen the likelihood of its occurrence. For example, learning that an outcome has occurred, such as the attempted suicide of a patient, might lead you to perceive your initial formulation, perhaps of suicidal thoughts, as being correct.5

Heuristics, or rules that guide cognitive processing to help make judgments more quickly, introduce another source of error in human judgment. Because clinicians are often pressured by time constraints in everyday practice, it is not unusual for heuristics to be employed to help make decisions; indeed, you would be completely lost clinically without them. However, while providing ease in assessment, heuristics often sacrifice accuracy of judgment for speed. For example, the availability heuristic is the tendency for decisions to be influenced by the facility with which objects and events can be remembered. When applied to clinical practice, the availability heuristic would posit that you might be more likely to make a diagnosis of depression as opposed to anxiety if you can more readily recall patients diagnosed with depression. Coinciding with the availability heuristic is the tendency for people to be influenced by more graphic or dramatic events, rather than real-life probabilities, otherwise known as the “base-rate fallacy.” Thus, disorders that receive considerable attention from the media tend to be perceived as occurring more often than they actually do. This can be especially problematic when it is recognized that the media tend to be fascinated by the more rare disorders, thereby implanting a view that these disorders occur with a greater frequency than is actually true.6
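As a rough illustration of why base rates matter for the kind of diagnostic judgment described above, the sketch below (with purely hypothetical numbers) applies Bayes' rule to a rare disorder: even a reasonably accurate indicator yields mostly false positives when the condition itself is uncommon.

    # Hypothetical worked example of the base-rate fallacy using Bayes' rule.
    # All numbers are illustrative assumptions.
    prevalence = 0.01       # 1% of the clinic's patients actually have the rare disorder
    sensitivity = 0.90      # P(indicator present | disorder)
    false_positive = 0.10   # P(indicator present | no disorder)

    p_indicator = sensitivity * prevalence + false_positive * (1 - prevalence)
    p_disorder_given_indicator = sensitivity * prevalence / p_indicator

    print(f"P(disorder | indicator) = {p_disorder_given_indicator:.2f}")  # about 0.08, not 0.90

A judgment driven by how readily such cases come to mind, rather than by their base rate, will overshoot this figure considerably.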

The representative heuristic occurs when a decision is made based on whether a person is representative of a particular category. In other words, when making a decision as to whether a patient might be diagnosed with borderline personality disorder, you may compare this patient's behavior and experiences to what has been understood as the typology of a borderline patient to determine whether the situations can be considered similar.

It is clear that there are many factors that might influence your perception on any given day. Error in human judgment is inevitable, regardless of the amount of training or the years of expertise a clinician has obtained. Standardized rating scales, therefore, are a means by which to reduce the threat of error inevitable in human decision-making.


URL: https://www.sciencedirect.com/science/article/pii/B9780323031233500077

Core Network Principles

Warren W. Tryon, in Cognitive Neuroscience and Psychotherapy, 2014

Confirmation Bias

Illusory correlation is also driven by confirmation bias, another defective heuristic that operates outside of awareness (Baron, 2000). Confirmation bias refers to our tendency to let subsequent information confirm our first impressions (Baron, 2000). Hence, the same subsequent information can confirm different points of view depending upon what our first impression was. Alternatively stated, we are preferentially sensitive to and cherry-pick facts that justify decisions we make and hypotheses that we favor, and are similarly insensitive to facts that either fail to support or contradict those decisions and hypotheses. And the best part is that all of this continuously operates unconsciously, outside of our awareness. This heuristic has been called the Positive Test Strategy and is illustrated next.

Snyder and Cantor (1979) described a fictitious person named Jane. To one group Jane was described as an extravert; to another group Jane was described as an introvert. A couple of days later, half the participants were asked to evaluate Jane for an extraverted job of a real estate broker and half were asked to evaluate her for an introverted job of librarian. Evaluations for the real estate job contained more references to Jane’s extraversion whereas evaluations for the introverted job contained more references to her introversion. This finding implies the use of a positive test strategy when trying to remember things about Jane. This cognitive heuristic is also caused by the neural network property of preferring consonance and coherence over dissonance that we will discuss as our Principle 7.


URL: https://www.sciencedirect.com/science/article/pii/B978012420071500003X

What Does It Mean to be Biased

Ulrike Hahn, Adam J.L. Harris, in Psychology of Learning and Motivation, 2014

2.1 Understanding Bias: Scope, Sources, and Systematicity

We begin our example-based discussion with a very general bias which, if robust, would provide direct evidence of motivated reasoning, namely “wishful thinking.” Under this header, researchers (mostly in the field of judgment and decision-making) group evidence for systematic overestimation in the perceived probability of outcomes that are somehow viewed as desirable, as opposed to undesirable.

In actual fact, robust evidence for such a biasing effect of utilities or values on judgments of probability has been hard to come by, despite decades of interest, and the phenomenon has been dubbed “the elusive wishful thinking effect” (Bar-Hillel & Budescu, 1995). Research on wishful thinking in probability judgment has generally failed to find evidence of wishful thinking under well-controlled laboratory conditions (for results and critical discussion of previous research, see, e.g., Bar-Hillel & Budescu, 1995; Bar-Hillel, Budescu, & Amar, 2008; Harris, Corner, & Hahn, 2009). There have been observations of the “wishful thinking effect” outside the laboratory (e.g., Babad & Katz, 1991; Simmons & Massey, 2012). These, however, seem well explained as “an unbiased evaluation of a biased body of evidence” (Bar-Hillel & Budescu, 1995, p. 100, see also Gordon, Franklin, & Beck, 2005; Kunda, 1990; Morlock, 1967; Radzevick & Moore, 2008; Slovic, 1966). For example, Bar-Hillel et al. (2008) observed potential evidence of wishful thinking in the prediction of results in the 2002 and 2006 football World Cups. However, further investigation showed that these results were more parsimoniously explained as resulting from a salience effect than from a “magical wishful thinking effect” (Bar-Hillel et al., 2008, p. 282). Specifically, they seemed to stem from a shift in focus that biases information accumulation and not from any direct biasing effect of desirability. Hence, there is little evidence for a general “I wish for, therefore I believe…” relationship (Bar-Hillel et al., 2008, p. 283) between desirability and estimates of probability. Krizan and Windschitl's (2007) review concludes that while there are circumstances that can lead to desirability indirectly influencing probability estimates through a number of potential mediators, there is little evidence that desirability directly biases estimates of probability.

What is at issue here is the systematicity of the putative bias—the difficulty of establishing the presence of the bias across a range of circumstances. The range of contexts in which a systematic deviation between true and estimated value will be observed depends directly on the underlying process that gives rise to that mismatch. Bar-Hillel and Budescu's (1995) contrast between “an unbiased evaluation of a biased body of evidence” and a “magical wishful thinking effect” reflects Macdougall's (1906) distinction between “primary” and “secondary bias,” namely a contrast between selective information uptake and a judgmental distortion of information so acquired.

Both may, in principle, give rise to systematic deviations between (expected) estimate and true value; however, judgmental distortion is more pernicious in that it will produce the expected deviation much more reliably. This follows readily from the fact that selective uptake of information cannot, by definition, guarantee the content of that information. Selectivity in where to look may have some degree of correlation with content, and hence lead to a selective (and truth distorting) evidential basis. However, that relationship must be less than perfect, simply because information uptake on the basis of the content of the evidence itself would require processing of that content, and thus fall under “judgmental distortion” (as a decision to neglect information already “acquired”).
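The difference in reliability can be illustrated with a small simulation (an assumed, illustrative model rather than anything from the chapter): an estimator that down-weights disconfirming observations after they arrive ("judgmental distortion") inflates its estimate on essentially every run, whereas an unweighted tally of whatever evidence happens to arrive stays unbiased on average.

    # Illustrative sketch: judgmental distortion reliably biases the resulting estimate,
    # while fair aggregation of the same evidence does not. Parameters are assumed.
    import random

    def evidence(true_p, n):
        """n binary observations; 1 = supports the hypothesis, 0 = disconfirms it."""
        return [1 if random.random() < true_p else 0 for _ in range(n)]

    def fair_estimate(obs):
        return sum(obs) / len(obs)

    def distorted_estimate(obs, discount=0.5):
        """Disconfirming observations receive only half weight after being seen."""
        weights = [1.0 if o == 1 else discount for o in obs]
        return sum(o * w for o, w in zip(obs, weights)) / sum(weights)

    random.seed(0)
    true_p, runs = 0.5, 2000
    fair = sum(fair_estimate(evidence(true_p, 20)) for _ in range(runs)) / runs
    distorted = sum(distorted_estimate(evidence(true_p, 20)) for _ in range(runs)) / runs
    print(f"mean fair estimate:      {fair:.2f}")       # ~0.50
    print(f"mean distorted estimate: {distorted:.2f}")  # ~0.67, inflated on virtually every run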

In fact, selective attention to some sources over others can have a systematic effect on information content only where sources and content are systematically aligned and can be identified in advance.

Nevertheless, selectivity in search may lead to measurable decrements in accuracy if it means that information search does not maximize the expected value of information. In other words, even though a search strategy cannot guarantee the content of my beliefs (because there is no way of knowing whether the evidence, once obtained, will actually favor or disfavor my preferred hypothesis), my beliefs may systematically be less accurate because I have not obtained the evidence that could be expected to be most informative.

This is the idea behind Wason's (1960) confirmation bias. Though the term “confirmation bias,” as noted, now includes phenomena that do not concern information search (see earlier, Fischhoff & Beyth-Marom, 1983), but rather information evaluation (e.g., a potential tendency to reinterpret or discredit information that goes against a current belief, e.g., Lord et al., 1979; Nisbett & Ross, 1980; Ross & Lepper, 1980), Wason's original meaning concerns information acquisition. In that context, Klayman and Ha (1989) point out that it is essential to distinguish two notions of “seeking confirmation”:

1. examining instances most expected to verify, rather than falsify, the (currently) preferred hypothesis;

2. examining instances that—if the currently preferred hypothesis is true—will fall under its scope.

Concerning the first sense, “disconfirmation” is more powerful in deterministic environments, because a single counter-example will rule out a hypothesis, whereas confirming evidence is not sufficient to establish the truth of an inductively derived hypothesis. This logic, which underlies Popper's (1959) call for falsificationist strategies in science, however, does not apply in probabilistic environments where feedback is noisy. Here, the optimal strategy is to select information so as to maximize its expected value (see e.g., Edwards, 1965; and on the general issue in the context of science, see e.g., Howson & Urbach, 1996). In neither the deterministic nor the probabilistic case, however, is it necessarily wrong to seek confirmation in the second sense—that is, in the form of a positive test strategy. Though such a strategy led to poorer performance in Wason's (1960) study, this is not generally the case and, for many (and realistic) hypotheses and environments, a positive test strategy is, in fact, more effective (see also Oaksford & Chater, 1994).8 This both limits the accuracy costs of any “confirmation bias”9 and makes a link with “motivated reasoning” questionable.
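Klayman and Ha's point about positive testing can be seen in a toy sketch (the rules are hypothetical and chosen only for illustration): when the working hypothesis is broader than the true rule, instances chosen because they fit the hypothesis can fall outside the true rule and so yield falsifying feedback, whereas instances outside the hypothesis never can.

    # Illustrative sketch after Klayman & Ha: a positive test strategy can falsify
    # a hypothesis that overreaches the true rule. Rules are assumed for illustration.

    def true_rule(n):        # hidden rule: multiples of 4
        return n % 4 == 0

    def hypothesis(n):       # working hypothesis: even numbers (too broad)
        return n % 2 == 0

    positive_tests = [n for n in range(1, 21) if hypothesis(n)]      # instances inside the hypothesis
    negative_tests = [n for n in range(1, 21) if not hypothesis(n)]  # instances outside the hypothesis

    # Feedback contradicts the hypothesis whenever the two rules disagree on a tested item.
    print("falsifiers from positive tests:", [n for n in positive_tests if not true_rule(n)])  # 2, 6, 10, ...
    print("falsifiers from negative tests:", [n for n in negative_tests if true_rule(n)])      # none

In this configuration only the positive tests ever expose the error; for other configurations of rule and hypothesis the balance shifts, which is exactly why the effectiveness of positive testing depends on the environment.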

Consideration of the systematicity and scope of a putative bias consequently necessitates a clear distinction between the different component processes that go into the formation of a judgment and its subsequent report (whether in an experiment or in the real world). Figure 2.4 distinguishes the three main components of a judgment: evidence accumulation; aggregation and evaluation of that evidence to form an internal estimate; and report of that estimate. In the context of wishful thinking, biasing effects of outcome utility (the desirability/undesirability of an outcome) can arise at each of these stages (readers familiar with Funder's (1995) realistic accuracy model of person perception will detect the parallels; likewise, motivated reasoning research distinguishes between motivational effects on information accumulation and memory as opposed to effects of processing, see e.g., Kunda, 1990). Figure 2.4 provides examples of studies concerned with biasing effects of outcome desirability on judgment for each of these component processes. For instance, demonstrations that participants use information about real-world base rates (Dai et al., 2008) or real-world “representativeness” (Mandel, 2008) in judging the probability of events exemplify effects of outcome utility on the information available for the judgment: events that are extremely bad or extremely good are less likely in the real world than ones of moderate desirability, so that outcome utility provides information about frequency of occurrence which can be used to supplement judgments where participants are uncertain about their estimates.


Figure 2.4. Locating indirect effects of utility (outcome desirability/undesirability) in the probability estimation process. Framed boxes indicate the distinct stages of the judgment formation process. Ovals indicate factors influencing those stages via which outcome utility can come to exert an effect on judgment. Numbers indicate experimental studies providing evidence for a biasing influence of that factor. Note that Dai, Wertenbroch, and Brendl (2008), Mandel (2008), and Harris et al. (2009) all find higher estimates for undesirable outcomes (i.e., “pessimism”).

Figure adapted from Harris et al. (2009).

Confirming our observations about the relative reliability of primary and secondary bias in generating systematic deviations, the different components of the judgment process vary in the extent to which they generally produce “wishful thinking,” and several of the studies listed (see Figure 2.4) have actually found “anti” wishful thinking effects, whereby undesirable events were perceived to be more likely.

Such mixed, seemingly conflicting, findings are, as we have noted repeatedly, a typical feature of research on biases (see e.g., Table 1 in Krueger & Funder, 2004). However, only when research has established that a deviation is systematic has the existence of a bias been confirmed and only then can the nature of that bias be examined. The example of base rate neglect above illustrated how examination of only a selective range of base rates (just low prior probabilities or just high prior probabilities) would have led to directly conflicting “biases.” The same applies to other putative biases.

In general, names of biases typically imply a putative scope: “wishful thinking” implies that, across a broad range of circumstances, thinking is “wishful.” Likewise, “optimistic bias” (a particular type of wishful thinking, see Sharot, 2012) implies that individuals’ assessments of their future are generally “optimistic.” Researchers have been keen to posit broad-scope biases that subsequently do not seem to hold over the full range of contexts implied by their name. This suggests, first and foremost, that no such bias exists.

To qualify as optimistically biased, for example, participants should demonstrate a tendency to be optimistic across a gamut of judgments or at least across a particular class of judgments such as probability judgments about future life events (e.g., Weinstein, 1980; in keeping with Weinstein's original work we restrict the term “optimistic bias” to judgments about future life events in the remainder). However, while people typically seem optimistic for rare negative events and common positive events, the same measures show pessimism for common negative events and rare positive events (Chambers et al., 2003; Kruger & Burrus, 2004). Likewise, for the better-than-average effect (e.g., Dunning, Heath, & Suls, 2004; Svenson, 1981), people typically think that they are better than their peers at easy tasks, but worse than their peers at difficult tasks (Kruger, 1999; Moore, 2007), and the false consensus effect (whereby people overestimate the extent to which others share their opinions, Ross, Greene, & House, 1977) is mirrored by the false uniqueness effect (Frable, 1993; Mullen, Dovidio, Johnson, & Copper, 1992; Suls, Wan, & Sanders, 1988).

One (popular) strategy for responding to such conflicting findings is to retain the generality of the bias but to consider it to manifest only in exactly those situations in which it occurs. Circumstances of seemingly contradictory findings then become “moderators,” which require understanding before one can have a full appreciation of the phenomenon under investigation (e.g., Kruger & Savitsky, 2004): in the case of the better-than-average effect, therefore, that moderator would be the difficulty of the task.

2.1.1 The Pitfalls of Moderators

Moderators can clearly be very influential in theory development, but they must be theoretically derived. Post hoc moderation claims ensure the unfalsifiability of science, or at least can make findings pitifully trivial. Consider the result—reported in the Dutch Daily News (August 30th, 2011)—that thinking about meat results in more selfish behavior. As this study has since been retracted—its author Stapel admitting that the data were fabricated—it is likely that this result would not have replicated. After (say) 50 replication attempts, what is the most parsimonious conclusion? One can either conclude that the effect does not truly exist or posit moderators. After enough replication attempts across multiple situations, the latter strategy will come down to specifying moderators such as “the date, time and experimenter,” none of which could be predicted on the basis of an “interesting” underlying theory.

This example is clearly an extreme one. The moderators proposed for the optimism bias and better-than-average effects are clearly more sensible and more general. It is still, however, the case that these moderators must be theoretically justified. If not, “moderators” may prop up a bias that does not exist, thus obscuring the true underlying explanation (much as in the toy example above). In a recent review of the literature, Shepperd, Klein, Waters, and Weinstein (2013) argue for the general ubiquitousness of unrealistic optimism defined as “a favorable difference between the risk estimate a person makes for him- or herself and the risk estimate suggested by a relevant, objective standard…Unrealistic optimism also includes comparing oneself to others in an unduly favorable manner,” but state that this definition makes “no assumption about why the difference exists. The difference may originate from motivational forces…or from cognitive sources, such as…egocentric thinking” (Shepperd et al., 2013, p. 396).

However, the question of why the difference exists is critical for understanding what is meant by the term unrealistic optimism especially in the presence of findings that clearly appear inconsistent with certain accounts. The finding that rare negative events invoke comparative optimism, while common negative events invoke comparative pessimism seems entirely inconsistent with a motivational account. If people are motivated to see their futures as “rosy,” why should this not be the case for common negative events (or rare positive events) (Chambers, Windschitl, & Suls, 2003; Kruger & Burrus, 2004)? One can say that comparative optimism is moderated by the interaction of event rarity and valence, such that for half the space of possible events pessimism is in fact observed, but would one really want to call this “unrealistic optimism” or an “optimistic bias”? Rather, it seems that a more appropriate explanation is that people focus overly on the self when making comparative judgments (e.g., Chambers et al., 2003; Kruger & Burrus, 2004; see Harris & Hahn, 2009 for an alternative account which can likewise predict this complete pattern of data)—a process that simply has the by-product of optimism under certain situations. It might be that such overfocus on the self gives rise to bias, but through a correct understanding of it one can better predict its implications. Likewise, one is in a better position to judge the potential costs of it.

In summary, when bias is understood in a statistical sense as a property of an expectation, demonstration of deviation across a range of values is essential to establishing the existence of a bias in the first place, let alone understanding its nature. Conflicting findings across a range of values (e.g., rare vs. common events in the case of optimism) suggest an initial misconception of the bias, and any search for moderators must take care to avoid perpetuating that misconception by—unjustifiedly—splitting up into distinct circumstances one common underlying phenomenon (i.e., one bias) which has different effects in different circumstances (for other examples: on the better-than-average/worse-than-average effect, see, e.g., Benoit & Dubra, 2011; Galesic, Olsson, & Rieskamp, 2012; Kruger, 1999; Kruger, Windschitl, Burrus, Fessel, & Chambers, 2008; Moore & Healy, 2008; Moore & Small, 2007; Roy, Liersch, & Broomell, 2013; on the false uniqueness/false consensus effect, see Galesic, Olsson, & Rieskamp, 2013; more generally, see also Hilbert, 2012).
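The statistical point can be illustrated with a minimal simulation (an assumed noise model, not an analysis from the chapter): an estimator with no built-in directional bias, merely noisy and confined to the probability scale, systematically overestimates rare events and underestimates common ones, so sampling only one end of the range would appear to "discover" a bias that evaporates once the full range of true values is examined.

    # Illustrative sketch (assumed noise model): a noisy, bounded estimator with no
    # directional bias still looks biased if only one end of the range is examined.
    import random

    def noisy_estimate(true_p, sd=0.15):
        est = random.gauss(true_p, sd)
        return min(max(est, 0.0), 1.0)   # estimates must stay on the probability scale

    random.seed(1)
    runs = 20000
    for true_p in (0.05, 0.25, 0.50, 0.75, 0.95):
        mean_est = sum(noisy_estimate(true_p) for _ in range(runs)) / runs
        print(f"true p = {true_p:.2f}   mean estimate = {mean_est:.2f}")
    # Rare events come out too high and common events too low, even though the
    # estimator treats desirable and undesirable outcomes identically.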


URL: https://www.sciencedirect.com/science/article/pii/B9780128002834000022

Professional Forensic Expert Practice

Mark Page, in Forensic Testimony, 2014

4.5.2.3 Target shifting

This phenomenon works in a similar way to information sharing and confirmation bias. Target shifting occurs when the examiner is presented with a priori information regarding what a suspected “match” may look like; the examiner is then likely to resolve ambiguities in the interpretation of the original sample toward the pattern already seen or expected from knowledge of the reference sample. The name derives from the notion of “painting a target around an arrow.” Even DNA analysis has been criticized for subjective interpretation and confirmation bias: the use of low copy number analysis, partial samples, and mixtures to obtain a DNA profile suggests that ambiguity, and its subsequent interpretation, probably arises in more than a trivial fraction of DNA casework (Whitman and Koppl, 2010). Ambiguity about which peaks belong to which donor, together with the problems of allelic drop-out (and drop-in), often requires the analyst to make a judgment call on the significance of electropherogram peaks. If the analyst has prior knowledge of a suspect’s profile, as commonly occurs in many laboratories, then they may be more inclined to include some ambiguous readings and to dismiss others as artifacts. This “target shifting” naturally occurs in favor of the prosecution theory, because the profile used for comparison is usually that of the defendant (Thompson, 2009). Published evidence suggests that this effect is potentially very real in DNA casework, particularly in mixed-sample cases where potentially biasing information is known to the examiners (Dror and Hampikian, 2011).

The NAS report, discussing this issue in relation to tool marks, notes that the a priori stipulation of which features may or may not be considered suitable for analysis might not be possible, and hence examination of the tool in question might be warranted prior to analysis of the mark itself. This comment is also applicable to other disciplines such as bite mark analysis and fingerprint analysis. It stands in contrast, however, to most laboratory DNA techniques, arguably some of the most objective forensic analyses possible, which have attempted to correct for this phenomenon by initially “blinding” the examiner to the reference sample. This limiting of a priori knowledge represents a more scientifically justified series of steps for reaching conclusions regarding the source of forensic samples.


URL: https://www.sciencedirect.com/science/article/pii/B9780123970053000049

The Psychology of Learning and Motivation

Klaus Fiedler, in Psychology of Learning and Motivation, 2012

6.3 Sample-Size Neglect in Hypothesis Testing

One intriguing consequence of self-induced differences in sample size is confirmation bias in hypothesis testing. When asked to test the hypothesis that girls are superior in language and that boys are superior in science, teachers would engage in positive testing strategies (Klayman & Ha, 1987). They would mostly sample from targets that are the focus of the hypothesis. As a consequence, smart girls in language and smart boys in science are rated more positively, due to enhanced sample size, than girls in science and boys in language, whose equally high achievement is visible only in smaller samples.

The causal factor that drives this repeatedly demonstrated bias (cf. Fiedler et al., 2002b; Fiedler, Freytag, & Unkelbach, 2007; Fiedler, Walther, & Nickel, 1999) is in fact n, or myopia for n, rather than common gender stereotypes. Thus, if the hypothesis points in a stereotype-inconsistent direction, calling for a test of whether girls excel in science and boys in language, most participants would still engage in positive testing and solicit larger samples from, and provide more positive ratings of, girls in science and boys in language. Similarly, when participants are exposed to a stimulus series that entails negative testing (i.e., a small rate of observations about the hypothesis target), a reversal is obtained. Reduced samples yield more regressive, less pronounced judgments (Fiedler et al., 1999), highlighting the causal role of n.

More generally, the MM (metacognitive myopia) approach offers an alternative account for a variety of so-called confirmation biases (Klayman & Ha, 1987; Nickerson, 1998). Hypothesis testers – in everyday life as in science – sample more observations about a focal hypothesis Hfocal than about alternative hypotheses Halt. Provided that at least some evidence can be found to support any hypothesis, the unequal n gives a learning advantage to Hfocal. No processing bias or motivated bias is necessary. If each observation has the same impact on memory, unequal n will bias subsequent judgments toward the focal hypothesis.

MM prevents judges from monitoring and controlling for n differences, which reflect their own information-search strategies. Meta-cognitively, they should ignore n for two reasons. First, if the task calls for estimations rather than choices, they should not engage in a Bayesian competition of whether Hfocal or Halt receives more support but rather try to provide unbiased estimations (e.g., of the confirmation rate for all hypotheses). In this case, the impact of n has to be discounted anyway. Second, even in a competitive hypothesis test or choice, the enhanced n in favor of Hfocal does not imply enhanced diagnosticity if it reflects the judge's own search bias toward Hfocal, which creates stochastic dependencies in the sample.
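A minimal sketch of the unequal-n mechanism (parameters assumed for illustration, not Fiedler's materials): two targets have exactly the same true rate of supportive observations, but the focal hypothesis is sampled more often. If every observation leaves the same trace and the tally is never normalized by n, the focal target accumulates more apparent support; dividing by sample size removes the advantage.

    # Illustrative sketch of sample-size neglect: equal true rates, unequal n.
    # All parameters are assumed for illustration.
    import random

    random.seed(2)
    true_rate = 0.7            # both targets really yield 70% supportive observations
    n_focal, n_alt = 40, 10    # positive testing: the focal target is sampled more often
    runs = 2000

    def sample(n):
        return sum(1 for _ in range(n) if random.random() < true_rate)

    raw_focal = sum(sample(n_focal) for _ in range(runs)) / runs
    raw_alt = sum(sample(n_alt) for _ in range(runs)) / runs

    print(f"mean raw support - focal: {raw_focal:.1f}   alternative: {raw_alt:.1f}")                  # ~28 vs ~7
    print(f"mean rate        - focal: {raw_focal / n_focal:.2f}   alternative: {raw_alt / n_alt:.2f}")  # ~0.70 vs ~0.70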


URL: https://www.sciencedirect.com/science/article/pii/B9780123942937000017

What is the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs?

Confirmation bias, also called confirmatory bias or myside bias, is the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses. It is a type of cognitive bias and a systematic error of inductive reasoning.

What is the tendency to think of things only in terms of their usual function, which acts as an impediment to problem solving?

Functional fixedness: the tendency to think of things only in terms of their usual functions; an impediment to problem solving.

What may lead us to ignore other relevant information as we intuitively compare something with a particular prototype?

The representativeness heuristic: judging the likelihood of things in terms of how well they seem to represent, or match, particular prototypes. It may lead us to ignore other relevant information, because we intuitively compare the likelihood of something with our mental representation of that category.

What is it called when you cling to your preconceived beliefs or initial conceptions?

Confirmation bias: the tendency to search for information that confirms one's preconceptions. Insight: a sudden and often novel realization of the solution to a problem; contrasts with strategy-based solutions.