Judgement and Decision Making

Majority of notes taken from Cognitive Psychology: A Student’s Handbook by Eysenck & Keane, which I recommend for its readability and theory evaluation.



Normative theories: ideal decisions

Descriptive theories: how people actually make decisions

Prescriptive approach: supporting people to make better decisions e.g. decision analysis

Judgement: aspects of decision making related to estimating likelihood of events; evaluated in terms of their accuracy

Decision making: deciding on a course of action; evaluated in terms of their consequences

– Judgements underlie decisions






Base rate info: the relative frequency of an event within a population

Kahneman & Tversky (1972) – people often take too little notice of the prior odds



K&T: we use heuristics because they’re quick & easy even though they cause us errors

Heuristics: basic rules of thumb we use when faced with problems

Representative heuristic: assumption that typical members of a category are encountered most frequently

e.g. basing a guess about someone’s career on my stereotype of an occupation, not the base rate
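
As a worked illustration, base rates should combine with a stereotype via Bayes’ rule; the numbers below are invented for the sketch, not taken from any study:

```python
# Hypothetical illustration of base-rate neglect: guessing someone's career
# from a stereotyped description. All numbers are made up for illustration.

def posterior(prior, likelihood, likelihood_other):
    """Bayes' rule for a two-hypothesis case."""
    evidence = prior * likelihood + (1 - prior) * likelihood_other
    return prior * likelihood / evidence

# Base rate: say only 5% of the population are librarians.
# The description "shy and tidy" fits 80% of librarians but also 20% of others.
p = posterior(prior=0.05, likelihood=0.80, likelihood_other=0.20)
print(round(p, 3))  # ~0.174: the low base rate keeps the posterior well below 0.5
```

Even with a description that strongly fits the stereotype, the low base rate should dominate; representativeness-based judgements ignore this.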

Conjunction fallacy: mistaken belief probability of a conjunction of 2 events (A&B) > prob of 1 of them (A or B)

e.g. is Linda a banker, feminist or feminist banker: most say feminist banker

this is incorrect as all feminist bankers belong to the larger categories of bankers and of feminists!
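
The conjunction rule behind this can be checked with a few lines of arithmetic; the population counts are made up for illustration:

```python
# The conjunction rule: P(A and B) can never exceed P(A) or P(B),
# because every "feminist banker" is also counted among the bankers.
# Illustrative population counts (assumed, not from the Linda study).
population = 1000
bankers = 50
feminists = 200
feminist_bankers = 10  # must be <= min(bankers, feminists)

p_banker = bankers / population        # 0.05
p_conj = feminist_bankers / population # 0.01
assert p_conj <= p_banker  # the fallacy is judging the reverse
print(p_banker, p_conj)
```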



Availability heuristic: estimating the frequency of events based on how easy it is to retrieve info from LTM

Lichtenstein et al (1978) – people judged likelihood of cause of death based on news


Oppenheimer (2004) – name commonality

– famous vs normal names frequency test: people guessed correctly

– people recognise when the familiarity of stimuli comes from other sources (e.g. fame) & overcorrect



– based on availability heuristic

– any event will appear more/less likely depending on how it’s described

– e.g. what is probability that you’ll die next year vs …from disease, heart attack, accident etc

– draws attention to less obvious aspects of the event; supplies info that might’ve been forgotten

Mandel (2005) – judged probability of a terrorist attack higher when al Qaeda was explicitly mentioned

Tversky et al (1995) – same effect with doctors (experts) diagnosing illness from options


– doesn’t explain why we overlook info that’s well known to us

– doesn’t explain why focusing on a possibility increases support for it



– Kahneman won the Nobel Prize; showed people have biases in judgement; impact on philosophy & economics

– Stanovich & West (2008) – effects wide and unrelated to intelligence


  1. Not fully clear how heuristics reduce effort (as opp. just describing phenomena)
  2. Could be that people misunderstood e.g. the Linda problem, though Sides et al (2002) found they didn’t
  3. Juslin et al (2007) – not faulty processing but the quality of info provided (e.g. the Mandel al Qaeda example)
  4. Can’t explain that people generally make fairly accurate judgements
  5. Lab based & detached from real life, emotional & motivational factors



– rapid processing with little info: “take the best, ignore the rest”

– we have an “adaptive toolbox” of these heuristics

  1. search rule: search in order of validity (e.g. name recognition)
  2. stopping rule: stop after finding something that applies to only 1 option
  3. decision rule: choose outcome
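
The three rules above can be sketched as a tiny function; the cue names and city values are hypothetical:

```python
# A minimal sketch of "take the best" under assumed cue data:
# cues are checked in validity order; stop at the first cue that
# discriminates between the two options; decide on that cue alone.

def take_the_best(option_a, option_b, cues):
    """cues: list of cue names, ordered from most to least valid (search rule)."""
    for cue in cues:
        a, b = option_a[cue], option_b[cue]
        if a != b:                        # stopping rule: cue discriminates
            return "A" if a > b else "B"  # decision rule
    return None  # no cue discriminates; guess

# Hypothetical city-size cues (recognition first, as the most valid cue)
cues = ["recognised", "has_airport", "has_university"]
city_a = {"recognised": 1, "has_airport": 0, "has_university": 1}
city_b = {"recognised": 1, "has_airport": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # → "B" (airport cue decides)
```

Note how all remaining cues are ignored once one discriminates: “take the best, ignore the rest”.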

Recognition heuristic: the recognised object has the highest value

– in a choice task people said city X had the largest population because they recognised its name


Oppenheimer (2003) – against recognition heuristic

– subjects decided whether small known cities were larger than fictitious cities

– recognised city judged larger on only 37% of trials: knowledge overrode recognition

– Hertwig & Pachur (2006) – recognition heuristic used more with time pressure

– obvious that we don’t use the stopping rule when e.g. choosing a partner: we use all the evidence



– evolutionary basis in strengths & weaknesses in human judgement

– idea of natural sampling i.e. coming across instances in our environment

– means it’s easier for us to work with frequencies not fractions/percentages

– we ignore base rates & make mistakes because tasks involve stats & percentages

Fiedler (1988) – Linda problem better when presented in terms of frequencies


– people do perform well on some judgement tasks involving probability



– addresses point that some use complex cognitive processes over heuristics

System 1: intuitive, automatic, immediate; most heuristics are produced by this system

System 2: analytical, consciously monitored & controlled; operations are slower, serial

– system 1 gives quick answers to judgement problems but system 2 might correct them

– Kahneman says most people mainly use system 1; this makes sense to me and fits the results of lots of research

– found more intelligent people use system 2 more, because it is more cognitively demanding

DeNeys (2006) – those correctly solving Linda problem took longer because used sys2


– model not explicit about processes involved e.g. how does system 2 monitoring work?

– model is serial (sys1 then sys2) but many believe processes run in parallel




– was assumed that people behave rationally, as in normative theories

– then came the development of subjective expected utility theory

– but people’s decisions are often driven by things other than simple utility



– Tversky & Shafir (1992) – win $200 on heads or lose $100 on tails

– majority didn’t want to bet even though they’d make a gain over a series of tosses
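
The expected value of this gamble is easy to check:

```python
# Expected value of the Tversky & Shafir (1992) gamble:
# win $200 on heads, lose $100 on tails, fair coin.
ev = 0.5 * 200 + 0.5 * (-100)
print(ev)  # 50.0 per toss, so repeated play is clearly favourable
```

Despite the positive expected value, most people decline, which loss aversion (below) explains.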

2 main assumptions:

  1. people identify a reference point representing their current state
  2. loss aversion: people are more sensitive to potential losses than gains

– means people don’t want to take bets involving losses even though the gains might outweigh them

– they’d prefer a sure gain to a risky but maybe greater gain

– also people give more weight to low probability events than they should (lottery)

– and people give less weight to high probability events than is merited
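
These ideas are often summarised with a value function; the following is a sketch using the parameter estimates commonly cited from Tversky & Kahneman (1992) (curvature α ≈ 0.88, loss-aversion λ ≈ 2.25), offered as illustration rather than a full model:

```python
# A sketch of the prospect-theory value function: outcomes are coded as
# gains/losses from the reference point, and losses loom larger than gains.
# Parameters are the commonly cited Tversky & Kahneman (1992) estimates.

def value(x, alpha=0.88, lam=2.25):
    if x >= 0:
        return x ** alpha             # concave for gains
    return -lam * (-x) ** alpha       # steeper (loss aversion) for losses

print(value(100) + value(-100) < 0)  # a 50/50 win-$100/lose-$100 gamble "feels" negative
```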

Kahneman & Tversky showed that people didn’t make the choices predicted by utility/normative theories

Kahneman & Tversky showed the framing effect: influence of irrelevant aspects of a situation (e.g. wording)

– the 1981 Asian disease problem led to different choices depending on positive/negative framing

– results could be explained by loss aversion

– utility theory suggests framing shouldn’t make any difference

– so prospect theory fits the data better than utility theory


Josephs et al (1992) – individual differences

– certain $8 vs 66% chance of $12

– those with low self-esteem were more risk averse

– suggests prospect theory doesn’t emphasize individual differences

Post et al (2008) – analysis of Deal or No Deal gives mixed support for prospect theory

Wang (1996) – social & moral factors

– Asian disease decision influenced by smaller group size & relation to patient (relatives)

– findings inconsistent with utility theory

– social & moral factors not emphasized by prospect theory

Hertwig et al (2004) – outcome different with neat lab descriptions vs real world & experience

Moorman & van den Putte (2008) – framing effects depend on individual differences as well as positive/negative framing

Eysenck & Keane (2010) – no detailed rationale is given for the value function graph, so the theory is only partial



– prospect theory doesn’t emphasize emotional & social factors but these influence decisions

Ritov & Baron (1990) – omission bias

– hypothetical child vaccine scenario that protects but also has possible death side effect

– people chose not to vaccinate even though the risk of death from the vaccine was lower than from the disease

– omission bias: prefer inaction to action because their action could lead to loss & regret

Anderson (2003) – rational-emotional model

– experienced and anticipated emotions e.g. fear and regret have a role in decisions

– “people make choices that reduce negative emotion”

– can explain loss aversion






Unbounded rationality: all relevant info is available for use & is used by us & we optimise

Bounded rationality: we are as rational as our processing limitations allow (e.g. heuristics)

– constraints come from the environment (info costs) or the mind (attention, memory)

Satisficing: a heuristic where we consider options and select first one meeting min reqs

– won’t always lead to the best decision but is useful where new options are presented over time

– e.g. marriage: set min acceptable level and first person meeting/exceeding is chosen
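
The satisficing rule can be sketched as follows: scan options in the order they arrive and take the first that clears the threshold (names and scores are invented):

```python
# Satisficing: take the first option that meets the minimum requirement,
# rather than searching exhaustively for the optimum.

def satisfice(options, minimum):
    for name, score in options:   # options arrive over time
        if score >= minimum:
            return name           # first acceptable option wins
    return None

candidates = [("A", 4), ("B", 7), ("C", 9)]  # C is actually best, but...
print(satisfice(candidates, minimum=6))      # → "B": good enough, chosen first
```

This makes concrete why satisficing won’t always pick the best option: a later, better candidate is never even examined.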

Galotti (2007) – university decisions

– looked at real life decisions e.g. choosing a college

– found people limited the amount of info they considered, consistent with bounded rationality



– conscious thought has the limited capacity of consciousness; unconscious greater capacity

– unconscious naturally weights relative importance of various attributes

– but only conscious thought can follow strict rules and deal with complex maths problems

– found unconscious thinkers more satisfied than conscious thinkers with complex (IKEA) products purchased

– conscious thinkers more satisfied with simple products purchased than unconscious thinkers

Betsch et al (2001) – shares

– subjects looked at advertisements and info about 5 shares then asked Qs about shares

– poor (conscious) answers but used gut feeling to identify best & worst shares


– perhaps underestimates the usefulness of conscious thought

– in real world we have devices to deal with limitations of conscious thought e.g. notes

– complex engineering obviously can’t be done with unconscious thought only!

– precise cognitive processes used in conscious thought unknown



– paradox: many fail in lab reasoning tasks but cope fairly well in everyday life but…

  1. it’s sensible for people to use heuristics because they’re quick & fairly accurate
  2. performance is better when base rate info is made more explicit
  3. studies don’t take into account that people make decisions based on social factors
  4. problems can be artificial so don’t tell us about decision making in real world
  5. lab conclusions are scored right or wrong – real-life reasoning deals in degrees of probability
  6. people’s interpretation of the problem/ rewording affects results


Stanovich & West (2008) – intelligence

– more intelligent perform better on deductive reasoning (not judgement)

– suggest problem is processing ability not tasks


Tversky et al (1995) – medical experts got diagnosis probabilities wrong, so it’s not just misinterpretation


Stanovich & West (2008) – overriding heuristics

– people make errors on tasks because they don’t put aside/override heuristics

  1. they don’t have the mindware to override the heuristic
  2. they have the mindware but don’t realise that they should override
  3. they have the mindware, realise they should override, but fail to decouple & do the analytic processing needed

– S&W suggest performance due more to non-obviousness of cues than irrationality

– intelligent people might be more rational in some sense than less intelligent people



– subjective expected utility (SEU) theory is the most extensively applied model of risky choice

– rational decision makers should trade off the value of outcomes against the likelihood of obtaining them
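
A minimal sketch of the SEU calculation, with invented probabilities and utilities:

```python
# Subjective expected utility: weight each outcome's utility by its
# (subjective) probability and choose the option with the highest sum.
# Probabilities and utilities below are illustrative assumptions.

def seu(outcomes):
    return sum(p * u for p, u in outcomes)

sure_thing = [(1.0, 8)]             # certain utility of 8
gamble = [(0.5, 20), (0.5, -2)]     # risky option
print(seu(sure_thing), seu(gamble))  # 8 vs 9.0: SEU says take the gamble
```

The findings above (loss aversion, framing, emotion) are exactly the places where real choices depart from this calculation.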