The same cause underlies both the availability heuristic and the conjunction fallacy

Root problem: to judge how likely something is, we substitute a judgment about representativeness, i.e. how characteristic or typical the thing seems.

See how this looks in two different failure modes:

  • Availability heuristic: To judge the actual rate of homicide, we go by how typical it feels to encounter reports of homicide.
  • Conjunction fallacy: To judge the likelihood of "Russia invades Poland, followed by a breakdown of US–Soviet relations", we go by how typical that sequence of events feels.

While the standard antidote to the conjunction fallacy is learning to treat each extra detail as an additional burden of improbability, and there are multiple band-aids for the availability heuristic, maybe we can also get at the root of the problem.
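
To see why each added detail is a burden, here's a minimal numeric sketch in Python. All the probabilities are invented purely for illustration; the only real content is the conjunction rule itself, P(A and B) = P(B) · P(A | B) ≤ P(B).

```python
# Minimal sketch of 'details are burdens'. All numbers invented for illustration.
p_breakdown = 0.10                 # assumed: P(US-Soviet relations break down)
p_invasion_given_breakdown = 0.30  # assumed: P(Poland invaded | breakdown)

# Conjunction rule: P(A and B) = P(B) * P(A | B), which can never exceed P(B).
p_both = p_breakdown * p_invasion_given_breakdown

print(f"P(breakdown alone)        = {p_breakdown:.2f}")  # 0.10
print(f"P(invasion AND breakdown) = {p_both:.2f}")       # 0.03 -- strictly smaller
```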

Which is… that our inner simulator loves to compare against 'typicality clusters'? And it's easy to see why if you think of it as implementing a neural net: the more nodes light up, the stronger the sense of recognition.
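
A toy sketch of that mechanism in Python (my own illustration, not a real cognitive model; the feature sets are invented): treat the 'typicality cluster' as a stored prototype of features, and felt recognition as raw feature overlap. Adding vivid details lights up more nodes even as it shrinks the actual probability.

```python
# Toy model: recognition strength = feature overlap with a stored prototype.
# All features are invented for illustration.
COLD_WAR_CRISIS = {"invasion", "superpower", "diplomatic breakdown", "europe"}

def recognition(claim_features: set) -> int:
    """More shared nodes lighting up -> stronger felt recognition."""
    return len(claim_features & COLD_WAR_CRISIS)

bare_claim = {"diplomatic breakdown"}
detailed_claim = {"invasion", "europe", "diplomatic breakdown"}

print(recognition(bare_claim))      # 1 -> feels vague, atypical
print(recognition(detailed_claim))  # 3 -> feels typical, hence 'likely'
# Probability runs the other way: the detailed claim holds in a strict
# subset of the worlds where the bare claim holds, so it is less probable.
```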

Can we get it to habitually wire up a different sort of neural net?

See picture here: www.greaterwrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside

Not sure where I'm going with that.

Actually, that post is about words. The dangling node doesn't matter for this.

What we need is a… companion node tied to every node, acting as a kind of gatekeeper so the node doesn't light up prematurely? One trained to be sensitive to the exact statement it hears, i.e. it knows "A influences B" doesn't mean "B influences A".
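
A hypothetical sketch of that gate in Python (the relation store and names are all invented): the companion check fires only on the exact directed statement it was trained on, never its reversal.

```python
# Hypothetical 'gatekeeper' companion: the recognition node may only fire
# when the exact directed statement is in the store. Invented example data.
known_influences = {("smoking", "cancer")}  # directed: smoking -> cancer

def gated_recognition(cause: str, effect: str) -> bool:
    """Fire only on the exact directed pair, not its reversal."""
    return (cause, effect) in known_influences

print(gated_recognition("smoking", "cancer"))  # True: exact match, gate opens
print(gated_recognition("cancer", "smoking"))  # False: reversed, gate stays shut
```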

<2024-Mar-25> Representativeness heuristic! That's the name for this thing.
