Bias blind spot

Perhaps the scariest bias.

Potential causes

  • The introspection illusion: you believe that you have direct insight into the origins of your mental states (while, ironically, it's plain to you that other people do not know why they do what they do).
    • It could help to have deeply internalized felt senses of
      • how easy it is for two people to misunderstand each other (test: if you don't tend to make the fundamental attribution error, good)
      • that you don't know where any of your thoughts come from – that you're not a rational homunculus deep down, merely beset by annoying biases, but rather that those biases are all there is behind your sense of self – that you're running on malicious hardware
      • Level 2 theory of mind
      • that it is useless to be superior – egolessness
    • It could help to fully train away the mind projection fallacy, the typical mind fallacy, the fundamental attribution error, and every error that comes from extrapolating from how you work to how others work – though here the intent runs the other way: to extrapolate from how others work to how you work.
  • Self-enhancement bias (underlying the Dunning-Kruger effect, or the same thing?)
    • Perhaps err on the side of having an impostor effect in this specific matter, so you suspect your introspective ability is closer to a lemur's than to most people's
      • I am worse at knowing myself than G is at knowing herself, for sure

Other potential patches:

  • For each important decision or conclusion, draw on your encyclopedic knowledge of biases and ask: "what biases are going into this decision? what biases are involved in this type of decision?" Then correct for each at least somewhat.
    • Correcting for them doesn't mean "oh, I see that I was biased in that way here"… all the research says you won't see it merely from having it pointed out as a possibility. It means you shift your conclusion anyway, despite feeling it's already as correct as can be.
      • The problem is that we don't know how much we're personally affected by each bias, especially after education in debiasing, and especially considering some biases may not even exist (bad science) – so how do we know how much to correct for them? Maybe only bother to do this in cases where you have a feedback system, i.e. where you will soon be told if your conclusion is wrong. Need concrete examples. (A toy sketch of this patch follows the list.)
        • Taleb could patch in here … not just whether there's a feedback system; maybe also consider whether the loss is bounded or unbounded. Need concrete examples.
  • Supposing that the blind spot cannot be removed, maybe we can make the spot smaller? Shrink the space of consequences. Limit the damage.
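
Below is a minimal Python sketch of the checklist patch, purely illustrative: the SuspectedBias and corrected_estimate names, the shift magnitudes, and the damping factor are all made up here, not drawn from any debiasing literature. It just encodes the three moves from the list: enumerate suspected biases, shift the conclusion partway against them anyway, and only bother where feedback exists and (per the Taleb addendum) the loss is bounded.

```python
# Toy model of the "checklist and correct anyway" patch.
# All numbers and names below are made up for illustration.

from dataclasses import dataclass


@dataclass
class SuspectedBias:
    name: str
    # Assumed direction/size of distortion on the raw estimate:
    # -0.30 means "this bias probably deflated the estimate by ~30%".
    assumed_shift: float


def corrected_estimate(raw_estimate: float,
                       biases: list[SuspectedBias],
                       has_feedback: bool,
                       loss_is_bounded: bool,
                       damping: float = 0.5) -> float:
    """Shift the raw estimate against each suspected bias, "at least somewhat".

    Only bothers when a feedback system exists (you'll soon learn if you
    were wrong) and, per the Taleb addendum, when the loss is bounded.
    """
    if not (has_feedback and loss_is_bounded):
        return raw_estimate  # uncheckable corrections aren't worth trusting
    # Correct only partway (damping < 1), since we don't know how strongly
    # each bias affects us personally.
    total_shift = sum(b.assumed_shift for b in biases) * damping
    return raw_estimate * (1 - total_shift)


# Example: a project-length estimate suspected of planning-fallacy and
# optimism-bias deflation.
weeks = corrected_estimate(
    raw_estimate=6.0,
    biases=[SuspectedBias("planning fallacy", -0.30),
            SuspectedBias("optimism bias", -0.10)],
    has_feedback=True,      # the deadline will tell us
    loss_is_bounded=True,   # a late project, not ruin
)
print(f"corrected estimate: {weeks:.1f} weeks")  # 7.2, up from the raw 6.0
```

The damping factor is the honest part of the sketch: since personal susceptibility to each bias is unknown (and some biases may not replicate), it corrects only partway and relies on the feedback loop to show whether the correction helped.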
