
Reversal test

Bostrom & Ord propose what they dub a "reversal test" for eliminating status quo bias in applied ethics (Bostrom & Ord 2006).

The test applies when you're discussing a continuous parameter such as lifespan and want to avoid misjudging its ideal value, e.g. "what's the ideal human lifespan?", "how much ought people to receive in financial support?", "how many gigabytes of storage do I need?", or "how many days should our bicycle trip last?"

For example, if it became possible to increase the human lifespan, some would argue that it would be undesirable for people to live longer because, say, overpopulation would be difficult to manage. The reversal test is then to check whether the same people accept that a shorter lifespan is desirable […]

If the reversed argument appears absurd, then by your own values the original argument was just as absurd in the other direction, so you'd better drop it like a hot potato.

I see this as another facet of the general art of trying to disprove everything you hear (Attempt to falsify) at least once, just to see if it makes you think differently about the topic.

I want to use the term "reversal test" for a wider category of techniques, including

  • when judging an ideal quantity: simply checking that "if less is worse, more is better" holds. If a bit more is worse, a bit less should be better, and if a lot more is worse, a lot less should be better (it's extremely unlikely X is already at the ideal level, so if you resist adding Y to X, a good starting point is to check whether it would be good to subtract the same quantity Y from the current value of X).
  • disprove everything you hear
  • reversing all advice you hear (Try reversing the advice you hear)
  • reversing all new facts you hear (How to feel shocked enough?)
  • reversing claims/statements to spot *Applause Lights
  • "consider the opposite" (How to debias?) (i.e.: when you make a decision or draw a conclusion, think at least once how it might be completely wrong)


Think up two reasons why your starting judgment could be wrong

This technique helps against a range of biases, at least including:

  • overconfidence
  • hindsight bias
  • anchoring

All of those biases are reduced if you just think up a couple of reasons your initial judgment might be incorrect.

Although don't dwell too long:

After a certain point, it becomes increasingly difficult for a person to generate reasons they might have been incorrect. This then serves to convince them that their idea must be right, since otherwise it would be easier to come up with reasons against the claim. At this point, the technique ceases to have a debiasing effect. While the exact number of reasons that one should consider is likely to differ from case to case, Sanna et al. (2002) found a debiasing effect when subjects were asked to consider 2 reasons against their initial conclusion but not when they were asked to consider 10. Consequently, it seems plausible that the ideal number of arguments to consider will be closer to 2 than 10.

So consider the opposite but not too much.


Bias blind spot

Perhaps the scariest bias.

Potential causes

  • The introspection illusion: you believe that you have direct insight into the origins of your mental states (and ironically, it's plain to you that other people do not know why they do what they do).
    • It could help to have deeply internalized felt senses of
      • how easy it is for two people to misunderstand each other (test: if you don't tend to make the fundamental attribution error, good)
      • that you don't know where you ever get any thought from – that you're not a rational homunculus deep down merely being beset by annoying biases, but rather that those biases are all there is behind your sense of self – that you're running on malicious hardware
      • Level 2 theory of mind
      • that it is useless to be superior – egolessness
    • It could help to fully train away the mind projection fallacy, typical mind fallacy, fundamental attribution error, and every sort of error that comes from extrapolating from how you work to how others work; here the intent is the reverse: to extrapolate from how others work to how you work.
  • Self-enhancement bias (underlying or same as Dunning-Kruger effect?)
      • Perhaps err on the side of an impostor effect in this specific matter, so you suspect your introspective ability to be closer to that of a lemur than to that of most people
      • I am worse at knowing myself than G is at knowing herself, for sure

Other potential patches:

  • For each important decision or conclusion, draw on your encyclopedic knowledge of biases and ask "what biases are going into this decision? / what biases are involved in this type of decision?" Then correct for each at least somewhat.
    • Correcting for them doesn't mean "oh, I see that I was biased in that way here"… all the research says you won't see it just from having it pointed out as a possibility. It means you shift your conclusion anyway, despite feeling it's already as correct as can be.
    • The problem is that we don't know how much we're personally affected by each bias, especially after education in debiasing, and especially considering some biases may not even exist (bad science), so how do we know how much to correct for them? Maybe only bother to do this in cases when you do have a feedback system, i.e. you will soon be told if your conclusion is wrong. Need concrete examples.
      • Taleb could patch in here … consider not just whether there's a feedback system, but maybe also whether the loss is bounded or unbounded. Need concrete examples.
  • Supposing that the blind spot cannot be removed, maybe we can make the spot smaller? Shrink the space of consequences. Limit the damage.


You don't need to know about biases to debias

Telling the subjects of cogsci experiments about a bias rarely helps them – they continue to be just as biased. YES we need Rationality techniques that produce less biased judgment, but NO we don't need to know which biases they circumvent, since the techniques should circumvent them whether or not we know of them.

If that's true, I see only two reasons to learn about biases: so you can re-explain them to people, to drive home how much the techniques may help; and, if you're developing a new technique, you'll of course need to have a bias in mind.

I understand the research has shown that, for most biases, it doesn't help to be told about them or to keep them in mind. However, it seems feasible to me that it could help to dwell more deeply on each bias, at least if you first do as a good student would – approach it with many different sorts of questions, think about concrete examples, how it would show up, what sort of policy could prevent the problem, etc. – and then develop a TAP (trigger-action plan) and finally train yourself to Notice the trigger. Game, set and match.

Since Knowing About Biases Can Hurt You, it seems worth doing at least this much for biases you do know about. If you're gonna know it, know it well.
