For every postulate, provide at least one gear
Background:
Many times I think something is obviously true, but when I start writing a blog post about it, where I have to explain and justify it, I realize mid-paragraph that what I'm writing is not quite correct, and I have to rethink it.
Same reason the slipbox confronts us with our lack of understanding.
So the rule is: when you make a claim or postulate anything, try to also give one of your reasons for believing it. You could give all your reasons, but that takes time. The important thing is to prop the door open, so to speak… to get out of the habit of insisting or feeling that "well, it's obviously true" without really analyzing it.
At a glance, we might regard this rule as superfluous: if we always avoid pre-drawing the bottom line, we'll never need to demystify our reasons, because we started from the reasons! Right?! But a brain with expertise in a topic will often know things without having thought of the reasons directly, and we like this efficiency. So we often end up needing to unpack in order to understand ourselves, and this advice benefits us even if we were experts in every subject.
My theory: refine this technique of providing reasons into a bottom-up "factor search" instead of the default "justification search". Instead of justifying a feeling you already have, detach for a moment from the outcome and list what factors affect it, as if taking notes for a slipbox. Your conclusion may be clear after simply listing them; if it isn't, you can say "dunno, but I feel the truth is X". No need to weave a narrative.
So the technique is not "provide reasons why something is so" (justification search) but "list what factors would affect whether something is so" (factor search), or, even more crisply, "find your cruxes".
It may not even be necessary to claim that the thing is so at all… you could speak aloud only the cruxes until the thing itself becomes apparent? But that's another skill.
I feel this relates to the general habit of dissolving attributes or dissolving categories in favour of identifying specifics: www.greaterwrong.com/posts/ik2oJrQA4jz2uxovE/think-in-terms-of-actions-not-attributes. The same idea shows up in E-Prime and in the old writing rule "show, don't tell".
However, don't bother finding reasons for matters of preference, such as aesthetics or taste:
Students given a poster of their choice were less happy with their decision some months later if they had been asked to give reasons for their choice than if they were just given the poster with no questions. The study hypothesized that students chose based on easily explainable aspects rather than the aspects that actually affected their preferences.
So be careful what you give reasons for. Perhaps more aesthetic decisions should be left to the initial impression.
Reference class forecasting
Observed to work against
- planning fallacy (Osberg and Shrauger 1986)
(Note: When people use the overloaded expression "taking the Outside view", they may mean reference class forecasting, or they may mean other completely different things.)
In short: If you want to guess how long a task will take, ask how long a similar task has taken you in the past, or how long you think someone else would expect you to take. You can apply the technique to many things, not just time prediction.
There's not always a natural reference class. In that case, you're "reasoning by analogy".
For example, if you want to invade Ukraine, ask how the US fared in Iraq before you commit to the decision. This may be perfectly good to do, but it's not strict RCF, just analogy. The more similar the analogy, the better: the US did not border Iraq nor was it seeking to annex it, so ideally you'd find a more closely matching example.
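As a minimal sketch of the time-prediction case (Python; the task log and function names are made up purely for illustration), the idea is to summarize the empirical record of similar past tasks instead of consulting your gut:

```python
from statistics import median, quantiles

def reference_class_forecast(past_durations):
    """Estimate a new task's duration from how long similar
    tasks actually took, instead of from a gut feeling."""
    if len(past_durations) < 2:
        raise ValueError("no usable reference class; fall back to analogy")
    deciles = quantiles(past_durations, n=10)  # 9 cut points between deciles
    return {
        "typical (median)": median(past_durations),
        "optimistic (10th percentile)": deciles[0],
        "pessimistic (90th percentile)": deciles[8],
    }

# Hypothetical log: hours that past "write a blog post" tasks took.
print(reference_class_forecast([3, 5, 4, 8, 6, 5, 12, 4, 7, 5]))
```

The point of the sketch is only that the forecast comes from the distribution of past outcomes, not from simulating how this particular task "should" go.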
Reversal test
Bostrom & Ord showed that subjects applying what they dub a "reversal test" eliminates status quo bias in applied ethics (Bostrom & Ord 2006).
The test applies when you're discussing a continuous parameter such as lifespan and want to avoid misjudging the ideal value of that parameter: "what's the ideal human lifespan?", "how much ought people to receive in financial support?", "how many gigabytes of storage do I need?", "how many days should our bicycle trip last?"
For example, if it became possible to increase the human lifespan, some would argue that it would be undesirable for people to live longer because, say, overpopulation would be difficult to manage. The reversal test is then to check that the same people accept that shorter lifespan is desirable […]
If reversal makes the argument appear absurd, your own values imply it was absurd in the original direction too, so you'd better drop it like a hot potato.
I see this as another facet of the general art of trying to disprove everything you hear (Attempt to falsify) at least once, just to see if it makes you think differently about the topic.
I want to use the term "reversal test" for a wider category of techniques, including
- when judging an ideal quantity: simply check that "if less is worse, more is better" holds. If a bit more is worse, a bit less should be better, and if a lot more is worse, a lot less should be better. It's extremely unlikely that X already sits at the ideal level, so if you resist adding Y to X, a good starting point is to check whether subtracting that same Y from the current value of X would be an improvement (see the sketch after this list).
- disprove everything you hear
- reversing all advice you hear (Try reversing the advice you hear)
- reversing all new facts you hear (How to feel shocked enough?)
- reversing claims/statements to spot Applause Lights
- "consider the opposite" (How to debias?) (i.e.: when you make a decision or draw a conclusion, think at least once how it might be completely wrong)
What links here
- Rationality techniques
- How to feel shocked enough?
- How to be confused by fiction?
- How to debias?
- Ritual of Changing My Mind