Non-Occam

[…] This is not the same question as “How do I argue Occam’s Razor to a hypothetical debater who has not already accepted it?”

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam’s Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn’t implement Modus Ponens, it can accept “A” and “A->B” all day long without ever producing “B”. How do you justify Modus Ponens to a mind that hasn’t accepted it? How do you argue a rock into becoming a mind?
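To make the "dynamic structure" point concrete, here is a minimal Python sketch (all names mine, purely illustrative) of a mind that implements Modus Ponens as a forward-chaining rule, next to an inert "mind" that merely stores sentences:

```python
# A "mind" here is just a set of believed sentences plus whatever
# inference dynamics it happens to have. Sentences are strings; an
# implication "A->B" is stored as the pair ("A", "B").

def close_under_modus_ponens(facts, implications):
    """Repeatedly apply Modus Ponens: from A and (A -> B), derive B."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)  # the step a rock never takes
                changed = True
    return derived

def inert_mind(facts, implications):
    """Accepts sentences all day long, but no inference rule ever fires."""
    return set(facts)

print(close_under_modus_ponens({"A"}, [("A", "B")]))  # {'A', 'B'}
print(inert_mind({"A"}, [("A", "B")]))                # {'A'}
```

The inert version never makes a mistake, exactly like the rock; it also never concludes anything.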

Our minds implement Modus Ponens, Occamian priors, and other such fundamentals, and these make sense to us because we are built with them. We believe in the validity of Modus Ponens because we fundamentally work by Modus Ponens!

So at some point, when you run up against the task of questioning Modus Ponens itself, you end up in a sort of reflective loop. (www.greaterwrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom)

And what about trusting reflective coherence in general? Wouldn’t most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect? Ah, but we evolved by natural selection; we were not generated randomly.

So, at the end of the day, what happens when someone keeps asking me “Why do you believe what you believe?”

At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.”
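One toy way to cash out "the simplest and most stable level of organization" (my illustration, not the essay's): among hypotheses that fit the past about equally well, charge each a complexity penalty and predict with the cheapest survivor. A sketch, assuming numpy and using polynomial degree as a crude stand-in for description length:

```python
# Occamian toy: fit polynomials of increasing degree to past data,
# penalize complexity, and predict with the best-scoring hypothesis.
import numpy as np

past = np.array([1.0, 2.0, 3.0, 4.0])  # observations so far
xs = np.arange(len(past))

def penalized_score(degree):
    coeffs = np.polyfit(xs, past, degree)
    fit_error = np.sum((np.polyval(coeffs, xs) - past) ** 2)
    return fit_error + degree  # crude "description length" penalty

best = min(range(len(past)), key=penalized_score)
coeffs = np.polyfit(xs, past, best)
prediction = float(np.polyval(coeffs, len(past)))
print(best, round(prediction, 2))  # 1 5.0 -- the straight line wins
```

A cubic also fits those four points perfectly, but it pays a higher complexity penalty, so the simple "add one each step" hypothesis makes the prediction.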

But then… haven’t I just licensed circular logic?

Actually, I’ve just licensed reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.

Even if you classify this reasoning as circular logic, it's a rather specific subtype.

A reflective loop of this sort is a bit different from the circular logic of "my blind faith was placed in me by God, and is therefore trustworthy" – for one thing, the latter doubles as a stopsign. You can unpack that sentence and its origins if you want to, but you simply decide there's nothing more to do and stop there. With the reflective loop, by contrast, it's not just you: every philosopher agrees there's nothing more to be done from within the mind of the reflector.

In point of fact, when religious people finally come to reject the Bible, they do not do so by magically jumping to a non-religious state of pure emptiness, and then evaluating their religious beliefs in that non-religious state of mind, and then jumping back to a new state with their religious beliefs removed.

People go from being religious, to being non-religious, because even in a religious state of mind, doubt seeps in. They notice [examples], and these don't seem to make sense even under their own religious premises.

Being religious doesn’t make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don’t heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.

All of this is to demonstrate that questioning Occam's Razor amounts to asking you to step outside your own mind, which is asking the impossible. Even a superintelligence can't do that. You can guess what an alternate brain-design might conclude, but you'd never agree that those conclusions make sense!


There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.

And when you ask these strange beings why they keep using priors that never seem to work in real life… they reply, “Because it’s never worked for us before!”
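For concreteness, here's a toy numerical version of that punchline, contrasting Laplace's rule of succession with an inverted, anti-Laplacian counterpart (my own illustration, not from the original essay):

```python
# Laplace's rule of succession: after s successes in n trials, estimate
# P(success next) = (s + 1) / (n + 2). The anti-Laplacian mind inverts
# this: the more often something has happened, the less likely it
# judges it to happen again.

def laplacian(successes, trials):
    return (successes + 1) / (trials + 2)

def anti_laplacian(successes, trials):
    return (trials - successes + 1) / (trials + 2)  # roles swapped

# The sun has risen on all 10 observed mornings:
print(laplacian(10, 10))       # ~0.92: "it will probably rise again"
print(anti_laplacian(10, 10))  # ~0.08: "surely not an 11th time!"

# The punchline: the anti-Laplacian prior has failed 10 times out of 10,
# so by its own rule it now expects itself to succeed:
print(anti_laplacian(0, 10))   # ~0.92: "because it's never worked before!"
```

By its own lights, an unbroken record of failure is the best possible news for the anti-Laplacian prior.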

[…]

When I examine the causal history of my brain—its origins in natural selection—I find, on the one hand, all sorts of specific reasons for doubt; my brain was optimized to run on the ancestral savanna, not to do math. But on the other hand, it's also clear why, loosely speaking, the brain really could work: natural selection would have quickly eliminated brains saddled with priors as completely unsuited to reasoning, as anti-helpful, as anti-Occamian or anti-Laplacian ones.

So what I did in practice does not amount to declaring a sudden halt to questioning and justification. I'm not halting the chain of examination at the point that I encounter Occam's Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?

[…] Still… wouldn’t it be nice if we could examine the problem of how much to trust our brains without using our current intelligence? Wouldn’t it be nice if we could examine the problem of how to think, without using our current grasp of rationality?

When you phrase it that way, it starts looking like the answer might be “No”.

I don't entirely buy it yet. If those alien minds with anti-Occamian priors were to go ahead and doubt their own priors, the same way some armchair philosophers argue we can doubt Occam's Razor, perhaps that would give them a chance to "escape" their poor mind-design. Is it not possible to find some principled approach, some way to explore fuzzing our own priors and find out what's actually best? Their counterpart of Eliezer might argue, as ours does, that doing so would mean abandoning what they know as good reasoning, which is the last thing you want to do when debugging your own reasoning. Only, their Eliezer would actually be wrong (by our judgment) – it would actually be good for them to abandon their priors (by our judgment).

Recall that parable of the deluded patient who believes he's a psychiatrist and that his psychiatrist is his patient, and who agrees to take, together with the psychiatrist, a drug that cures the delusion, so that whichever one was deluded, "the patient makes a full recovery" – should we not try something similar, too?

I admit I cannot imagine the particulars.
