
Non-Occam

[…] This is not the same question as “How do I argue Occam’s Razor to a hypothetical debater who has not already accepted it?”

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam’s Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn’t implement Modus Ponens, it can accept “A” and “A->B” all day long without ever producing “B”. How do you justify Modus Ponens to a mind that hasn’t accepted it? How do you argue a rock into becoming a mind?
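The "dynamic structure" point can be made concrete. A mind that merely stores "A" and "A->B" is like the rock; one that implements Modus Ponens closes its beliefs under the rule. A minimal sketch (the representation is my own illustration, not from the text):

```python
def forward_chain(facts, implications):
    """Repeatedly apply Modus Ponens: from A and A->B, derive B.
    `implications` is a set of (antecedent, consequent) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in derived and b not in derived:
                derived.add(b)
                changed = True
    return derived

# A rock just holds {"A"} forever; this loop actually produces "B" (and "C").
print(sorted(forward_chain({"A"}, {("A", "B"), ("B", "C")})))  # -> ['A', 'B', 'C']
```

The `while changed` loop is the "dynamic structure": without it, the premises sit inert, exactly like the rock.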

Our minds implement Modus Ponens, Occamian priors, and other fundamentals, and they make sense to us because we are built with them. We believe in the validity of modus ponens because we fundamentally work by modus ponens!

So at some point, when you run up against questioning modus ponens itself, you end up in a sort of reflective loop. (www.greaterwrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom)

And what about trusting reflective coherence in general? Wouldn’t most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect? Ah, but we evolved by natural selection; we were not generated randomly.

So, at the end of the day, what happens when someone keeps asking me “Why do you believe what you believe?”

At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.”

But then… haven’t I just licensed circular logic?

Actually, I’ve just licensed reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.

Even if you classify this reasoning as circular logic, it's a rather specific subtype.

A reflective loop of this sort is a bit different from the circular logic of "my blind faith was placed in me by God, and is therefore trustworthy" – for one thing, the latter doubles as a stopsign: you could unpack that sentence and its origins if you wanted to, but you decide there's nothing more to do and stop there. With the reflective loop, it's not just you deciding to stop – every philosopher agrees there's nothing more to be done from within the mind of the reflector.

In point of fact, when religious people finally come to reject the Bible, they do not do so by magically jumping to a non-religious state of pure emptiness, and then evaluating their religious beliefs in that non-religious state of mind, and then jumping back to a new state with their religious beliefs removed.

People go from being religious, to being non-religious, because even in a religious state of mind, doubt seeps in. They notice [examples] and it doesn’t seem to make sense even under their own religious premises.

Being religious doesn’t make you less than human. Your brain still has the abilities of a human brain. The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself. People don’t heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch. They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.

All that is to demonstrate that questioning Occam's Razor is asking you to step outside your own mind, which is asking the impossible. Even a superintelligence can't do that. You can guess what an alternate brain-design might conclude, but you'd never agree that those conclusions make sense!


There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.

And when you ask these strange beings why they keep using priors that never seem to work in real life… they reply, “Because it’s never worked for us before!”
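The Laplacian half of this contrast can be sketched numerically. Laplace's rule of succession says that after k successes in n trials, the probability of another success is (k+1)/(n+2); mirroring it as (n-k+1)/(n+2) for the anti-Laplacian prior is my own illustrative choice, not something from the text:

```python
from fractions import Fraction

def laplace(successes, trials):
    """Laplace's rule of succession: P(next success) = (k+1)/(n+2)."""
    return Fraction(successes + 1, trials + 2)

def anti_laplace(successes, trials):
    """Hypothetical anti-Laplacian prior: the more often something has
    happened, the LESS likely it is judged to happen again."""
    return Fraction(trials - successes + 1, trials + 2)

# The sun has risen 1000 mornings out of 1000:
print(laplace(1000, 1000))       # -> 1001/1002 (near certainty)
print(anti_laplace(1000, 1000))  # -> 1/1002 (near impossibility)
```

Each sunrise makes the anti-Laplacian mind *more* confident in darkness tomorrow – and, by its own rule, each failed prediction is all the more reason to keep the prior.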

[…]

When I examine the causal history of my brain—its origins in natural selection—I find, on the one hand, all sorts of specific reasons for doubt; my brain was optimized to run on the ancestral savanna, not to do math. But on the other hand, it’s also clear why, loosely speaking, it’s possible that the brain really could work. Natural selection would have quickly eliminated brains so completely unsuited to reasoning, so anti-helpful, as anti-Occamian or anti-Laplacian priors.

So what I did in practice does not amount to declaring a sudden halt to questioning and justification. I’m not halting the chain of examination at the point that I encounter Occam’s Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?

[…] Still… wouldn’t it be nice if we could examine the problem of how much to trust our brains without using our current intelligence? Wouldn’t it be nice if we could examine the problem of how to think, without using our current grasp of rationality?

When you phrase it that way, it starts looking like the answer might be “No”.

I don't entirely buy it yet. If those alien minds with anti-Occamian priors were to go ahead and doubt their own priors the same way some armchair philosophers argue we can doubt Occam's Razor, perhaps that gives them a chance to "escape" their poor mind-design. Is it not possible to find some principled approach, some way to explore fuzzing our own priors and finding out what's actually best? Their counterpart to Eliezer might argue, as does ours, that doing so would abandon what they know as good reasoning, which is the last thing you want to do when debugging your own reasoning. Only, their Eliezer would actually be wrong (by our judgment) – it'd actually be good for them to abandon their priors (by our judgment).

Recall the parable of the deluded patient who believes he's a psychiatrist and that his psychiatrist is his patient, but agrees to take, together with the psychiatrist, a drug that cures the delusion – and then, whichever one was deluded, "the patient makes a full recovery". Should we not try something similar, too?

I admit I cannot imagine the particulars.


Created (2 years ago)

"How to convince me that 2 + 2 = 3"

www.greaterwrong.com/posts/6FmqiAgS8h4EJm86s/how-to-convince-me-that-2-2-3

Hilary Putnam's Twin Earth thought experiment on conceivability vs logical possibility seems to touch on this (www.greaterwrong.com/posts/ne6Ra62FB9ACHGSuh/heat-vs-motion):

Once we have discovered that water (in the actual world) is H2O, nothing counts as a possible world in which water isn’t H2O. In particular, if a “logically possible” statement is one that holds in some “logically possible world”, it isn’t logically possible that water isn’t H2O.

On the other hand, we can perfectly well imagine having experiences that would convince us (and that would make it rational to believe that) water isn’t H2O. In that sense, it is conceivable that water isn’t H2O. It is conceivable but it isn’t logically possible! Conceivability is no proof of logical possibility.


Created (2 years ago)

Corporations don't "evolve"

www.greaterwrong.com/posts/XC7Kry5q6CD9TyG4K/no-evolutions-for-corporations-or-nanodevices

One of the misunderstandings of evolution: there's a meme going around that all kinds of things can evolve by natural selection, not just DNA/RNA but also self-copying nanodevices and even things like human corporations. After all, the market is a battleground where survival of the fittest applies, right? Presto, evolution! With time, the corps that survive will be stronger and stronger.

Except no. Many things have to go right for a force like evolution to meaningfully apply. A big thing missing from corps and nanodevices is copying fidelity.

Look. A corp may outcompete another corp by learning a lesson its rival failed to learn – but the winner's successors may easily never learn that specific lesson themselves, and so stay vulnerable to the same mistake. The same way you carry many of your parents' life-lessons but not so many of your great-grandparents' life-lessons, especially if you've never met them, and so you repeat your great-grandparents' mistakes. The same way that post-WW2 was a great time for human rights, but a few generations later fascism is on the rise: current generations do not have the life-lessons to see through the farce, because the life-lessons failed to copy themselves for more than 2–3 generations.

All the old lessons of history, you are doomed to repeat them again and again.

—Leto II, God Emperor of Dune

Contrast with DNA evolution. Consider a genetic mutation that confers a 3% increase in fitness on average. DNA copies with near-perfect fidelity, so after about 768 generations(!), every member of the species has inherited it – and all copies are exactly identical (if even one molecule sits differently, we regard it as a different gene).
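The 768-generation figure matches the standard population-genetics approximation that a beneficial allele with selective advantage s sweeps a population of size N in roughly 2 ln(N)/s generations. N = 100,000 is my assumption to make the arithmetic come out; the note doesn't state a population size:

```python
import math

def generations_to_fixation(s, n):
    """Approximate generations for a beneficial allele with selective
    advantage s to sweep a population of size n: ~2 ln(n) / s."""
    return 2 * math.log(n) / s

# 3% fitness advantage in an assumed population of 100,000:
print(round(generations_to_fixation(0.03, 100_000)))  # -> 768
```

Note the logarithm: even a vastly larger population only adds a handful of generations, which is why selective sweeps are fast relative to evolutionary time.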

DNA copies with something like 10^-8 errors per copy – several orders of magnitude better copying fidelity than we can achieve with any machine for now; I think we're at 10^-2 or something? Our nanomachines would thus have something like a million times DNA's mutation rate (i.e. rate of copying errors), which isn't viable for life.
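To see why a million-fold worse error rate "isn't viable", assume independent per-site errors: the chance a genome copies with zero errors is (1 - error_rate)^length. A sketch with a hypothetical million-site genome (the genome length is my assumption for illustration):

```python
def p_perfect_copy(error_rate, genome_length):
    """Probability a genome of the given length copies with zero errors,
    assuming independent per-site copying errors."""
    return (1 - error_rate) ** genome_length

L = 1_000_000  # hypothetical million-site genome

print(p_perfect_copy(1e-8, L))  # ~0.99: almost every copy is exact
print(p_perfect_copy(1e-2, L))  # ~0.0: essentially no copy survives intact
```

At DNA-like fidelity, selection acts on nearly identical copies; at machine-like fidelity, every "offspring" is riddled with thousands of errors and there is nothing stable for selection to accumulate.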

What links here

  • Evolution
  • 2021-05-05
  • No Evolutions for Corporations or Nanodevices
Created (2 years ago)