
What's an ideal reasoner?

Sometimes it's useful to talk about what an ideal reasoner would do in a situation. But isn't that subjective? Not if by "ideal reasoner" we mean a being that solves, as efficiently as possible, the problem of satisfying its desires.

Sidenote: That phrasing can be confusing, because when we humans hear someone use language like "efficiently optimize for my desires", we may picture an egotistical sociopath. That reaction may be reasonable in everyday contexts… but put yourself in a university philosophy discussion, where words are used very exactly. What does it mean to optimize for your desires or goals?

It just means the same thing we all try to do all the time: a vegan optimizes for the goal of lessening animal suffering, etc.

Human goals may often be fuzzy and self-contradictory, but rarely can a human's set of goals be described as completely selfish. So "efficient optimization" or "ideal reasoning" is not about selfishness.

Ideal reasoning just means this: if, for example, you want to rule the world, figuring out the shortest path to doing so that's still aesthetically acceptable to you (destroying the world with nukes can be a quick way to rule it, but that may not be what you actually want, so you rule that path out). Or if you want to end animal farming, ideal reasoning means figuring out the shortest way to bring about that result. Or if you want good friends, ideal reasoning means finding a practical way to get good friends into your life. And so on.

With that definition in mind, there are all sorts of logical proofs about how an ideal reasoner would treat the information they have and any new information they receive. Failing to act in accordance with these proofs leaves you open to sub-optimal actions (being Dutch-booked)… and human beings can and often do act so sub-optimally that they fail at their quest!
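As a toy illustration of a Dutch book (with hypothetical numbers, not from any real case): if your credences violate the probability axioms, a bookie can sell you a set of bets, each fair by your own lights, that loses money no matter what happens.

```python
# A minimal sketch of a Dutch book, assuming hypothetical credences that
# violate the coherence requirement P(A) + P(not-A) = 1.

p_rain = 0.6  # your credence that it rains tomorrow
p_dry = 0.6   # your credence that it doesn't (coherence demands 0.4)

# By your own lights, a ticket paying $1 if X happens is worth $P(X),
# so you happily buy both tickets at those prices.
cost = p_rain + p_dry  # you pay $1.20 in total
payout = 1.0           # exactly one ticket pays out: you receive $1.00

print(f"Paid ${cost:.2f}, got ${payout:.2f}: a guaranteed loss of ${cost - payout:.2f}")
```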

Yet, we do know many basic principles of ideal reasoning! They just tend to be hard to apply faithfully, due to computational limits, cognitive biases and self-defeating psychology.

How do we know about those principles? There's a whole tower of prescriptions arising from probability theory, decision theory and game theory, resting atop a small set of mathematical axioms and consistent with them and each other. Philosophers have thought about this sort of thing for a long time, and the only real way to reject a given prescription of probability theory is to reject one of the axioms it rests on, such as the staggeringly basic rule of modus ponens: "if A implies B, and I learn A is true, then I also know B is true". As you can imagine, it looks pretty ridiculous to try to deny any of them. And upon not denying any of them, the rest follows.
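Written out formally, modus ponens is the inference rule:

```latex
% Modus ponens: from "A implies B" together with "A", conclude "B".
\frac{A \to B \qquad A}{B}
```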


Joy in the merely real

See also www.greaterwrong.com/posts/Fwt4sDDacko8Sh5iR/the-sacred-mundane


Sci-fi why?

  • In Dune, a Truthsayer learns to tell truth from lies by speaking only truth so much that they detect its absence. While not realistic, it's a beautiful idea.
  • Reading 2001 (the book aged better than the film) was a magical experience. Part of learning Joy in the Merely Real.
  • Hard sci-fi in particular helps against the Fallacy of generalizing from fictional evidence.
    • Greg Egan writes totally possible worlds. Part of Binding yourself to reality, one of those hidden-core-of-rationality things. We should not have to hear about impossible worlds to feel wonder. That so many people seem to have a habit of turning on their wonder-emotion only when hearing impossible fiction seems tied to the fact that almost no-one "looks at the world as if they've never seen it before". Someone said: "When people praise me for that, it's a nice compliment, but wait – actually, I haven't seen it before. What, did everybody else get a preview?"
  • Move past "savannah poets". The ability to write hard sci-fi (let alone rationalfic) makes for a litmus test on authors: the main reason authors don't write hard sci-fi is that they can't. They are savannah poets, retelling the same old Great Stories that find one form or another in every culture that has ever existed: stories of love, loss, vengeance, jealousy et cetera. Savannah poets can anthropomorphize Jupiter just fine, but fall silent if forced to regard Jupiter as a spinning ball of gas, knowing not what to do with that. It takes a developed and scientifically up-to-date mind to begin to have a chance of describing a possible world rather than yet another impossible world – the latter is easy. You're liable to draw moral lessons from the fiction you read (even just subconsciously, see availability heuristic), so if you're going to read fiction anyway, why not minimize exposure to authors with the same old systematic misunderstandings of reality that have been all too common over the past thousand generations?


Occam's Razor not some fuzzy rule of thumb

Not everyone is raised to revere Occam's Razor. To someone who wasn't, the statement "it's the simplest explanation" isn't a knockdown argument for anything. Why couldn't a complex Non-Occam explanation be correct?

So it bears explaining.


Occam's Razor is not just some fuzzy rule of thumb; it has a formalism: minimum message length (MML)!

"The woman down the street is a witch; she did it"

The above sentence looks like a short and simple theory for whatever happened, but it's far from simple. Several of the words used, such as "witch", require a lot of explanation for an AI or alien who knows nothing about any of the words. The complete message – the one containing all the data needed to interpret the sentence, in addition to the sentence itself – is what the MML of your theory measures.

If you then represent the message as binary code, you can describe its complexity in terms of bits (a base-2 logarithm).
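A minimal sketch of that bits-as-description-length idea: under an optimal code, an event of probability p takes -log2(p) bits to specify, so halving the probability adds exactly one bit.

```python
import math

def description_length_bits(p: float) -> float:
    """Bits needed to specify an event of probability p under an optimal code."""
    return -math.log2(p)

print(description_length_bits(0.5))       # 1.0  -> one fair coin flip
print(description_length_bits(0.25))      # 2.0  -> two coin flips
print(description_length_bits(1 / 1024))  # 10.0 -> a 1-in-1024 event
```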

(To be fair, though it makes no difference for this page, we handwave away an important detail here: the "message" would actually be a computer program (code for a universal Turing machine), since that's the language-neutral way to express a theory as compactly as possible.)


Slightly longer messages must be taken by an Ideal reasoner as exponentially less likely to match reality.

Even if on a given occasion this feels hard to justify, it's simple math: if you make a habit of believing messages just one bit longer than the shortest message available, you'll be wrong twice as often as otherwise. To say nothing of when the message is ten bits longer, where on average you must expect your first thousand (because 2^10 = 1024) theories of the same length to be proven false.
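A sketch of that arithmetic, under the usual 2^-L prior over message lengths (the 50-bit baseline below is hypothetical):

```python
# Under a 2**-L prior, each extra bit of message length halves the
# prior probability; ten extra bits cost a factor of 1024.

def prior_weight(length_bits: int) -> float:
    return 2.0 ** -length_bits

baseline = 50  # hypothetical MML of the shortest available theory
for extra in (0, 1, 10):
    ratio = prior_weight(baseline + extra) / prior_weight(baseline)
    print(f"{extra:>2} extra bits -> {ratio:.6f}x the prior probability")
# 0 -> 1.000000x, 1 -> 0.500000x, 10 -> 0.000977x (i.e. 1/1024)
```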

And though there's technically a way out here to save your pet theory, if you were motivated to argue it into a defensible position… it's not valid to hope, along the lines of "there's still a chance, right?", that the longer message happens by luck to describe reality more closely. No one can feel a probability that small, so it's more psychologically honest (in the sense of Asimov's famous essay about being "…wronger than both of them put together") to call it simply zero, i.e. to say that we actually know the simpler explanation is correct (technically, just the most accurate by far – until someone thinks of a theory with an even shorter MML).

This is why physicists strive so hard to find simple theories – the simplicity is as good as proof it's correct!

(Why, then, do physicists run any experiments at all, when they could just sit in an armchair crafting ever simpler theories? Excellent question! There's one constraint on your theory-making: you need the simplest theory that still fits all the facts at hand – otherwise you could just propose a zero-length message as the explanation for everything. If a theory fails to explain even one fact, it's already disproven and the answer has to lie in a different theory, even if that one must be longer. Physicists just discount anything longer than necessary, and run experiments to differentiate between theories of equal length.)
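That selection rule, sketched with hypothetical theories and made-up bit counts: discard whatever fails to fit the facts, then prefer the shortest survivor, with experiments left to adjudicate ties.

```python
# Hypothetical (name, length_in_bits, fits_all_known_facts) triples.
theories = [
    ("zero-length message", 0, False),    # explains nothing, so it's out
    ("witch down the street", 900, False),
    ("theory A", 120, True),
    ("theory B", 120, True),              # ties with A: run an experiment
    ("theory C", 250, True),              # fits, but longer than necessary
]

viable = [(name, bits) for name, bits, fits in theories if fits]
shortest = min(bits for _, bits in viable)
front_runners = [name for name, bits in viable if bits == shortest]
print("Prefer:", front_runners)  # ['theory A', 'theory B']
```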

Once a simpler theory is found that fits, everyone acts like we know this theory is true, because… we essentially do know it.

The word "know", if it's to mean anything useful, is shorthand for a sufficiently high probability – large percentages like 99.9976%, the amount of decimals passing beyond the realm where it's psychologically realistic to keep track of the probability as a mental entity at all. We throw it away, and that's the point where we say we "know" the attached proposition. Although for agents with unbounded computing power, the number would always remain.

As Dennis Lindley (1923–2013) said, our theories must always allow some probability, however tiny, for the possibility that the moon is made of green cheese (Cromwell's Rule). Most people alive today would assign such a proposition too tiny a probability to bother keeping track of – in other words, they know perfectly well it's not made of green cheese! If this bothers you, the issue is that the word "know" is a bit of an abomination, shorthand for a probability hugging 0% or 100% to many decimals. The word "know" serves a pragmatic purpose as such a shorthand, but the vast majority of people don't think of it that way – they just hear it as absolute – so be wary.

Anyway, just as you won't bother to run an experiment checking whether the moon is made of green cheese, since it's too improbable to be worth your time, for the same reason you don't bother to test, or even consider, any other hypothesis with a long MML – it's likewise too improbable to be worth your time.

To nevertheless privilege one long-MML hypothesis and insist it be tested, you must by the same logic argue for checking whether the moon is cheese, along with decillions of other improbable hypotheses – and then humanity would have no time to do anything else.

But… is it so bad to privilege a hypothesis "just this once"? From www.greaterwrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis:

In the minds of human beings, if you can get them to think about this particular hypothesis rather than the trillion other possibilities that are no more complicated or unlikely, you really have done a huge chunk of the work of persuasion. Anything thought about is treated as “in the running,” and if other runners seem to fall behind in the race a little, it’s assumed that this runner is edging forward or even entering the lead.

What if you have special knowledge that implies it's worth testing? Well, that's allowed and totally OK! Science doesn't pick sides. But your knowledge has to have a large evidential weight to offset the long MML. Without such weight, we're back to the previous reasoning – it's overwhelmingly likely to just waste our time.
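In log-odds terms this trade-off is exact: a hypothesis that is k bits longer needs evidence carrying a likelihood ratio of at least 2^k just to break even. A short sketch (the 10-bit figure is hypothetical):

```python
# Each extra bit of MML costs a factor of 2 in prior odds, so the
# evidence must supply at least that factor back as a likelihood ratio.
extra_bits = 10
required_ratio = 2 ** extra_bits  # 1024
print(f"{extra_bits} extra bits of MML need a {required_ratio}:1 likelihood ratio to break even")
```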


If the explicit probability argument doesn't persuade you, how about track record?

Contrary to how it's often presented, the Copernican revolution, where we transitioned from a geocentric to a heliocentric model, wasn't straightforward! Read The Copernican Revolution From the Inside. In the beginning, the data actually fit the heliocentric theory worse!

Yet people insisted on trying to make heliocentrism work.

Why? They liked its philosophical simplicity. And in the end, that bore fruit. That's why we're now so confident in Occam's Razor: when you find a simple theory, it tends to be worth insisting on for a while, more than on any other butterfly idea. If you don't have that policy, you may get stuck on theories that happen to fit the facts better right now, and miss out on the truth.

Science would have discovered almost nothing by now if the scientists weren't thinking about hypotheses according to Occam's Razor.

There are infinitely many possible explanations for any phenomenon, and every time you test one and it fails, you can rule out a large segment of the space of possible explanations similar to the one you just tested. Thus you quickly narrow in on the most accurate explanations, which results in technology that works. That phone in your hand was crafted by the invisible hand of Occam.
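A back-of-the-envelope sketch of that narrowing, assuming (optimistically) that each experiment cleanly splits the remaining explanation space in half:

```python
import math

hypotheses = 1_000_000  # hypothetical size of the explanation space
tests = math.ceil(math.log2(hypotheses))
print(f"{hypotheses:,} candidate explanations -> about {tests} well-chosen experiments")
# 1,000,000 candidate explanations -> about 20 well-chosen experiments
```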
