# What's an ideal reasoner?

Sometimes it's useful to talk about what an ideal reasoner would do in a situation. But isn't that subjective? No, not if by "ideal reasoner" we mean a being that satisfies its desires as efficiently as possible.

Sidenote: That can be confusing, because as humans, when we hear someone use language like "efficiently optimize for my desires", we may picture an egotistical sociopath. That may be reasonable in everyday contexts… but think yourself into a philosophy discussion at a university, where words are used very precisely. What does it mean to optimize for your desires or goals?

It just means the same thing we all try to do all the time: a vegan optimizes for the goal of lessening animal suffering, etc.

Human goals may often be fuzzy and self-contradictory, but a human's set of goals can rarely be described as completely selfish. So "efficient optimization" or "ideal reasoning" is not about selfishness.

Ideal reasoning just means, if for example you want to rule the world, figuring out the shortest path to do so that's still aesthetic to you (destroying the world with nukes can be a quick way to rule it, but that may not be what you actually want, so you rule that path out). Or if you want to end animal farming, then ideal reasoning means figuring out the shortest way to bring about *that* result. Or if you want good friends, then ideal reasoning is finding a practical way to get good friends into your life. And so on.

With that definition in mind, there are all sorts of logical proofs about how an ideal reasoner would treat the information they have and any new information they receive. Failing to act in accordance with these proofs opens you up to taking sub-optimal actions (being Dutch-booked)… and human beings can and often do act **so sub-optimally** that they fail at their quest!
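To make "Dutch-booking" concrete, here's a minimal sketch (the function name and numbers are my own, just for illustration): an agent whose credences in an event and its complement don't sum to 1 will, at their own stated fair prices, accept a pair of bets that loses money no matter what happens.

```python
def dutch_book_loss(p_event: float, p_complement: float, stake: float = 1.0) -> float:
    """Guaranteed loss for an agent who buys a stake-dollar bet on
    event E *and* a stake-dollar bet on not-E, each priced at the
    agent's own credence. Exactly one bet pays out, so the agent
    pays (p_event + p_complement) * stake and receives stake back.
    A positive result is a sure loss, whatever actually happens."""
    total_paid = (p_event + p_complement) * stake
    payout = stake  # exactly one of E / not-E occurs
    return total_paid - payout

# Incoherent credences (they sum to 1.2): a sure loss of 0.2 per dollar.
print(round(dutch_book_loss(0.6, 0.6), 10))  # 0.2
# Coherent credences (they sum to 1.0): no guaranteed loss.
print(round(dutch_book_loss(0.6, 0.4), 10))  # 0.0
```

The same construction generalizes: any set of credences violating the probability axioms admits some combination of bets with this guaranteed-loss property, which is what the Dutch book theorems establish.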

Yet we do know many basic principles of ideal reasoning! They just tend to be hard to apply faithfully, due to computational limits, cognitive biases and self-defeating psychology.

How do we know about those principles? There's a whole tower of prescriptions arising from probability theory, decision theory and game theory, resting atop a small set of mathematical axioms and consistent with them and with each other. Philosophers have thought about this sort of thing for a long time, and the only real way to reject a given prescription of probability theory is to reject one of the axioms or inference rules it rests on, such as the staggeringly basic rule of modus ponens: "if A implies B, and I learn that A is true, then I also know B is true". As you can imagine, trying to deny any of them looks pretty ridiculous. And once you *don't* deny any of them, the rest follows.
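Just how basic modus ponens is can be seen by writing it down in a proof assistant; in Lean, for instance, the whole proof is applying the implication to the premise (a sketch, any proof assistant would do):

```lean
-- Modus ponens: given A → B and a proof of A, conclude B.
example (A B : Prop) (h : A → B) (a : A) : B := h a
```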