Calibration

Check out the book Superforecasting by Philip Tetlock.

On average, the predictions you make with 85% confidence should come true around 85% of the time; when that holds across confidence levels, we say you are well-calibrated.
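
A minimal sketch of how you'd check this, assuming your track record is a list of (confidence, came_true) pairs (the function name and the 10%-wide buckets are my own choices):

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (confidence, came_true) pairs, e.g. (0.85, True).
    Groups predictions into 10%-wide confidence buckets and compares each
    bucket's stated confidence with the observed frequency of coming true."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        bucket = min(int(confidence * 10), 9)  # e.g. 0.85 lands in the 80-90% bucket
        buckets[bucket].append(came_true)
    for bucket in sorted(buckets):
        outcomes = buckets[bucket]
        observed = sum(outcomes) / len(outcomes)  # True counts as 1
        print(f"stated {bucket * 10}-{(bucket + 1) * 10}%: "
              f"came true {observed:.0%} of the time ({len(outcomes)} predictions)")
```

Well-calibrated means each bucket's observed frequency roughly matches its stated confidence.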

Why not go all the way to 100% or 0%? After all, if you believed with 60% confidence that a coin was biased toward one side, you'd get a better expected payoff betting all of your money on that side rather than 60% of it on that side and 40% on the other: in expectation, it always pays to bet everything on the most likely outcome.
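
To make the arithmetic concrete (a sketch assuming an even-money bet that doubles a winning stake):

```python
stake = 100  # total money available
p = 0.60     # your confidence the coin lands heads

# Strategy A: bet everything on heads.
ev_all_in = p * (2 * stake)  # 0.6 * 200 = 120

# Strategy B: split the stake 60/40 between heads and tails.
ev_split = p * (2 * 0.6 * stake) + (1 - p) * (2 * 0.4 * stake)  # 72 + 32 = 104

print(ev_all_in, ev_split)  # 120.0 vs 104.0: all-in wins in expectation
```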

But calibration is not about betting for a payoff; it's about describing your confidence level in the first place. If you are regularly over 90% confident of outcomes that come true only 70% of the time, it's hard to see how that overconfidence benefits you.

The Brier score assesses prediction accuracy. That sort of statistic is shown on your PredictionBook/FateBook profile.
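
The Brier score is just the mean squared error between your stated probabilities and what actually happened (0 is perfect, lower is better; always guessing 50% scores 0.25). A minimal sketch:

```python
def brier_score(predictions):
    """predictions: list of (confidence, came_true) pairs.
    Mean squared error between stated probability and outcome
    (1 if it came true, 0 if not)."""
    return sum((p - came_true) ** 2 for p, came_true in predictions) / len(predictions)

# An 85% prediction that comes true contributes (0.85 - 1)^2 = 0.0225;
# one that fails contributes (0.85 - 0)^2 = 0.7225.
print(brier_score([(0.85, True), (0.85, False)]))  # (0.0225 + 0.7225) / 2 = 0.3725
```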

Well-calibrated priors are also integral to Bayesian methods.
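
One way to see why (a sketch with made-up numbers): feed the same evidence through Bayes' rule with two different priors, and the miscalibrated prior distorts the posterior.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = prior * p_evidence_given_h + (1 - prior) * p_evidence_given_not_h
    return prior * p_evidence_given_h / p_evidence

# Same evidence (80% hit rate, 10% false-positive rate), two priors:
print(posterior(0.05, 0.8, 0.1))  # calibrated 5% prior     -> ~0.30
print(posterior(0.50, 0.8, 0.1))  # overconfident 50% prior -> ~0.89
```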
