
Log-posterior

Bayesian methods, #statistics

What's a log-posterior (the lp__ output from Stan)? It's the log of the (unnormalized) posterior density: every variable in the model contributes a term to it, and the higher it is, the better those parameter values account for the data and the priors.

Break it down. First, it's similar to the frequentist log-likelihood (Likelihood function), and the log transformation is a convenience thing:

The log-likelihood tells you nothing you can't get from the likelihood, but if observations are independent, it is additive. That's often an advantage when you want to differentiate to find a maximum. The reason for logging a posterior is that it is derived partly from the likelihood, which, as mentioned above, is additive.
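
To make the additivity concrete, here's a minimal sketch of an unnormalized log-posterior for a toy normal model (the model and all names here are assumed for illustration, not Stan's internals): it's just the log-prior plus a sum of per-observation log-likelihood terms.

```python
import numpy as np
from scipy import stats

y = np.array([1.2, 0.8, 1.5, 0.9])  # observed data

# Toy model: y ~ Normal(mu, 1) with prior mu ~ Normal(0, 10).
def log_posterior(mu):
    log_prior = stats.norm.logpdf(mu, loc=0, scale=10)
    # Independence makes the log-likelihood a sum over observations:
    log_lik = stats.norm.logpdf(y, loc=mu, scale=1).sum()
    return log_prior + log_lik  # log of the unnormalized posterior

print(log_posterior(1.0))  # near the data: higher value
print(log_posterior(5.0))  # far from the data: much lower value
```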

When one compares two models in a Bayesian setup, one can take the ratio of the posteriors. This can be interpreted as the odds for one of the models over the other. If we take the log of the ratio, we get the difference of the log-posteriors. Thus, the log-posterior can be used in model comparison.
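
In symbols:

\[
\log \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)} = \log \Pr(M_1 \mid D) - \log \Pr(M_2 \mid D)
\]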

Compared to frequentist methods, this is not unlike the likelihood ratio test for model comparison. The Bayesian setting has two advantages: 1) the models do not have to be nested, as they do in the likelihood ratio test; 2) the distribution of the likelihood ratio statistic is only known asymptotically, whereas the difference in log-posteriors gives us a distribution for whatever sample size we have.

Created (4 years ago)

Bayes classifier

Bayesian methods

A machine learning thing. Related to Naive Bayes classifier.

It's one way to approach a classification problem.

BTW, terminology varies. The term classifier can refer to:

  1. a mathematical function that maps input data to a category
  2. an algorithm that implements classification, esp. in a concrete implementation

In stats, classification is often done via logistic regression; one speaks of explanatory/independent variables, and the possible categories are termed outcomes. In ML, observations are often known as instances, the explanatory variables as features, and the possible categories as classes.
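
A minimal sketch of sense 1 above, with assumed one-dimensional Gaussian class-conditional densities: the (Bayes) classifier is just a function returning the class with the highest posterior probability.

```python
from scipy import stats

priors = {"a": 0.5, "b": 0.5}                  # Pr(class)
densities = {"a": stats.norm(loc=0, scale=1),  # Pr(x | class), assumed known
             "b": stats.norm(loc=2, scale=1)}

def bayes_classify(x):
    # Maximizing prior * likelihood is the same as maximizing the
    # posterior Pr(class | x), since the denominator Pr(x) is shared.
    return max(priors, key=lambda c: priors[c] * densities[c].pdf(x))

print(bayes_classify(-0.3))  # -> "a"
print(bayes_classify(1.8))   # -> "b"
```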

Created (4 years ago)

Bayes factor

Bayesian methods

datacolada.org/78

This is an alternative to classical hypothesis testing. You can select models based on the Bayes factor.

The aim of the Bayes factor is to quantify the support for one model over another, regardless of whether these models are correct.

When the two models are equally probable a priori, so that \(\Pr(M_1) = \Pr(M_2)\), the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2.

When the priors are different, it is the following ratio. Pr(M1|D) is simply the aforementioned posterior probability of M1.

\[
K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)} \cdot \frac{\Pr(M_2)}{\Pr(M_1)}
\]
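
As a minimal sketch, here's this ratio computed for an assumed toy example: D = 6 heads in 10 flips, where M1 fixes p = 0.5 and M2 puts a Uniform(0, 1) prior on p (whose marginal likelihood has a closed form via the Beta function).

```python
from scipy import stats
from scipy.special import comb, beta

k, n = 6, 10
pr_d_m1 = stats.binom.pmf(k, n, 0.5)           # likelihood under fixed p = 0.5
pr_d_m2 = comb(n, k) * beta(k + 1, n - k + 1)  # marginal likelihood under M2

bf = pr_d_m1 / pr_d_m2
print(f"BF(M1 vs M2) = {bf:.2f}")  # ~2.26: mild support for the fair coin
```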

An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure.[6] It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,[7] with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.[8]

Other approaches are:

  • to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
  • to use minimum message length (MML).
Created (4 years ago)