
Bayes factor

Bayesian methods

datacolada.org/78

This is an alternative to classical hypothesis testing. You can select models based on the Bayes factor.

The aim of the Bayes factor is to quantify the support for one model over another, regardless of whether these models are correct.

When the two models are equally probable a priori, so that \Pr(M_1) = \Pr(M_2), the Bayes factor is equal to the ratio of the posterior probabilities of M_1 and M_2.

When the priors are different, the Bayes factor is the ratio of the posterior odds to the prior odds; \Pr(M_1 \mid D) is simply the aforementioned posterior probability of M_1:

K = \frac{\Pr(D \mid M_1)}{\Pr(D \mid M_2)} = \frac{\Pr(M_1 \mid D)}{\Pr(M_2 \mid D)} \cdot \frac{\Pr(M_2)}{\Pr(M_1)}
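
A minimal worked sketch: for a binomial experiment, the marginal likelihoods, and hence the Bayes factor, have closed forms. The counts and the point-null-versus-Beta(1, 1) setup below are illustrative assumptions, not from the text above.

```python
import numpy as np
from scipy.special import betaln

# Hypothetical binomial example: k successes in n trials.
# M1: fair coin, theta = 0.5 (point null).
# M2: theta ~ Beta(1, 1), i.e. a uniform prior on [0, 1].
k, n = 62, 100

# Log marginal likelihoods (the binomial coefficient is shared by
# both models and cancels in the ratio, so it is omitted).
log_ml_m1 = n * np.log(0.5)
log_ml_m2 = betaln(k + 1, n - k + 1) - betaln(1, 1)  # closed-form integral

# K = Pr(D | M1) / Pr(D | M2): the ratio of marginal likelihoods.
K = np.exp(log_ml_m1 - log_ml_m2)
print(f"Bayes factor K (M1 over M2) = {K:.3f}")
```

With equal prior model probabilities, K is also the posterior odds of M_1 over M_2.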

An advantage of Bayes factors is that they automatically, and quite naturally, include a penalty for including too much model structure.[6] This guards against overfitting. For models where an explicit version of the likelihood is not available or is too costly to evaluate numerically, approximate Bayesian computation (ABC) can be used for model selection in a Bayesian framework,[7] with the caveat that ABC estimates of Bayes factors are often biased.[8]
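
To make the ABC route concrete, here is a hedged rejection-sampling sketch for model choice. The Normal-versus-Laplace setup, summary statistics, and tolerance are all invented for the example; both candidate models are fixed (no free parameters) to keep it minimal, and the summaries are not sufficient, which is exactly where the bias mentioned above comes from.

```python
import numpy as np

# Rejection-ABC for model choice (hypothetical setup): the observed
# data are 50 draws; M1 says Normal(0, 1), M2 says Laplace(0, 1).
# Neither likelihood is evaluated -- we only simulate data and
# compare summary statistics.
rng = np.random.default_rng(2)
y_obs = rng.normal(0, 1, 50)
s_obs = np.array([y_obs.std(), np.mean(np.abs(y_obs))])

accepted = {1: 0, 2: 0}
for _ in range(200_000):
    m = int(rng.integers(1, 3))  # equal prior model probabilities
    sim = rng.normal(0, 1, 50) if m == 1 else rng.laplace(0, 1, 50)
    s = np.array([sim.std(), np.mean(np.abs(sim))])
    if np.linalg.norm(s - s_obs) < 0.05:  # tolerance on summary distance
        accepted[m] += 1

# With equal priors, the ratio of acceptance counts approximates the
# Bayes factor -- biased to the extent the summaries lose information.
print(accepted, "approx K =", accepted[1] / max(accepted[2], 1))
```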

Other approaches are:

  • to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
  • to use minimum message length (MML).

Bayesian structural time series

Bayesian methods

The Bayesian structural time series (BSTS) model is a statistical technique for time series data, used for feature selection, forecasting, nowcasting, inferring causal impact, and other applications.

Difference-in-differences models[1] and interrupted time series designs[2] are alternatives to this approach.

The model consists of three main components:

  1. Kalman filter, the state-space technique for time series decomposition. In this step the researcher specifies the state components: trend, seasonality, regression effects, and others.
  2. Spike-and-slab variable selection, which picks out the most important regression predictors.
  3. Bayesian model averaging (ensemble learning), which combines the results and computes the prediction.
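
A hedged sketch of component 1 only: the canonical implementation is the R `bsts` package, but statsmodels can fit a comparable structural model (local linear trend plus seasonality) by maximum likelihood via the Kalman filter. The MCMC sampling, spike-and-slab selection, and model-averaging steps are omitted, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic series: slow trend + weekly seasonality + noise.
rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.5, 200)

# Structural time series: local linear trend + period-7 seasonal state.
# statsmodels estimates the variances by maximum likelihood using the
# Kalman filter; full BSTS would instead sample the states with MCMC.
model = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=7)
fit = model.fit(disp=False)

print(fit.summary())
print(fit.forecast(steps=14))  # two-week forecast/nowcast
```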

Sensitivity analysis

Bayesian methods

I've followed this approach in much of my own applied work, using noninformative priors and carefully avoiding the use of prior information in the final stages of a statistical analysis. But that can't always be the right choice. Sometimes (as in the sex ratio example above), the data are just too weak, and a classical textbook data analysis can be misleading. Imagine a Venn diagram, where one circle is "Topics that are so controversial that we want to avoid using prior information in the statistical analysis" and the other circle is "Problems where the data are weak compared to prior information." If you're in the intersection of these circles, you have to make some tough choices!

More generally, there is a Bayesian solution to the problem of sensitivity to prior assumptions. That solution is sensitivity analysis: perform several analyses using different reasonable priors. Make more explicit the mapping from prior and data to conclusions. Be open about sensitivity, don't try to sweep the problem under the rug, etc etc.

And, if you're going that route, I'd also like to see some analysis of sensitivity to assumptions that are not conventionally classified as "prior." You know, those assumptions that get thrown in because they're what everybody does. For example, Cox regression is great, but additivity is a prior assumption too! (One might argue that assumptions such as additivity, logistic links, etc., are exempt from Fisher's strictures by virtue of being default assumptions rather than being based on prior information, but I certainly don't think Mayo would take that position, given her strong feelings on Bayesian default priors.)

–Gelman
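
In that spirit, a minimal sketch of a prior sensitivity analysis, refitting one binomial proportion under several "reasonable" priors; the data and the prior menu are made up for illustration.

```python
from scipy import stats

# Hypothetical data: 8 successes in 20 trials.
k, n = 8, 20

# Several "reasonable" priors for the success probability.
priors = {
    "flat Beta(1, 1)":         (1.0, 1.0),
    "Jeffreys Beta(0.5, 0.5)": (0.5, 0.5),
    "skeptical Beta(10, 10)":  (10.0, 10.0),
}

# Conjugate update, then compare how the conclusions move with the prior.
for name, (a, b) in priors.items():
    post = stats.beta(a + k, b + n - k)
    lo, hi = post.ppf([0.025, 0.975])
    print(f"{name:25s} mean={post.mean():.3f}  95% interval=({lo:.3f}, {hi:.3f})")
```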



Empirical Bayes

Bayesian methods

Also known as maximum marginal likelihood

Empirical Bayes methods are a way of setting priors by looking at the data.

Empirical Bayes may be viewed as an approximation to a fully Bayesian treatment of a hierarchical model wherein the parameters at the highest level of the hierarchy are set to their most likely values, instead of being integrated out.
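
A minimal sketch under an assumed normal-normal hierarchy with known sampling variance; the hyperparameters are plugged in at their marginal maximum-likelihood values rather than integrated out, which is the approximation described above. The data are simulated.

```python
import numpy as np

# Assumed hierarchy: y_i ~ N(theta_i, s2) with s2 known,
# theta_i ~ N(mu, tau2). Marginally y_i ~ N(mu, s2 + tau2), so the
# hyperparameters (mu, tau2) can be estimated from the data alone.
rng = np.random.default_rng(1)
s2 = 1.0                                   # known sampling variance
theta = rng.normal(3.0, 2.0, size=500)     # true group effects (simulated)
y = rng.normal(theta, np.sqrt(s2))         # one observation per group

# Empirical Bayes: marginal ML estimates of the top-level parameters.
mu_hat = y.mean()
tau2_hat = max(y.var() - s2, 0.0)          # MLE, truncated at zero

# Posterior mean for each theta_i shrinks y_i toward mu_hat.
w = tau2_hat / (tau2_hat + s2)
theta_post = mu_hat + w * (y - mu_hat)
print(f"mu_hat={mu_hat:.2f}  tau2_hat={tau2_hat:.2f}  shrinkage={w:.2f}")
```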
