
## October 20, 2018

### Markov Chain Monte Carlo made easy: Gibbs sampling.

In a previous post we introduced Monte Carlo techniques and hinted at their many applications. In another post we showed a simple Markov Chain. The core of Markov Chain Monte Carlo methods is coming up with a function that makes a probabilistic choice about which state to go to next in a Markov Chain, so that, similarly to the pi example, each state is visited in proportion to the target function, which in turn lets us estimate the desired parameters. Gibbs sampling is simply one method that meets these requirements.

The central idea in Gibbs sampling is that, instead of jumping to the next state all at once, a separate small (probabilistic) jump is made for each parameter (*k*) in the model, where each choice depends on the current values of all the other parameters. The algorithm is given by:

where z are the *k* parameters in the model, and T is the number of transitions, i.e. how many times the model is sampled.
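As a concrete sketch of these per-parameter jumps, consider a two-parameter target whose full conditionals are known in closed form: a standard bivariate normal with correlation rho. The target distribution, the value of rho, and the starting state below are choices made for this illustration, not part of the original post.

```python
import math
import random

def gibbs_bivariate_normal(rho, T, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each of the k = 2 parameters is updated in turn, drawn from its full
    conditional given the current value of the other parameter.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0                  # arbitrary starting state
    samples = []
    for _ in range(T):
        # small jump for z_1:  x | y ~ N(rho * y, 1 - rho^2)
        x = rng.gauss(rho * y, sd)
        # small jump for z_2:  y | x ~ N(rho * x, 1 - rho^2)
        y = rng.gauss(rho * x, sd)
        samples.append((x, y))
    return samples
```

After discarding an initial burn-in, the visited states behave like draws from the bivariate normal, so sample averages over the walk estimate its moments.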

To sum up, Gibbs sampling walks through a *k*-dimensional state space; every point in the walk is a collection of values for the random variables z.
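The claim that each state is visited in proportion to the target can be checked directly on a tiny discrete example. The 2x2 weight table below is invented for this sketch; Gibbs sampling only needs the conditionals it induces.

```python
import random

# Unnormalized target over two binary variables (a, b).
# These weights are made up for illustration.
weights = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}

def gibbs_binary(T, seed=1):
    """Gibbs-sample (a, b) from the unnormalized table `weights`,
    counting how often each of the four states is visited."""
    rng = random.Random(seed)
    a, b = 0, 0
    counts = {s: 0 for s in weights}
    for _ in range(T):
        # resample a from its conditional p(a | b), proportional to weights[(a, b)]
        w0, w1 = weights[(0, b)], weights[(1, b)]
        a = 1 if rng.random() < w1 / (w0 + w1) else 0
        # resample b from its conditional p(b | a), proportional to weights[(a, b)]
        w0, w1 = weights[(a, 0)], weights[(a, 1)]
        b = 1 if rng.random() < w1 / (w0 + w1) else 0
        counts[(a, b)] += 1
    return counts
```

Run the chain long enough and the visit frequencies approach the normalized target probabilities, which for this table are 0.1, 0.2, 0.3, and 0.4.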