## Monte Carlo algorithms

## Direct sampling or the children's game

The game takes place on the beach of Monte Carlo. The pebbles are samples of the uniform probability distribution in the square. They are obtained directly; it is for this reason that the algorithm is called "direct-sampling" Monte Carlo.

from random import uniform

def direct_pi(N):
    n_hits = 0
    for i in range(N):
        x, y = uniform(-1.0, 1.0), uniform(-1.0, 1.0)
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for attempt in range(10):
    print(attempt, 4 * direct_pi(n_trials) / float(n_trials))

[Figure: output of the direct-sampling program, with "hits" in red and "non-hits" in blue.]

Monte Carlo is an integration algorithm. The above direct-sampling algorithm treats a probability distribution (the uniform distribution of pebbles within the square) and an observable, the "hitting variable" O (one inside the unit circle, zero outside):

    pi/4 = ∫∫_square (dx dy / 4) O(x, y) ≈ (1/N) Σ_i O(x_i, y_i)

## Comments

Direct-sampling algorithms exist only for a handful of physically interesting models. Where they apply, they are very useful, because each sample is independent of all the others.

The existence of a uniform (pseudo-)random number generator is assumed. The construction of good random number generators is a mature branch of mathematics.

## Markov Chain Monte Carlo: the adult's game

[Figure: adults playing on the Monte Carlo heliport.]

The game takes place at the Monte Carlo heliport. The helipad is too large for direct sampling, so a Markov-chain strategy must be adopted. An adult stands at the last pebble position and throws the new pebble into a square of side delta centered at that position. A rejection problem has to be handled whenever the new pebble would land outside the helipad: the move is rejected and the adult stays at the current position, which is counted again. The solution we adopt covers the large square uniformly with pebbles.

from random import uniform

def markov_pi(delta, N):
    x, y = 1.0, 1.0
    n_hits = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for k in range(10):
    print(4 * markov_pi(0.3, n_trials) / float(n_trials))

## Comments

In Markov-chain sampling algorithms, the initial condition must be an allowed configuration (one of nonzero probability), but it need not be a typical one.

Here the adults start their promenade from the "club house", located at (x, y) = (1, 1).

The algorithm is correct for all step sizes delta, but the best performance is obtained for moderate values of delta.

Rule of thumb: the acceptance ratio of the Markov chain should be close to 1/2.
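This rule of thumb can be checked empirically. Here is a sketch that instruments the heliport game above to also record the fraction of accepted moves for a few values of delta (the particular delta values and trial counts below are arbitrary choices):

```python
from random import uniform, seed

def markov_pi_acceptance(delta, N):
    """Heliport Markov chain; return (n_hits, n_accepted)."""
    x, y = 1.0, 1.0
    n_hits, n_accepted = 0, 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
            n_accepted += 1
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits, n_accepted

seed(0)
n_trials = 100000
for delta in [0.062, 0.25, 1.0, 4.0]:
    n_hits, n_accepted = markov_pi_acceptance(delta, n_trials)
    print(delta, 4 * n_hits / float(n_trials), n_accepted / float(n_trials))
```

A very small delta gives an acceptance ratio near one but explores the square slowly; a very large delta rejects almost every move. Moderate delta, with acceptance near 1/2, balances the two.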

## Detailed and global balance

For simplicity, we discuss a discrete version: the 3x3 pebble game. The pebble walks on a 3x3 chessboard without periodic boundary conditions.

We design a Markov-chain algorithm so that each site is visited with the same probability:

    pi(1) = pi(2) = ... = pi(9) = 1/9

Here a pebble throw consists in moving from a site to each of its neighbors with probability 1/4.
Suppose the pebble is on site a = 9 at some time. It can only move to b = 8 or c = 6, or simply remain at a. This gives

    p(a -> b) = p(a -> c) = 1/4,    p(a -> a) = 1/2

At the same time, to get to a, we must come either from a itself, from b, or from c.

This yields the '''global balance condition''':

    pi(a) = pi(a) p(a -> a) + pi(b) p(b -> a) + pi(c) p(c -> a)

A more restrictive condition is the '''detailed balance condition''':

    pi(a) p(a -> b) = pi(b) p(b -> a)
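Both conditions can be verified numerically for the 3x3 pebble game. The following sketch builds the full 9x9 transition matrix (moves to each of the four neighbors with probability 1/4; moves that would leave the board are replaced by staying put) and checks both balance conditions against the uniform distribution pi(a) = 1/9:

```python
# Neighbor table for sites 1..9 (order: right, down, left, up);
# off-board moves point back to the site itself.
neighbor = {1: (2, 4, 1, 1), 2: (3, 5, 1, 2), 3: (3, 6, 2, 3),
            4: (5, 7, 4, 1), 5: (6, 8, 4, 2), 6: (6, 9, 5, 3),
            7: (8, 7, 7, 4), 8: (9, 8, 7, 5), 9: (9, 9, 8, 6)}

# Transition matrix p[a][b]: probability of moving from site a to site b.
p = {a: {b: 0.0 for b in range(1, 10)} for a in range(1, 10)}
for a in range(1, 10):
    for b in neighbor[a]:
        p[a][b] += 0.25

pi = {a: 1.0 / 9.0 for a in range(1, 10)}  # uniform distribution

# Global balance: pi(a) = sum over b of pi(b) p(b -> a)
for a in range(1, 10):
    total = sum(pi[b] * p[b][a] for b in range(1, 10))
    assert abs(total - pi[a]) < 1e-12

# Detailed balance: pi(a) p(a -> b) = pi(b) p(b -> a)
for a in range(1, 10):
    for b in range(1, 10):
        assert abs(pi[a] * p[a][b] - pi[b] * p[b][a]) < 1e-12
print("global and detailed balance verified")
```

Because all proposal probabilities are symmetric, the chain satisfies detailed balance, and hence also global balance, with respect to the uniform distribution.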

Below is a Python implementation of the 3x3 pebble game. With positions 1, 2, ..., 9, the four neighbors of site 1 are (2, 4, 1, 1). This ensures that the pebble moves with probability 1/4 to sites 2 and 4, and remains on site 1 with probability 1/2. We start the simulation from site 9.
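The program itself is not reproduced in this text; here is a minimal sketch following the description, assuming a neighbor table in (right, down, left, up) order and an arbitrary run length t_max:

```python
import random

# Neighbor table for sites 1..9 (order: right, down, left, up).
# A direction that would leave the board maps back to the site itself:
# site 1 has neighbors (2, 4, 1, 1), so it moves to 2 or 4 with
# probability 1/4 each and stays put with probability 1/2.
neighbor = {1: (2, 4, 1, 1), 2: (3, 5, 1, 2), 3: (3, 6, 2, 3),
            4: (5, 7, 4, 1), 5: (6, 8, 4, 2), 6: (6, 9, 5, 3),
            7: (8, 7, 7, 4), 8: (9, 8, 7, 5), 9: (9, 9, 8, 6)}

site = 9          # start from site 9, as in the text
t_max = 10        # number of pebble throws (an arbitrary choice)
for t in range(t_max):
    site = neighbor[site][random.randint(0, 3)]
    print(t, site)
```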


[Output of the above Python program for 5, 10, and 100 steps.]

## Inhomogeneous 3x3 pebble game (Metropolis algorithm)

For a general probability distribution pi(a), we can use the celebrated Metropolis algorithm: a proposed move from a to b is accepted with probability

    p_accept(a -> b) = min(1, pi(b) / pi(a))

We illustrate it in a Python program for the inhomogeneous 3x3 pebble game.
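That program is not reproduced in this text; here is a sketch under stated assumptions: the neighbor table of the homogeneous game, and hypothetical site weights pi(a) (the weights of the original program are not given here; any positive values work):

```python
import random

# Neighbor table for sites 1..9 (order: right, down, left, up);
# off-board moves point back to the site itself.
neighbor = {1: (2, 4, 1, 1), 2: (3, 5, 1, 2), 3: (3, 6, 2, 3),
            4: (5, 7, 4, 1), 5: (6, 8, 4, 2), 6: (6, 9, 5, 3),
            7: (8, 7, 7, 4), 8: (9, 8, 7, 5), 9: (9, 9, 8, 6)}

# Hypothetical (unnormalized) site weights pi(a) for the
# inhomogeneous game -- an assumption for illustration only.
weight = {1: 2.0, 2: 0.5, 3: 1.0, 4: 0.5, 5: 1.0,
          6: 0.5, 7: 2.0, 8: 0.5, 9: 1.0}

site = 9
counts = {a: 0 for a in range(1, 10)}
n_steps = 100000
for t in range(n_steps):
    new_site = neighbor[site][random.randint(0, 3)]
    # Metropolis acceptance: move with probability min(1, pi(b)/pi(a))
    if random.uniform(0.0, 1.0) < weight[new_site] / weight[site]:
        site = new_site
    counts[site] += 1

total = sum(weight.values())
for a in range(1, 10):
    print(a, counts[a] / float(n_steps), weight[a] / total)
```

Since the proposal (a uniform choice among the four table entries) is symmetric, the Metropolis acceptance step enforces detailed balance with respect to pi, and the empirical visit frequencies converge to the normalized weights.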
