Monte Carlo algorithms


Direct sampling or the children's game


IN_children.jpg
Children playing at the Monte Carlo beach


The game takes place on the beach of Monte Carlo. The pebbles are samples of the uniform probability distribution in the square. They are obtained directly. It is for this reason that the algorithm is called "direct-sampling" Monte Carlo.


from random import uniform

def direct_pi(N):
    # Throw N pebbles uniformly into the square [-1, 1] x [-1, 1]
    # and count how many land inside the unit circle.
    n_hits = 0
    for i in range(N):
        x, y = uniform(-1.0, 1.0), uniform(-1.0, 1.0)
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for attempt in range(10):
    # The fraction of hits estimates pi / 4 (ratio of circle to square area).
    print(attempt, 4 * direct_pi(n_trials) / n_trials)

direct_pi_color.png
Output of the direct-sampling program with "hits" in red and "non-hits" in blue.


Monte Carlo is an integration algorithm. The above direct-sampling algorithm treats a probability distribution (the uniform distribution of pebbles within the square) and an observable, the "hitting variable" O(x, y) (one within the unit circle, zero outside):

$$\frac{\pi}{4} = \frac{\int_{-1}^{1} dx \int_{-1}^{1} dy\; O(x, y)}{\int_{-1}^{1} dx \int_{-1}^{1} dy} \simeq \frac{n_\text{hits}}{N}.$$

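To make the integration viewpoint explicit, here is a minimal sketch (not part of the original program) in which the hitting variable is just one of many possible observables; the second observable, x^2, is an arbitrary example whose exact average over the square is 1/3:

from random import uniform

def mc_average(observable, N):
    # Sample mean of the observable under the uniform distribution
    # of pebbles in the square [-1, 1] x [-1, 1].
    total = 0.0
    for i in range(N):
        x, y = uniform(-1.0, 1.0), uniform(-1.0, 1.0)
        total += observable(x, y)
    return total / N

N = 100000
print(4.0 * mc_average(lambda x, y: x ** 2 + y ** 2 < 1.0, N))  # approx pi
print(mc_average(lambda x, y: x ** 2, N))                       # approx 1/3
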
Comments


  1. Direct-sampling algorithms exist only for a handful of physically interesting models; where they exist, they are very useful.
  2. The existence of a uniform (pseudo) random number generator is assumed. The setup of good random number generators is a mature branch of mathematics (see the short seeding example below).
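
As a small illustration, Python's built-in generator can be seeded to make a run reproducible; the seed value 42 is an arbitrary choice:

import random

random.seed(42)          # fix the seed: the run becomes reproducible
print([round(random.uniform(-1.0, 1.0), 4) for _ in range(3)])
random.seed(42)          # re-seeding replays exactly the same samples
print([round(random.uniform(-1.0, 1.0), 4) for _ in range(3)])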

Markov-chain Monte Carlo: the adults' game


fig.1.02.jpg
Adults playing on the Monte Carlo heliport

The game takes place at the Monte Carlo heliport. The helipad is too large for direct sampling, so a Markov-chain strategy must be adopted. An adult stands at the last pebble's position and throws the new pebble a short distance away, with displacements drawn uniformly from [-delta, delta] in each direction. A rejection problem must be handled whenever the new pebble would land outside the helipad: the solution we adopt, rejecting the move and counting the old position once more, covers the large square uniformly with pebbles.

from random import uniform

def markov_pi(delta, N):
    x, y = 1.0, 1.0                       # start at the club house (1, 1)
    n_hits = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        # Accept the move only if the new pebble stays inside the square;
        # otherwise the pebble remains where it is (rejection).
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
        # The current position is counted whether the move was accepted or not.
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for k in range(10):
    print(4 * markov_pi(0.3, n_trials) / n_trials)

Comments

  1. In Markov-chain sampling, the initial condition must be an allowed configuration, but it need not be typical.
  2. Here the adults start their promenade from the "club house", located at (x, y) = (1, 1).
  3. The algorithm is correct for all step sizes delta, but the best performance is obtained for moderate delta.
  4. Rule of thumb: the acceptance ratio of the Markov chain should be close to 1/2 (see the sketch after this list).
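
To illustrate the rule of thumb, here is a minimal sketch (not part of the original program) that measures the acceptance ratio of the above Markov chain for a few step sizes; the particular delta values are arbitrary choices:

from random import uniform

def markov_pi_acceptance(delta, N):
    # Same walk as markov_pi, but count the accepted moves instead of hits.
    x, y = 1.0, 1.0
    n_accept = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
            n_accept += 1
    return n_accept / N

for delta in [0.1, 0.3, 1.0, 3.0]:
    print(delta, markov_pi_acceptance(delta, 100000))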


Detailed and global balance

For simplicity, we discuss a discrete version of the game: the pebble now walks on a 3x3 chessboard without periodic boundary conditions.

PRank1.png PRank2.png


We design a Markov-chain algorithm so that each site is visited with the same probability:

$$\pi(a) = \frac{1}{9} \qquad \text{for all sites } a = 1, \dots, 9.$$

Here a pebble throw consists of moving from a site to each of its neighbors with probability 1/4. Suppose that at some time the pebble is at site a = 9. It can then only move to b = 8 or c = 6, or simply remain at a. This gives

$$p_{a \to b} + p_{a \to c} + p_{a \to a} = \frac{1}{4} + \frac{1}{4} + \frac{1}{2} = 1.$$

At the same time, to get to a, we must either come from a itself, or from b, or from c:

$$\pi(a)\, p_{a \to a} + \pi(b)\, p_{b \to a} + \pi(c)\, p_{c \to a} = \pi(a).$$

This yields the '''global balance condition'''

$$\pi(a) = \sum_{b} \pi(b)\, p_{b \to a} \qquad \text{for all sites } a.$$

A more restrictive condition is the '''detailed balance condition''':

$$\pi(a)\, p_{a \to b} = \pi(b)\, p_{b \to a} \qquad \text{for all pairs of sites } a, b.$$
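
Both conditions can be verified numerically for the 3x3 pebble game. Here is a small sketch, assuming numpy is available; the neighbor table anticipates the one used in the program below:

import numpy as np

# Build the 9x9 transition matrix of the 3x3 pebble game from the neighbor
# table; each of the four entries per site is chosen with probability 1/4.
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
p = np.zeros((9, 9))
for a in range(1, 10):
    for b in neighbor[a]:
        p[a - 1, b - 1] += 0.25

pi = np.full(9, 1.0 / 9.0)               # uniform target distribution
flow = pi[:, None] * p                   # flow[a, b] = pi(a) p(a -> b)
print(np.allclose(pi @ p, pi))           # global balance holds
print(np.allclose(flow, flow.T))         # detailed balance holds as well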

Below is a Python implementation of the 3x3 pebble game. With positions 1, 2, ..., 9, the four neighbors of site 1 are (2, 4, 1, 1). This ensures that the pebble moves with probability 1/4 to sites 2 and 4, and remains on site 1 with probability 1/2. We start the simulation from site 9.


import random, pylab

# Four "neighbors" per site: entries repeating the site itself implement
# the rejected moves that keep the pebble in place.
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
all_pos = []
N_iter = 100
for run in range(10000):
    pos = 9                                   # start each run at site 9
    for step in range(N_iter):
        pos = neighbor[pos][random.randint(0, 3)]
    all_pos.append(pos)                       # record the final position
pylab.figure(1)
pylab.hist(all_pos, bins=9, range=(0.5, 9.5), density=True)
pylab.title('3x3 pebble game, starting at 9, after ' + str(N_iter) + ' steps')
pylab.savefig('histo_3x3_' + str(N_iter) + '_steps.png')
pylab.show()

Here is the output of the above Python program after 5, 10, and 100 steps:

histo_3x3_5_steps.png histo_3x3_10_steps.png histo_3x3_100_steps.png
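
The convergence visible in these histograms can also be computed exactly. Here is a small sketch (not part of the original lecture) that evolves the initial distribution, concentrated at site 9, with the transition matrix for n steps; the probabilities approach the uniform value 1/9:

import numpy as np

# Transition matrix of the 3x3 pebble game (same construction as in the
# balance-check sketch above, repeated so this block runs on its own).
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
p = np.zeros((9, 9))
for a in range(1, 10):
    for b in neighbor[a]:
        p[a - 1, b - 1] += 0.25

prob = np.zeros(9)
prob[8] = 1.0                        # all probability on site 9 at step 0
for n in [5, 10, 100]:
    # Exact site probabilities after n pebble throws.
    print(n, np.round(prob @ np.linalg.matrix_power(p, n), 4))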

Inhomogeneous 3x3 pebble game (Metropolis algorithm)

For a general probability distribution

$$\pi(1), \dots, \pi(9), \qquad \pi(a) \ge 0, \qquad \sum_{a} \pi(a) = 1,$$

we can use the celebrated Metropolis algorithm, which accepts a proposed move from site a to a neighboring site b with probability

$$p_{a \to b} = \min\!\left(1, \frac{\pi(b)}{\pi(a)}\right),$$

as we illustrate in a Python program for the inhomogeneous 3x3 pebble game.
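
The program itself is not reproduced on this page; a minimal sketch of such a Metropolis pebble game, with hypothetical site weights (the actual probabilities behind the histogram below are not reproduced here), could look as follows:

import random, pylab

# Hypothetical inhomogeneous weights: any positive numbers work, since the
# normalization cancels in the Metropolis acceptance ratio.
weight = {1: 1.0, 2: 2.0, 3: 3.0, 4: 2.0, 5: 4.0, 6: 2.0,
          7: 3.0, 8: 2.0, 9: 1.0}
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
all_pos = []
pos = 9
for step in range(1000000):
    new_pos = neighbor[pos][random.randint(0, 3)]
    # Metropolis: accept the move a -> b with probability min(1, pi(b)/pi(a)).
    if random.uniform(0.0, 1.0) < weight[new_pos] / weight[pos]:
        pos = new_pos
    all_pos.append(pos)
pylab.hist(all_pos, bins=9, range=(0.5, 9.5), density=True)
pylab.title('Metropolis algorithm, inhomogeneous 3x3 pebble game')
pylab.savefig('histo_3x3_inhomogeneous_metropolis.png')
pylab.show()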

histo_3x3_inhomogeneous_1000000_steps.png
Histogram obtained by Metropolis versus the exact probabilities (red dots)


Comments

  1. Markov-chain Monte Carlo algorithms are a very general tool for integration.
  2. They access the relevant information in the infinite-time limit, but one can do better.
  3. The dynamics of the Markov-chain Monte Carlo algorithm is not always physically relevant.
  4. Many Markov-chain Monte Carlo algorithms satisfy detailed balance, but the necessary condition is global balance.

References

  1. We were deeply inspired by the first chapter of SMAC (pp. 1-9 and 15-22).
  2. The original paper: N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of State Calculations by Fast Computing Machines", Journal of Chemical Physics 21, 1087 (1953).