Homework 04: Error evaluation for a Markov chain

Post your questions on this homework in the blog at the end of the page. If you hand in your homework by e-mail, you can send it to A. Rosso, M. Civelli or G. Roux.

Introduction

Monte Carlo calculations are useful only if we can come up with an estimate of the error on the observables, such as an approximate value for the mathematical constant π, or an estimate of the quark masses or the velocity of neutrinos, the value of a condensate fraction, an approximate equation of state, etc. Here we study different methods to analyse the data when we use a Markov chain method.

A particle is in equilibrium in a two-dimensional external potential:

where the vector r=(x,y) identifies the position of the particle in the plane. We set the temperature of the system to T=1.

1. Write a Markov chain algorithm for this simple system with a Metropolis acceptance rate:

Propose a random move delta = [random.uniform(-1,1), random.uniform(-1,1)] from position r0 to position r1 = r0 + δ, and accept it with the Metropolis probability min(1, π(r1)/π(r0)), where π is the Boltzmann weight.
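A minimal sketch of such a chain is given below. Since the equation for the potential does not appear here, the code assumes the standard Mexican-hat form V(r) = (x² + y² − 1)², which is an assumption, as are all the names used:

```python
import math
import random

def V(x, y):
    # ASSUMED Mexican-hat potential: V(r) = (x^2 + y^2 - 1)^2
    r2 = x * x + y * y
    return (r2 - 1.0) ** 2

def metropolis_chain(n_steps, T=1.0, seed=0):
    random.seed(seed)
    x, y = 1.0, 0.0                 # arbitrary starting position
    positions = []
    for _ in range(n_steps):
        dx = random.uniform(-1.0, 1.0)
        dy = random.uniform(-1.0, 1.0)
        x_new, y_new = x + dx, y + dy
        # Metropolis rule: accept with probability min(1, pi(r1)/pi(r0)),
        # where pi(r) = exp(-V(r)/T) is the Boltzmann weight
        if random.random() < math.exp(-(V(x_new, y_new) - V(x, y)) / T):
            x, y = x_new, y_new
        positions.append((x, y))    # record the position at every step
    return positions

positions = metropolis_chain(10000)
```

Note that the current position is recorded even when the move is rejected; forgetting this is a classic Metropolis bug.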

Consider a generic random variable ξ. As discussed during Tutorial 04 - Errors and fluctuations, for uncorrelated realizations, sampled e.g. through direct sampling, the statistical error in the evaluation of the mean of ξ can be computed using the Central Limit Theorem:
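In the usual notation (the symbols here are an assumption), the Central Limit Theorem estimate for N independent samples reads:

```latex
\mathrm{Error} = \frac{\sigma_\xi}{\sqrt{N}},
\qquad
\sigma_\xi^2 = \langle \xi^2 \rangle - \langle \xi \rangle^2 .
```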

2. Use this method to evaluate the error for two observables in the Mexican hat problem:
Observable A: the average distance from the origin |r|.
Observable B: the average horizontal coordinate x.

3. Is the estimation of the statistical error correct?

II. The Bunching algorithm: see SMAC pages 60-62

We test here a basic, but powerful tool for error analysis of Markov chains: the bunching algorithm. The basic transformation of this algorithm is the reduction of a sequence of N data into a related sequence of N/2 data. An example of the algorithm is here:

data = [1., 2., 2., 1.5]
new_data = []
while data != []:
    x = data.pop(0)
    y = data.pop(0)
    new_data.append((x + y) / 2.)
print(new_data)
data = new_data[:]  # the "[:]" is essential: data becomes a copy of new_data

The original list [1., 2., 2., 1.5] has been contracted 2 by 2 into [1.5, 1.75]. Apply this bunching transformation to observables A and B, and treat the successive lists as if they were made of independent random variables to compute their statistical errors.
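One possible way to iterate the transformation on a full data series and record the naive CLT error at each bunching level (a sketch; the function names are my own):

```python
import math

def naive_error(data):
    # CLT error estimate, treating the entries as independent
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    return math.sqrt(var / n)

def bunching_errors(data, n_iterations):
    # Naive error of the original series, then after each pairwise contraction
    errors = [naive_error(data)]
    for _ in range(n_iterations):
        data = [(data[2 * i] + data[2 * i + 1]) / 2.0
                for i in range(len(data) // 2)]
        errors.append(naive_error(data))
    return errors
```

For truly independent data the error stays flat under bunching; for correlated Markov-chain data it grows with the iteration until it saturates at the true statistical error.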

Plot the statistical errors as a function of the iteration of the bunching transformation. (For instance, take 16 iterations for N = 2^20 data.)

Comment on your results with the help of some plots, and discuss the difference between observables A and B.

III. The Autocorrelation method:

We introduce a second method to analyze our data.

The autocorrelation function is defined as

Far enough from the initial condition, the system becomes invariant under temporal translations and the autocorrelation function depends only on the distance between data:
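With the usual conventions (the notation is an assumption), the definition above and its stationary form read:

```latex
C(t, t+n) = \langle \xi_t \, \xi_{t+n} \rangle - \langle \xi_t \rangle \langle \xi_{t+n} \rangle
\qquad \longrightarrow \qquad
C(n) = \langle \xi_t \, \xi_{t+n} \rangle - \langle \xi \rangle^2 .
```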

Measuring the autocorrelation function allows one to determine the statistical error in our Markov-chain data:
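The standard result for correlated data, as derived in SMAC (the exact normalization here is an assumption), is:

```latex
\mathrm{Error}^2 \simeq \frac{C(0)}{N}\left(1 + 2\sum_{n=1}^{\infty} \frac{C(n)}{C(0)}\right).
\tag{1}
```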

Plot the autocorrelation function C(n) for observables A and B as a function of n. Comment.
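A direct estimator of C(n) from a measured series might look as follows (a sketch; normalization conventions for the estimator vary):

```python
def autocorrelation(data, n_max):
    # Estimate C(n) = <xi_t xi_{t+n}> - <xi>^2 for n = 0 .. n_max
    N = len(data)
    mean = sum(data) / N
    c = []
    for n in range(n_max + 1):
        cov = sum((data[t] - mean) * (data[t + n] - mean)
                  for t in range(N - n)) / (N - n)
        c.append(cov)
    return c
```

C(0) is just the variance of the series, and C(n)/C(0) decays from 1 towards 0 on the scale of the correlation time of the Markov chain.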

Show that, for the evaluation of the error, the second method agrees perfectly with the results of the bunching algorithm.

Justify formula (1) of the autocorrelation method above.
