# Checking warmup and autocorrelation times

Two common challenges in Monte Carlo studies are ensuring that the measured bins represent equilibrated configurations and that different bins are statistically independent. In this section, we will briefly explain these issues and present the tools pyALF offers for dealing with them.

## Preparations

As a first step, we use the same imports as in previous sections.

```
# Enable Matplotlib Jupyter Widget Backend
%matplotlib widget
import numpy as np # Numerical library
import matplotlib.pyplot as plt # Plotting library
from py_alf import analysis # Analysis function
from py_alf.utils import find_sim_dirs # Function for finding Monte Carlo bins
from py_alf.ana import load_res # Function for loading analysis results
```

We also import the functions `py_alf.check_warmup()` and `py_alf.check_rebin()`, which play the main role in this section.

```
from py_alf import check_warmup, check_rebin
```

Finally, from the local file `custom_obs.py`, we import the same `custom_obs` defined in the previous section.

```
from custom_obs import custom_obs
```

For demonstration purposes, we run a simulation with very small bins.

```
from py_alf import ALF_source, Simulation
sim = Simulation(
    ALF_source(),
    'Hubbard',
    {
        # Model specific parameters
        'L1': 4,
        'L2': 4,
        'Ham_U': 5.0,
        # QMC parameters
        'Nbin': 5000,
        'Nsweep': 5,
        'Ltau': 0,
    },
)
sim.compile()
sim.run()
```

```
Compiling ALF...
Cleaning up Prog/
Cleaning up Libraries/
Cleaning up Analysis/
Compiling Libraries
```

```
Compiling Analysis
Compiling Program
Parsing Hamiltonian parameters
filename: Hamiltonians/Hamiltonian_Kondo_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_Plain_Vanilla_smod.F90
filename: Hamiltonians/Hamiltonian_tV_smod.F90
filename: Hamiltonians/Hamiltonian_LRC_smod.F90
filename: Hamiltonians/Hamiltonian_Z2_Matter_smod.F90
Compiling program modules
Link program
Done.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=5.0" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
```

Set the directories to be considered.

```
dirs = find_sim_dirs()
dirs
```

```
['./ALF_data/Hubbard',
'./ALF_data/Hubbard_L1=4_L2=4_U=1.0',
'./ALF_data/Hubbard_L1=4_L2=4_U=2.0',
'./ALF_data/Hubbard_L1=4_L2=4_U=3.0',
'./ALF_data/Hubbard_L1=4_L2=4_U=4.0',
'./ALF_data/Hubbard_L1=4_L2=4_U=5.0',
'./ALF_data/Hubbard_Square',
'./ALF_data/temper_Hubbard_L1=4_L2=4_U=2.5/Temp_0',
'./ALF_data/temper_Hubbard_L1=4_L2=4_U=2.5/Temp_1']
```

## Check warmup

A Monte Carlo simulation creates a time series of configurations through stochastic updates. Usually, measurements from a number of updates are combined into one so-called bin. In the case of ALF, `Nsweep` sweeps create one bin of measurements (for more details on updating procedures, we refer to the ALF documentation). Usually, the simulation starts in a non-optimal state and it takes some time to reach equilibrium. Bins from this “warming up” period should be discarded before calculating results. This is achieved by setting the variable `N_skip` in the file `parameters`, which makes the analysis omit the first `N_skip` bins.
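To make the effect of `N_skip` concrete, here is a minimal sketch using synthetic data (not actual ALF bins): a bin series that relaxes exponentially toward its equilibrium value, where averaging over all bins is biased by the warmup period, while skipping the first `N_skip` bins is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bin series (not actual ALF data): exponential relaxation
# toward the equilibrium value 1.0, plus Gaussian noise.
n_bins = 200
bins = 1.0 - np.exp(-np.arange(n_bins) / 20.0) + 0.05 * rng.standard_normal(n_bins)

N_skip = 100  # discard the first N_skip bins as warmup
mean_all = bins.mean()             # biased by the warmup period
mean_equil = bins[N_skip:].mean()  # estimate from equilibrated bins only
print(f"all bins: {mean_all:.3f}, after skipping: {mean_equil:.3f}")
```

The pre-equilibrium bins drag the full average below the equilibrated one; discarding them removes this bias at the cost of some statistics.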

**Warning:** Different observables can have different warmup and autocorrelation times. For example, charge degrees of freedom may equilibrate much faster than spin degrees of freedom, or a sum of observables might have much shorter autocorrelation times than an individual observable, e.g. the total spin versus one spin component.

To judge the correct value for `N_skip`, pyALF offers the function `check_warmup()`, which plots the time series of bins for a given list of scalar and custom observables. It can be used with the previous simulations as:

```
warmup_widget = check_warmup(
    dirs,
    ['Ener_scal', 'Kin_scal', 'Pot_scal',
     'E_pot_kin', 'R_Ferro', 'R_AFM',
     'SpinZ_pipi', 'SpinXY_pipi', 'SpinXYZ_pipi'],
    custom_obs=custom_obs, gui='ipy'
)
```

The first argument is a list of directories containing simulations, and the second argument specifies which observables to plot. The keyword argument `custom_obs` is needed when plotting custom observables, e.g. *E_pot_kin*, and `gui` specifies which GUI framework to use. With `gui='ipy'`, the function returns a Jupyter widget, which allows one to work seamlessly within Jupyter. Another option is `gui='tk'`, which opens a separate window using `tkinter`. The latter option might be suitable when working directly from a shell.

The variable `N_skip` can be changed directly in the GUI, which automatically updates the file `parameters`.

```
warmup_widget
```

## Check rebin

When estimating statistical errors, the analysis assumes different bins to be statistically independent. As a result, one bin must span enough updates to generate statistically independent configurations; in other words, a bin must be larger than the autocorrelation time. Otherwise, the statistical errors will be underestimated. To address this issue, the analysis employs so-called rebinning, which combines `N_rebin` original bins into one new bin. The pyALF function `check_rebin()` helps in determining the correct `N_rebin`. It plots the errors of the chosen observables against `N_rebin`. With enough statistics, one should see the errors grow with increasing `N_rebin` until a saturation point is reached; this saturation point marks a suitable value for `N_rebin`. The usage of `check_rebin()` is very similar to `check_warmup()`.
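The mechanism behind rebinning can be illustrated with a small, self-contained sketch on synthetic, autocorrelated data (an AR(1) series, not actual ALF bins): the naive error estimate, which assumes independent bins, grows with `N_rebin` until the rebinned bins are effectively independent, and then saturates near the true error.

```python
import numpy as np

def rebin(bins, n_rebin):
    """Combine n_rebin consecutive bins into one new bin (drop the remainder)."""
    n = (len(bins) // n_rebin) * n_rebin
    return bins[:n].reshape(-1, n_rebin).mean(axis=1)

def naive_error(bins):
    """Standard error of the mean, assuming independent bins."""
    return bins.std(ddof=1) / np.sqrt(len(bins))

rng = np.random.default_rng(1)

# Synthetic bin series: AR(1) process with an autocorrelation
# time of roughly 10 bins (not actual ALF data).
n, phi = 20000, 0.9
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()

# The naive error grows with N_rebin until the rebinned bins
# are effectively independent, then saturates.
errs = {n_rebin: naive_error(rebin(x, n_rebin)) for n_rebin in (1, 2, 5, 20, 50)}
for n_rebin, err in errs.items():
    print(n_rebin, err)
```

For this series, the error at `N_rebin=1` underestimates the true error by roughly a factor of four, which is exactly the failure mode `check_rebin()` is designed to expose.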

```
rebin_widget = check_rebin(
    dirs,
    ['Ener_scal', 'Kin_scal', 'Pot_scal',
     'E_pot_kin', 'R_Ferro', 'R_AFM',
     'SpinZ_pipi', 'SpinXY_pipi', 'SpinXYZ_pipi'],
    custom_obs=custom_obs, gui='ipy'
)
```

Below, we can see how different observables have different autocorrelation times. While the error of the kinetic energy saturates already at \(N_\text{rebin} = 3\), the correlations of the z component of the spin at \((\pi, \pi)\) (`SpinZ_pipi`) need \(N_\text{rebin} \sim 40\). For the correlations of the total spin, on the other hand, \(N_\text{rebin} = 1\) is sufficient.

This is a good example of the concept of an improved estimator: the simulated Hubbard model is \(\text{SU}(2)\) symmetric, so correlations of the x, y, and z components of the spin are equivalent, but with the chosen parameter `Mz=True` (cf. Compiling and running ALF) the auxiliary field couples to the z component of the spin. As a result, the \(\text{SU}(2)\) symmetry is broken for an individual auxiliary-field configuration, but restored by sampling the field configurations. Therefore, even though measuring the spin correlations through the z component, the x-y plane, or the full spin is equivalent, the latter option produces the most precise results and has the shortest autocorrelation times, because it explicitly restores the \(\text{SU}(2)\) symmetry instead of “waiting” for the sampling to do so.
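A toy illustration of this variance reduction, using synthetic and statistically independent per-configuration component estimates (in a real simulation the components are correlated across configurations, so the actual gain differs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data (not actual ALF measurements): per-configuration estimates
# of three symmetry-equivalent spin components, each fluctuating
# independently around the same true value.
n = 10_000
true_val = 1.0
sx = true_val + rng.standard_normal(n)
sy = true_val + rng.standard_normal(n)
sz = true_val + rng.standard_normal(n)

# Estimator from the z component alone vs. the symmetrized estimator.
err_z = sz.std(ddof=1) / np.sqrt(n)
s_tot = (sx + sy + sz) / 3.0
err_tot = s_tot.std(ddof=1) / np.sqrt(n)
print(f"z only: {err_z:.4f}, symmetrized: {err_tot:.4f}")
```

With fully independent component noise, the symmetrized estimator's error is smaller by a factor of \(\sqrt{3}\); in the actual simulation the gain also includes the shorter autocorrelation time discussed above.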

Furthermore, the ferromagnetic correlation ratio \(R_{\text{Ferro}}\) does not seem to converge at all in the considered range of \(N_{\text{rebin}}\). This is connected to the fact that the system is not close to ferromagnetic order, and therefore \(R_{\text{Ferro}}\) is a bad observable here.

```
rebin_widget
```

The next section will also show options for an improved estimator by employing symmetry operations on the Bravais lattice.