Compiling and running ALF#
This section focuses on the “ALF interface” part of pyALF, i.e. how to compile ALF and run ALF simulations. It revolves around the classes ALF_source and Simulation, defined in the module py_alf, which have already been briefly introduced in Minimal example.
We start with some imports:
from pprint import pprint # Pretty print
from py_alf import ALF_source, Simulation # Interface with ALF
Class ALF_source#
The class py_alf.ALF_source points to a folder containing the ALF source code. It has the following signature:
class ALF_source(
    alf_dir=os.getenv('ALF_DIR', './ALF'),
    branch=None,
    url='https://git.physik.uni-wuerzburg.de/ALF/ALF.git'
)
Here, os.getenv('ALF_DIR', './ALF') gets the environment variable $ALF_DIR if present and otherwise returns './ALF'. If the directory alf_dir exists, the program assumes it contains the ALF source code and raises an exception if that is not the case. If alf_dir does not exist, the source code is cloned from url. If branch is set, git checks it out.
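The default value of alf_dir can be reproduced with the same standard-library lookup, a minimal sketch using only os:

```python
import os

# The same default lookup ALF_source uses for alf_dir:
# the environment variable ALF_DIR if set, otherwise './ALF'.
alf_dir = os.getenv('ALF_DIR', './ALF')
print(alf_dir)
```

This is plain Python behavior, so it can be checked before instantiating ALF_source, e.g. to see which directory would be used.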
We will just use the default:
alf_src = ALF_source()
And see if it successfully found ALF:
alf_src.alf_dir
'/home/jschwab/Programs/ALF'
We can use the function py_alf.ALF_source.get_ham_names() to see which Hamiltonians are implemented:
alf_src.get_ham_names()
['Kondo', 'Hubbard', 'Hubbard_Plain_Vanilla', 'tV', 'LRC', 'Z2_Matter']
And then view the list of parameters and their default values for a particular Hamiltonian. The Hamiltonian-specific parameters are listed first, followed by the Hamiltonian-independent parameters.
pprint(alf_src.get_default_params('Hubbard'))
OrderedDict([('VAR_lattice',
{'L1': {'comment': 'Length in direction a_1',
'defined_in_base': False,
'value': 6},
'L2': {'comment': 'Length in direction a_2',
'defined_in_base': False,
'value': 6},
'Lattice_type': {'comment': '',
'defined_in_base': False,
'value': 'Square'},
'Model': {'comment': 'Value not relevant',
'defined_in_base': False,
'value': 'Hubbard'}}),
('VAR_Model_Generic',
{'Beta': {'comment': 'Inverse temperature',
'defined_in_base': False,
'value': 5.0},
'Bulk': {'comment': 'Twist as a vector potential (.T.), or at '
'the boundary (.F.)',
'defined_in_base': False,
'value': True},
'Checkerboard': {'comment': 'Whether checkerboard decomposition '
'is used',
'defined_in_base': False,
'value': True},
'Dtau': {'comment': 'Thereby Ltrot=Beta/dtau',
'defined_in_base': False,
'value': 0.1},
'N_FL': {'comment': 'Number of flavors',
'defined_in_base': True,
'value': 1},
'N_Phi': {'comment': 'Total number of flux quanta traversing '
'the lattice',
'defined_in_base': False,
'value': 0},
'N_SUN': {'comment': 'Number of colors',
'defined_in_base': True,
'value': 2},
'Phi_X': {'comment': 'Twist along the L_1 direction, in units '
'of the flux quanta',
'defined_in_base': False,
'value': 0.0},
'Phi_Y': {'comment': 'Twist along the L_2 direction, in units '
'of the flux quanta',
'defined_in_base': False,
'value': 0.0},
'Projector': {'comment': 'Whether the projective algorithm is '
'used',
'defined_in_base': True,
'value': False},
'Symm': {'comment': 'Whether symmetrization takes place',
'defined_in_base': True,
'value': True},
'Theta': {'comment': 'Projection parameter',
'defined_in_base': False,
'value': 10.0}}),
('VAR_Hubbard',
{'Continuous': {'comment': 'Uses (T: continuous; F: discrete) HS '
'transformation',
'defined_in_base': False,
'value': False},
'Ham_U': {'comment': 'Hubbard interaction',
'defined_in_base': False,
'value': 4.0},
'Ham_U2': {'comment': 'For bilayer systems',
'defined_in_base': False,
'value': 4.0},
'Ham_chem': {'comment': 'Chemical potential',
'defined_in_base': False,
'value': 0.0},
'Mz': {'comment': 'When true, sets the M_z-Hubbard model: Nf=2, '
'demands that N_sun is even, HS field couples '
'to the z-component of magnetization; '
'otherwise, HS field couples to the density',
'defined_in_base': False,
'value': True},
'ham_T': {'comment': 'Hopping parameter',
'defined_in_base': False,
'value': 1.0},
'ham_T2': {'comment': 'For bilayer systems',
'defined_in_base': False,
'value': 1.0},
'ham_Tperp': {'comment': 'For bilayer systems',
'defined_in_base': False,
'value': 1.0}}),
('VAR_QMC',
{'CPU_MAX': {'comment': 'Code stops after CPU_MAX hours, if 0 or '
'not specified, the code stops after '
'Nbin bins',
'value': 0.0},
'Delta_t_Langevin_HMC': {'comment': 'Time step for Langevin or '
'HMC',
'value': 0.1},
'Global_moves': {'comment': 'Allows for global moves in space '
'and time.',
'value': False},
'Global_tau_moves': {'comment': 'Allows for global moves on a '
'single time slice.',
'value': False},
'HMC': {'comment': 'HMC update', 'value': False},
'LOBS_EN': {'comment': 'End measurements at time slice LOBS_EN',
'value': 0},
'LOBS_ST': {'comment': 'Start measurements at time slice '
'LOBS_ST',
'value': 0},
'Langevin': {'comment': 'Langevin update', 'value': False},
'Leapfrog_steps': {'comment': 'Number of leapfrog steps',
'value': 0},
'Ltau': {'comment': '1 to calculate time-displaced Green '
'functions; 0 otherwise.',
'value': 1},
'Max_Force': {'comment': 'Max Force for Langevin', 'value': 5.0},
'N_global': {'comment': 'Number of global moves per sweep.',
'value': 1},
'N_global_tau': {'comment': 'Number of global moves that will '
'be carried out on a single time '
'slice.',
'value': 1},
'Nbin': {'comment': 'Number of bins.', 'value': 5},
'Nsweep': {'comment': 'Number of sweeps per bin.', 'value': 20},
'Nt_sequential_end': {'comment': '', 'value': -1},
'Nt_sequential_start': {'comment': '', 'value': 0},
'Nwrap': {'comment': 'Stabilization. Green functions will be '
'computed from scratch after each time '
'interval Nwrap*Dtau.',
'value': 10},
'Propose_S0': {'comment': 'Proposes single spin flip moves with '
'probability exp(-S0).',
'value': False}}),
('VAR_errors',
{'N_Back': {'comment': 'If set to 1, substract background in '
'correlation functions. Is ignored in '
'Python analysis.',
'value': 1},
'N_Cov': {'comment': 'If set to 1, covariance computed for '
'time-displaced correlation functions. Is '
'ignored in Python analysis.',
'value': 0},
'N_auto': {'comment': 'If > 0, calculate autocorrelation. Is '
'ignored in Python analysis.',
'value': 0},
'N_rebin': {'comment': 'Rebinning: Number of bins to combine '
'into one.',
'value': 1},
'N_skip': {'comment': 'Number of bins to be skipped.',
'value': 1}}),
('VAR_TEMP',
{'N_Tempering_frequency': {'comment': 'The frequency, in units '
'of sweeps, at which the '
'exchange moves are '
'carried out.',
'value': 10},
'N_exchange_steps': {'comment': 'Number of exchange moves.',
'value': 6},
'Tempering_calc_det': {'comment': 'Specifies whether the '
'fermion weight has to be '
'taken into account while '
'tempering. Can be set to .F. '
'if the parameters that get '
'varied only enter the Ising '
'action S_0',
'value': True},
'mpi_per_parameter_set': {'comment': 'Number of mpi-processes '
'per parameter set.',
'value': 2}}),
('VAR_Max_Stoch',
{'Checkpoint': {'comment': '', 'value': False},
'NBins': {'comment': 'Number of bins for Monte Carlo.',
'value': 250},
'NSweeps': {'comment': 'Number of sweeps per bin.', 'value': 70},
'N_alpha': {'comment': 'Number of temperatures.', 'value': 14},
'Ndis': {'comment': 'Number of boxes for histogram.',
'value': 2000},
'Ngamma': {'comment': 'Number of Dirac delta-functions for '
'parametrization.',
'value': 400},
'Nwarm': {'comment': 'The Nwarm first bins will be ommitted.',
'value': 20},
'Om_en': {'comment': 'Frequency range upper bound.',
'value': 10.0},
'Om_st': {'comment': 'Frequency range lower bound.',
'value': -10.0},
'R': {'comment': '', 'value': 1.2},
'Tolerance': {'comment': '', 'value': 0.1},
'alpha_st': {'comment': '', 'value': 1.0}})])
Class Simulation#
To set up a simulation, we create an instance of py_alf.Simulation, which has the signature
class Simulation(alf_src, ham_name, sim_dict, **kwargs)
where alf_src is an instance of py_alf.ALF_source, ham_name is the name of the Hamiltonian to simulate, sim_dict is a dictionary of parameter: value pairs overwriting the default parameters, and **kwargs represents optional keyword arguments.
The absolute minimum does not overwrite any default parameters:
sim = Simulation(alf_src, 'Hubbard', {})
Before running the simulation, ALF needs to be compiled.
sim.compile()
Compiling ALF...
Cleaning up Prog/
Cleaning up Libraries/
Cleaning up Analysis/
Compiling Libraries
Compiling Analysis
Compiling Program
Parsing Hamiltonian parameters
filename: Hamiltonians/Hamiltonian_Kondo_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_Plain_Vanilla_smod.F90
filename: Hamiltonians/Hamiltonian_tV_smod.F90
filename: Hamiltonians/Hamiltonian_LRC_smod.F90
filename: Hamiltonians/Hamiltonian_Z2_Matter_smod.F90
Compiling program modules
Link program
Done.
Now we can run the simulation:
sim.run()
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
It is strongly advised to take a look at the info files produced by ALF after finished runs, in particular “Precision Green” and “Precision Phase”. These should be of order \(10^{-8}\) or smaller. If they are bigger, one should decrease the stabilization interval Nwrap (see parameter list 'VAR_QMC' above). In our case, they are about right.
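If one prefers to check this programmatically rather than by eye, a small helper along these lines can pull the values out of an info file. This is a sketch based on the info-file layout shown below, not part of pyALF:

```python
def green_precision(info_text):
    """Extract the mean and max 'Precision Green' values from the text
    of an ALF info file. Returns (mean, max) as floats."""
    for line in info_text.splitlines():
        if line.strip().startswith('Precision Green'):
            # Line format: 'Precision Green Mean, Max : <mean> <max>'
            mean, max_ = line.split(':', 1)[1].split()
            return float(mean), float(max_)
    raise ValueError("no 'Precision Green' line found")
```

One could, for example, read the info file and warn whenever the maximum exceeds \(10^{-8}\), signaling that Nwrap should be decreased.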
sim.print_info_file()
===== /scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard/info =====
=====================================
Model is : Hubbard
Lattice is : Square
# unit cells : 36
# of orbitals : 1
Flux_1 : 0.0000000000000000
Flux_2 : 0.0000000000000000
Twist as phase factor in bulk
HS couples to z-component of spin
Checkerboard : T
Symm. decomp : T
Finite temperture version
Beta : 5.0000000000000000
dtau,Ltrot_eff: 0.10000000000000001 50
N_SUN : 2
N_FL : 2
t : 1.0000000000000000
Ham_U : 4.0000000000000000
t2 : 1.0000000000000000
Ham_U2 : 4.0000000000000000
Ham_tperp : 1.0000000000000000
Ham_chem : 0.0000000000000000
No initial configuration, Seed_in 790789
Sweeps : 20
Bins : 5
No CPU-time limitation
Measure Int. : 1 50
Stabilization,Wrap : 10
Nstm : 5
Ltau : 1
# of interacting Ops per time slice : 36
Default sequential updating
This executable represents commit 49127767 of branch master.
Precision Green Mean, Max : 2.4056654035685637E-011 1.3130008736858545E-007
Precision Phase, Max : 0.0000000000000000
Precision tau Mean, Max : 5.7966601817784516E-012 8.9310017603594360E-008
Acceptance : 0.42879722222222222
Effective Acceptance : 0.42879722222222222
Acceptance_Glob : 0.0000000000000000
Mean Phase diff Glob : 0.0000000000000000
Max Phase diff Glob : 0.0000000000000000
Average cluster size : 0.0000000000000000
Average accepted cluster size: 0.0000000000000000
CPU Time : 6.3128217360000001
Specifying parameters#
Here is an example of a simulation with non-default parameters: we change the lattice dimensions to 4 by 4 sites, set the interaction \(U\) explicitly to \(4.0\), and increase the number of bins calculated to 20. Since we did not change the compile-time configuration (as some of the **kwargs do), no recompilation is necessary.
sim = Simulation(
alf_src,
'Hubbard',
{
# Model specific parameters
'L1': 4,
'L2': 4,
'Ham_U': 4.0,
# QMC parameters
'Nbin': 20,
},
)
sim.run()
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=4.0" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
Note that the new simulation has been placed in ALF_data/Hubbard_L1=4_L2=4_U=4.0, relative to the current working directory. That is, simulations are placed in the folder {sim_root}/{sim_dir}, where sim_root defaults to 'ALF_data' and sim_dir is generated from the Hamiltonian name and the non-default model-specific parameters; this behavior can be overwritten through the **kwargs. Note that Nbin does not enter sim_dir, since it is a QMC parameter and not a Hamiltonian parameter.
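The naming scheme can be illustrated with a small, purely illustrative sketch. This is not pyALF's actual implementation; the stripping of the 'Ham_' prefix is inferred from the directory name above:

```python
def sketch_sim_dir(ham_name, ham_params):
    """Illustrative sketch: build a directory name from the Hamiltonian
    name and the non-default Hamiltonian parameters, dropping a leading
    'Ham_' prefix as in 'Ham_U' -> 'U'."""
    parts = [ham_name]
    for name, value in ham_params.items():
        short = name[4:] if name.startswith('Ham_') else name
        parts.append(f'{short}={value}')
    return '_'.join(parts)

print(sketch_sim_dir('Hubbard', {'L1': 4, 'L2': 4, 'Ham_U': 4.0}))
# -> Hubbard_L1=4_L2=4_U=4.0
```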
The info file looks good:
sim.print_info_file()
===== /scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=4.0/info =====
=====================================
Model is : Hubbard
Lattice is : Square
# unit cells : 16
# of orbitals : 1
Flux_1 : 0.0000000000000000
Flux_2 : 0.0000000000000000
Twist as phase factor in bulk
HS couples to z-component of spin
Checkerboard : T
Symm. decomp : T
Finite temperture version
Beta : 5.0000000000000000
dtau,Ltrot_eff: 0.10000000000000001 50
N_SUN : 2
N_FL : 2
t : 1.0000000000000000
Ham_U : 4.0000000000000000
t2 : 1.0000000000000000
Ham_U2 : 4.0000000000000000
Ham_tperp : 1.0000000000000000
Ham_chem : 0.0000000000000000
No initial configuration, Seed_in 790789
Sweeps : 20
Bins : 20
No CPU-time limitation
Measure Int. : 1 50
Stabilization,Wrap : 10
Nstm : 5
Ltau : 1
# of interacting Ops per time slice : 16
Default sequential updating
This executable represents commit 49127767 of branch master.
Precision Green Mean, Max : 3.1800655972164106E-011 2.0886004836739858E-007
Precision Phase, Max : 0.0000000000000000
Precision tau Mean, Max : 7.9871366756188930E-012 1.3460594816550042E-007
Acceptance : 0.42609999999999998
Effective Acceptance : 0.42609999999999998
Acceptance_Glob : 0.0000000000000000
Mean Phase diff Glob : 0.0000000000000000
Max Phase diff Glob : 0.0000000000000000
Average cluster size : 0.0000000000000000
Average accepted cluster size: 0.0000000000000000
CPU Time : 7.0076134589999999
Series of MPI runs#
Starting each run separately can be cumbersome, therefore we provide the following example, which creates a list of Simulation instances that can be run in a loop, performing a scan in \(U\). To increase the statistics of the results, MPI parallelization is employed. Since the default MPI executable mpiexec does not fit the MPI libraries used during compilation on the test machine, it is changed to orterun. The option mpiexec_args=['--oversubscribe'] hands the flag --oversubscribe over to orterun, which allows it to run more MPI tasks than there are slots available; see the Open MPI documentation for details.
sims = [
Simulation(
alf_src,
'Hubbard',
{
# Model specific parameters
'L1': 4,
'L2': 4,
'Ham_U': U,
# QMC parameters
'Nbin': 20,
},
mpi=True,
n_mpi=4,
mpiexec='orterun',
mpiexec_args=['--oversubscribe'],
)
for U in [1.0, 2.0, 3.0]]
sims
[<py_alf.simulation.Simulation at 0x7f88b737d520>,
<py_alf.simulation.Simulation at 0x7f88b73d7610>,
<py_alf.simulation.Simulation at 0x7f88b73d79d0>]
Note
The above employs Python’s list comprehensions, a convenient and readable way to create lists. Here is a simple example (also using f-strings):
>>> [f'x={x}' for x in [1, 2, 3]]
['x=1', 'x=2', 'x=3']
Since we are changing from non-MPI to MPI, ALF has to be recompiled:
Warning
pyALF does not check how ALF has been compiled previously, so the user has to take care of issuing recompilation if necessary.
sims[0].compile()
Compiling ALF...
Cleaning up Prog/
Cleaning up Libraries/
Cleaning up Analysis/
Compiling Libraries
Compiling Analysis
Compiling Program
Parsing Hamiltonian parameters
filename: Hamiltonians/Hamiltonian_Kondo_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_Plain_Vanilla_smod.F90
filename: Hamiltonians/Hamiltonian_tV_smod.F90
filename: Hamiltonians/Hamiltonian_LRC_smod.F90
filename: Hamiltonians/Hamiltonian_Z2_Matter_smod.F90
Compiling program modules
Link program
Done.
Loop over list of jobs:
for sim in sims:
sim.run()
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=1.0" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=2.0" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=3.0" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
for sim in sims:
sim.print_info_file()
===== /scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=1.0/info =====
=====================================
Model is : Hubbard
Lattice is : Square
# unit cells : 16
# of orbitals : 1
Flux_1 : 0.0000000000000000
Flux_2 : 0.0000000000000000
Twist as phase factor in bulk
HS couples to z-component of spin
Checkerboard : T
Symm. decomp : T
Finite temperture version
Beta : 5.0000000000000000
dtau,Ltrot_eff: 0.10000000000000001 50
N_SUN : 2
N_FL : 2
t : 1.0000000000000000
Ham_U : 1.0000000000000000
t2 : 1.0000000000000000
Ham_U2 : 4.0000000000000000
Ham_tperp : 1.0000000000000000
Ham_chem : 0.0000000000000000
No initial configuration, Seed_in 814342
Sweeps : 20
Bins : 20
No CPU-time limitation
Measure Int. : 1 50
Stabilization,Wrap : 10
Nstm : 5
Ltau : 1
# of interacting Ops per time slice : 16
Default sequential updating
Number of mpi-processes : 4
This executable represents commit 49127767 of branch master.
Precision Green Mean, Max : 4.2611639821155390E-014 4.2457062171541438E-012
Precision Phase, Max : 0.0000000000000000
Precision tau Mean, Max : 1.2531602652146677E-014 2.9033442316972469E-012
Acceptance : 0.44413437499999997
Effective Acceptance : 0.44413437499999997
Acceptance_Glob : 0.0000000000000000
Mean Phase diff Glob : 0.0000000000000000
Max Phase diff Glob : 0.0000000000000000
Average cluster size : 0.0000000000000000
Average accepted cluster size: 0.0000000000000000
CPU Time : 7.0864093547499998
===== /scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=2.0/info =====
=====================================
Model is : Hubbard
Lattice is : Square
# unit cells : 16
# of orbitals : 1
Flux_1 : 0.0000000000000000
Flux_2 : 0.0000000000000000
Twist as phase factor in bulk
HS couples to z-component of spin
Checkerboard : T
Symm. decomp : T
Finite temperture version
Beta : 5.0000000000000000
dtau,Ltrot_eff: 0.10000000000000001 50
N_SUN : 2
N_FL : 2
t : 1.0000000000000000
Ham_U : 2.0000000000000000
t2 : 1.0000000000000000
Ham_U2 : 4.0000000000000000
Ham_tperp : 1.0000000000000000
Ham_chem : 0.0000000000000000
No initial configuration, Seed_in 814342
Sweeps : 20
Bins : 20
No CPU-time limitation
Measure Int. : 1 50
Stabilization,Wrap : 10
Nstm : 5
Ltau : 1
# of interacting Ops per time slice : 16
Default sequential updating
Number of mpi-processes : 4
This executable represents commit 49127767 of branch master.
Precision Green Mean, Max : 2.3749816037530496E-013 1.6972462324460480E-010
Precision Phase, Max : 0.0000000000000000
Precision tau Mean, Max : 6.8404474588242211E-014 1.8104495680404398E-010
Acceptance : 0.43510117187499997
Effective Acceptance : 0.43510117187499997
Acceptance_Glob : 0.0000000000000000
Mean Phase diff Glob : 0.0000000000000000
Max Phase diff Glob : 0.0000000000000000
Average cluster size : 0.0000000000000000
Average accepted cluster size: 0.0000000000000000
CPU Time : 7.4221536534999997
===== /scratch/pyalf-docu/doc/source/usage/ALF_data/Hubbard_L1=4_L2=4_U=3.0/info =====
=====================================
Model is : Hubbard
Lattice is : Square
# unit cells : 16
# of orbitals : 1
Flux_1 : 0.0000000000000000
Flux_2 : 0.0000000000000000
Twist as phase factor in bulk
HS couples to z-component of spin
Checkerboard : T
Symm. decomp : T
Finite temperture version
Beta : 5.0000000000000000
dtau,Ltrot_eff: 0.10000000000000001 50
N_SUN : 2
N_FL : 2
t : 1.0000000000000000
Ham_U : 3.0000000000000000
t2 : 1.0000000000000000
Ham_U2 : 4.0000000000000000
Ham_tperp : 1.0000000000000000
Ham_chem : 0.0000000000000000
No initial configuration, Seed_in 814342
Sweeps : 20
Bins : 20
No CPU-time limitation
Measure Int. : 1 50
Stabilization,Wrap : 10
Nstm : 5
Ltau : 1
# of interacting Ops per time slice : 16
Default sequential updating
Number of mpi-processes : 4
This executable represents commit 49127767 of branch master.
Precision Green Mean, Max : 2.2710847580992650E-012 6.8186442048201457E-009
Precision Phase, Max : 0.0000000000000000
Precision tau Mean, Max : 6.2428799576227245E-013 1.1235949171073401E-008
Acceptance : 0.42898437500000003
Effective Acceptance : 0.42898437500000003
Acceptance_Glob : 0.0000000000000000
Mean Phase diff Glob : 0.0000000000000000
Max Phase diff Glob : 0.0000000000000000
Average cluster size : 0.0000000000000000
Average accepted cluster size: 0.0000000000000000
CPU Time : 7.1656086535000005
Parallel Tempering#
ALF offers the possibility to employ Parallel Tempering [4], also known as Exchange Monte Carlo [5], where simulations with different parameters but the same configuration space run in parallel and can exchange configurations, a method developed to overcome ergodicity issues.
To use Parallel Tempering in pyALF, sim_dict has to be replaced by a list of dictionaries. This also implies mpi=True, since Parallel Tempering needs MPI.
sim = Simulation(
alf_src,
'Hubbard',
[
{
# Model specific parameters
'L1': 4,
'L2': 4,
'Ham_U': U,
# QMC parameters
'Nbin': 20,
'mpi_per_parameter_set': 2
} for U in [2.5, 3.5]
],
mpi=True,
n_mpi=4,
mpiexec='orterun',
mpiexec_args=['--oversubscribe'],
)
Recompile for Parallel Tempering:
sim.compile()
Compiling ALF...
Cleaning up Prog/
Cleaning up Libraries/
Cleaning up Analysis/
Compiling Libraries
Compiling Analysis
Compiling Program
Parsing Hamiltonian parameters
filename: Hamiltonians/Hamiltonian_Kondo_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_Plain_Vanilla_smod.F90
filename: Hamiltonians/Hamiltonian_tV_smod.F90
filename: Hamiltonians/Hamiltonian_LRC_smod.F90
filename: Hamiltonians/Hamiltonian_Z2_Matter_smod.F90
Compiling program modules
Link program
Done.
sim.run()
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/temper_Hubbard_L1=4_L2=4_U=2.5" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/temper_Hubbard_L1=4_L2=4_U=2.5/Temp_0" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/ALF_data/temper_Hubbard_L1=4_L2=4_U=2.5/Temp_1" for Monte Carlo run.
Create new directory.
Run /home/jschwab/Programs/ALF/Prog/ALF.out
ALF Copyright (C) 2016 - 2021 The ALF project contributors
This Program comes with ABSOLUTELY NO WARRANTY; for details see license.GPL
This is free software, and you are welcome to redistribute it under certain conditions.
No initial configuration
sim.print_info_file()
The output from this command has been omitted for brevity.
Only preparing runs#
In many cases, it might not be feasible to execute ALF directly through pyALF, for example when using an HPC scheduler, but one might still like to use pyALF for preparing the simulation directories. In this case, the two options copy_bin and only_prep of py_alf.Simulation.run() come in handy. Here we also demonstrate the keyword arguments sim_root and sim_dir.
import numpy as np
JK_list = np.linspace(0.0, 3.0, num=11)
print(JK_list)
sims = [
Simulation(
alf_src,
'Kondo',
{
"Model": "Kondo",
"Lattice_type": "Bilayer_square",
"L1": 16,
"L2": 16,
"Ham_JK": JK,
"Ham_Uf": 1.,
"Beta": 20.0,
"Nsweep": 500,
"NBin": 400,
"Ltau": 0,
"CPU_MAX": 48
},
mpi=True,
sim_root="KondoBilayerSquareL16",
sim_dir=f"JK{JK:2.1f}",
) for JK in JK_list
]
[0. 0.3 0.6 0.9 1.2 1.5 1.8 2.1 2.4 2.7 3. ]
Do not forget to recompile when switching from Parallel Tempering back to normal MPI runs.
sims[0].compile()
Compiling ALF...
Cleaning up Prog/
Cleaning up Libraries/
Cleaning up Analysis/
Compiling Libraries
Compiling Analysis
Compiling Program
Parsing Hamiltonian parameters
filename: Hamiltonians/Hamiltonian_Kondo_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_smod.F90
filename: Hamiltonians/Hamiltonian_Hubbard_Plain_Vanilla_smod.F90
filename: Hamiltonians/Hamiltonian_tV_smod.F90
filename: Hamiltonians/Hamiltonian_LRC_smod.F90
filename: Hamiltonians/Hamiltonian_Z2_Matter_smod.F90
Compiling program modules
Link program
Done.
for sim in sims:
sim.run(copy_bin=True, only_prep=True)
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK0.0" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK0.3" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK0.6" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK0.9" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK1.2" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK1.5" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK1.8" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK2.1" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK2.4" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK2.7" for Monte Carlo run.
Create new directory.
Prepare directory "/scratch/pyalf-docu/doc/source/usage/KondoBilayerSquareL16/JK3.0" for Monte Carlo run.
Create new directory.
Now there are 11 directories, ready for the job scheduler.
ls KondoBilayerSquareL16/*
KondoBilayerSquareL16/JK0.0:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK0.3:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK0.6:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK0.9:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK1.2:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK1.5:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK1.8:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK2.1:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK2.4:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK2.7:
ALF.out* parameters seeds
KondoBilayerSquareL16/JK3.0:
ALF.out* parameters seeds