5. Conclusions
In this thesis, I have presented three major projects that collectively contribute to advancing our understanding of highly correlated quantum systems. Each project was centered on the use of auxiliary-field quantum Monte Carlo (QMC), leveraging the capabilities of the Algorithms for Lattice Fermions (ALF) package.
The first project (Chapter 2), conducted in collaboration with renormalization group (RG) experts, investigated nematic quantum phase transitions in Dirac fermions, where the Dirac points are pinned in the disordered phase and start to meander in the ordered phase. While there have been previous analytical investigations of this kind of transition, there has been no clear consensus on whether it is of first or second order [37, 50, 51, 52, 53]. Setting out to solve this long-standing problem, we conducted the first exact numerical investigation: We designed two sign-problem-free models exhibiting the intended phase transition. In our QMC studies, combined with an \(\epsilon\)-expansion, we found the transitions to be continuous, with a quantum critical regime that is characterized by large velocity anisotropies of the Dirac cones. These velocity anisotropies turned out to have a very slow RG flow (for one of the two models, we could explicitly show that the anisotropy diverges logarithmically with the RG parameter). As a result, finite-size systems, in both experimental and numerical investigations, will not be representative of the infrared fixed point, but of a quasiuniversal regime where the drift of the exponents tracks the velocity anisotropy. Notably, even though the \(\epsilon\)-expansion finds qualitatively distinct fixed points for the two investigated models, the numerical investigation finds no distinction in the exponents within the margin of error. Therefore, it seems that the quasiuniversal regime is, at least close to the ultraviolet starting point, identical for both models, even though their infrared universality classes differ. A particular challenge in this project was the broken Lorentz symmetry caused by both the Fermi velocity anisotropy and the meandering Dirac points. In particular, the variable zero-dimensional Fermi surface on a finite lattice led to confusing artifacts that could be misinterpreted as signs of a first-order phase transition. The findings of this project have already been published in Phys. Rev. Lett. [14].
In my second project (Chapter 3), we set out to perform the first exact numerical investigation to complement the seminal work of Read and Sachdev [18, 20, 21]. To this end, I simulated a generalized Heisenberg model on a square lattice, where each site hosts an irreducible representation of SU(\(N\)) described by a rectangular Young tableau with \(N/2\) rows and \(2S\) columns. With my QMC simulations, we were able to map out its ground-state phase diagram for \(S \in \{1/2, 1, 3/2, 4\}\) and \(N \in \{2, 4, \dots, 22\}\). Confirming the analytical work by Read and Sachdev, we found antiferromagnetic order for large \(S\) and dimerized order for large \(N\). Along the line defined by \(N = 8S + 2\) in the \(S\) versus \(N\) phase diagram, we observe a rich variety of phases. For \(S = 1/2\) and \(3/2\), the system forms a four-fold degenerate valence bond solid (VBS) state, while for \(S = 1\), we identify a two-fold degenerate spin nematic state that breaks the \(C_4\) lattice symmetry down to \(C_2\). At \(S = 2\), we observe a unique symmetry-protected topological state, characterized by a dimerized SU(18) boundary state, reminiscent of the two-dimensional Affleck-Kennedy-Lieb-Tasaki (AKLT) state. These phases proximate to the Néel state align with the theoretical framework of monopole condensation of the antiferromagnetic order parameter, with degeneracies following a \(2S \bmod 4\) rule. The findings of this project have already been published in [17].
The third project is in a sense a meta-project, since it is, at least in part, the result of my adaptations and extensions of ALF made to work more efficiently on the first two projects. In Chapter 4, this is represented by the documentation of pyALF, a Python library for running and analyzing ALF simulations, but I have also made significant contributions to the ALF Fortran code [13]. My most noteworthy contributions to ALF include an improved encapsulation of model definitions, the implementation of HDF5 output for observables, and an extension of the automatic tests executed by our development platform, GitLab.
All in all, I believe that my research offers new insights into highly correlated quantum systems, and that both my contributions to ALF and the development of pyALF provide significant advances in the tooling for the numerical study of condensed matter models.
5.1. Outlook for (py)ALF
Going forward, there are still many possibilities to further improve ALF and pyALF. What might come to mind first are new features for models and enhancements of the QMC algorithm, such as the new type of interaction vertex for implementing general gauge theories that the ALF collaboration is currently working on. Furthermore, ALF will have to start leveraging GPUs if we want to keep up with advances in high-performance computing. But there is also great potential for improving the structure of the code and its usability.
In particular, further development of pyALF holds great potential for improving usability. Lowering the barrier to entry can open ALF up to a whole new group of users. I would like to group such improvements into two categories:
Reducing the needed IT knowledge, i.e. keeping users away from Fortran code, compilers, terminal shells, etc.
Reducing the needed QMC knowledge, by automating common accuracy checks and the detection of failing simulations.
Here are three partly interconnected measures from the first category:
Exposing ALF directly to Python through a module, instead of executing the ALF binary through a system call as pyALF currently does, would create a smoother user experience and eliminate many potential points of failure.
Furthermore, the possibility to define Hamiltonians directly in Python instead of Fortran would also lower the barrier to entry significantly. Here, the biggest challenge would be to find a way to implement new observables (see the sketch after this list for what such an interface could look like).
Lastly, shipping precompiled binaries with the Python package would also improve the user experience, since users would no longer have to deal with compilers and the ALF source code; on the other hand, this may come at the expense of performance.
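To make the second point more concrete, here is a purely hypothetical sketch of what defining a Hamiltonian in Python could look like. None of these classes (`Model`, `HoppingTerm`, `HubbardTerm`) exist in pyALF today; they are invented for illustration, and a real implementation would still have to translate such a definition into the hopping matrices and interaction vertices that ALF expects.

```python
# Hypothetical sketch only: these classes are NOT part of pyALF; they
# illustrate what a Python-level model definition could look like.
from dataclasses import dataclass, field


@dataclass
class HoppingTerm:
    """Quadratic term -t * (c^dag_i c_j + h.c.) on the bond (i, j)."""
    bond: tuple
    amplitude: float


@dataclass
class HubbardTerm:
    """Interaction term U * n_up(i) * n_down(i) on site i."""
    site: int
    strength: float


@dataclass
class Model:
    """Collects all terms of the Hamiltonian; a future pyALF could
    translate this into the structures the ALF Fortran code expects."""
    n_sites: int
    hoppings: list = field(default_factory=list)
    interactions: list = field(default_factory=list)

    def add_hopping(self, i, j, t):
        self.hoppings.append(HoppingTerm((i, j), t))

    def add_hubbard(self, i, U):
        self.interactions.append(HubbardTerm(i, U))


# Usage: a four-site Hubbard chain with periodic boundary conditions.
model = Model(n_sites=4)
for i in range(4):
    model.add_hopping(i, (i + 1) % 4, t=1.0)
    model.add_hubbard(i, U=4.0)
```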
To let users treat (py)ALF as a black box that returns accurate results with error bars, we will have to automate a number of common checks. Properties to test are the Green function precision (i.e., the numerical stability of the algorithm), warmup times, autocorrelation times, spikes from fat-tailed distributions, and systematic errors from the imaginary time step \(\Delta_\tau\), although the last one might also be left to the user. Furthermore, estimating simulation times and issuing warnings if jobs might take longer than expected would also be a great help for novice users.
Checking the Green function precision will be relatively easy. One could issue a warning (pronounced enough to prompt most users to investigate) if the average or maximal deviation is above a certain threshold, e.g. \(10^{-5}\) and \(0.1\), respectively. Alternatively, one could take the management of stabilization completely out of the hands of (novice) users by auto-tuning stabilization intervals and even switching between stabilization schemes if, for example, the scales become too large.
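As a minimal sketch of such a check, assuming the Green function deviations reported by ALF during a run are available as an array (the function name and interface are invented for illustration):

```python
import warnings

import numpy as np


def check_green_precision(deviations, mean_tol=1e-5, max_tol=0.1):
    """Warn if the numerical stabilization of the Green function looks
    insufficient; the default thresholds follow the values suggested above."""
    deviations = np.asarray(deviations)
    mean_dev = deviations.mean()
    max_dev = deviations.max()
    if mean_dev > mean_tol or max_dev > max_tol:
        warnings.warn(
            f"Poor Green function precision (mean {mean_dev:.2e}, "
            f"max {max_dev:.2e}); consider a smaller stabilization interval.",
            stacklevel=2,
        )
    return mean_dev, max_dev
```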
The correct estimation of warmup and autocorrelation times poses a bigger challenge. The general strategy for warmup estimation would most certainly involve a linear fit of the observable time series \(O(t)\) and dismissal of the leading elements until the slope is zero within some bounds. Finding good criteria might be tricky, though, since for fluctuating data, a fit could be flat within error bars but still show a significant drift. For the autocorrelation time, one could fit \(\bar{\gamma}_\tau(O) = \exp\left(-\frac{\tau}{\tau_\gamma(O)}\right)\) (cf. Eq. (1.7)), but results from this still have to be treated carefully. Overall, one can never be entirely sure that the Markov chain is not stuck in a local maximum of the weight, or that there is a very slow mode that cannot be resolved yet.
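A minimal sketch of both estimators, assuming the binned observable time series is available as a NumPy array; the drift criterion, the fixed lag window, and the function names are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress


def estimate_warmup(series, slope_sigma=2.0, step=10):
    """Drop leading bins until the linear drift of the remaining series
    is zero within `slope_sigma` standard errors of the fitted slope."""
    for start in range(0, len(series) - 20, step):
        tail = series[start:]
        fit = linregress(np.arange(len(tail)), tail)
        if abs(fit.slope) <= slope_sigma * fit.stderr:
            return start  # estimated number of warmup bins
    return len(series)  # no flat region found; treat everything as warmup


def estimate_autocorr_time(series, max_lag=100):
    """Fit the normalized autocorrelation function to exp(-tau/tau_gamma),
    cf. Eq. (1.7), and return the fitted autocorrelation time."""
    x = series - series.mean()
    gamma = np.array([np.mean(x[: len(x) - lag] * x[lag:]) if lag else np.mean(x * x)
                      for lag in range(max_lag)])
    gamma /= gamma[0]
    taus = np.arange(max_lag)
    (tau_gamma,), _ = curve_fit(lambda t, tg: np.exp(-t / tg), taus, gamma, p0=[1.0])
    return tau_gamma
```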
To detect whether the simulation might sample a fat-tailed distribution, the observable time series has to be scanned for outliers (so-called spikes), e.g. by comparing the deviation of individual bins from the mean to the average fluctuation, or by arranging the bins into a histogram.
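A simple version of the first approach could look like the following; the threshold of five times the standard deviation is an arbitrary choice for illustration:

```python
import numpy as np


def find_spikes(bins, n_sigma=5.0):
    """Return indices of bins whose deviation from the mean exceeds
    n_sigma times the average fluctuation, as a crude fat-tail indicator."""
    bins = np.asarray(bins)
    deviation = np.abs(bins - bins.mean())
    return np.flatnonzero(deviation > n_sigma * bins.std())
```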
Implementing these checks such that they work at least in many cases should be feasible, but extensive testing will be in order. Generally, my approach here would be to err on the side of caution and prompt users to look directly at the data whenever the data quality looks ambiguous to the algorithm.
With these ideas for making (py)ALF more approachable, I conclude my thesis.