|M.Sc. Student||Firanko Raz|
|Subject||Controlled Approximation Schemes for the Contraction of 2D Tensor Networks|
|Department||Department of Physics||Supervisor||Dr. Itai Arad|
Tensor networks are used in quantum physics to describe physically relevant states, such as ground states of lattice spin models. For states with such a description, computing expectation values amounts to contracting a network of low-rank tensors arranged in a geometric pattern that mirrors the underlying lattice. Tensor networks have been very successful for simulating 1D systems. This success can be attributed to the 1D structure of the underlying tensor network, which crucially eases its contraction.
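To make the 1D case concrete, the following is a minimal sketch (not taken from the thesis) of contracting a matrix product state site by site to evaluate a single-site expectation value. The tensor shapes, bond dimension, and random-state setup are illustrative assumptions.

```python
# Sketch: contracting a 1D tensor network (MPS) left to right to evaluate
# <psi|O|psi> for a local observable O acting on one site. All parameters
# here (n, d, D, the random tensors) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 6, 2, 4                       # sites, physical dim, bond dim

# MPS tensors A[i] of shape (D_left, d, D_right) with open boundaries.
A = [rng.standard_normal((1 if i == 0 else D, d, D if i < n - 1 else 1))
     for i in range(n)]

def expval(A, O, site):
    """Contract <psi|O_site|psi> by sweeping the transfer matrices."""
    E = np.ones((1, 1))                 # left boundary environment
    for i, T in enumerate(A):
        op = O if i == site else np.eye(T.shape[1])
        # E'_{xy} = sum_{a,b,s,t} E_{ab} conj(T)_{asx} op_{st} T_{bty}
        E = np.einsum('ab,asx,st,bty->xy', E, T.conj(), op, T)
    return E[0, 0]

Z = expval(A, np.eye(d), 0)             # the norm <psi|psi>
sz = np.diag([1.0, -1.0])               # Pauli-Z as the local observable
print(expval(A, sz, n // 2) / Z)        # normalized expectation value
```

Each sweep step costs only polynomially in the bond dimension, which is exactly why 1D contraction is tractable.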
However, most 2D systems require a 2D tensor-network structure. A central example is the Projected Entangled Pair States (PEPS) tensor network. Unfortunately, the computational resources required for an exact contraction of such networks grow exponentially with the system size, in contrast to the one-dimensional case. Therefore, PEPS-based algorithms must perform these contractions approximately. These approximations degrade the accuracy of the algorithms and, in most cases, still result in running times that scale poorly with the system size.
Although a general PEPS tensor network is NP-hard to contract (even approximately), this is not necessarily the case for PEPS that approximate ground states of gapped local Hamiltonians. Such states are subject to many additional constraints, such as the exponential decay of correlations, which might be useful for finding efficient contraction schemes.
We ask the following question: can tensor networks that approximate ground states of gapped local Hamiltonians be efficiently contracted, using prior knowledge of the underlying Hamiltonian and some of its properties (such as the existence of a gap)?
Our approach is to use only a finite region around the local observable to approximate its expectation value. We consider two cases: frustration-free Hamiltonians and frustrated Hamiltonians. In each case we use a different approximation scheme, but both schemes rely only on the local information of the tensor network around the observable. We examine a few exactly solvable frustration-free models and show that their ground states admit a PEPS description in which our scheme works. We then discuss a general frustrated Hamiltonian and establish a controlled numerical algorithm for computing the reduced density matrix of the ground state on a region.
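Once such a scheme produces the reduced density matrix of the ground state on a region, local expectation values follow from a simple trace. The toy state, dimensions, and variable names below are assumptions for demonstration only, not the thesis algorithm.

```python
# Sketch: given the reduced density matrix rho_R on a region R, a local
# expectation value is <O> = Tr(rho_R O). The "ground state" here is a
# random toy state on R (x) E; all dimensions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
dR, dE = 4, 8                           # dims of region R and environment E

# Toy normalized state |psi> on R (x) E, reshaped as a dR x dE matrix.
psi = rng.standard_normal((dR, dE)) + 1j * rng.standard_normal((dR, dE))
psi /= np.linalg.norm(psi)

# Reduced density matrix on R: trace out the environment.
rho_R = psi @ psi.conj().T              # rho_R[i,j] = sum_e psi[i,e] conj(psi[j,e])

O = rng.standard_normal((dR, dR))
O = O + O.T                             # a Hermitian local observable
print(np.trace(rho_R @ O).real)         # local expectation value Tr(rho_R O)
```

The point of a controlled scheme is precisely to obtain `rho_R` with a certified error bound, after which every observable supported on R is accessible this way.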