Informatics Forum, 5.05A
10 Crichton Street
Edinburgh, EH8 9AB
email: hguo AT inf ed ac uk

About Me

I am a lecturer in algorithms and complexity in the School of Informatics, University of Edinburgh. Before Edinburgh, I worked and studied in Berkeley, London, Madison, and Beijing. My hometown is Luoyang.

My research interest lies in theoretical computer science. I am particularly interested in mapping the complexity of counting problems. Typical problems include computing marginal probabilities and expectations of random variables, evaluating partition functions, etc. Along the way I have inadvertently made some progress in understanding the power of various algorithms, exact and approximate, deterministic and randomised alike. Often we ask: “this algorithm works in practice, but does it work in theory?”

For a list of publications, see the research page (with pdfs and slides, if available).

LFCS Seminars

I am organizing the LFCS seminar series. Send me an email if you want to give a talk!

Some Research

Uniform Sampling and the Lovász Local Lemma

Uniform Sampling through the Lovász Local Lemma (with Mark Jerrum and Jingcheng Liu), J. ACM, 66(3):18, 2019
A polynomial-time approximation algorithm for all-terminal network reliability (with Mark Jerrum), SIAM J. Comput. 48(3), 964-978, 2019 (best paper award at ICALP 2018)

The Lovász Local Lemma is a classical gem in combinatorics for showing the existence of “perfect” objects under dependency conditions. The local lemma was made algorithmic by Moser and Tardos (2010), who showed that one can find a perfect object efficiently under the same conditions. However, it was still not clear how to generate these perfect objects uniformly at random. We showed that for extremal instances, the output of the Moser-Tardos algorithm is in fact uniform over all perfect objects. Furthermore, we found a new variant, called partial rejection sampling, whose output is guaranteed to come from the desired distribution, and which runs faster than rejection sampling. Using this new framework, we managed to confirm a conjecture of Gorodezky and Pak (2014). This led to the first polynomial-time approximation algorithm for all-terminal network reliability, which had been an open problem since the 1980s and is explicitly mentioned by, for example, Welsh (1993).
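As a rough illustration of the idea (a minimal sketch on an invented toy instance, not the paper's actual setting), the partial-rejection-sampling loop draws all variables independently and then repeatedly resamples only the variables appearing in occurring bad events, instead of restarting from scratch as classic rejection sampling would:

```python
import random

def partial_rejection_sampling(n, bad_events, max_iters=10_000):
    """Sketch of partial rejection sampling for an extremal instance.

    Variables are i.i.d. fair coin flips; each bad event is a pair
    (variable indices, forbidden local assignment). We resample only
    the variables of currently occurring bad events. In the extremal
    case, the output is uniform over assignments avoiding every bad
    event.
    """
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_iters):
        occurring = [vars_ for vars_, forbidden in bad_events
                     if tuple(x[i] for i in vars_) == forbidden]
        if not occurring:
            return x  # a "perfect" assignment
        for i in set().union(*occurring):
            x[i] = random.randint(0, 1)
    raise RuntimeError("did not converge")

# Toy instance (invented): forbid all-zeros on {0,1}, all-ones on {2,3}.
events = [((0, 1), (0, 0)), ((2, 3), (1, 1))]
sample = partial_rejection_sampling(4, events)
assert (sample[0], sample[1]) != (0, 0)
assert (sample[2], sample[3]) != (1, 1)
```

The toy events here live on disjoint variable sets, so the instance is trivially extremal; the interesting cases in the paper involve genuinely dependent events.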

Rapid Mixing of Random Cluster Dynamics

Random Cluster Dynamics for the Ising Model is Rapidly Mixing (with Mark Jerrum), Ann. Appl. Probab. 28(2), 1292-1313, 2018

For the random cluster model at q = 2, we establish a mixing time upper bound for the Glauber dynamics that is polynomial in the size of the underlying graph. Through a comparison result of Ullrich (2014), this implies that the Swendsen-Wang (1987) algorithm for ferromagnetic Ising models is always rapidly mixing, as conjectured by Sokal in the 1990s. Note that our bound is much larger than the conjectured optimal upper bound O(n^0.25) due to Peres.
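To make the dynamics concrete, here is a hedged sketch (invented for illustration, not the paper's code) of a single heat-bath update for the random cluster model with edge parameter p and cluster weight q. The conditional probability follows from the random cluster weights: if the chosen edge's endpoints are already joined by other open edges, opening it does not change the cluster count, so it is open with probability p; otherwise closing it creates an extra cluster with weight factor q, giving open probability p / (p + q(1 - p)):

```python
import random

def rc_glauber_step(edges, state, p, q, n):
    """One heat-bath (Glauber) update for the random cluster model.

    edges: list of (u, v) vertex pairs; state: list of booleans
    (open/closed per edge); n: number of vertices.
    """
    e = random.randrange(len(edges))
    u, v = edges[e]
    # Check whether u and v are connected via the *other* open edges.
    adj = {i: [] for i in range(n)}
    for j, (a, b) in enumerate(edges):
        if j != e and state[j]:
            adj[a].append(b)
            adj[b].append(a)
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for x in adj[w]:
            if x not in seen:
                seen.add(x)
                stack.append(x)
    # Resample edge e from its conditional distribution.
    p_open = p if v in seen else p / (p + q * (1 - p))
    state[e] = random.random() < p_open
    return state
```

Glauber dynamics runs many such single-edge updates; the mixing time bounds how many are needed before the state is close to the random cluster distribution.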

Computational Complexity Classifications

A Holant Dichotomy: Is the FKT Algorithm Universal? (with Jin-Yi Cai, Zhiguo Fu, and Tyson Williams), FOCS 2015
A Complete Dichotomy Rises from the Capture of Vanishing Signatures (with Jin-Yi Cai and Tyson Williams), SIAM J. Comput. 45(5), 1671-1728, 2016

The goal of computational complexity is to understand the intrinsic difficulty of various problems. Instead of looking at individual problems or asking overly broad questions, we turn to problems within certain expressive yet specific frameworks. The Holant framework, inspired by the seminal work of Valiant (2004) on holographic algorithms, is among the most expressive ones. One way to formulate a Holant problem is as evaluating the contraction of a given tensor network. In the two papers listed above, we establish complete classifications in both the general and planar settings, for symmetric constraint functions (or tensors).
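As a toy illustration of the tensor-network view (all names and the example instance are invented), the Holant value of a small instance can be computed by brute force: place a constraint function on each vertex, a shared Boolean variable on each edge, and sum over all edge assignments the product of the local values:

```python
from itertools import product

def holant(vertices, edges):
    """Brute-force Holant value of a small instance.

    vertices: maps a vertex name to (incident edge ids, function),
    where the function maps the tuple of Boolean values on the
    incident edges to a number. edges: list of edge ids. The result
    is the full contraction of the tensor network whose tensors sit
    on the vertices and whose shared indices sit on the edges.
    """
    total = 0
    for assignment in product((0, 1), repeat=len(edges)):
        term = 1
        for inc, f in vertices.values():
            term *= f(tuple(assignment[i] for i in inc))
        total += term
    return total

# Toy example: two vertices joined by two parallel edges, each vertex
# carrying the EQUALITY function. Only the all-0 and all-1 edge
# assignments contribute, so the Holant is 2.
eq = lambda t: 1 if len(set(t)) == 1 else 0
verts = {"u": ([0, 1], eq), "v": ([0, 1], eq)}
print(holant(verts, [0, 1]))  # → 2
```

The brute force is exponential in the number of edges, of course; the classifications above determine exactly for which constraint functions the problem admits a polynomial-time algorithm and for which it is #P-hard.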


Apparently, Edinburgh is a good place for counting.

(Picture taken on W Nicolson St.)

In my spare time, I enjoy video games a lot.

In the distant past, I even played competitively. (See, e.g., here and here.)