We consider the interacting tempering algorithm, a simplified version of the equi-energy sampler of Kou, Zhou and Wong (2006). As a first step towards a quantitative and non-asymptotic analysis of its convergence behavior, we show that, under easy-to-verify assumptions on the distribution of interest, the interacting tempering process rapidly forgets its starting state. The result applies, among other models, to exponential random graph models, the Ising and Potts models (in mean field or on a bounded-degree graph), and (Edwards-Anderson) Ising spin glasses.

To the extent that we believe some of these distributions are hard to simulate, even approximately, the result suggests that in interacting tempering, at least for some distributions, forgetting the starting state may occur much earlier than convergence to the limiting distribution. For bounding the mixing time of the interacting tempering algorithm, the result allows us to assume that the process is started in a state drawn from the limiting distribution.