I read some good practical advice about when enough is enough in Markov Chain Monte Carlo sampling this morning. In their chapter “Inference from simulations and monitoring convergence” of the Handbook of Markov Chain Monte Carlo, Andrew Gelman and Kenneth Shirley say many useful things in a quickly digested format.
You can get their six-point summary of recommendations from page 1 of the PDF, and I’ll give you some of their concluding summary here:
Monitoring convergence of iterative simulation is straightforward—discard the first part of the simulations and then compare the variances of quantities of interest within and between chains—and inference given approximate convergence is even simpler: just mix the simulations together and use them as a joint distribution. Both these ideas can and have been refined, but the basic concepts are straightforward and robust.
The hard part is knowing what to do when the simulations are slow to converge.
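The within/between variance comparison they mention is the potential scale reduction factor, often written R-hat. Here is a minimal sketch of the basic version in NumPy; the chapter’s refinements (such as splitting each chain in half before comparing) are omitted, and the example data and threshold values are just illustrative:

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Basic potential scale reduction factor for one scalar quantity.

    chains: array of shape (m, n) -- m chains, n draws each,
    with the warmup ("first part of the simulations") already discarded.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    B = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)                # near 1.0 at convergence

# Four well-mixed chains targeting the same distribution: R-hat near 1.
rng = np.random.default_rng(0)
good = rng.normal(size=(4, 1000))
print(gelman_rubin_rhat(good))

# Chains stuck in different regions: R-hat well above 1.
bad = good + np.arange(4)[:, None] * 5.0
print(gelman_rubin_rhat(bad))
```

If R-hat is close to 1, the “just mix the simulations together” step amounts to pooling the chains, e.g. `good.reshape(-1)`, and treating the pooled draws as samples from the target.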
Their other important idea is that if the chain is slow to converge, this is probably a sign that something was not modeled as well as it could be. That is a very different perspective from TCS, where I view a slowly converging chain as an exciting opportunity to design new transition dynamics that mix more rapidly.