I love doing Math Reviews, for the random stuff I get to read. Here is a paper I probably would not have found:
MR3208124 Román, Jorge Carlos, Hobert, James P. and Presnell, Brett, On reparametrization and the Gibbs sampler, Statist. Probab. Lett. 91 (2014), 110–116.
The Gibbs sampler may be out of fashion with the Bayesian computation crowd these days, but reparameterization is still mysterious. I tried the PyMC3 NUTS sampler on the first example and it, too, shows rather different mixing times under the different parameterizations: https://gist.github.com/aflaxman/be253895efd0e2962472
Filed under MCMC, Mysteries
From http://stats.stackexchange.com/questions/10798/measures-of-autocorrelation-in-categorical-values-of-a-markov-chain, a question I run into from time to time:
> Are there any measures of auto-correlation for a sequence of observations of an (unordered) categorical variable?
An (accepted) answer that got me thinking:
> [L]ook directly at the convergence rate for the Markov chain.
My interpretation, in PyMC2 terms: run the chain, estimate the empirical transition probabilities for the categorical variable, and examine the spectral gap of the resulting transition matrix.
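A minimal sketch of that calculation in NumPy (the function name `spectral_gap` is my own; it assumes every state appears at least once in the chain):

```python
import numpy as np

def spectral_gap(chain, n_states):
    """Estimate the spectral gap of a categorical Markov chain from a
    sequence of observed states (integers in 0..n_states-1).
    Assumes every state is visited at least once."""
    # empirical transition counts
    counts = np.zeros((n_states, n_states))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    # row-normalize to get empirical transition probabilities
    P = counts / counts.sum(axis=1, keepdims=True)
    # eigenvalue moduli, sorted descending; the largest is 1
    # for a stochastic matrix
    moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    # gap between 1 and the second-largest modulus; a larger gap
    # means faster mixing (less autocorrelation)
    return 1 - moduli[1]
```

A chain that alternates deterministically between two states has modulus-one eigenvalues and hence zero gap, while a nearly independent chain has a gap near one.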
Experimental notebook tk.
The PyMC documentation is a little slim on the topic of defining a custom sampler, something I had to figure out for DisMod work over the years. Here is a minimal example of how I did it, written in answer to a CrossValidated question.
I’ve been reading about Sequential Monte Carlo recently, and I think it will fit well into the PyMC3 framework. I will give it a try when I have a free minute, but maybe someone else will be inspired to try it first. This paper includes some pseudocode.
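To make the idea concrete, here is a sketch of the simplest member of the Sequential Monte Carlo family, a bootstrap particle filter, in plain NumPy rather than PyMC3 (the model, the function name `bootstrap_filter`, and all parameters are my own illustrative choices, not anything from the paper):

```python
import numpy as np

def bootstrap_filter(obs, n_particles=2000, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the toy state-space model
        x_t = x_{t-1} + Normal(0, sigma_x**2)   (latent random walk)
        y_t = x_t     + Normal(0, sigma_y**2)   (noisy observation)
    Returns the filtering mean of x_t at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, sigma_x, n_particles)
    means = []
    for y in obs:
        # propagate particles through the transition model
        particles = particles + rng.normal(0.0, sigma_x, n_particles)
        # weight each particle by the likelihood of the observation
        w = np.exp(-0.5 * ((y - particles) / sigma_y) ** 2)
        w /= w.sum()
        # resample (multinomial) to concentrate on high-weight particles
        particles = rng.choice(particles, size=n_particles, p=w)
        means.append(particles.mean())
    return np.array(means)
```

Feeding it a run of identical observations shows the filtering mean being pulled toward the observed value, which is the behavior a Kalman filter would give for this linear-Gaussian model.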
I almost didn’t share these HarleMCMC videos, but how long could I resist, really?
We’ll see how this holds up to repeated viewing…
Here is a math/dance video for the ages:
I’ve always enjoyed the Gaussian Process part of the PyMC package, and a question on the mailing list yesterday reminded me of a project I worked on with it that never came to fruition: how to implement constraints on the derivatives of the GP.
The best answer I could come up with is to use “potential” nodes and enforce the constraint approximately. That is, instead of constraining the derivative itself, I settle for constraining a secant that approximates it; and instead of constraining it at every point in an interval, I settle for constraining it at a discrete subset of points.
Here is an ipython notebook example: [ipynb] [py]
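The secant idea can be sketched as a log-potential in plain NumPy, independent of any PyMC version (the function name `secant_log_potential` and the penalty form are my own illustrative choices; in PyMC this quantity would be returned from a potential node):

```python
import numpy as np

def secant_log_potential(f_vals, x_vals, slope_min=0.0, penalty=1e3):
    """Soft constraint that secant slopes of f exceed slope_min,
    i.e. a discretized monotonicity-style constraint on the GP.
    Returns a log-potential: 0 when satisfied, large negative when not."""
    # secants between consecutive grid points approximate the derivative
    slopes = np.diff(f_vals) / np.diff(x_vals)
    # quadratic penalty on the amount each secant falls below slope_min
    violation = np.clip(slope_min - slopes, 0.0, None)
    return -penalty * np.sum(violation ** 2)
```

An increasing function evaluated on the grid incurs no penalty, while a decreasing one gets a large negative log-potential, steering the sampler away from it.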