# Monthly Archives: April 2011

## In the media

Jake Marcus, a student I’ve been working with since I came to IHME, has an essay in The New Republic! It’s about political movements and fighting disease, and it’s his answer to the question: Why isn’t there a global movement to combat noncommunicable diseases? The answer: it’s complicated.

Jake says credit for his article should also go to another of our coworkers, Steve Lim, who helped him understand some of the complications.

Filed under global health

## MCMC in Python: Part II of PyMC Step Methods and their pitfalls

I had a good time with the first round of my Step Method Pitfalls: besides making some great movies, I got a tip on how to combine Hit-and-Run with Adaptive Metropolis (together they are “the H-RAM“, fitting since the approach was suggested by Aram). And even more important than getting the tip, I did enough of a proof-of-concept to inspire Anand to rewrite it in the correct PyMC style. H-RAM lives.
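For readers who haven't seen Hit-and-Run before: the idea is to pick a uniformly random direction, then take a Metropolis step along that line. This is not Anand's PyMC implementation, just a minimal numpy sketch of a single step, with the step scale as a free parameter:

```python
import numpy as np

def hit_and_run_step(x, log_p, step_scale=1.0, rng=None):
    """One Hit-and-Run Metropolis step: pick a uniformly random
    direction, propose a jump along that line, accept or reject."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)            # uniform random direction
    r = rng.normal(scale=step_scale)  # signed distance along the line
    x_new = x + r * u
    # standard Metropolis accept/reject (the proposal is symmetric)
    if np.log(rng.uniform()) < log_p(x_new) - log_p(x):
        return x_new
    return x
```

The Adaptive Metropolis part of the H-RAM would tune `step_scale` (or a full proposal covariance) from the chain's history; that adaptation is omitted here.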

Enter the GHME, where I had a lot of good discussions about methods for metrics, and where Mariel Finucane told me that her stress test for new step methods is always the banana (from Haario, Saksman, Tamminen, Adaptive proposal distribution for random walk Metropolis algorithm, Computational Statistics 1999):

> The non-linear banana-shaped distributions are constructed from the Gaussian ones by ‘twisting’ them as follows. Let $f$ be the density of the multivariate normal distribution $N(0, C_1)$ with the covariance again given by $C_1 = {\rm diag}(100, 1, \ldots, 1)$. The density function of the ‘twisted’ Gaussian with the nonlinearity parameter $b > 0$ is given by $f_b = f \circ \phi_b$, where $\phi_b(x) = (x_1, x_2 + b x_1^2 - 100b, x_3, \ldots, x_n)$.

It’s a good distribution, and it makes for a good movie.
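The quoted construction translates almost directly into code. Here is a minimal numpy sketch of the log-density (up to an additive constant), suitable as a stress-test target; `b=0.03` is just a default for the nonlinearity parameter, not a value prescribed by the paper:

```python
import numpy as np

def log_banana(x, b=0.03):
    """Log-density (up to a constant) of the 'twisted' Gaussian of
    Haario et al.: apply phi_b to x, then evaluate the log-density
    of N(0, C_1) with C_1 = diag(100, 1, ..., 1)."""
    y = np.asarray(x, dtype=float).copy()
    y[1] = x[1] + b * x[0]**2 - 100 * b   # the twist phi_b
    return -0.5 * (y[0]**2 / 100. + np.sum(y[1:]**2))
```

The mode sits at $x = (0, 100b, 0, \ldots, 0)$, since that is the point $\phi_b$ maps to the origin; a step method that handles the long, curved ridge around it is doing something right.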

More detailed explorations to follow. What do you want to see?


Filed under statistics, TCS

## The surprising (to economists) truth about what motivates us

I’ve been watching some really fun 10-minute talks on YouTube lately. They are put together by the Royal Society for the encouragement of Arts, Manufactures and Commerce (weird name, huh? It seems they prefer “RSA” for short, but I’m still enough of a computer scientist to think that acronym is taken.)

Here is one that crossed my inbox yesterday, a talk by Dan Pink about what motivates us:

Filed under Mysteries

## Gaussian Processes and Jigsaw Puzzles with PyMC.gp

I was thinking about cutting something up into little pieces the other day, let’s not get into the details. The point is, I turned my destructive urge into creative energy when I started thinking about jigsaw puzzles. You might remember when my hobby was maze making with randomly generated bounded depth spanning trees a few months ago. It turns out that jigsaw puzzles are just as fun.

The secret ingredient to my jigsaw puzzle design is the Gaussian process with a Matern covariance function. (Maybe you knew that was coming.) GPs are an elegant way to make the little nubs that hold the puzzle together. It’s best to use two of them together to make the nub, like this:

Doing this is not hard at all, once you sort out the intricacies of the PyMC.gp package, and takes only a few lines of Python code:

```python
def gp_puzzle_nub(diff_degree=2., amp=1., scale=1.5, steps=100):
    """ Generate a puzzle nub connecting point a to point b"""

    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_x, data.puzzle_V)
    GPx = gp.GPSubmodel('GP', M, C, pl.arange(1))
    X = GPx.value.f(pl.arange(0., 1.0001, 1. / steps))

    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_y, data.puzzle_V)
    GPy = gp.GPSubmodel('GP', M, C, pl.arange(1))
    Y = GPy.value.f(pl.arange(0., 1.0001, 1. / steps))

    return X, Y
```


I was inspired by something Andrew Gelman blogged, about the utility of writing a paper and a blog post about this or that. So I tried it out. It didn’t work for me, though. There isn’t a paper’s worth of ideas here, but now I’ve depleted my energy before finishing the blog. Here it is: an attempted paper to accompany this post. Patches welcome.

In addition to an aesthetically pleasing diversion, I also got something potentially useful out of this: a diagram of how misspecifying any one of the parameters of the Matern covariance function can lead to similarly strange-looking results. This is my evidence that you can’t tell if your amplitude is too small or your scale is too large from a single bad fit:
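The ambiguity shows up in the covariance function itself. Here is a minimal numpy sketch of a Matern kernel at fixed smoothness $\nu = 3/2$ (a closed-form member of the family that PyMC’s gp package parameterizes by `diff_degree`, `amp`, and `scale`):

```python
import numpy as np

def matern32(d, amp=1.0, scale=1.5):
    """Matern covariance with smoothness nu = 3/2, in closed form."""
    r = np.sqrt(3.0) * np.abs(d) / scale
    return amp**2 * (1.0 + r) * np.exp(-r)

# At a fixed lag, shrinking the amplitude a little and stretching the
# scale a little produce nearly the same covariance value, which is
# one way to see why a single bad fit can't tell the two apart.
print(matern32(1.0, amp=1.0, scale=1.5))
print(matern32(1.0, amp=0.9, scale=2.5))
```

Identifying the parameters separately takes information at several lags at once, which a single misfit curve doesn’t provide.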

Filed under statistics

## Backcalculations on the Price of Life

Peter Singer has a short article on the way our society implicitly values human lives. Very clear and very quantitative. His calculations conclude with this:

The US regards the life of an American as equivalent to the lives of 144 Afghans.
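Singer’s headline figure is the ratio of two implicit valuations. The dollar amounts below are hypothetical placeholders, not Singer’s actual inputs; they only show the arithmetic behind a 144-to-1 backcalculation:

```python
def implied_ratio(value_a, value_b):
    """How many lives valued at value_b equal one life valued at value_a."""
    return value_a / value_b

# Hypothetical placeholder figures, not Singer's actual numbers:
print(implied_ratio(7_200_000, 50_000))  # -> 144.0
```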