The most vital lesson from your work

Seed Magazine had a short article that has captivated my imagination for the last few days:

If you only had a single statement to pass on to others summarizing the most vital lesson to be drawn from your work, what would it be? Seed asked eleven scientists this question.

I was pretty disappointed with the answers, but upon re-reading the question, I realized that it’s probably due to differences of interpretation. The only answer that I liked was from Steve Strogatz, who said:

You can make sense of anything that changes smoothly in space or time, no matter how wild and complicated it may appear, by reimagining it as an infinite series of infinitesimal changes, each proceeding at a constant (and hence much simpler) rate, and then adding all those simple little changes back together to reconstitute the original whole.

This is a pretty loose reading of what “from” means in “drawn from your work”. I don’t think Strogatz claims he invented infinitesimal calculus, just that in his work this has been a vital idea (the most vital? total orderings are tough sometimes).

What’s the most vital lesson to be drawn from your work?

2 Comments

Filed under Uncategorized

In the media

Jake called my attention to a recent public interest story in the NYTimes, about using the results of an on-going telephone survey to predict the demographic profile of the happiest man in America. It seems like they’re making fun of simple models, a pursuit of which I approve:

The New York Times asked Gallup to come up with a statistical composite for the happiest person in America, based on the characteristics that most closely correlated with happiness in 2010. Men, for example, tend to be happier than women, older people are happier than middle-aged people, and so on.

Gallup’s answer: he’s a tall, Asian-American, observant Jew who is at least 65 and married, has children, lives in Hawaii, runs his own business and has a household income of more than $120,000 a year.

But the joke is that they made a few phone calls and found the guy… he looks pretty happy.

1 Comment

Filed under global health, statistics

MCMC in Python: Part IIb of PyMC Step Methods and their pitfalls

Two weeks ago, I slapped together a post about the banana distribution and some of my new movies sampling from it. Today I’m going to explore how MCMC with the Adaptive Metropolis step method performs as the banana grows in dimension and becomes more curvy.

To start things off, have a look at the banana model, as implemented for PyMC:

    import pylab as pl
    import pymc as mc

    dim = 2   # dimension of the banana, e.g. 2, 4, or 8
    b = .03   # nonlinearity parameter; larger b means a curvier banana

    C_1 = pl.ones(dim)
    C_1[0] = 100.  # first coordinate has variance 100, the rest variance 1
    X = mc.Uninformative('X', value=pl.zeros(dim))

    def banana_like(X, tau, b):
        phi_X = pl.copy(X)
        phi_X *= 30.  # rescale X to match scale of other models
        phi_X[1] = phi_X[1] + b*phi_X[0]**2 - 100*b

        return mc.normal_like(phi_X, 0., tau)

    @mc.potential
    def banana(X=X, tau=C_1**-1, b=b):
        return banana_like(X, tau, b)

Pretty simple stuff, right? This is a pattern I find myself using a lot: an uninformative stochastic variable with the intended distribution implemented as a potential. If I understand correctly, this is something that you can’t do easily with BUGS.

Adaptive Metropolis is (or was) the default step method that PyMC uses to fit a model like this, and you can ensure that it is used (and fiddle with the AM parameters) with the use_step_method() function:

mod.use_step_method(mc.AdaptiveMetropolis, mod.X)

Important Tip: If you want to experiment with different step methods, you need to call mod.use_step_method() before doing any sampling. When you call mod.sample(), any stochastic without an assigned step method is given whatever method PyMC deems best; after that, use_step_method() adds additional methods, but doesn’t remove the automatically assigned ones. (See this discussion from the PyMC mailing list.)
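For example, here is a minimal sketch of the right order of operations (the iteration counts are just placeholders):

mod = mc.MCMC([X, banana])
mod.use_step_method(mc.AdaptiveMetropolis, mod.X)  # before any sampling
mod.sample(iter=20000, burn=10000, thin=10)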

So let’s see how Adaptive Metropolis performs on a flattened banana as the dimension increases:

You can tell what dimension the banana lives in by counting the panels in the autocorrelation plots in the upper right of each video: 2, 4, or 8. AM is definitely up to the challenge of sampling from an 8-dimensional “flattened banana”, but this is nothing more than an uncorrelated normal distribution that has been stretched along one axis, so the results shouldn’t impress anyone.

Here is what happens in a more curvy banana:

Now you can see the performance start to degrade. AM does just fine on the 2-d banana, but it’s going to need more time to sample appropriately from the 4-d one, and even more for the 8-d. This is similar to the way a cross-diagonal region breaks AM, but a lot more like something that could come up in a routine statistical application.

For a serious stress test, I can curve the banana even more:

Now AM has trouble even with the two dimensional version.

I’ve had an idea to take this qualitative exploration and quantify it, to do a real “experimental analysis of algorithms”-style analysis. Have you seen something like this already? I do have enough to do without making additional work for myself.

Comments Off on MCMC in Python: Part IIb of PyMC Step Methods and their pitfalls

Filed under statistics, TCS

In the media

Jake Marcus, a student I’ve been working with since I came to IHME, has an essay in The New Republic! It’s about political movements and fighting disease, and it’s his answer to the question “Why isn’t there a global movement to combat noncommunicable diseases?” The answer: it’s complicated.

Jake says credit for his article should also go to another of our coworkers, Steve Lim, who helped him understand some of the complications.

Comments Off on In the media

Filed under global health

MCMC in Python: Part II of PyMC Step Methods and their pitfalls

I had a good time with the first round of my Step Method Pitfalls: besides making some great movies, I got a tip on how to combine Hit-and-Run with Adaptive Metropolis (together they are “the H-RAM”, fitting since the approach was suggested by Aram). And even more important than getting the tip, I did enough of a proof-of-concept to inspire Anand to rewrite it in the correct PyMC style. H-RAM lives.

Enter the GHME, where I had a lot of good discussions about methods for metrics, and where Mariel Finucane told me that her stress test for new step methods is always the banana (from Haario, Saksman, Tamminen, Adaptive proposal distribution for random walk Metropolis algorithm, Computational Statistics 1999):

The non-linear banana-shaped distributions are constructed from the Gaussian ones by ‘twisting’ them as follows. Let f be the density of the multivariate normal distribution N(0, C_1) with the covariance again given by C_1 = {\rm diag}(100, 1, ..., 1). The density function of the ‘twisted’ Gaussian with the nonlinearity parameter b > 0 is given by f_b = f \circ \phi_b, where the function \phi_b(x) = (x_1, x_2 + b x_1^2 - 100b, x_3, ..., x_n).
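Since \phi_b only shifts the second coordinate (its Jacobian determinant is 1), you can draw exact samples from f_b by sampling Z from N(0, C_1) and applying the inverse twist. Here is a minimal numpy sketch of that (the dim and b values are only examples), which makes a handy ground truth when checking MCMC output:

import numpy as np

def banana_exact_samples(n, dim=2, b=.03):
    """ Exact samples from the twisted Gaussian f_b, by drawing
    Z ~ N(0, C_1) and applying the inverse twist phi_b^{-1} """
    C_1 = np.ones(dim)
    C_1[0] = 100.
    Z = np.random.randn(n, dim) * np.sqrt(C_1)
    Y = Z.copy()
    Y[:, 1] = Z[:, 1] - b*Z[:, 0]**2 + 100*b
    return Y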

It’s a good distribution, and it makes for a good movie.

More detailed explorations to follow. What do you want to see?

1 Comment

Filed under statistics, TCS

The surprising (to economists) truth about what motivates us

I’ve been watching really fun 10-minute talks on YouTube lately. They are put together by the Royal Society for the encouragement of Arts, Manufactures and Commerce (weird name, huh? It seems they prefer “RSA” for short. But I’m still enough of a computer scientist to think that acronym is taken.)

Here is one that crossed my inbox yesterday, a talk by Dan Pink about what motivates us:

2 Comments

Filed under Mysteries

Gaussian Processes and Jigsaw Puzzles with PyMC.gp

I was thinking about cutting something up into little pieces the other day; let’s not get into the details. The point is, I turned my destructive urge into creative energy when I started thinking about jigsaw puzzles. You might remember when my hobby was maze making with randomly generated bounded-depth spanning trees a few months ago. It turns out that jigsaw puzzles are just as fun.

The secret ingredient to my jigsaw puzzle design is the Gaussian process with a Matern covariance function. (Maybe you knew that was coming.) GPs are an elegant way to make the little nubs that hold the puzzle together. It’s best to use two of them together to make the nub, like this:

Doing this is not hard at all, once you sort out the intricacies of the PyMC.gp package, and takes only a few lines of Python code:

import pylab as pl
from pymc import gp

# uninformative_prior_gp and the data module are in the full code
# linked below

def gp_puzzle_nub(diff_degree=2., amp=1., scale=1.5, steps=100):
    """ Generate a puzzle nub connecting point a to point b"""

    # GP for the x-coordinate of the nub curve
    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_x, data.puzzle_V)
    GPx = gp.GPSubmodel('GPx', M, C, pl.arange(1))
    X = GPx.value.f(pl.arange(0., 1.0001, 1. / steps))

    # a second, independent GP for the y-coordinate
    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_y, data.puzzle_V)
    GPy = gp.GPSubmodel('GPy', M, C, pl.arange(1))
    Y = GPy.value.f(pl.arange(0., 1.0001, 1. / steps))

    return X, Y

(full code here)
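To actually draw a nub, one might use it like this (a minimal sketch, assuming the helpers from the full code are in scope):

X, Y = gp_puzzle_nub(diff_degree=2., amp=1., scale=1.5, steps=100)
pl.plot(X, Y)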

I was inspired by something Andrew Gelman blogged about the utility of writing a paper and a blog post about this or that. So I tried it out. It didn’t work for me, though. There isn’t a paper’s worth of ideas here, but now I’ve depleted my energy before finishing the blog post. Here it is: an attempted paper to accompany this post. Patches welcome.

In addition to an aesthetically pleasing diversion, I also got something potentially useful out of this: a diagram of how misspecifying any one of the parameters of the Matern covariance function can lead to similarly strange-looking results. This is my evidence that you can’t tell, from a single bad fit, whether your amplitude is too small or your scale is too large:
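To generate a diagram like that, one can draw realizations from GP priors with one parameter perturbed at a time. Here is a rough sketch with the PyMC.gp package (the parameter values are just examples, not the ones from the diagram):

from pymc import gp
from pymc.gp.cov_funs import matern
import pylab as pl

x = pl.linspace(0., 1., 101)

def zero_fun(x):
    return pl.zeros(len(x))

# one realization per parameter setting: a reference, then one
# parameter perturbed at a time
for label, (dd, amp, scale) in [('reference', (2., 1., 1.5)),
                                ('small amp', (2., .1, 1.5)),
                                ('large scale', (2., 1., 15.))]:
    M = gp.Mean(zero_fun)
    C = gp.Covariance(matern.euclidean, diff_degree=dd, amp=amp, scale=scale)
    f = gp.Realization(M, C)
    pl.plot(x, f(x), label=label)
pl.legend()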

Comments Off on Gaussian Processes and Jigsaw Puzzles with PyMC.gp

Filed under statistics

Backcalculations on the Price of Life

Peter Singer has a short article on the way our society implicitly values human lives. Very clear and very quantitative. His calculations conclude with this:

The US regards the life of an American as equivalent to the lives of 144 Afghans.

Comments Off on Backcalculations on the Price of Life

Filed under global health

Videos of the GHME

The Global Health Metrics and Evaluation conference that I attended two weeks ago was scrupulously videotaped, and most sessions are now online. I had a really good time at the session Responsible data sharing and strengthening country capacity for analysis, which started with this unusual framing by moderator Elizabeth Pisani:

1 Comment

Filed under global health, science policy

Data Wrangling in R, Stata and Python

It was nearly a year ago that students I had urged to try Python accosted me with complaints that the data manipulation capabilities they found so convenient in R and Stata were nowhere to be found. At the time, I did some digging and told them to try la.larry (or pandas, as mentioned by readers of that post). With some more experience, these recommendations have come up again, and in hindsight it seems like la.larry is too heavy a hammer for our most common tasks.

I’m hoping to put together a translation guide for R, Stata, and Python (there is already an extensive one… ours will be much more specialized, to just a few data wrangling commands), and until then, here are Kyle’s top two:

The easiest way to build record arrays (aside from csv2rec) IMO:

import numpy as np

# parallel lists: country, age, year, and a data value
a = ['USA','USA','CAN']
b = [1,6,4]
c = [1990.1,2005.,1995.]
d = ['x','y','z']
some_recarray = np.core.records.fromarrays([a,b,c,d], names=['country','age','year','data'])

The fromarrays method is especially nice because it automatically deals with datatypes.
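For instance (the exact dtype representation varies with numpy version and platform):

print(some_recarray.dtype)
# something like: [('country', '|S3'), ('age', '<i8'), ('year', '<f8'), ('data', '|S1')]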

To subset a particular country-year-age:

some_recarray[(some_recarray.country=='USA')
                & (some_recarray.age==6)
                & (some_recarray.year==2005)]

I’ve also found that caching each of the indices independently vastly speeds things up if you’re going to be iterating.
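For instance, here is a minimal sketch of that caching pattern (the loop is hypothetical, just to illustrate):

# compute each boolean index once, outside the loop
is_usa = some_recarray.country == 'USA'
is_age_6 = some_recarray.age == 6

for year in [1990.1, 1995., 2005.]:
    rows = some_recarray[is_usa & is_age_6 & (some_recarray.year == year)]
    # ... do something with rows ...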

Love the recarray, hate the documentation.

6 Comments

Filed under software engineering