# Gaussian Processes and Jigsaw Puzzles with PyMC.gp

I was thinking about cutting something up into little pieces the other day; let’s not get into the details. The point is, I turned my destructive urge into creative energy when I started thinking about jigsaw puzzles. You might remember that a few months ago my hobby was maze-making with randomly generated bounded-depth spanning trees. It turns out that jigsaw puzzles are just as fun.

The secret ingredient to my jigsaw puzzle design is the Gaussian process with a Matérn covariance function. (Maybe you knew that was coming.) GPs are an elegant way to make the little nubs that hold the puzzle together. It’s best to use two of them together, one for each coordinate of the nub, like this:

Doing this is not hard at all, once you sort out the intricacies of the PyMC.gp package, and takes only a few lines of Python code:

```python
def gp_puzzle_nub(diff_degree=2., amp=1., scale=1.5, steps=100):
    """Generate a puzzle nub connecting point a to point b"""

    # x coordinate: a GP prior conditioned on the nub's control points
    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_x, data.puzzle_V)
    GPx = gp.GPSubmodel('GPx', M, C, pl.arange(1))
    X = GPx.value.f(pl.arange(0., 1.0001, 1. / steps))

    # y coordinate: same prior, conditioned on the y control points
    # (distinct submodel names avoid a PyMC node-name collision)
    M, C = uninformative_prior_gp(0., diff_degree, amp, scale)
    gp.observe(M, C, data.puzzle_t, data.puzzle_y, data.puzzle_V)
    GPy = gp.GPSubmodel('GPy', M, C, pl.arange(1))
    Y = GPy.value.f(pl.arange(0., 1.0001, 1. / steps))

    return X, Y
```
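The code above leans on helpers that aren’t shown here (`uninformative_prior_gp`, the `data.puzzle_*` arrays, and the `gp`/`pl` imports from PyMC and pylab). For readers without PyMC handy, here’s a library-free sketch of what the conditioning step amounts to: a Matérn prior (with ν = `diff_degree` = 2) observed at a handful of control points, then one sample path drawn from the posterior. The control points are made-up stand-ins for `data.puzzle_t` / `data.puzzle_x`, not the ones I actually used.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu=2.0, amp=1.0, scale=1.5):
    """Matern covariance as a function of distance r (vectorized)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.full(r.shape, amp ** 2)      # k(0) = amp^2
    nz = r > 0
    z = np.sqrt(2 * nu) * r[nz] / scale
    out[nz] = amp ** 2 * (2 ** (1 - nu) / gamma(nu)) * z ** nu * kv(nu, z)
    return out

# Hypothetical control points: the nub starts and ends flat at 0
# and bulges out to 0.2 in the middle.
t_obs = np.array([0.0, 0.35, 0.5, 0.65, 1.0])
x_obs = np.array([0.0, 0.05, 0.2, 0.05, 0.0])
V = 1e-4  # small observation variance, playing the role of data.puzzle_V

t = np.linspace(0.0, 1.0, 101)
K_oo = matern(t_obs[:, None] - t_obs[None, :]) + V * np.eye(len(t_obs))
K_to = matern(t[:, None] - t_obs[None, :])
K_tt = matern(t[:, None] - t[None, :])

# Standard GP conditioning: posterior mean and covariance given the
# observations -- this is what gp.observe + evaluating f amounts to.
mean = K_to @ np.linalg.solve(K_oo, x_obs)
cov = K_tt - K_to @ np.linalg.solve(K_oo, K_to.T)

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean, cov + 1e-8 * np.eye(len(t)))  # one nub coordinate
```

Running the same recipe again on a second set of control points gives the `Y` coordinate, just as in the function above.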

I was inspired by something Andrew Gelman blogged about the utility of writing a paper and a blog post about this or that. So I tried it out. It didn’t work for me, though. There isn’t a paper’s worth of ideas here, and I depleted my energy before finishing the blog post. Here it is: an attempted paper to accompany this post. Patches welcome.

In addition to an aesthetically pleasing diversion, I also got something potentially useful out of this: a diagram of how misspecifying any one of the parameters of the Matérn covariance function can lead to similarly strange-looking results. This is my evidence that you can’t tell from a single bad fit whether your amplitude is too small or your scale is too large:
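There’s also a back-of-the-envelope way to see why these misspecifications look alike. For a short step h, the expected squared increment of a stationary GP is 2·(k(0) − k(h)), which for a Matérn kernel with ν = 2 is approximately 2·amp²·h²/scale². So halving the amplitude and doubling the scale damp the local wiggliness by the same factor of about four, and one overly smooth fit can’t tell you which parameter is off. The numbers below are illustrative, not taken from the post:

```python
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu=2.0, amp=1.0, scale=1.5):
    """Matern covariance of distance r; parameters mirror diff_degree, amp, scale."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.full(r.shape, amp ** 2)      # k(0) = amp^2
    nz = r > 0
    z = np.sqrt(2 * nu) * r[nz] / scale
    out[nz] = amp ** 2 * (2 ** (1 - nu) / gamma(nu)) * z ** nu * kv(nu, z)
    return out

def increment_var(h, **kw):
    """E[(f(t+h) - f(t))^2] for a stationary GP with this covariance."""
    k = matern(np.array([0.0, h]), **kw)
    return 2.0 * (k[0] - k[1])

h = 0.05                                              # a short step along the curve
well_specified = increment_var(h, amp=1.0, scale=1.5)
amp_too_small = increment_var(h, amp=0.5, scale=1.5)  # amplitude halved
scale_too_big = increment_var(h, amp=1.0, scale=3.0)  # scale doubled

# Both misspecifications shrink the local wiggles by nearly the same factor.
print(well_specified, amp_too_small, scale_too_big)
```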