I’ve started summarizing papers for MathSciNet again, a strangely arcane practice coordinated by the American Mathematical Society. MathSciNet is a vast database of short reviews of math publications, and I loved it when I was writing the background sections of my graduate thesis. It was so useful then that I feel compelled to write summaries for it now, if I can make the time. Like refereeing, it makes me read a range of recent papers, but unlike refereeing, I don’t need to make a fuss about things that seem wrong. Fun.
I used to get to read some of the latest and greatest results in random graphs this way, but since I started up again, I’ve adjusted my preferences to receive the latest and greatest results in statistical computation instead. Hence my new appreciation for “data augmentation”, a term that doesn’t strike me as good branding, but a technique that seems to have been key to MCMC coming into vogue for Bayesian computation.
Here is a little reading list for my future self to investigate further, followed by a quick sketch of the basic idea:
- Tanner and Wong, The Calculation of Posterior Distributions by Data Augmentation – 1987 paper that introduced the term
- van Dyk and Meng, The Art of Data Augmentation – 2001 discussion paper on designing efficient augmentation schemes
- Tanner and Wong, From EM to Data Augmentation: The Emergence of MCMC Bayesian Computation in the 1980s – 2011 paper telling the tale
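Before I forget what the term actually means in this setting: the basic recipe alternates between imputing the latent or missing data given the current parameters, and drawing the parameters given the newly “completed” data. Below is a minimal sketch in that spirit, using a toy censored-normal model I made up purely for illustration (the model, prior, and variable names are my own assumptions, not anything from the papers above), so take it as a memory aid rather than a faithful rendering of Tanner and Wong.

```python
# Toy data augmentation (two-step Gibbs) sampler: right-censored N(mu, 1)
# observations with a N(0, 10^2) prior on mu. Entirely illustrative.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulate data: y ~ N(2, 1), but we only observe min(y, c).
mu_true, c, n = 2.0, 2.5, 200
y = rng.normal(mu_true, 1.0, n)
obs = np.minimum(y, c)
censored = y >= c

prior_mean, prior_var = 0.0, 100.0
mu = 0.0
draws = []
for _ in range(5000):
    # I-step: impute each censored value from N(mu, 1) truncated to [c, inf).
    z = obs.copy()
    a = (c - mu) / 1.0  # lower bound, standardized relative to loc=mu, scale=1
    z[censored] = truncnorm.rvs(a, np.inf, loc=mu, scale=1.0,
                                size=censored.sum(), random_state=rng)
    # P-step: draw mu from its conjugate normal posterior given the completed data.
    post_var = 1.0 / (1.0 / prior_var + n / 1.0)
    post_mean = post_var * (prior_mean / prior_var + z.sum() / 1.0)
    mu = rng.normal(post_mean, np.sqrt(post_var))
    draws.append(mu)

print(np.mean(draws[1000:]))  # posterior mean of mu, should land near 2
```

The appeal, as far as I can tell, is that each conditional draw is easy even though the observed-data posterior is awkward, and iterating the two steps gives a Markov chain whose stationary distribution is the posterior we actually want.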
Hi Abe,
I wasn’t aware of it before, but MathSciNet looks fantastic, and I’ll look forward to reading your upcoming reviews there. I wanted to mention that there are similar, more grassroots programs providing brief summaries of articles in other fields:
http://astrobites.org/ (astronomy; a project I’m involved in)
http://chembites.org/ (chemistry)
I think resources like this are critical tools for young scientists, reducing the barrier to entry for research, and they need to be developed in every field!