A report on suicide prevention crossed my desk recently; it includes a ranking in terms of deaths and in terms of years of life lost:
Monthly Archives: November 2014
An email I sent recently:
NYTimes ran an interesting article on paternity leave just as I was returning from my unpaid time off spent with my baby. It doesn’t say anything specific about university professors, but may be of interest to others on this list anyway: http://nyti.ms/1pvfxba
For a shorter and more personal version of this material, see a Facebook VP’s version here: https://www.facebook.com/tstocky/posts/996111776858
I found some archival videos of Donald Knuth teaching literate programming in his mathematical writing class in 1987:
Lots of other promising stuff on the Stanford page that links to it: http://scpd.stanford.edu/free-stuff/engineering-archives/donald-e-knuth-lectures
There is a proposal to drop some questions from the American Community Survey (ACS), and I was planning to use one of them in a project I’m trying to get started. I hope they keep it.
“I know there’s a lot of angst in the community right now,” Treat says. “But I think there’s a lack of understanding that the survey is under attack. So I encourage everybody to respond to the notice. The more responses we get, the better understanding there will be about the value of collecting this information.”
Some of the most interesting stuff that has crossed my desk on the spread of Ebola comes from the anthropologists who have been working in West Africa for a long time:
I should read some of these, and stash a few for the PGF journal club:
I’ve been digging for presentation materials lately, and one source I want to remember is this tumblr full of visual representations of big data: http://bigdatapix.tumblr.com/
Big Data is visualized in so many ways…all of them blue and with numbers and lens flare.
Here is an interesting resource to watch: The Exemplar Public Health Datasets Collection in Open Health Data. Large longitudinal cohorts for secondary data analysis, anyone?
You might be interested in a new data science competition site, http://www.drivendata.org/, which is like Kaggle meets change.org. They call it “Data science competitions to save the world”, which might be a little bit tongue-in-cheek. For the first cash-prize-awarding competition, they have a multi-class, multi-label classification challenge, which they are calling Box-Plots for Education.
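For anyone unfamiliar with the setup, “multi-class, multi-label” means each example gets a label in several categories at once, each drawn from its own set of classes. Here is a toy sketch of that structure (the line-items and label names are hypothetical, not the competition’s actual data), using a trivial most-frequent-class baseline in place of a real model:

```python
# In a multi-class, multi-label problem each row gets a label in
# *several* label dimensions at once. A common baseline fits one
# independent classifier per dimension; a majority-class "classifier"
# stands in for a real model here.

from collections import Counter

# hypothetical school-budget line-items with two label dimensions
train = [
    ("teacher salaries", {"function": "Instruction", "object": "Salaries"}),
    ("substitute pay",   {"function": "Instruction", "object": "Salaries"}),
    ("school bus fuel",  {"function": "Transportation", "object": "Supplies"}),
]

# "fit": record the majority class separately for each label dimension
model = {}
for label in ("function", "object"):
    counts = Counter(y[label] for _, y in train)
    model[label] = counts.most_common(1)[0][0]

# "predict": emit one label per dimension for any new line-item
prediction = {label: model[label] for label in model}
print(prediction)  # {'function': 'Instruction', 'object': 'Salaries'}
```

A real entry would replace the majority-class rule with a per-dimension text classifier, but the shape of the prediction — one class chosen independently in each label dimension — is the same.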
Greg Wilson has sparked an interesting discussion in the last little while, about writing automatic tests for scientific code. Here is his blog about it, which ends with a request for input about how you would unit test this physics simulation benchmark.
I’ve been thinking about testing recently myself, so this discussion was well timed. For me, the answer is that it is too late… you need to think about and maybe even write your tests _before_ you write your n-body simulation, or whatever. And it is too removed from context. The point of automatic tests is that you can run them again and again. But why would you run them again? It all depends on what you are going to change. If I’m reading this right, the reason Debian developers are interested in reference implementations of the n-body problem is to compare the speed of this algorithm when implemented in different programming languages. So the most important test is really a “regression test”: does the output generated match the output expected?
Actually, precisely this test is recommended:
ndiff -abserr 1.0e-8 the program output for N = 1000 against this output file, to check your program is correct before contributing.
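In Python terms, the check that ndiff performs amounts to something like the following sketch: compare each numeric token of the program’s output against the expected output file, allowing an absolute error of 1e-8 (the function name and example values here are mine, for illustration):

```python
# A minimal sketch of an ndiff-style regression test: every numeric
# token in the actual output must be within `abserr` of the
# corresponding token in the expected output.

def outputs_match(actual, expected, abserr=1.0e-8):
    """Return True if `actual` matches `expected` token by token,
    with numeric tokens allowed to differ by at most `abserr`."""
    a_tokens = actual.split()
    e_tokens = expected.split()
    if len(a_tokens) != len(e_tokens):
        return False
    for a, e in zip(a_tokens, e_tokens):
        try:
            if abs(float(a) - float(e)) > abserr:
                return False
        except ValueError:  # non-numeric tokens must match exactly
            if a != e:
                return False
    return True

# e.g. energies printed before and after the simulation,
# differing from the reference by 1e-9 -- within tolerance
expected = "-0.169075164\n-0.169087605"
actual = "-0.169075163\n-0.169087606"
print(outputs_match(actual, expected))  # True
```

The nice thing about this style of test is that it is completely agnostic about what is inside the simulation: any change, in any language, either reproduces the reference output to tolerance or it does not.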
Some of the things I want to test over and over and over again are: Is the input data formatted correctly? Does it look reasonable? Did I convert dates correctly? Did I make a change that breaks something which I will not see for hours (or days) when running on my full dataset?