We wrote this up for a conference, but it didn’t have proceedings, so I’m putting it online here: Christine Allen, James Collins, Zane Rankin, Kate Wilson, Derrick Tsoi, Kelly Compton, Enabling Model Complexity Through an Improved Workflow, presented at Modeling World Systems Conference, Washington DC, May 13-15, 2019.
You can find it here while it makes its way onto medRxiv (and eventually makes it through the peer-review process, I hope!)
The preprint is now up to 68 pages, and a table starting on page 65 lists all of the dates that David Pigott and his army of data analysts compiled as part of this work. I think it might be useful to other researchers, and if it is, I’m sure they have better things to do than transform it into a machine-readable format.
Gene Balk might be the highest-profile user of Census data in Seattle. He is a columnist for the Seattle Times, and as the “FYI Guy”, his columns center on facts and figures. Will the Census Bureau’s dedication to differential privacy ruin his work?
Let’s investigate. His July 17th column was titled “For first time this decade, a dip in King County’s white population, census data shows”. As you might suspect from the title, it turns entirely on census data, specifically the recently published 2018 Annual Population Estimates, which include estimates of the racial composition of the United States, each state and county, by year. In the column, Gene focused on how the race-specific population numbers changed from 2017 to 2018:
If I have understood correctly, there is not currently differential privacy in the disclosure avoidance system for Population Estimates. But what if there were? That would likely mean that the reported values would be noisy versions of the numbers now available on the Census website. Instead of saying that there were an estimated 744,955 people living in Seattle on July 1, 2018, it might say that there were 744,955 + X – Y, where X and Y are numbers drawn randomly from a geometric distribution. (It might be a bit more complicated than that, but I don’t think too much more complicated.) There is a crucial and yet-to-be-made decision about the scale of this noise distribution, which is controlled by a parameter often described as the “privacy loss budget”, also known as epsilon.
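To make that concrete, here is a minimal sketch of two-sided geometric noise in Python. This is my own illustrative function (the name, signature, and exact parametrization are assumptions, not the Bureau’s actual code); it just adds X – Y, where X and Y are i.i.d. geometric draws whose scale shrinks as epsilon grows:

```python
import numpy as np

def add_geometric_noise(true_count, epsilon, size=None, rng=None):
    """Return true_count + X - Y, where X and Y are i.i.d. geometric draws.

    The difference X - Y follows a two-sided geometric ("discrete
    Laplace") distribution; larger epsilon means less noise.
    """
    rng = np.random.default_rng(rng)
    p = 1 - np.exp(-epsilon)            # success probability of the geometric
    x = rng.geometric(p, size=size) - 1  # numpy counts trials, so subtract 1
    y = rng.geometric(p, size=size) - 1
    return true_count + x - y

# e.g. one noisy version of Seattle's estimated 2018 population:
print(add_geometric_noise(744_955, epsilon=0.25))
```

Under this parametrization, epsilon = 0.25 gives a standard deviation of roughly sqrt(2e^{-0.25})/(1 – e^{-0.25}) ≈ 6 people, which is tiny compared to a county-level population count.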
The privacy loss epsilon that people have been talking about for the decennial census is 0.25, which I recently learned is just for testing purposes.
Let’s reproduce the key figure with noise added. I started by trying an epsilon of 0.01, since that sounds very private to me. I’m not being very precise here, but I am following some code from the Census Bureau’s End-to-End test.
For the purposes of having something that visibly changes, let’s also try a reduced epsilon of 0.001, which sounds too small to me. Is this a useful way to investigate how DP might change things? It looks like even at this level of privacy loss, in 99 out of 100 noisy versions the story is the same: a dip in King County’s White population.
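Here is roughly how such a 100-noisy-versions check can be run, as a sketch. The counts below are made-up stand-ins, not the published King County estimates, and the noise follows the simple two-sided geometric mechanism, which may differ in detail from the Bureau’s End-to-End test code:

```python
import numpy as np

def noisy(count, epsilon, rng):
    """One two-sided-geometric-noised version of a count."""
    p = 1 - np.exp(-epsilon)
    return count + (rng.geometric(p) - 1) - (rng.geometric(p) - 1)

rng = np.random.default_rng(12345)
epsilon = 0.001

# Illustrative stand-ins for the 2017 and 2018 estimates (not real values):
white_2017, white_2018 = 1_400_000, 1_395_000  # a dip of 5,000

dips = sum(
    noisy(white_2018, epsilon, rng) < noisy(white_2017, epsilon, rng)
    for _ in range(100)
)
print(f"{dips} of 100 noisy versions still show a dip")
```

Even at epsilon = 0.001, the noise standard deviation is on the order of 1,400 people per count, so a dip of several thousand survives in nearly every noisy draw.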
At least in this case, DP will not change the story for the FYI guy!
I have become very interested in the 2020 US Census. There is a workshop at UW about it tomorrow. Expect more technical content about the decennial census on this blog soon. http://ccde.com.washington.edu/census2020/
My colleagues at IHME have been running a Statistics reading group for a year or more now, and this quarter I have joined them to carefully read Cameron Davidson-Pilon’s book Probabilistic Programming & Bayesian Methods for Hackers. We are now three sessions in, and it is going really well, IMHO. I’ve been doing some good thinking about what it takes to get started in applied statistics and Bayesian methods.
More thoughts on my recent 12 hours of Software-Carpentry-inspired teaching: one feedback card that I will keep in my rainy-day folder said the learner liked my jokes.
The jokes came in after the second break on the first day, before I figured out that 15 minutes was the right length for the break. I was trying to bring the group back together after only 5 minutes off, and having trouble. “Knock knock,” I said, not too loudly. “Who’s there?” answered a handful of learners who heard me over the racket. Now the room was starting to focus on this. But what did I have to deliver? “Isabel,” I offered, thanks to my 7-year-old neighbor.
Do you know this one? I need to get some Python-relevant material for future courses. Anyway, more of the class was now working with me on it. “Isabel who?” they politely offered. “Is a bell necessary on a bicycle?” Definitely a winner… you never know what will go over until you try it on stage.
I’ve recently completed 12 hours of teaching Introduction to Python and SQL for an audience of new Institute for Health Metrics and Evaluation (IHME) staff and fellows. SWC is a gem! (I have been thinking this for a while.)
In retrospect, what worked and what might I do differently next time?
Some SWC mechanics that worked well: Live coding, Hands-on exercises, Sticky notes, Jupyter notebooks, and friendly teaching assistants.
Some things to change: remember to give the big-picture framing for each section, do more explanation of solutions after hands-on exercises, share the syllabus ahead of time, and emphasize that this is *introduction* material.
Some changes that I made mid-stream: longer breaks (15 minutes every hour or so), connect the examples to IHME-specific domains.
I also did not use an etherpad until we got through Creating Functions (Section 6 in the Python Inflammation Lesson). That might have meant too much typing in the first two sessions, and once introduced, the etherpad was definitely appreciated.