I have been a fan of this educational offering for a while now, and I have been mentioning that for a while now, too. But I am moved to say it again, because I’m planning a four-session Intro to Python training for aspiring Health Metrics Scientists, and the SWC curriculum is making that so easy. It could have been so hard. ❤ u SWC.
Monthly Archives: October 2018
Some papers from this summer’s SummerSim (editor’s note: summer-before-last) are available online now:
- Untangling Uncertainty with Common Random Numbers: A Simulation Study
- Microsimulation Models for Cost-Effectiveness Analysis: A Review and Introduction To CEAM
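The first title names common random numbers, a variance-reduction trick for simulation comparisons: when two scenarios share the same random draws, the shared noise cancels when you take their difference. Here is a minimal sketch of the idea with an invented toy cost model (the function and numbers below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def cost(effectiveness, noise):
    # Hypothetical cost model: baseline minus treatment effect, plus noise.
    return 100.0 - 10.0 * effectiveness + 5.0 * noise

n = 10_000

# Independent draws: each scenario gets its own noise stream.
diff_indep = cost(1.0, rng.normal(size=n)) - cost(0.8, rng.normal(size=n))

# Common random numbers: both scenarios share one noise stream,
# so (in this linear toy) the noise cancels exactly in the difference.
shared = rng.normal(size=n)
diff_crn = cost(1.0, shared) - cost(0.8, shared)

print(diff_indep.std(), diff_crn.std())
```

The CRN difference has far lower variance than the independent-draws difference, which is what makes scenario comparisons less noisy.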
I sat in on a CSE seminar recently, where a big crowd was exploring the state of the art in human-and-computer-together intelligence. It was really fun. The topic was a discussion of a paper on human/computer collaboration from the 1990s:
Grosz, B. J. (1996). Collaborative systems (AAAI-94 presidential address). AI magazine, 17(2), 67.
But just as fun as the classic article and the discussion it inspired was an even older vision of what digital assistants might be, from Apple in the 1980s:
I left thinking that a knowledge navigator like the one Apple envisioned is not really collaborating, but when it makes the Brazil and Sahara simulations work together, that might be collaboration. To be a true collaborator, though, both agents need to want something (or “desire” something?) for themselves.
I hope I have time to attend again soon.
I’m not sure this list is useful, but at least I’ll find it when I next search:
I read random papers once in a while from the AMS Math Reviews program, and I recently read one about an MCMC approach to X-ray imaging. It was a fun, detailed look at a few different ways to do sampling, using effective sample size to figure out which worked better when.
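Effective sample size (ESS) discounts an MCMC chain's length by its autocorrelation. As a rough sketch, here is one common estimator: sum the sample autocorrelations until they first go non-positive. This is an illustrative implementation, not the estimator used in the paper:

```python
import numpy as np

def effective_sample_size(chain, max_lag=None):
    """Estimate the ESS of a 1-D chain by summing positive autocorrelations.

    Truncates at the first non-positive autocorrelation (a simple,
    common rule; real MCMC libraries use more careful estimators).
    """
    x = np.asarray(chain, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x.var()
    if var == 0:
        return float(n)
    max_lag = max_lag or n // 2
    rho_sum = 0.0
    for lag in range(1, max_lag):
        rho = np.dot(x[:-lag], x[lag:]) / (n * var)
        if rho <= 0:
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

# A strongly autocorrelated AR(1) chain has ESS far below its length n.
rng = np.random.default_rng(0)
n, phi = 10_000, 0.9
chain = np.empty(n)
chain[0] = rng.normal()
for t in range(1, n):
    chain[t] = phi * chain[t - 1] + rng.normal()
print(effective_sample_size(chain))
```

For an AR(1) chain with coefficient 0.9, theory gives ESS near n(1 − φ)/(1 + φ) ≈ 526 here, even though the chain has 10,000 samples.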
It did also leave me wondering what the giant X-ray machines buried 1,000 feet underground are for, though.
This turned out to be a bit of a downer, but it was a good learning exercise, and the general approach will be useful for generating test data on a different project. See notebook here.
This was about two hours of fuss that I wish I had avoided. With my updated Jupyter Notebooks, I need to be explicit about which conda environment for Python I am using.
It is all laid out clearly, if only I had been looking in this bit of the IPython docs:
For example, using conda environments, install a Python (myenv) Kernel in a first environment:

```
source activate myenv
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
```
And in a second environment, after making sure ipykernel is installed in it:
```
source activate other-env
python -m ipykernel install --user --name other-env --display-name "Python (other-env)"
```