Subject: Talk: Balasubramanian Sivan / Optimal Crowdsourcing Contests / Wed 1/28, 3:30pm / CSE 403
SPEAKER: Balasubramanian Sivan (MSR)
TITLE: Optimal Crowdsourcing Contests
WHEN: Wednesday, 1/28, 3:30pm
WHERE: CSE 403
We study the design and approximation of optimal crowdsourcing
contests. Crowdsourcing contests can be modeled as all-pay auctions because
entrants must exert effort up-front to enter. Unlike all-pay auctions where a
usual design objective would be to maximize revenue, in crowdsourcing contests,
the principal only benefits from the submission with the highest quality. We
give a theory for optimal crowdsourcing contests that mirrors the theory of
optimal auction design. We also compare crowdsourcing contests with more
conventional means of procurement and show that crowdsourcing contests are
constant-factor approximations to conventional methods.
Joint work with Shuchi Chawla and Jason Hartline.
From: Abraham D. Flaxman
Sorry I missed this. Jason told me about this project a little while back, and it convinced me to enter a contest. It was more fun than writing a grant proposal, and when it was rejected they gave me a 2nd-runner-up cash prize…
A cool addition to the big verbal autopsy study I worked on a few years ago is out now: “symptomatic diagnosis” takes the verbal autopsy approach and applies it to find out what ails people non-fatally. http://www.biomedcentral.com/1741-7015/13/15
I am flipping through yet another National Academy report this week. They know what hooks me. This time: Reaching Students: What Research Says About Effective Instruction in Undergraduate Science and Engineering (2015). http://www.nap.edu/catalog/18687/reaching-students-what-research-says-about-effective-instruction-in-undergraduate
Lots of ideas for little changes to my class in here…
I mean, not exactly what I will do, but lots of inspiration.
A set of slides from a talk by Matthew Salganik crossed my inbox recently, titled “Open and Reproducible Research: Goals, Obstacles, and Solutions”. Good stuff! I liked the *bonus points* in the Data-is-available section:
bonus points for releasing extra variables that are not needed to reproduce a specific analysis.
This gets at what I think is really the point of reproducible research: to make it faster and easier to make new knowledge.
I wonder if this will be useful: Modeling Good Research Practices—Overview: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1 http://www.ispor.org/workpaper/Modeling_Methods/Modeling_Good_Research_Practices_Overview-1.pdf
It has quite a lot of best practices!
I’ve got a fun class going this quarter, on “artificial intelligence for health metricians”, and the course content mixed with some of the student interest has got me looking at the options for doing Gaussian process regression in Python. `PyMC2` has some nice stuff, but the `sklearn` version fits with the rest of my course examples more naturally, so I’m using that instead.
But `sklearn` doesn’t have the fanciest of fancy covariance functions implemented, and at IHME we have been down the road of the Matern covariance function for over five years now. It’s in `PyMC`, so I took a crack at a mash-up. (Took a mash at a mash-up?) There is some room for improvement, but it is a start. If you need to do non-parametric regression for something that is differentiable more than once, but less than infinitely many times, you could try starting here: http://nbviewer.ipython.org/gist/aflaxman/af7bdb56987c50f3812b
p.s. Chris Fonnesbeck has some great notes on doing stuff like this and much more here: http://nbviewer.ipython.org/github/fonnesbeck/Bios366/blob/master/notebooks/Section5_1-Gaussian-Processes.ipynb
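For concreteness, here is a minimal sketch of what Matern-kernel GP regression looks like, assuming a newer `sklearn` that ships `Matern` in `sklearn.gaussian_process.kernels` (the notebook above instead borrows the covariance from `PyMC`); the toy data here is made up for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# toy data: noisy observations of a smooth function
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=30)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=30)

# nu controls smoothness: nu=1.5 gives once-differentiable sample paths,
# nu=2.5 gives twice-differentiable -- the "more than once, less than
# infinitely many times" regime (the RBF kernel is the nu -> infinity limit)
kernel = Matern(length_scale=1.0, nu=1.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.1**2)
gp.fit(X, y)

# posterior mean and pointwise uncertainty on a grid
X_new = np.linspace(0, 10, 100)[:, None]
y_pred, y_std = gp.predict(X_new, return_std=True)
```

The `nu` parameter is the whole reason to reach for Matern here: it lets you dial in exactly how many derivatives you believe the underlying function has.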
It’s not quite d3-broke-and-made-art quality, but I like the plot in this bug report: https://github.com/jakevdp/mpld3/issues/274#issuecomment-68576519