I learned about it because it doesn’t work in `mpld3`… just one more benefit of being part of an open-source project. It would be so cool to have an `mpld3` version with some interactivity included, since interactivity can address one pitfall of the stacked bar chart: the challenge of comparing lengths with different baselines.
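For context on that pitfall: a stacked bar is just `plt.bar` with a `bottom=` offset, which is exactly why the inner segments are hard to compare: every segment after the first starts from a different baseline. A minimal matplotlib sketch (data and names are made up for illustration):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt

groups = ['A', 'B', 'C']
bottom_series = [3, 5, 2]
top_series = [4, 1, 6]

fig, ax = plt.subplots()
ax.bar(groups, bottom_series, label='series 1')
# each segment of series 2 starts at a different height -- the comparison pitfall
ax.bar(groups, top_series, bottom=bottom_series, label='series 2')
ax.legend()
```

Hover-to-read-off-values interactivity, like `mpld3` aims for, is one way around having to eyeball those shifted baselines.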


```python
import sklearn.tree

def print_tree(t, root=0, depth=1):
    """Print an sklearn tree (t = clf.tree_) as equivalent Python code."""
    if depth == 1:
        print('def predict(X_i):')
    indent = '    ' * depth
    print(indent + '# node %s: impurity = %.2f' % (str(root), t.impurity[root]))
    left_child = t.children_left[root]
    right_child = t.children_right[root]
    if left_child == sklearn.tree._tree.TREE_LEAF:
        print(indent + 'return %s  # (node %d)' % (str(t.value[root]), root))
    else:
        print(indent + 'if X_i[%d] < %.2f:  # (node %d)'
              % (t.feature[root], t.threshold[root], root))
        print_tree(t, root=left_child, depth=depth + 1)
        print(indent + 'else:')
        print_tree(t, root=right_child, depth=depth + 1)
```
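To see the traversal logic without fitting a real model, here is a hedged, self-contained sketch: a hand-built stub with the same array layout sklearn's `Tree` object exposes (`children_left`/`children_right` use -1, i.e. `sklearn.tree._tree.TREE_LEAF`, for leaves), and a variant of `print_tree` that collects lines instead of printing:

```python
from types import SimpleNamespace

TREE_LEAF = -1  # the leaf marker sklearn uses in children_left/children_right

# Stub tree: root splits on feature 0 at 0.5; nodes 1 and 2 are leaves.
stub = SimpleNamespace(
    impurity=[0.5, 0.0, 0.0],
    children_left=[1, TREE_LEAF, TREE_LEAF],
    children_right=[2, TREE_LEAF, TREE_LEAF],
    feature=[0, -2, -2],
    threshold=[0.5, -2.0, -2.0],
    value=[[[1.0, 1.0]], [[1.0, 0.0]], [[0.0, 1.0]]],
)

def tree_to_lines(t, root=0, depth=1):
    """Same recursion as print_tree, returning the generated source as lines."""
    lines = []
    if depth == 1:
        lines.append('def predict(X_i):')
    indent = '    ' * depth
    lines.append(indent + '# node %s: impurity = %.2f' % (root, t.impurity[root]))
    left, right = t.children_left[root], t.children_right[root]
    if left == TREE_LEAF:
        lines.append(indent + 'return %s  # (node %d)' % (t.value[root], root))
    else:
        lines.append(indent + 'if X_i[%d] < %.2f:  # (node %d)'
                     % (t.feature[root], t.threshold[root], root))
        lines.extend(tree_to_lines(t, root=left, depth=depth + 1))
        lines.append(indent + 'else:')
        lines.extend(tree_to_lines(t, root=right, depth=depth + 1))
    return lines

print('\n'.join(tree_to_lines(stub)))
```

On a fitted classifier you would pass `clf.tree_` instead of the stub; the printed `predict` mirrors the tree's if/else structure node by node.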

Did I do this for MILK a few years ago? I’m becoming an absent-minded professor ahead of my time.


Good, simple ideas are our most precious intellectual commodity.


http://www.carrierroutes.com/ZIPCodes.html


But I had a great idea, or at least one that I think is great: see what people are confused by online. I tried this out for last week’s lecture on cross-validation, using the stats.stackexchange site: http://stats.stackexchange.com/questions/tagged/cross-validation?sort=votes&pageSize=50

After reading a ton of these, I decided that if my students know when they need test/train/validation splits and when they can get away with test/train splits, then they’ve really figured things out. Now I can’t find the question that I thought distilled this best, though.
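A hedged sketch of that distinction: if you only need the generalization error of one fixed model, test/train suffices; the moment you *choose* among models (say, a regularization strength), you need a third split, because the split you optimized over gives an optimistically biased error estimate. A toy pure-Python illustration (the data and the one-parameter ridge "model" are made up):

```python
import random

random.seed(0)
# toy data: y = 2x + noise
data = [(x, 2 * x + random.gauss(0, 0.1)) for x in [i / 10 for i in range(100)]]
random.shuffle(data)
train, valid, test = data[:60], data[60:80], data[80:]

def fit_slope(pts, ridge):
    # least-squares slope through the origin with a ridge penalty
    return sum(x * y for x, y in pts) / (sum(x * x for x, y in pts) + ridge)

def mse(slope, pts):
    return sum((y - slope * x) ** 2 for x, y in pts) / len(pts)

# model selection happens on the validation split...
best_ridge = min([0.0, 1.0, 10.0], key=lambda r: mse(fit_slope(train, r), valid))
# ...so the test split still gives an unbiased final estimate
final = fit_slope(train + valid, best_ridge)
print('chose ridge=%s, test MSE=%.3f' % (best_ridge, mse(final, test)))
```

Drop the hyperparameter search and the validation split becomes unnecessary; that is the boundary the students need to recognize.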


Does the talk exist somewhere?


`From: theory-group-admin@cs.washington.edu`

Subject: Talk: Balasubramanian Sivan / Optimal Crowdsourcing Contests / Wed 1/28, 3:30pm / CSE 403

SPEAKER: Balasubramanian Sivan (MSR)

TITLE: Optimal Crowdsourcing Contests

WHEN: Wednesday, 1/28, 3:30pm

WHERE: CSE 403

ABSTRACT:

We study the design and approximation of optimal crowdsourcing contests. Crowdsourcing contests can be modeled as all-pay auctions because entrants must exert effort up-front to enter. Unlike all-pay auctions, where a usual design objective would be to maximize revenue, in crowdsourcing contests the principal only benefits from the submission with the highest quality. We give a theory for optimal crowdsourcing contests that mirrors the theory of optimal auction design. We also compare crowdsourcing contests with more conventional means of procurement and show that crowdsourcing contests are constant factor approximations to conventional methods.

Joint work with Shuchi Chawla and Jason Hartline.
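An aside not from the talk, just to unpack the all-pay modeling: in the textbook symmetric all-pay auction with n bidders and i.i.d. Uniform(0,1) values, the equilibrium bid is β(v) = ((n−1)/n)·vⁿ, and since every entrant pays their bid win or lose (the "effort up-front"), revenue equivalence predicts expected revenue (n−1)/(n+1), same as a standard auction. A quick Monte Carlo sanity check:

```python
import random

random.seed(1)
n, trials = 3, 200_000

def bid(v, n):
    # symmetric equilibrium bid with i.i.d. Uniform(0,1) values
    return (n - 1) / n * v ** n

# all-pay: revenue per contest is the sum of ALL bids, not just the winner's
revenue = sum(
    sum(bid(random.random(), n) for _ in range(n)) for _ in range(trials)
) / trials
print('simulated %.3f vs theory %.3f' % (revenue, (n - 1) / (n + 1)))
```

The talk's twist is that a contest principal cares about the max quality rather than this revenue, which is what changes the optimal design problem.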

`From: Abraham D. Flaxman`

Subject: FW: Talk: Balasubramanian Sivan / Optimal Crowdsourcing Contests / Wed 1/28, 3:30pm / CSE 403

Sorry I missed this. Jason told me about this project a little while back, and it convinced me to enter a contest. It was more fun than writing a grant proposal, and when it was rejected they gave me a 2nd runner up cash prize…

–Abie
