Interesting Q/A: some good questions about data transformation

I’m continuing my class-prep practice of searching through Cross-Validated questions with tags corresponding to upcoming class topics, and here are some interesting ones I found about data transformations:

http://stats.stackexchange.com/questions/46418/why-is-the-square-root-transformation-recommended-for-count-data

http://stats.stackexchange.com/questions/1444/how-should-i-transform-non-negative-data-including-zeros

http://stats.stackexchange.com/questions/27951/when-are-log-scales-appropriate

http://stats.stackexchange.com/questions/90149/pitfalls-to-avoid-when-transforming-data

http://stats.stackexchange.com/questions/60777/what-are-the-assumptions-of-negative-binomial-regression

The last one isn’t really about data transformations, but is still interesting.
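
For quick reference, the transformations those first two questions debate are one-liners in numpy. A toy sketch, with made-up counts:

import numpy as np

counts = np.array([0, 1, 3, 10, 250])  # made-up count data
sqrt_counts = np.sqrt(counts)          # the square-root transform for counts
log_counts = np.log1p(counts)          # log(x + 1), which tolerates the zeros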

Filed under machine learning

Tables of Stacked Bars in mpl (but not mpld3)

Here is a little feature in Matplotlib that I had never seen before: stacked bar plots with a table attached. Perhaps too ugly for my Iraq Mortality stacked bar charts, but definitely handy for exploratory work.

I learned about it because it doesn’t work in `mpld3`… just one more benefit of being part of an open-source project. It would be so cool to have an `mpld3` version with some interactivity included, since interactivity can address one of the pitfalls of the stacked bar chart: the challenge of comparing lengths that start from different baselines.
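
For my own future reference, here is a minimal sketch of the feature, loosely following the table demo in the Matplotlib docs; the numbers are made up:

import numpy as np
import matplotlib.pyplot as plt

# made-up data: each row is one series to stack, each column one category
data = np.array([[10., 20., 15.],
                 [ 5., 12.,  8.]])
rows = ['series 1', 'series 2']
columns = ['A', 'B', 'C']

x = np.arange(len(columns))
bottom = np.zeros(len(columns))
for i, row_label in enumerate(rows):
    plt.bar(x, data[i], width=.5, bottom=bottom, label=row_label)
    bottom += data[i]

# the feature in question: a table of the same numbers attached below the axes
cell_text = [['%.0f' % v for v in row] for row in data]
plt.table(cellText=cell_text, rowLabels=rows, colLabels=columns, loc='bottom')
plt.xticks([])                    # the table provides the column labels
plt.subplots_adjust(bottom=.25)   # leave room for the table
plt.show()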

Filed under dataviz

ML in Python: Getting the Decision Tree out of sklearn

I helped my students understand the decision tree classifier in sklearn recently. Maybe they think I helped too much. But I think it was good for them. We did an interesting little exercise, too, writing a program that writes a program that represents a decision tree. Maybe it will be useful to someone else as well:

import sklearn.tree

def print_tree(t, root=0, depth=1):
    """Print a fitted sklearn tree (e.g. clf.tree_) as Python source code."""
    if depth == 1:
        print('def predict(X_i):')
    indent = '    '*depth
    print(indent + '# node %s: impurity = %.2f' % (str(root), t.impurity[root]))
    left_child = t.children_left[root]
    right_child = t.children_right[root]

    if left_child == sklearn.tree._tree.TREE_LEAF:
        print(indent + 'return %s # (node %d)' % (str(t.value[root]), root))
    else:
        # sklearn sends samples with X_i[feature] <= threshold to the left child
        print(indent + 'if X_i[%d] <= %.2f: # (node %d)' % (t.feature[root], t.threshold[root], root))
        print_tree(t, root=left_child, depth=depth+1)

        print(indent + 'else:')
        print_tree(t, root=right_child, depth=depth+1)

See it in action here.
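
To try it on a fresh tree, something like this should work (a quick sketch using the iris data):

import sklearn.datasets
import sklearn.tree

# fit a small tree and print it back out as Python code
iris = sklearn.datasets.load_iris()
clf = sklearn.tree.DecisionTreeClassifier(max_depth=2)
clf.fit(iris.data, iris.target)

print_tree(clf.tree_)  # the fitted low-level tree lives in the .tree_ attribute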

Did I do this for MILK a few years ago? I’m becoming an absent-minded professor ahead of my time.

Filed under machine learning

Data Science Seminars

These seminars that eScience and company are putting on are great. Once in a while I have to go to the IHME seminars scheduled at a competing time, so someone else should attend and tell me about this one: http://data.uw.edu/seminar/2015/mullainathan/

Filed under Uncategorized

Stephen Few on Missing Values

A new edition of the Visual Business Intelligence Newsletter crossed my inbox recently, on how to display time series with missing values and incomplete periods: http://www.perceptualedge.com/articles/visual_business_intelligence/missing_values_and_incomplete_periods_in_time_series.pdf

Good, simple ideas are our most precious intellectual commodity.
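
One simple idea in that spirit, which Matplotlib supports out of the box: a line plot breaks wherever the data are NaN, leaving a visible gap rather than silently connecting across the missing stretch. A toy sketch with made-up data:

import numpy as np
import matplotlib.pyplot as plt

t = np.arange(30)
y = np.random.normal(size=30).cumsum()  # made-up time series
y[10:15] = np.nan                       # pretend these observations are missing

plt.plot(t, y, marker='o')  # the line (and markers) skip the NaN stretch
plt.show()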

Filed under Uncategorized

That Docker thing sounds promising

I missed this presentation, but I am going to figure out how to use Docker for reproducible research soon! http://benmarwick.github.io/UW-eScience-docker-for-reproducible-research/#1

Filed under software engineering

How many 3-digit zip codes are there?

There are 929 3-digit ZIP Codes in the USA.

http://www.carrierroutes.com/ZIPCodes.html

Filed under global health