Tag Archives: random effects

Laplace approximation in Python: another cool trick with PyMC3

I admit that I’ve been skeptical of the complete rewrite of PyMC that underlies version 3. It seemed to me to be motivated by an interest in unproven new step methods that require knowing the derivative of the posterior distribution. But it is really coming together, and regardless of whether the Hamiltonian Monte Carlo stuff pays off, there are some cool tricks you can do when you can get derivatives without a hassle.

Exhibit 1: a Laplace approximation approach to fitting mixed-effects models (as described at http://www.seanet.com/~bradbell/tmb.htm)

http://nbviewer.ipython.org/gist/aflaxman/9dab52248d159e02b2ae
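The gist of the trick, as a minimal sketch (this is just the idea, not the notebook’s code; the toy model and data are made up, but find_MAP and find_hessian are PyMC3 functions):

```python
import numpy as np
import pymc3 as pm

# Laplace approximation sketch: find the posterior mode, then approximate
# the posterior with a Gaussian centered at the mode, whose covariance is
# the inverse Hessian of the negative log-posterior there.  PyMC3's
# automatic differentiation makes the Hessian painless to get.
with pm.Model():
    mu = pm.Normal('mu', mu=0., sd=10.)
    pm.Normal('y_obs', mu=mu, sd=1., observed=np.random.randn(20))

    mode = pm.find_MAP()        # posterior mode, by optimization
    H = pm.find_hessian(mode)   # Hessian of -log p at the mode

cov = np.linalg.inv(H)          # covariance of the Gaussian approximation
print(mode['mu'], np.sqrt(cov[0, 0]))
```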


Filed under software engineering, statistics

MCMC in Python: A random effects logistic regression example

I have had this idea for a while, to go through the examples from the OpenBUGS webpage and port them to PyMC, so that I can be sure I’m not going much slower than I could be, and so that people can compare MCMC samplers “apples-to-apples”. But it’s easy to have ideas. Acting on them takes more time.

So I’m happy that I finally found a little time to sit with Kyle Foreman and get started. We ported one example over, the “seeds” random effects logistic regression. It is a nice little example, and it also gave me a chance to put something in an IPython Notebook, which I continue to think is a great way to share code.

[py] [pdf]
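For a taste of what the port looks like, here is a rough sketch of the seeds model in PyMC (version 2) notation, with a made-up four-plate mini-dataset standing in for the real seeds data (see the notebook for the actual example):

```python
import numpy as np
import pymc as mc  # PyMC 2.x

# made-up mini-dataset: r seeds germinated out of n, on plates
# crossing seed type x1 with root extract x2 (stand-in for the real data)
r = np.array([10, 23, 23, 26])
n = np.array([39, 62, 81, 51])
x1 = np.array([0., 0., 1., 1.])
x2 = np.array([0., 1., 0., 1.])

# "fixed" effects, with diffuse priors
a0 = mc.Normal('a0', mu=0., tau=1e-6)
a1 = mc.Normal('a1', mu=0., tau=1e-6)
a2 = mc.Normal('a2', mu=0., tau=1e-6)
a12 = mc.Normal('a12', mu=0., tau=1e-6)

# "random" effect for each plate, with precision estimated from the data
tau_b = mc.Gamma('tau_b', alpha=.001, beta=.001)
b = mc.Normal('b', mu=0., tau=tau_b, size=len(r))

@mc.deterministic
def p(a0=a0, a1=a1, a2=a2, a12=a12, b=b):
    # logit(p) is linear in the covariates plus a plate-level random effect
    return 1. / (1. + np.exp(-(a0 + a1*x1 + a2*x2 + a12*x1*x2 + b)))

obs = mc.Binomial('obs', n=n, p=p, value=r, observed=True)

m = mc.MCMC([a0, a1, a2, a12, tau_b, b, p, obs])
m.sample(iter=20000, burn=10000, thin=10)
```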


Filed under MCMC, software engineering

What is this “effects” business?

I’ve got to figure out what people mean when they say “fixed effect” and “random effect”, because I’ve been confused about it for a year and I’ve been hearing it all the time lately.

Bayesian Data Analysis is my starting guide, which includes a footnote on page 391:

The terms ‘fixed’ and ‘random’ come from the non-Bayesian statistical tradition and are somewhat confusing in a Bayesian context, where all unknown parameters are treated as ‘random’. The non-Bayesian view considers fixed effects to be fixed unknown quantities, but the standard procedures proposed to estimate these parameters, based on specified repeated-sampling properties, happen to be equivalent to the Bayesian posterior inferences under a noninformative (uniform) prior distribution.

That doesn’t totally resolve my confusion, though, because my doctor-economist colleagues are often asking for the posterior mean of the random effects, or similarly non-non-Bayesian-sounding quantities.
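To make that concrete: in a hierarchical model, the “random” effects are just more unknown parameters, so nothing stops you from reporting their posterior means. Here is a sketch, in PyMC (version 2) notation with hypothetical names and made-up data, of the quantity my colleagues are asking for:

```python
import numpy as np
import pymc as mc  # PyMC 2.x

# hypothetical data: 10 groups, 5 noisy observations per group
group = np.repeat(np.arange(10), 5)
y = np.random.randn(50)

# random effects: one intercept per group, drawn from a shared
# distribution whose precision tau_u is itself estimated
tau_u = mc.Gamma('tau_u', alpha=.001, beta=.001)
u = mc.Normal('u', mu=0., tau=tau_u, size=10)

@mc.deterministic
def mu(u=u):
    return u[group]  # each observation gets its group's effect

obs = mc.Normal('obs', mu=mu, tau=1., value=y, observed=True)

m = mc.MCMC([tau_u, u, mu, obs])
m.sample(iter=5000, burn=2500)

# "the posterior mean of the random effects":
print(u.stats()['mean'])
```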

I was about to formulate my working definition and see how long I could stick to it, but then I was volunteered to teach a seminar on this very topic! So instead of doing the work now, I turn to you, wise internet, to tell me how I can understand this thing.


Filed under statistics