I wrote a case study on the Lotka-Volterra model in Stan, simply because it seems to be the hello-world of ODE models in applied math courses. Stan can compute both the log likelihood and derivatives of an ODE system where the user only has to code the ODE. Our autodiff system does the heavy lifting by internally creating a coupled ODE with sensitivities so that we can control error in the derivatives. The full technique's in our code and also described in our arXiv paper on Stan's autodiff. Our ODE devs (Yi Zhang is leading the effort now) have done a lot more with stiff solvers and efficient internal Jacobians than when I wrote the arXiv paper, but it describes the basic technique, which works fine for Lotka-Volterra. Yi's even got some prototype PDE solvers working with sensitivities.
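In case the coupled-system idea isn't clear, here's a rough sketch in Python with SciPy rather than Stan's internals. The idea is to augment the state with the sensitivities s[i, j] = ∂y_i/∂θ_j, which obey their own ODE driven by the state Jacobian, so the solver's error control applies to the derivatives too. The parameter values, initial conditions, and tolerances here are just placeholders, not anything from the case study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra: du/dt = a*u - b*u*v,  dv/dt = -c*v + d*u*v
def lv_rhs(u, v, a, b, c, d):
    return np.array([a * u - b * u * v, -c * v + d * u * v])

def augmented_rhs(t, z, a, b, c, d):
    # z = [u, v, s11, ..., s24] where s[i, j] = d y_i / d theta_j
    u, v = z[0], z[1]
    s = z[2:].reshape(2, 4)
    # Jacobian of the right-hand side with respect to the state (u, v)
    J = np.array([[a - b * v, -b * u],
                  [d * v, -c + d * u]])
    # Partials of the right-hand side with respect to (a, b, c, d)
    F = np.array([[u, -u * v, 0.0, 0.0],
                  [0.0, 0.0, -v, u * v]])
    ds = J @ s + F  # sensitivity ODE: s' = J s + dF/dtheta
    return np.concatenate([lv_rhs(u, v, a, b, c, d), ds.ravel()])

theta = (1.0, 0.1, 1.5, 0.075)           # made-up parameter values
z0 = np.concatenate([[10.0, 5.0], np.zeros(8)])  # sensitivities start at zero
sol = solve_ivp(augmented_rhs, (0.0, 10.0), z0, args=theta,
                rtol=1e-8, atol=1e-8)
u_T, v_T = sol.y[0, -1], sol.y[1, -1]
sens_T = sol.y[2:, -1].reshape(2, 4)     # d(u, v)/d(a, b, c, d) at t = 10
```

The sensitivities at the observation times then feed into the chain rule for the log-likelihood gradient; Stan assembles that part automatically.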
On another related topic, how do you feel about using emulation when it's possible to draw from the sampling distribution (the likelihood viewed as a distribution with fixed parameters)? The idea is to fit a GP or neural network or some other non-parametric model of the intractable density function using simulations as data. I'm about to set off on writing a case study on that in Stan, as people have been using it for complicated PDE models that are just one component of a larger joint model. But I haven't seen much mention of the technique being used in the wild. The best use case I know is Maggie Lieu's model of galactic mass using gravitationally lensed data.
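To make the emulation idea concrete, here's a toy sketch in Python with scikit-learn. The "expensive simulator" is stood in for by a closed-form log likelihood (so we can check the emulator); in the real use case each training point would come from running the PDE solver at one parameter value, and the fitted GP would replace the solver inside the larger joint model. Everything here is illustrative, not from the planned case study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for an expensive simulator: pretend this log likelihood
# is intractable and can only be evaluated pointwise.
def expensive_log_lik(theta):
    return -0.5 * (theta - 2.0) ** 2

# Design points: parameter values where we "run the simulator"
rng = np.random.default_rng(0)
theta_train = rng.uniform(-3.0, 7.0, size=25).reshape(-1, 1)
y_train = expensive_log_lik(theta_train).ravel()

# Fit a GP emulator of log p(y | theta) as a function of theta
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(theta_train, y_train)

# Cheap surrogate evaluations anywhere in the parameter space
theta_test = np.linspace(-2.0, 6.0, 50).reshape(-1, 1)
emulated = gp.predict(theta_test)
```

The GP also gives predictive uncertainty, which is handy for deciding where to add more simulator runs, though propagating that emulator uncertainty into the posterior is its own can of worms.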
