My google-fu failed me on this one, so for the next person googling “stupid fast library for solving a system of equations in R such that the trade off between speed and ‘robustness’ is way on the speed side”, you’re welcome.

I’m on the final leg of my dissertation and I have some code that loops through a discrete state space with about 300k points. Each iteration requires solving a two-equation non-linear system. Because all of this sits inside a loop that is (supposed to be) converging on a pair of value functions, it’s not super critical that the solutions to this system are “correct” every time. I want speed, baby.

Here are the R libraries I tried:

- **nleqslv** with the functions written to have a high penalty: mean 0.020 seconds, standard deviation 0.007 seconds
- **optim** using the “make the system a single quadratic” trick and a high penalty: mean 0.253 seconds, standard deviation 0.130 seconds
- **optim** using the quadratic trick and the “L-BFGS-B” algorithm to deal with boundedness: mean 0.014 seconds, standard deviation 0.032 seconds
- **pso** (particle swarm optimization) with bounds: mean 20 seconds!

I was kind of excited about particle swarm optimization, but it was waaaaaay slow (at least with the default settings). The best was optim (available with the core R installation) using the quadratic trick with the “L-BFGS-B” algorithm. The quadratic trick is just writing every equation in the system with the right-hand side equal to zero and then having the algorithm minimize the sum of squares of those equations. Interestingly, the nleqslv library, which is built for solving “medium-sized” systems, did almost as well on average and had a much smaller variance. I’m not sure if this is because nleqslv gives up earlier or if it does a better job of staying out of pathological parts of the parameter space relative to optim.
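
As a concrete illustration, here is a minimal sketch of the quadratic trick in R. The two-equation system below is made up for the example, not the one from my dissertation code:

```r
# Hypothetical two-equation non-linear system, written so that
# each equation has a right-hand side of zero:
#   f1(x) = x1^2 + x2 - 3 = 0
#   f2(x) = x1 - x2^2 + 1 = 0
f1 <- function(x) x[1]^2 + x[2] - 3
f2 <- function(x) x[1] - x[2]^2 + 1

# The "quadratic trick": minimize the sum of squares of the equations.
# At a root of the system this objective is exactly zero.
sum_sq <- function(x) f1(x)^2 + f2(x)^2

# Bounded minimization with L-BFGS-B, as in the timings above.
sol <- optim(par = c(1, 1), fn = sum_sq, method = "L-BFGS-B",
             lower = c(-5, -5), upper = c(5, 5))

sol$par    # approximate solution of the system
sol$value  # near zero if a root was actually found

# For comparison, nleqslv solves the system directly:
# library(nleqslv)
# nleqslv(c(1, 1), function(x) c(f1(x), f2(x)))
```

Checking `sol$value` against a tolerance is a cheap way to catch the iterations where the minimizer stalled somewhere other than a root.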

Anyway, there you go. If you find yourself solving a small system of equations lots and lots of times, optim should do the trick for you in terms of speed.

PS – R seems to be the wrong environment for this task. Fortran code that does a very similar thing runs about 10 times faster.

1. The relative performance of optimization algorithms is highly problem-dependent. The algorithm that works best for one application is unlikely to be the best for another.

2. R is a rather slow front-end giving access to a bunch of fast C and FORTRAN libraries. In particular, optim’s L-BFGS-B algorithm is implemented in FORTRAN. The reason your code runs slower in R than in FORTRAN probably has nothing to do with the optimization routine; the likely culprit is the loop that sets up each optimization problem and calls optim. There are a bunch of tricks for speeding up R code in such situations (which also work in environments such as MATLAB/Octave), and they all basically amount to two ideas: (1) moving as much work as possible from the slow R front-end to the fast libraries, and (2) pre-allocating and re-using objects in memory to minimize allocation and re-allocation operations. Or just bite the bullet and write everything in C or FORTRAN. If you are willing to sacrifice programming time for computational speed, that’s definitely the way to go.
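
A toy sketch of both ideas in R (the computation here is made up purely to show the pattern):

```r
n <- 1e5

# Anti-pattern: growing a vector inside a loop forces R to
# re-allocate and copy on every iteration -- roughly O(n^2) work.
# slow <- c()
# for (i in 1:n) slow <- c(slow, i^2)

# Idea (2): pre-allocate once, then fill in place.
fast <- numeric(n)
for (i in 1:n) fast[i] <- i^2

# Idea (1): push the whole loop into a vectorized primitive,
# which runs in compiled code rather than the R interpreter.
fastest <- (1:n)^2
```

The same two moves (pre-allocate, then vectorize) apply to the loop that builds and dispatches each of the 300k optimization problems.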

3. If your 300k problems are structured so that “neighboring” problems have similar solutions, homotopy methods can work nicely.
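
A minimal, hypothetical R sketch of the simplest homotopy-flavored idea: sweep the state space in an order where neighbors are adjacent and warm-start each problem at the previous solution. The system and state grid here are made up:

```r
# Hypothetical family of problems indexed by a state variable a:
# minimize the sum of squares of  x1^2 + x2 - a = 0  and  x1 - x2^2 + 1 = 0.
sum_sq <- function(x, a) (x[1]^2 + x[2] - a)^2 + (x[1] - x[2]^2 + 1)^2

grid  <- seq(2, 4, length.out = 100)  # stand-in for the real state space
start <- c(1, 1)                      # initial guess for the first problem
solutions <- matrix(NA_real_, nrow = length(grid), ncol = 2)

for (i in seq_along(grid)) {
  fit <- optim(start, sum_sq, a = grid[i], method = "L-BFGS-B",
               lower = c(-5, -5), upper = c(5, 5))
  solutions[i, ] <- fit$par
  start <- fit$par  # warm start: neighboring problems have nearby roots
}
```

Because each starting guess is already close to the answer, the solver typically needs only a few iterations per problem instead of starting cold 300k times.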

4. Your quadratic trick is generally not recommended, for numerical precision reasons. You probably know this, but any googlers should be aware that (relative to methods designed for equation solving) turning the system into a quadratic optimization problem squares the condition number. You can lose a lot of significant digits, sometimes all of them (rendering your answer meaningless). See pp. 171–172 of Ken Judd’s numerical methods textbook for details. This is not a huge problem if you don’t require precision and your system is well conditioned, but be careful.

Wow. Great comments Steven. I love the internet.

Regarding #2: playing with system.time suggests it’s not the loops but the optimization routines. But, yeah, if I had more than a few weeks to hammer this out, I would move it to FORTRAN.