optim {stats}  R Documentation 
General-purpose optimization based on Nelder–Mead, quasi-Newton and conjugate-gradient algorithms. It includes an option for box-constrained optimization and simulated annealing.
optim(par, fn, gr = NULL,
      method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"),
      lower = -Inf, upper = Inf, control = list(), hessian = FALSE, ...)
par 
Initial values for the parameters to be optimized over. 
fn 
A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result. 
gr 
A function to return the gradient for the "BFGS", "CG" and "L-BFGS-B" methods. If it is NULL, a finite-difference approximation will be used.
For the "SANN" method it specifies a function to generate a new candidate point. If it is NULL a default Gaussian Markov kernel is used.

method 
The method to be used. See Details. 
lower, upper 
Bounds on the variables for the "L-BFGS-B" method. 
control 
A list of control parameters. See Details. 
hessian 
Logical. Should a numerically differentiated Hessian matrix be returned? 
... 
Further arguments to be passed to fn and gr . 
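For example, additional arguments can be supplied to fn in this way (an illustrative sketch; the quadratic objective and its argument a are invented for demonstration):

f <- function(x, a) sum((x - a)^2)   ## 'a' is passed through optim's ...
optim(c(0, 0), f, a = c(1, 3))$par   ## approximately c(1, 3)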
By default this function performs minimization, but it will maximize
if control$fnscale
is negative.
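For instance, a maximization can be requested through fnscale (a sketch with an invented concave objective, not part of the original examples):

g <- function(x) -(x[1] - 2)^2 - (x[2] + 1)^2   ## maximum at c(2, -1)
optim(c(0, 0), g, control = list(fnscale = -1))$par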
The default method is an implementation of that of Nelder and Mead (1965), which uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
Method "BFGS"
is a quasiNewton method (also known as a variable
metric algorithm), specifically that published simultaneously in 1970
by Broyden, Fletcher, Goldfarb and Shanno. This uses function values
and gradients to build up a picture of the surface to be optimized.
Method "CG"
is a conjugate gradients method based on that by
Fletcher and Reeves (1964) (but with the option of Polak–Ribiere or
Beale–Sorenson updates). Conjugate gradient methods will generally
be more fragile that the BFGS method, but as they do not store a
matrix they may be successful in much larger optimization problems.
Method "LBFGSB"
is that of Byrd et. al. (1995) which
allows box constraints, that is each variable can be given a lower
and/or upper bound. The initial value must satisfy the constraints.
This uses a limitedmemory modification of the BFGS quasiNewton
method. If nontrivial bounds are supplied, this method will be
selected, with a warning.
Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.
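A minimal sketch of box constraints with "L-BFGS-B" (the objective and bounds are invented for illustration; note that the starting value lies inside the box):

f <- function(x) sum((x - c(1, 2))^2)   ## unconstrained minimum at c(1, 2)
optim(c(0.5, 0.5), f, method = "L-BFGS-B",
      lower = c(0, 0), upper = c(1.5, 1.5))$par   ## about c(1, 1.5): upper bound active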
Method "SANN"
is by default a variant of simulated annealing
given in Belisle (1992). Simulatedannealing belongs to the class of
stochastic global optimization methods. It uses only function values
but is relatively slow. It will also work for nondifferentiable
functions. This implementation uses the Metropolis function for the
acceptance probability. By default the next candidate point is
generated from a Gaussian Markov kernel with scale proportional to the
actual temperature. If a function to generate a new candidate point is
given, method "SANN"
can also be used to solve combinatorial
optimization problems. Temperatures are decreased according to the
logarithmic cooling schedule as given in Belisle (1992, p. 890). Note
that the "SANN"
method depends critically on the settings of
the control parameters. It is not a generalpurpose method but can be
very useful in getting to a good value on a very rough surface.
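The dependence on the control settings can be seen by rerunning the same problem with different temperatures (an illustrative sketch on an invented multimodal objective; results typically differ between settings and runs):

f <- function(x) sin(5 * x) + 0.1 * x^2   ## many local minima
set.seed(1)
optim(4, f, method = "SANN", control = list(maxit = 5000, temp = 10))$par
set.seed(1)
optim(4, f, method = "SANN", control = list(maxit = 5000, temp = 0.1))$par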
Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn. (Except for method "L-BFGS-B" where the values should always be finite.)
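For instance (an invented objective, not one of the original examples), fn may signal points outside its domain with Inf as long as the starting value gives a finite result:

f <- function(x) if (any(x <= 0)) Inf else sum(x - log(x))   ## minimum at c(1, 1)
optim(c(3, 0.5), f)$par   ## the default Nelder-Mead method copes with the Inf values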
optim
can be used recursively, and for a single parameter
as well as many.
The control
argument is a list that can supply any of the
following components:
trace
Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.)
fnscale
An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.
parscale
A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value.
ndeps
A vector of step sizes for the finite-difference approximation to the gradient, on the par/parscale scale. Defaults to 1e-3.
maxit
The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead". For "SANN" maxit gives the total number of function evaluations. There is no other stopping criterion. Defaults to 10000.
abstol
The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.
reltol
Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.
alpha, beta, gamma
Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).
REPORT
The frequency of reports for the "BFGS" and "L-BFGS-B" methods if control$trace is positive. Defaults to every 10 iterations.
type
For the conjugate-gradients method. Takes value 1 for the Fletcher–Reeves update, 2 for Polak–Ribiere and 3 for Beale–Sorenson.
lmm
An integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.
factr
Controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.
pgtol
Helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.
temp
Controls the "SANN" method. It is the starting temperature for the cooling schedule. Defaults to 10.
tmax
The number of function evaluations at each temperature for the "SANN" method. Defaults to 10.
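A sketch combining several of these settings (the quadratic objective is invented for illustration):

f <- function(x) sum((x - c(1, 2))^2)
optim(c(10, 10), f, method = "BFGS",
      control = list(trace = 1, REPORT = 1, maxit = 50, reltol = 1e-12))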
A list with components:
par 
The best set of parameters found. 
value 
The value of fn corresponding to par . 
counts 
A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient. 
convergence 
An integer code. 0 indicates successful convergence. Error codes are:
1 indicates that the iteration limit maxit had been reached.
10 indicates degeneracy of the Nelder–Mead simplex.
51 indicates a warning from the "L-BFGS-B" method; see component message for further details.
52 indicates an error from the "L-BFGS-B" method; see component message for further details.
message 
A character string giving any additional information
returned by the optimizer, or NULL . 
hessian 
Only if argument hessian is true. A symmetric
matrix giving an estimate of the Hessian at the solution found. Note
that this is the Hessian of the unconstrained problem even if the
box constraints are active. 
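A small sketch of using the returned Hessian (invented quadratic objective, for which the exact Hessian is 2 * diag(2)):

f <- function(x) sum((x - c(1, 2))^2)
fit <- optim(c(0, 0), f, method = "BFGS", hessian = TRUE)
fit$hessian                  ## numerically close to 2 * diag(2)
eigen(fit$hessian)$values    ## all positive at a local minimum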
optim
will work with one-dimensional pars, but the
default method does not work well (and will warn). Use
optimize
instead.
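For a one-parameter problem the comparison looks like this (invented objective):

f1 <- function(x) (x - 2)^2 + 1
optimize(f1, interval = c(0, 5))    ## preferred in one dimension
optim(0, f1, method = "BFGS")$par   ## also works; the default method would warn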
The code for methods "NelderMead"
, "BFGS"
and
"CG"
was based originally on Pascal code in Nash (1990) that was
translated by p2c
and then handoptimized. Dr Nash has agreed
that the code can be made freely available.
The code for method "LBFGSB"
is based on Fortran code by Zhu,
Byrd, LuChen and Nocedal obtained from Netlib (file
‘opt/lbfgs_bcm.shar’: another version is in ‘toms/778’).
The code for method "SANN"
was contributed by A. Trapletti.
Belisle, C. J. P. (1992) Convergence theorems for a class of simulated annealing algorithms on R^d. Journal of Applied Probability, 29, 885–895.
Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995) A limited memory algorithm for bound constrained optimization. SIAM J. Scientific Computing, 16, 1190–1208.
Fletcher, R. and Reeves, C. M. (1964) Function minimization by conjugate gradients. Computer Journal 7, 148–154.
Nash, J. C. (1990) Compact Numerical Methods for Computers. Linear Algebra and Function Minimisation. Adam Hilger.
Nelder, J. A. and Mead, R. (1965) A simplex algorithm for function minimization. Computer Journal 7, 308–313.
Nocedal, J. and Wright, S. J. (1999) Numerical Optimization. Springer.
optimize
for one-dimensional minimization and
constrOptim
for constrained optimization.
fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {   ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 *      (x2 - x1 * x1))
}
optim(c(-1.2, 1), fr)
optim(c(-1.2, 1), fr, grr, method = "BFGS")
optim(c(-1.2, 1), fr, NULL, method = "BFGS", hessian = TRUE)
optim(c(-1.2, 1), fr, grr, method = "CG")
optim(c(-1.2, 1), fr, grr, method = "CG", control = list(type = 2))
optim(c(-1.2, 1), fr, grr, method = "L-BFGS-B")

flb <- function(x) {
    p <- length(x)
    sum(c(1, rep(4, p - 1)) * (x - c(1, x[-p])^2)^2)
}
## 25-dimensional box constrained
optim(rep(3, 25), flb, NULL, method = "L-BFGS-B",
      lower = rep(2, 25), upper = rep(4, 25))   # par[24] is *not* at boundary

## "wild" function, global minimum at about -15.81515
fw <- function(x)
    10 * sin(0.3 * x) * sin(1.3 * x^2) + 0.00001 * x^4 + 0.2 * x + 80
plot(fw, -50, 50, n = 1000, main = "optim() minimising 'wild function'")
res <- optim(50, fw, method = "SANN",
             control = list(maxit = 20000, temp = 20, parscale = 20))
res
## Now improve locally
(r2 <- optim(res$par, fw, method = "BFGS"))
points(r2$par, r2$value, pch = 8, col = "red", cex = 2)

## Combinatorial optimization: Traveling salesman problem
library(stats)   # normally loaded
eurodistmat <- as.matrix(eurodist)

distance <- function(sq) {   # Target function
    sq2 <- embed(sq, 2)
    sum(eurodistmat[cbind(sq2[, 2], sq2[, 1])])
}

genseq <- function(sq) {   # Generate new candidate sequence
    idx <- seq(2, NROW(eurodistmat) - 1, by = 1)
    changepoints <- sample(idx, size = 2, replace = FALSE)
    tmp <- sq[changepoints[1]]
    sq[changepoints[1]] <- sq[changepoints[2]]
    sq[changepoints[2]] <- tmp
    sq
}

sq <- c(1, 2:NROW(eurodistmat), 1)   # Initial sequence
distance(sq)

set.seed(2222)   # chosen to get a good soln quickly
res <- optim(sq, distance, genseq, method = "SANN",
             control = list(maxit = 6000, temp = 2000, trace = TRUE))
res   # Near optimum distance around 12842

loc <- cmdscale(eurodist)
rx <- range(x <- loc[, 1])
ry <- range(y <- loc[, 2])
tspinit <- loc[sq, ]
tspres <- loc[res$par, ]
s <- seq(NROW(tspres) - 1)
plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
     main = "initial solution of traveling salesman problem")
arrows(tspinit[s, 1], tspinit[s, 2], tspinit[s + 1, 1], tspinit[s + 1, 2],
       angle = 10, col = "green")
text(x, y, labels(eurodist), cex = 0.8)
plot(x, y, type = "n", asp = 1, xlab = "", ylab = "",
     main = "optim() 'solving' traveling salesman problem")
arrows(tspres[s, 1], tspres[s, 2], tspres[s + 1, 1], tspres[s + 1, 2],
       angle = 10, col = "red")
text(x, y, labels(eurodist), cex = 0.8)