# F#/.NET function minimization (optimization)

I have done some research on function minimization algorithms available on .NET. A short summary can be found below.

## Gradient descent

Gradient descent is one of the simplest function optimization algorithms: starting from an initial guess, it repeatedly steps in the direction opposite to the gradient until the result stops improving. You can implement it yourself, as sketched below.
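
For illustration, here is a minimal hand-rolled gradient descent loop in F#. The fixed step size and iteration count are arbitrary choices for this sketch, not taken from any library:

```
// Minimal gradient descent sketch: repeatedly step against the gradient.
// grad - returns the gradient of the target function at point x
// rate - fixed step size (learning rate)
let gradientDescent (grad: float[] -> float[]) (rate: float) (iterations: int) (start: float[]) =
    let mutable x = Array.copy start
    for _ in 1 .. iterations do
        x <- Array.map2 (fun xi gi -> xi - rate * gi) x (grad x)
    x
```

A fixed step size converges slowly on badly scaled problems such as the banana function used below, which is why dedicated libraries are worth considering.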

## DotNumerics

DotNumerics is a numerical library for .NET. The library is written in pure C#, has more than 100,000 lines of code, and covers algorithms for Linear Algebra, Differential Equations and Optimization problems.

Unfortunately, DotNumerics does not have detailed documentation, so let's go through all the minimization algorithms it implements. First, we define the banana (Rosenbrock) function from the simplex method example available on the library site.

```#r @"DotNumerics.dll"
open System
open DotNumerics.Optimization

//f(a,b) = 100*(b-a^2)^2 + (1-a)^2
let BananaFunction (x: float array) =
100.0 * Math.Pow((x. - x. * x.), 2.0) + Math.Pow((1.0 - x.), 2.0)
```

### Downhill Simplex

Downhill Simplex method of Nelder and Mead

The key advantage of the Downhill Simplex method is that it does not require a gradient function: all you need is the function itself and an initial guess.

```
let initialGuess = [|0.1; 2.0|]

let simplexMin =
    let simplex = Simplex()
    simplex.ComputeMin(BananaFunction, initialGuess)
```

We have some control over the evaluation: we can restrict MaxFunEvaluations and specify a custom Tolerance on the Simplex model. In that case, model instantiation looks like this:

```
let simplex = Simplex(MaxFunEvaluations=10000, Tolerance=1e-5)
```

### Truncated Newton

The method is described in “A Survey of Truncated-Newton Methods”, Journal of Computational and Applied Mathematics.

All the other algorithms require a gradient function for their calculations.

```
//f'a(a,b) = (100*(b-a^2)^2 + (1-a)^2)'a = 100*2*(b-a^2)*(-2a) - 2*(1-a)
//f'b(a,b) = (100*(b-a^2)^2 + (1-a)^2)'b = 100*2*(b-a^2)
let BananaFunctionGradient (x: float array) =
    [|100.0 * 2.0 * (x.[1] - x.[0] * x.[0]) * (-2.0 * x.[0]) - 2.0 * (1.0 - x.[0]);
      100.0 * 2.0 * (x.[1] - x.[0] * x.[0])|]

let newtonMin =
    let newton = TruncatedNewton()
    newton.ComputeMin(BananaFunction, BananaFunctionGradient, initialGuess)
```

The Truncated Newton algorithm has three configuration parameters that Downhill Simplex does not: Accuracy, MaximunStep and SearchSeverity.
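
Assuming these follow the same named-property initialization pattern as Simplex above (an assumption, given the thin documentation), a tuned instance might look like this; the values are arbitrary placeholders, not recommended settings:

```
// Hypothetical tuning: the property names come from the text above,
// the values are arbitrary placeholders.
let tunedNewton = TruncatedNewton(Accuracy=1e-6, MaximunStep=10.0, SearchSeverity=0.25)
```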

### L-BFGS-B

Limited-memory Broyden–Fletcher–Goldfarb–Shanno method with bound constraints

```
let bfgsMin =
    let lbfgsb = L_BFGS_B()
    lbfgsb.ComputeMin(BananaFunction, BananaFunctionGradient, initialGuess)
```

L-BFGS-B has one more configuration parameter than Downhill Simplex: AccuracyFactor.
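
Again assuming the same named-property initialization pattern, with an arbitrary placeholder value:

```
// Hypothetical tuning; the value is an arbitrary placeholder.
let tunedLbfgsb = L_BFGS_B(AccuracyFactor=1e7)
```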

### Results

Below are the evaluation results obtained from the models with default parameters.

```
Real: 00:00:00.024, CPU: 00:00:00.062, GC gen0: 0, gen1: 0, gen2: 0
val simplexMin : float [] = [|0.999999998; 0.9999999956|]
Real: 00:00:00.074, CPU: 00:00:00.078, GC gen0: 0, gen1: 0, gen2: 0
val newtonMin : float [] = [|0.9999999999; 0.9999999999|]
Real: 00:00:00.137, CPU: 00:00:00.140, GC gen0: 0, gen1: 0, gen2: 0
val bfgsMin : float [] = [|1.0; 1.0|]
```