Open Access Research

Modified nonlinear conjugate gradient method with sufficient descent condition for unconstrained optimization

Jinkui Liu* and Shaoheng Wang

Author Affiliations

School of Mathematics and Statistics, Chongqing Three Gorges University, Chongqing, Wanzhou, People's Republic of China


Journal of Inequalities and Applications 2011, 2011:57  doi:10.1186/1029-242X-2011-57




Received: 3 March 2011
Accepted: 17 September 2011
Published: 17 September 2011

© 2011 Liu and Wang; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, an efficient modified nonlinear conjugate gradient method for solving unconstrained optimization problems is proposed. An attractive property of the modified method is that the direction generated at each step is always a descent direction, independent of the line search. The global convergence of the modified method is established under the general Wolfe line search conditions. Numerical results on the unconstrained optimization problems from Moré, Garbow and Hillstrom (ACM Trans Math Softw 7, 17-41, 1981) show that the modified method is efficient and stable in comparison with the well-known Polak-Ribiére-Polyak method, the CG-DESCENT method and the DSP-CG method, so it can be widely used in scientific computation.

Mathematics Subject Classification (2010) 90C26 · 65H10

1 Introduction

The conjugate gradient method comprises a class of unconstrained optimization algorithms characterized by low memory requirements and strong local and global convergence properties. The purpose of this paper is to study the global convergence properties and the practical computational performance of a modified nonlinear conjugate gradient method for unconstrained optimization without restarts, under appropriate conditions.

In this paper, we consider the unconstrained optimization problem:

$$\min \{ f(x) \mid x \in \mathbb{R}^n \}, \qquad (1.1)$$

where f : Rn → R is a real-valued, continuously differentiable function.
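To fix ideas, the following short Python sketch (using NumPy) defines one concrete instance of problem (1.1): a chained Rosenbrock-type function together with its analytic gradient. The function names are ours and are used only for illustration in later sketches; the actual experiments in Section 4 use the full test set of [18].

import numpy as np

def rosenbrock(x):
    # Chained Rosenbrock objective: smooth, nonconvex, a classic test function
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    # Analytic gradient g(x) of the objective above
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g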

When applied to the nonlinear problem (1.1), a nonlinear conjugate gradient method generates a sequence {xk}, k ≥ 1, starting from an initial guess x1 ∈ Rn, using the recurrence

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (1.2)$$

where the positive step size αk is obtained by some line search, and the search direction dk is generated by the rule:

$$d_k = \begin{cases} -g_k, & \text{for } k = 1, \\ -g_k + \beta_k d_{k-1}, & \text{for } k \ge 2, \end{cases} \qquad (1.3)$$

where gk = ∇f(xk) and βk is a scalar. Well-known formulas for βk include the Liu-Storey (LS) and the Polak-Ribiére-Polyak (PRP) formulas, given by

$$\beta_k^{LS} = -\frac{g_k^T y_{k-1}}{g_{k-1}^T d_{k-1}} \quad \text{(Liu-Storey [1])}, \qquad (1.4)$$

$$\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2} \quad \text{(Polak-Ribiére-Polyak [2,3])}, \qquad (1.5)$$

respectively, where the symbol || · || denotes the Euclidean norm and yk-1 = gk - gk-1. The corresponding methods are generally referred to as the LS and PRP conjugate gradient methods. If f is a strictly convex quadratic function, the two methods are equivalent when an exact line search is used. If f is non-convex, their behaviors may be distinctly different. In the past two decades, the convergence properties of the LS and PRP methods have been studied intensively by many researchers (e.g., [1-5]).
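Expressed in code, the update rule (1.3) with the scalars (1.4) and (1.5) amounts to a few inner products. The sketch below is a minimal Python rendering; the helper names are ours, and gradients are assumed to be one-dimensional NumPy arrays.

import numpy as np

def beta_ls(g_new, g_old, d_old):
    # Liu-Storey scalar (1.4): -g_k^T y_{k-1} / (g_{k-1}^T d_{k-1})
    y = g_new - g_old
    return -np.dot(g_new, y) / np.dot(g_old, d_old)

def beta_prp(g_new, g_old):
    # Polak-Ribiere-Polyak scalar (1.5): g_k^T y_{k-1} / ||g_{k-1}||^2
    y = g_new - g_old
    return np.dot(g_new, y) / np.dot(g_old, g_old)

def next_direction(g_new, d_old, beta):
    # Search-direction update (1.3): d_k = -g_k + beta_k d_{k-1}
    return -g_new + beta * d_old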

In practical computation, the PRP method, which is generally believed to be one of the most efficient conjugate gradient methods, has received much attention in recent years. One remarkable property of the method is that it essentially performs a restart if a bad direction occurs (see [6]). However, Powell [7] constructed an example showing that the method can cycle infinitely without approaching any stationary point even if an exact line search is used. This counter-example also indicates that the method may fail to converge when the objective function is non-convex. Therefore, during the past few years, much effort has been devoted to creating new formulae for βk that not only possess global convergence for general functions but also are superior to the original methods from a computational point of view (see [8-17]).

In this paper, we further study the conjugate gradient method for the solution of unconstrained optimization problems. In particular, we focus our attention on the scalar βk, motivated by [12], and introduce a modified LS conjugate gradient method. An attractive property of the proposed method is that the generated directions are always descent directions; moreover, this property is independent of the line search used and of the convexity of the objective function. Under the general Wolfe line search conditions, we establish the global convergence of the proposed method. We also carry out numerical experiments on a large set of unconstrained optimization problems from [18], which indicate that the proposed method performs better than the classic PRP method, the CG-DESCENT method and the DSP-CG method. This paper is organized as follows. In Section 2, we propose our algorithm, together with some assumptions on the objective function and some lemmas. In Section 3, the global convergence analysis is provided under the general Wolfe line search conditions. In the last section, we report numerical experiments on a set of large problems and compare the proposed method with the PRP, CG-DESCENT and DSP-CG methods.

2 The sufficient descent property

Algorithm 2.1:

Step 1: Data x1 ∈ Rn, ε ≥ 0. Set d1 = -g1; if ||g1|| ≤ ε, then stop.

Step 2: Compute αk by the general Wolfe line search conditions (σ1 ∈ (δ, 1), σ2 ≥ 0):

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (2.1)$$

$$\sigma_1 g_k^T d_k \le g(x_k + \alpha_k d_k)^T d_k \le -\sigma_2 g_k^T d_k. \qquad (2.2)$$

Step 3: Let xk+1 = xk + αk dk and gk+1 = g(xk+1); if ||gk+1|| ≤ ε, then stop.

Step 4: Generate dk+1 by (1.3) in which βk+1 is computed by

$$\beta_{k+1}^{VLS} = \max\left\{ \beta_{k+1}^{LS} - u \frac{\|y_k\|^2}{(g_k^T d_k)^2}\, g_{k+1}^T d_k,\; 0 \right\}, \qquad u > \frac{1}{4}. \qquad (2.3)$$

Step 5: Set k = k + 1, go to step 2.
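The following Python sketch transcribes Algorithm 2.1 under stated assumptions: general_wolfe_step is a simplified bracketing search of our own for conditions (2.1)-(2.2), not the line search actually used in the paper's experiments, and all function and parameter names are hypothetical. The default parameter values follow Section 4 (δ = 0.01, σ1 = σ2 = 0.1, u = 0.5).

import numpy as np

def general_wolfe_step(f, grad, x, d, g, delta=0.01, sigma1=0.1, sigma2=0.1,
                       alpha=1.0, max_iter=50):
    # Simple bracketing/bisection search for a step satisfying (2.1)-(2.2).
    # Illustrative stand-in only; it returns the last trial step if the
    # iteration budget is exhausted.
    lo, hi = 0.0, np.inf
    fx, gTd = f(x), np.dot(g, d)
    for _ in range(max_iter):
        x_new = x + alpha * d
        g_new = grad(x_new)
        gTd_new = np.dot(g_new, d)
        if f(x_new) > fx + delta * alpha * gTd:   # (2.1) violated: step too long
            hi = alpha
        elif gTd_new < sigma1 * gTd:              # left part of (2.2) violated: step too short
            lo = alpha
        elif gTd_new > -sigma2 * gTd:             # right part of (2.2) violated: step too long
            hi = alpha
        else:
            return alpha, x_new, g_new
        alpha = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return alpha, x + alpha * d, grad(x + alpha * d)

def vls_cg(f, grad, x, u=0.5, eps=1e-6, max_iter=9999):
    # Sketch of Algorithm 2.1 with the VLS scalar (2.3); requires u > 1/4.
    g = grad(x)
    d = -g
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) <= eps:
            break
        alpha, x, g_new = general_wolfe_step(f, grad, x, d, g)
        y = g_new - g
        gTd = np.dot(g, d)
        beta_ls = -np.dot(g_new, y) / gTd                                          # (1.4)
        beta = max(beta_ls - u * np.dot(y, y) / gTd ** 2 * np.dot(g_new, d), 0.0)  # (2.3)
        d = -g_new + beta * d                                                      # (1.3)
        g = g_new
    return x, g, k

With the test function sketched in Section 1, the method can be called as vls_cg(rosenbrock, rosenbrock_grad, x0) for a chosen starting point x0.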

In this paper, we prove the global convergence of the new algorithm under the following assumption.

Assumption (H):

(i) The level set Ω = {x ∈ Rn | f(x) ≤ f(x1)} is bounded, where x1 is the starting point.

(ii) In a neighborhood V of Ω, f is continuously differentiable and its gradient g is Lipschitz continuous, namely, there exists a constant L > 0 such that

$$\|g(x) - g(y)\| \le L \|x - y\|, \quad \forall x, y \in V. \qquad (2.4)$$

Obviously, from Assumption (H) (i), there exists a positive constant ξ such that

$$\xi = \max\{ \|x - y\| : x, y \in \Omega \}, \qquad (2.5)$$

where ξ is the diameter of Ω.

From Assumption (H) (ii), we also know that there exists a constant r̃ > 0 such that

$$\|g(x)\| \le \tilde{r}, \quad \forall x \in V. \qquad (2.6)$$

On some studies of the conjugate gradient methods, the sufficient descent condition

$$g_k^T d_k \le -c \|g_k\|^2, \quad c > 0,$$

plays an important role. Unfortunately, this condition is difficult to guarantee. However, the following lemma shows that Algorithm 2.1 possesses the sufficient descent property independently of the line search and of the convexity of the objective function.

Lemma 2.1 Consider any method of the form (1.2)-(1.3), where $\beta_k = \beta_k^{VLS}$. Then

$$g_k^T d_k \le -\left(1 - \frac{1}{4u}\right) \|g_k\|^2, \quad \forall k \ge 1. \qquad (2.7)$$

Proof. Multiplying (1.3) by g k T , we have

$$g_k^T d_k = -\|g_k\|^2 + \beta_k g_k^T d_{k-1}. \qquad (2.8)$$

From (2.3), if βk = 0, then

$$g_k^T d_k = -\|g_k\|^2 \le -\left(1 - \frac{1}{4u}\right) \|g_k\|^2.$$

If $\beta_k = \beta_k^{LS} - u \frac{\|y_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2}\, g_k^T d_{k-1}$, then from (1.4) and (2.8), we have

$$
g_k^T d_k = -\|g_k\|^2 + \left( -\frac{g_k^T y_{k-1}}{g_{k-1}^T d_{k-1}} - u \frac{\|y_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2}\, g_k^T d_{k-1} \right) g_k^T d_{k-1}
= \frac{-g_k^T y_{k-1}\, g_{k-1}^T d_{k-1}\, g_k^T d_{k-1} - u \|y_{k-1}\|^2 (g_k^T d_{k-1})^2 - \|g_k\|^2 (g_{k-1}^T d_{k-1})^2}{(g_{k-1}^T d_{k-1})^2}. \qquad (2.9)
$$

We apply the inequality

$$A^T B \le \frac{1}{2}\left( \|A\|^2 + \|B\|^2 \right)$$

to the first term of the numerator in (2.9) with

$$A^T = \frac{-g_{k-1}^T d_{k-1}}{\sqrt{2u}}\, g_k^T, \qquad B = \sqrt{2u}\, (g_k^T d_{k-1})\, y_{k-1},$$

then we have

$$-g_k^T y_{k-1}\, g_{k-1}^T d_{k-1}\, g_k^T d_{k-1} = A^T B \le \frac{(-g_{k-1}^T d_{k-1})^2}{4u} \|g_k\|^2 + u (g_k^T d_{k-1})^2 \|y_{k-1}\|^2.$$

From the above inequality and (2.9), we have

$$g_k^T d_k \le -\left(1 - \frac{1}{4u}\right) \|g_k\|^2.$$

Thus, in both cases, the conclusion (2.7) holds.
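Lemma 2.1 is easy to probe numerically. The following Python sketch draws random vectors, forms the direction with the VLS scalar (2.3), and checks that inequality (2.7) is not violated. It is only an illustrative sanity check of ours, not part of the paper's analysis, and the function name is hypothetical.

import numpy as np

def check_sufficient_descent(n=50, u=0.5, trials=1000, seed=0):
    # Sanity check of Lemma 2.1 / inequality (2.7) on random data:
    # with beta_k = beta_k^VLS, d_k = -g_k + beta_k d_{k-1} should satisfy
    # g_k^T d_k <= -(1 - 1/(4u)) ||g_k||^2.
    rng = np.random.default_rng(seed)
    worst = -np.inf
    for _ in range(trials):
        g_old = rng.standard_normal(n)
        g_new = rng.standard_normal(n)
        d_old = rng.standard_normal(n)
        gTd = np.dot(g_old, d_old)
        if abs(gTd) < 1e-12:                  # avoid a degenerate denominator
            continue
        y = g_new - g_old
        beta = max(-np.dot(g_new, y) / gTd
                   - u * np.dot(y, y) / gTd ** 2 * np.dot(g_new, d_old), 0.0)   # (2.3)
        d_new = -g_new + beta * d_old                                           # (1.3)
        gap = np.dot(g_new, d_new) + (1.0 - 1.0 / (4.0 * u)) * np.dot(g_new, g_new)
        worst = max(worst, gap)               # (2.7) predicts gap <= 0
    return worst

print(check_sufficient_descent())  # expected: a non-positive number (up to rounding)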

3 Global convergence of the modified method

The conclusion of the following lemma, often called the Zoutendijk condition, is used to prove the global convergence of nonlinear conjugate gradient methods. It was originally given by Zoutendijk [19] under the Wolfe line search. In the following lemma, we prove the Zoutendijk condition under the general Wolfe line search conditions (2.1)-(2.2).

Lemma 3.1 Suppose Assumption (H) holds. Consider iterations of the form (1.2)-(1.3), where dk satisfies $g_k^T d_k < 0$ for all k ∈ N+ and αk satisfies the general Wolfe line search conditions. Then

$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty. \qquad (3.1)$$

Proof. From (2.2) and Assumption (H) (ii), we have

$$-(1-\sigma_1)\, d_k^T g_k \le d_k^T (g_{k+1} - g_k) \le \|d_k\| \, \|g_{k+1} - g_k\| \le L \alpha_k \|d_k\|^2,$$

then

$$\alpha_k \ge \frac{\sigma_1 - 1}{L} \cdot \frac{d_k^T g_k}{\|d_k\|^2}.$$

From (2.1) and the inequality above, we get

$$f(x_k) - f(x_k + \alpha_k d_k) \ge \frac{\delta (1-\sigma_1)}{L} \cdot \frac{(d_k^T g_k)^2}{\|d_k\|^2}.$$

Since f is bounded below on the level set Ω by Assumption (H) (i), summing this inequality over k gives

$$\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty.$$

Lemma 3.2 Suppose Assumption (H) holds. Consider the method (1.2)-(1.3), where $\beta_k = \beta_k^{VLS}$ and αk satisfies the general Wolfe line search conditions. If there exists a positive constant r such that

$$\|g_k\| \ge r, \quad \forall k \ge 1, \qquad (3.2)$$

then we have

||dk|| ≠ 0 for each k and $\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty$,

where $u_k = \frac{d_k}{\|d_k\|}$.

Proof. From (3.2) and the descent property of Lemma 2.1, it follows that dk ≠ 0 for each k. Define

$$r_k = \frac{-g_k}{\|d_k\|}, \qquad \delta_k = \beta_k^{VLS} \frac{\|d_{k-1}\|}{\|d_k\|}.$$

By (1.3), we have

$$u_k = \frac{d_k}{\|d_k\|} = \frac{-g_k + \beta_k^{VLS} d_{k-1}}{\|d_k\|} = r_k + \delta_k u_{k-1}.$$

Since the uk are unit vectors, we have

$$\|r_k\| = \|u_k - \delta_k u_{k-1}\| = \|\delta_k u_k - u_{k-1}\|.$$

Since δk ≥ 0, it follows that

$$\|u_k - u_{k-1}\| \le (1+\delta_k)\|u_k - u_{k-1}\| = \|(1+\delta_k)u_k - (1+\delta_k)u_{k-1}\| \le \|u_k - \delta_k u_{k-1}\| + \|\delta_k u_k - u_{k-1}\| = 2\|r_k\|. \qquad (3.3)$$

From (3.1), (3.2) and Lemma 2.1, we have

$$
\left(1 - \frac{1}{4u}\right)^2 r^2 \sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 \le \left(1 - \frac{1}{4u}\right)^2 \sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 \|g_k\|^2 = \left(1 - \frac{1}{4u}\right)^2 \sum_{k \ge 1,\, d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} \le \sum_{k \ge 1,\, d_k \ne 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty,
$$

then

$$\sum_{k \ge 1,\, d_k \ne 0} \|r_k\|^2 < +\infty.$$

By (3.3), we have

$$\sum_{k \ge 2} \|u_k - u_{k-1}\|^2 < +\infty.$$

Lemma 3.3 Suppose Assumption (H) holds. Consider the method (1.2)-(1.3), where $\beta_k = \beta_k^{VLS}$ and αk satisfies the general Wolfe line search conditions. If (3.2) holds, we have

$$\beta_k^{VLS} \le \rho, \qquad (3.4)$$

where $\rho = \frac{L\xi}{\left(1 - \frac{1}{4u}\right) r^2}\left(\tilde{r} + uL\xi \max\{\sigma_1, \sigma_2\}\right)$.

Proof. Define sk-1 = xk - xk-1. From (2.2), we have

$$\frac{|g_k^T d_{k-1}|}{|g_{k-1}^T d_{k-1}|} \le \max\{\sigma_1, \sigma_2\}. \qquad (3.5)$$

By (1.4), (2.3), (2.4)-(2.7) and (3.5), we have

$$
\begin{aligned}
\beta_k^{VLS} &\le \left| \beta_k^{LS} - u \frac{\|y_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2}\, g_k^T d_{k-1} \right| \le |\beta_k^{LS}| + u \frac{\|y_{k-1}\|^2}{(g_{k-1}^T d_{k-1})^2}\, |g_k^T d_{k-1}| \\
&\le \frac{\|g_k - g_{k-1}\|}{|g_{k-1}^T d_{k-1}|} \left( \|g_k\| + u \frac{\|g_k - g_{k-1}\|}{|g_{k-1}^T d_{k-1}|}\, |g_k^T d_{k-1}| \right) \\
&\le \frac{L \|x_k - x_{k-1}\|}{\left(1 - \frac{1}{4u}\right) \|g_{k-1}\|^2} \left( \tilde{r} + u L \|x_k - x_{k-1}\| \frac{|g_k^T d_{k-1}|}{|g_{k-1}^T d_{k-1}|} \right) \\
&\le \frac{L \|s_{k-1}\|}{\left(1 - \frac{1}{4u}\right) r^2} \left( \tilde{r} + u L \|s_{k-1}\| \max\{\sigma_1, \sigma_2\} \right) \\
&\le \frac{L \xi}{\left(1 - \frac{1}{4u}\right) r^2} \left( \tilde{r} + u L \xi \max\{\sigma_1, \sigma_2\} \right) = \rho.
\end{aligned}
$$

Theorem 3.1 Suppose Assumption (H) holds. Consider the method (1.2)-(1.3), where $\beta_k = \beta_k^{VLS}$ and αk satisfies the general Wolfe line search conditions; then either gk = 0 for some k or

$$\liminf_{k \to +\infty} \|g_k\| = 0.$$

Proof. If gk = 0 for some k, the conclusion holds. In the following, we suppose that gk ≠ 0 for all k; then (3.2) holds, and we derive a contradiction.

We also define $u_i = \frac{d_i}{\|d_i\|}$; then for any l, k ∈ Z+ with l > k, we have

$$x_l - x_{k-1} = \sum_{i=k}^{l} \|x_i - x_{i-1}\|\, u_{i-1} = \sum_{i=k}^{l} \|s_{i-1}\|\, u_{k-1} + \sum_{i=k}^{l} \|s_{i-1}\|\, (u_{i-1} - u_{k-1}).$$

By the triangle inequality, we have

$$\sum_{i=k}^{l} \|s_{i-1}\| \le \|x_l - x_{k-1}\| + \sum_{i=k}^{l} \|s_{i-1}\|\, \|u_{i-1} - u_{k-1}\| \le \xi + \sum_{i=k}^{l} \|s_{i-1}\|\, \|u_{i-1} - u_{k-1}\|. \qquad (3.6)$$

Let Δ be a positive integer, chosen large enough that

$$\Delta \ge 4\rho, \qquad (3.7)$$

where ρ is the constant defined in Lemma 3.3.

By the conclusion of Lemma 3.2, there exists a k0 large enough such that

$$\sum_{i \ge k_0} \|u_i - u_{i-1}\|^2 < \frac{1}{4\Delta}. \qquad (3.8)$$

For any i ∈ [k + 1, k + Δ] with k ≥ k0, by (3.8) and the Cauchy-Schwarz inequality, we have

$$\|u_{i-1} - u_{k-1}\| \le \sum_{j=k}^{i-1} \|u_j - u_{j-1}\| \le (i-k)^{1/2} \left( \sum_{j=k}^{i-1} \|u_j - u_{j-1}\|^2 \right)^{1/2} \le \Delta^{1/2} \left( \frac{1}{4\Delta} \right)^{1/2} = \frac{1}{2}.$$

Combining this inequality and (3.6), we have

$$\sum_{i=k}^{l} \|s_{i-1}\| \le \xi + \frac{1}{2} \sum_{i=k}^{l} \|s_{i-1}\|,$$

then

$$\sum_{i=k}^{l} \|s_{i-1}\| < 2\xi \qquad (3.9)$$

for all l ∈ [k + 1, k + Δ].

Define $\lambda = \frac{L}{\left(1 - \frac{1}{4u}\right) r^2}\left(\tilde{r} + uL\xi \max\{\sigma_1, \sigma_2\}\right)$, so that $\lambda \xi = \rho$. Since $\|s_{k-1}\| \le \xi$ by (2.5), the fourth inequality in the proof of Lemma 3.3 gives

$$\beta_k^{VLS} \le \lambda \|s_{k-1}\|.$$

Define $S_i = 2\lambda^2 \|s_i\|^2$. By (1.3) and (2.6), for all l ≥ k0 + 1, we have

$$\|d_l\|^2 = \|-g_l + \beta_l d_{l-1}\|^2 \le 2\|g_l\|^2 + 2\beta_l^2 \|d_{l-1}\|^2 \le 2\tilde{r}^2 + 2\lambda^2 \|s_{l-1}\|^2 \|d_{l-1}\|^2 = 2\tilde{r}^2 + S_{l-1}\|d_{l-1}\|^2.$$

From the inequality above, we have

$$\|d_l\|^2 \le 2\tilde{r}^2 \sum_{i=k_0+1}^{l} \prod_{j=i}^{l-1} S_j + \|d_{k_0}\|^2 \prod_{j=k_0}^{l-1} S_j. \qquad (3.10)$$

In the inequality above, a product is defined to be one whenever the index range is vacuous. Let us now consider a product of Δ consecutive Sj with k ≥ k0. Combining (2.5), (3.7) and (3.9) with the arithmetic-geometric mean inequality, we have

$$
\prod_{j=k}^{k+\Delta-1} S_j = \prod_{j=k}^{k+\Delta-1} 2\lambda^2 \|s_j\|^2 = \prod_{j=k}^{k+\Delta-1} \left( \sqrt{2}\,\lambda \|s_j\| \right)^2 \le \left( \frac{\sum_{j=k}^{k+\Delta-1} \sqrt{2}\,\lambda \|s_j\|}{\Delta} \right)^{2\Delta} \le \left( \frac{2\sqrt{2}\,\lambda \xi}{\Delta} \right)^{2\Delta} \le \left( \frac{2\sqrt{2}\,\rho}{\Delta} \right)^{2\Delta} \le \left( \frac{1}{2} \right)^{\Delta},
$$

so the sum in (3.10) is bounded independently of l, and hence ||dl||² is bounded independently of l.

From Lemma 2.1, Lemma 3.1 and (3.2), we have

$$
\left(1 - \frac{1}{4u}\right)^2 r^4 \sum_{k \ge 1,\, d_k \ne 0} \frac{1}{\|d_k\|^2} \le \left(1 - \frac{1}{4u}\right)^2 \sum_{k \ge 1,\, d_k \ne 0} \frac{\|g_k\|^4}{\|d_k\|^2} \le \sum_{k \ge 1,\, d_k \ne 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < +\infty,
$$

which contradicts the fact, established above, that ||dl|| is bounded independently of l > k0; a uniform bound on ||dl|| would force the series $\sum 1/\|d_k\|^2$ to diverge. Hence,

liminf k + | | g k | | = 0 .

4 Numerical results

In this section, we compare the modified conjugate gradient method, denoted the VLS method, with the PRP method, the CG-DESCENT (η = 0.01) method [12] and the DSP-CG (C = 0.5) method [17] in terms of average performance and CPU time under the general Wolfe line search with δ = 0.01, σ1 = σ2 = 0.1 and u = 0.5. The 78 test problems come from [18], and the termination condition of the experiments is ||gk|| ≤ 10^-6 or It-max > 9999, where It-max denotes the maximal number of iterations. All codes were written in Matlab 7.0 and run on a PC with a 2.0 GHz CPU, 512 MB of memory and the Windows XP operating system.

The numerical results of our tests are reported in Table 1. The first column "N" gives the problem index, which corresponds to "N" in Table 2. The detailed numerical results are listed in the form NI/NF/NG/CPU, where NI, NF, NG and CPU denote the number of iterations, the number of function evaluations, the number of gradient evaluations and the CPU time in seconds, respectively. "Dim" denotes the dimension of the test problem. If the iteration limit was exceeded, the run was stopped; this is indicated by NaN. In Table 2, "Problem" gives the problem's name in [18].

Table 1. The numerical results of the VLS, DSP-CG, PRP and CG-DESCENT methods

Table 2. The list of the tested problems

Firstly, in order to rank the average performance of all the above conjugate gradient methods, one can compute the total number of function and gradient evaluations by the formula

$$N_{\text{total}} = NF + l \cdot NG, \qquad (4.1)$$

where l is some integer. According to the results on automatic differentiation [20,21], the value of l can be set to 5, i.e.

$$N_{\text{total}} = NF + 5 \cdot NG. \qquad (4.2)$$

That is to say, one gradient evaluation is equivalent to five function evaluations if automatic differentiation is used.

Making use of (4.2), we compare the VLS method with the DSP-CG, PRP and CG-DESCENT methods as follows: for the ith problem, compute the total number of function and gradient evaluations required by the VLS, DSP-CG, PRP and CG-DESCENT methods by formula (4.2), and denote them by Ntotal,i (VLS), Ntotal,i (DSP-CG), Ntotal,i (PRP) and Ntotal,i (CG-DESCENT), respectively. Then we calculate the ratios

$$\gamma_i(\text{DSP-CG}) = \frac{N_{\text{total},i}(\text{DSP-CG})}{N_{\text{total},i}(\text{VLS})}, \quad \gamma_i(\text{PRP}) = \frac{N_{\text{total},i}(\text{PRP})}{N_{\text{total},i}(\text{VLS})}, \quad \gamma_i(\text{CG-DESCENT}) = \frac{N_{\text{total},i}(\text{CG-DESCENT})}{N_{\text{total},i}(\text{VLS})}.$$

If the i0th problem cannot be solved by a method, we use the constant λ = max{γi(method) | i ∈ S1} instead of γi0(method), where S1 denotes the set of test problems that can be solved by the method. The geometric mean of these ratios over all the test problems is defined by

$$\gamma(\text{DSP-CG}) = \left( \prod_{i \in S} \gamma_i(\text{DSP-CG}) \right)^{1/|S|}, \quad \gamma(\text{PRP}) = \left( \prod_{i \in S} \gamma_i(\text{PRP}) \right)^{1/|S|}, \quad \gamma(\text{CG-DESCENT}) = \left( \prod_{i \in S} \gamma_i(\text{CG-DESCENT}) \right)^{1/|S|},$$

where S denotes the set of test problems and |S| denotes the number of elements in S. One advantage of the above rule is that the comparison is relative and hence is not dominated by a few problems for which a method requires a very large number of function and gradient evaluations.
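As an illustration of this rule, the short Python sketch below computes the N_total values of (4.2) and the geometric-mean ratios. The data layout (dictionaries of NumPy arrays keyed by method name) is our own assumption, and the sketch assumes that the reference method "VLS" solves every problem, as reported in the discussion of Figure 1.

import numpy as np

def relative_efficiency(nf, ng, failed, l=5):
    # nf/ng map a method name to arrays of function/gradient evaluation counts;
    # failed maps a method name to a boolean array marking unsolved problems.
    n_total = {m: nf[m] + l * ng[m] for m in nf}              # (4.2)
    gammas = {}
    for m in n_total:
        if m == "VLS":
            continue
        ratio = n_total[m] / n_total["VLS"]
        # replace ratios of failed runs by the worst ratio among solved problems
        lam = ratio[~failed[m]].max()
        ratio = np.where(failed[m], lam, ratio)
        gammas[m] = np.exp(np.mean(np.log(ratio)))            # geometric mean
    return gammas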

According to the above rule, it is clear that γ (VLS) = 1. The values of γ (DSP-CG), γ (PRP) and γ (CG-DESCENT) are listed in Table 3.

Table 3. Relative efficiency of the VLS, DSP-CG, PRP and CG-DESCENT methods

Secondly, we adopt the performance profiles of Dolan and Moré [22] to compare the VLS method with the DSP-CG, PRP and CG-DESCENT methods in terms of CPU time (see Figure 1). In Figure 1,

Figure 1. Performance profiles of the conjugate gradient methods with respect to CPU time

$$X = \tau, \qquad Y = P\left\{ \log_2 (r_{p,s}) \le \tau : 1 \le s \le n_s \right\}.$$

That is, for each method, we plot the fraction P of problems for which the method is within a factor 2^τ of the best time. The left side of the figure gives the percentage of the test problems for which a method is fastest; the right side gives the percentage of the test problems that were successfully solved by each of the methods. The top curve corresponds to the method that solved the most problems in a time within a factor 2^τ of the best time. Since the top curve in Figure 1 corresponds to the VLS method, this method is clearly fastest for this set of 78 test problems. In particular, the VLS method is fastest for about 60% of the test problems, and it ultimately solves 100% of the test problems.
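For completeness, the sketch below shows how such a CPU-time performance profile can be computed from raw timings in Python (with matplotlib). It is a generic reimplementation of the Dolan-Moré measure, not the exact script behind Figure 1; using np.inf to mark a failed run is our own convention.

import numpy as np
import matplotlib.pyplot as plt

def performance_profile(times, tau_max=6.0, npts=200):
    # times: dict {method_name: array of CPU times}, np.inf for failures.
    methods = list(times)
    t = np.vstack([times[m] for m in methods])     # shape: (n_methods, n_problems)
    best = t.min(axis=0)                           # best time per problem
    ratios = np.log2(t / best)                     # log2(r_{p,s})
    taus = np.linspace(0.0, tau_max, npts)
    for i, m in enumerate(methods):
        frac = [(ratios[i] <= tau).mean() for tau in taus]
        plt.plot(taus, frac, label=m)
    plt.xlabel(r"$\tau$")
    plt.ylabel("P")
    plt.legend()
    plt.show()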

From Table 3 and Figure 1, it is clear that the VLS method performs better both in average performance and in CPU time, which implies that the proposed modified method is computationally efficient.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Jinkui Liu carried out the studies of the new method, designed all the steps of the proofs in this research and drafted the manuscript. Shaoheng Wang participated in writing all the codes of the algorithm and suggested many good ideas that made this paper possible. All authors read and approved the final manuscript.

Acknowledgements

The authors wish to express their heartfelt thanks to the referees and Professor K. Teo for their detailed and helpful suggestions for revising the manuscript. At the same time, we are grateful for the suggestions of Lijuan Zhang. This work was supported by the Natural Science Foundation of the Chongqing Education Committee (KJ091104, KJ101108) and Chongqing Three Gorges University (09ZZ-060).

References

  1. Liu, Y, Storey, C: Efficient generalized conjugate gradient algorithms. Part 1: theory. J Optim Theory Appl. 69, 129–137 (1992)

  2. Polak, E, Ribière, G: Note sur la convergence de méthodes de directions conjuguées. Rev Française Informat Recherche Opérationnelle 3e Année. 16, 35–43 (1969)

  3. Polyak, BT: The conjugate gradient method in extreme problems. USSR Comput Math Math Phys. 9, 94–112 (1969)

  4. Yu, G, Zhao, Y, Wei, Z: A descent nonlinear conjugate gradient method for large-scale unconstrained optimization. Appl Math Comput. 187, 636–643 (2007)

  5. Liu, J, Du, X, Wang, K: Convergence of descent methods with variable parameters. Acta Math Appl Sin. 33, 222–230 (2010) (in Chinese)

  6. Hager, WW, Zhang, H: A survey of nonlinear conjugate gradient methods. Pac J Optim. 2, 35–58 (2006)

  7. Powell, MJD: Nonconvex minimization calculations and the conjugate gradient method. Numerical Analysis (Dundee, 1983). Lecture Notes in Mathematics, pp. 122–141. Springer, Berlin (1984)

  8. Andrei, N: Scaled conjugate gradient algorithms for unconstrained optimization. Comput Optim Appl. 38, 401–416 (2007)

  9. Andrei, N: Another nonlinear conjugate gradient algorithm for unconstrained optimization. Optim Methods Softw. 24, 89–104 (2009)

  10. Birgin, EG, Martínez, JM: A spectral conjugate gradient method for unconstrained optimization. Appl Math Optim. 43, 117–128 (2001)

  11. Dai, Y-H, Liao, L-Z: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl Math Optim. 43, 87–101 (2001)

  12. Hager, WW, Zhang, H: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J Optim. 16, 170–192 (2005)

  13. Li, G, Tang, C, Wei, Z: New conjugacy condition and related new conjugate gradient methods for unconstrained optimization. J Comput Appl Math. 202, 523–539 (2007)

  14. Wei, Z, Li, G, Qi, L: New quasi-Newton methods for unconstrained optimization problems. Appl Math Comput. 175, 1156–1188 (2006)

  15. Zhang, L, Zhou, W, Li, D-H: A descent modified Polak-Ribiére-Polyak conjugate gradient method and its global convergence. IMA J Numer Anal. 26, 629–640 (2006)

  16. Yuan, G: Modified nonlinear conjugate gradient method with sufficient descent property for large-scale optimization problems. Optim Lett. 3, 11–21 (2009)

  17. Yu, G, Guan, L, Chen, W: Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization. Optim Methods Softw. 23, 275–293 (2008)

  18. Moré, JJ, Garbow, BS, Hillstrom, KE: Testing unconstrained optimization software. ACM Trans Math Softw. 7, 17–41 (1981)

  19. Zoutendijk, G: Nonlinear programming, computational methods. In: Abadie J (ed.) Integer and Nonlinear Programming, pp. 37–86. North-Holland, Amsterdam (1970)

  20. Dai, Y, Ni, Q: Testing different conjugate gradient methods for large-scale unconstrained optimization. J Comput Math. 21(3), 311–320 (2003)

  21. Griewank, A: On automatic differentiation. In: Iri M, Tannabe K (eds.) Mathematical Programming: Recent Developments and Applications, pp. 84–108. Kluwer Academic Publishers, Dordrecht (1989)

  22. Dolan, ED, Moré, JJ: Benchmarking optimization software with performance profiles. Math Program. 91, 201–213 (2002)