Open Access Review Article

Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

Ravi P. Agarwal 1,2,* and Ram U. Verma 1,3

Author Affiliations

1 Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA

2 Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

3 International Publications (USA), 12085 Lake Cypress Circle, Suite I109, Orlando, FL 32828, USA


Fixed Point Theory and Applications 2009, 2009:957407 doi:10.1155/2009/957407


The electronic version of this article is the complete one and can be found online at: http://www.fixedpointtheoryandapplications.com/content/2009/1/957407


Received: 26 June 2009
Accepted: 30 August 2009
Published: 27 September 2009

© 2009 The Author(s).

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We glance at recent advances in the general theory of maximal (set-valued) monotone mappings and the role they have played in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to the context of solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well, in particular in generalizing the proximal point algorithm considered by Rockafellar (1976) to the case of the relaxed proximal point algorithm of Eckstein and Bertsekas (1992). Even for the linear convergence analysis for the over-relaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model of Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then applying it to first-order evolution equations/inclusions.

1. Introduction and Preliminaries

We begin with a real Hilbert space $X$ with the norm $\|\cdot\|$ and the inner product $\langle \cdot, \cdot \rangle$. We consider the general variational inclusion problem of the following form: find a solution $x \in X$ to

$$0 \in M(x), \tag{1.1}$$

where $M: X \to 2^X$ is a set-valued mapping on $X$.

In the first part, Rockafellar [1] introduced the proximal point algorithm and examined its general convergence and rate of convergence while solving (1.1), by showing, when $M$ is maximal monotone, that the sequence $\{x^k\}$ generated for an initial point $x^0$ by

$$x^{k+1} \approx J^M_{c_k}(x^k) = (I + c_k M)^{-1}(x^k) \tag{1.2}$$

converges weakly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where $\{c_k\}$ is a sequence of positive real numbers bounded away from zero. In the second part, using the first part and further amending the proximal point algorithm, he succeeded in achieving linear convergence. It follows from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem

$$0 \in M(x) + c_k^{-1}(x - x^k). \tag{1.3}$$
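For illustration only, the following minimal sketch (ours, not from [1]) runs the exact proximal point iteration (1.2) in one dimension for $M = \partial|\cdot|$, whose resolvent $(I + cM)^{-1}$ is the classical soft-thresholding map; the function names are hypothetical.

```python
import numpy as np

def proximal_point(resolvent, x0, c, n_iters=50):
    """Exact proximal point iteration x_{k+1} = (I + c_k M)^{-1}(x_k).

    resolvent(x, c) returns (I + c M)^{-1}(x); c is a sequence of positive
    step sizes bounded away from zero, as the convergence theory requires.
    """
    x = x0
    for k in range(n_iters):
        x = resolvent(x, c[k])
    return x

# Illustrative choice: M = subdifferential of f(x) = |x|, whose resolvent
# is the soft-thresholding map; the unique zero of M is x = 0.
def resolvent_abs(x, c):
    return np.sign(x) * max(abs(x) - c, 0.0)

print(proximal_point(resolvent_abs, 5.0, [1.0] * 50))  # 0.0, the zero of M
```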

As a matter of fact, Rockafellar demonstrated the weak convergence and the strong convergence separately in two theorems; for the strong convergence, the further imposition of the Lipschitz continuity of $M^{-1}$ at 0 plays the crucial part. Let us recall these results.

Theorem 1.1 (see [1]).

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} \approx J^M_{c_k}(x^k) = (I + c_k M)^{-1}(x^k) \tag{1.4}$$

such that

$$\|x^{k+1} - J^M_{c_k}(x^k)\| \le \varepsilon_k, \tag{1.5}$$

where $\varepsilon_k \ge 0$, $\sum_{k=0}^{\infty} \varepsilon_k < \infty$, and the sequence $\{c_k\}$ of positive real numbers is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then the sequence $\{x^k\}$ converges weakly to $x^*$ for $x^0 \in X$ with

$$0 \in M(x^*). \tag{1.6}$$

Remark 1.2.

Note that Rockafellar [1], in Theorem 1.1, pointed out by a counterexample that the condition

$$\sum_{k=0}^{\infty} \varepsilon_k < \infty \tag{1.7}$$

is crucial; otherwise we may end up with a nonconvergent sequence even with just $\varepsilon_k \to 0$ and $X$ one dimensional. Consider any maximal monotone mapping $M$ such that the set $M^{-1}(0)$ of its zeros, which is known always to be convex, contains more than one element. Then it turns out that one can construct a nonconvergent sequence $\{x^k\}$ such that

(1.8)

while

(1.9)

The situation changes if the convex function attains its minimum nonuniquely.

Next we look, unlike in Theorem 1.1, at [1, Theorem 2], in which Rockafellar achieved linear convergence of the sequence by assuming, in addition, the Lipschitz continuity of $M^{-1}$ at 0.

Theorem 1.3 (see [1]).

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} \approx J^M_{c_k}(x^k) = (I + c_k M)^{-1}(x^k) \tag{1.10}$$

such that

$$\|x^{k+1} - J^M_{c_k}(x^k)\| \le \varepsilon_k, \tag{1.11}$$

where $\varepsilon_k \ge 0$, $\sum_{k=0}^{\infty} \varepsilon_k < \infty$, and $\{c_k\}$ is bounded away from zero. Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$. In addition, let $M^{-1}$ be Lipschitz continuous at 0 with modulus $a \ge 0$, and

(1.12)

Then the sequence $\{x^k\}$ converges linearly to $x^*$ for $x^0 \in X$ with

(1.13)

where

(1.14)

Later on, Rockafellar [1] applied Theorem 1.1 to a minimization problem regarding a function $f: X \to (-\infty, +\infty]$, where $f$ is lower semicontinuous, convex, and proper, by taking $M = \partial f$, the subdifferential of $f$. It is well known that in this situation $\partial f$ is maximal monotone, and further that

$$0 \in \partial f(x) \iff f(x) = \min_{y \in X} f(y), \tag{1.15}$$

or

$$x = (I + c_k \partial f)^{-1}(x) \iff f(x) = \min_{y \in X} f(y). \tag{1.16}$$

As a specialization, we have

$$x^{k+1} \approx \arg\min_{y \in X} \left\{ f(y) + \frac{1}{2 c_k} \|y - x^k\|^2 \right\}. \tag{1.17}$$

That means that the proximal point algorithm for $M = \partial f$ is a minimizing method for $f$.
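As a worked illustration of this specialization (our example, not the paper's), the proximal step for $f(x) = \frac{1}{2}(x - 3)^2$ has the closed form $(x + 3c)/(1 + c)$, and the iteration drives $f$ down to its minimum:

```python
# f(x) = 0.5*(x - 3)^2, so M = grad f with M(x) = x - 3, and the resolvent
# (I + cM)^{-1}(x) = (x + 3c)/(1 + c) is exactly the proximal map of c*f.
f = lambda x: 0.5 * (x - 3.0) ** 2
prox = lambda x, c: (x + 3.0 * c) / (1.0 + c)

x, c = 10.0, 1.0
values = []
for _ in range(10):
    x = prox(x, c)            # proximal point step, as in (1.17)
    values.append(f(x))
print(values)  # decreasing toward min f = 0, attained at the minimizer x = 3
```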

There is an abundance of literature on proximal point algorithms with applications, mostly following the work of Rockafellar [1], but we focus greatly on the work of Eckstein and Bertsekas [2], where they relaxed the proximal point algorithm in the following form and applied it to the Douglas-Rachford splitting method. Now let us have a look at the relaxed proximal point algorithm introduced and studied in [2].

Algorithm 1.4.

Let $M: X \to 2^X$ be a set-valued maximal monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

$$x^{k+1} = (1 - \rho_k)\, x^k + \rho_k\, w^k, \tag{1.18}$$

where $w^k$ is such that

$$\|w^k - (I + c_k M)^{-1}(x^k)\| \le \varepsilon_k, \tag{1.19}$$

and $\{\varepsilon_k\}$, $\{\rho_k\}$, $\{c_k\}$ are scalar sequences.

As a matter of fact, Eckstein and Bertsekas [2] applied Algorithm 1.4 to approximate a weak solution to (1.1); in other words, they established Theorem 1.1 using the relaxed proximal point algorithm instead.
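A minimal sketch of the relaxed iteration (1.18)-(1.19), assuming exact resolvent evaluations ($\varepsilon_k = 0$); the one-dimensional test operator and the names are illustrative choices of ours.

```python
def relaxed_ppa(resolvent, x0, c, rho, n_iters=100):
    """Relaxed proximal point iteration of Eckstein-Bertsekas type:
    x_{k+1} = (1 - rho_k) x_k + rho_k (I + c_k M)^{-1}(x_k)."""
    x = x0
    for k in range(n_iters):
        x = (1.0 - rho[k]) * x + rho[k] * resolvent(x, c[k])
    return x

# M(x) = x - 3 has the resolvent (I + cM)^{-1}(x) = (x + 3c)/(1 + c).
resolvent = lambda x, c: (x + 3.0 * c) / (1.0 + c)
# rho_k = 1.5 over-relaxes each step yet stays within (0, 2), as (1.20) asks.
print(relaxed_ppa(resolvent, 10.0, [1.0] * 100, [1.5] * 100))  # ~3.0
```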

Theorem 1.5 (see [2, Theorem  3]).

Let $M: X \to 2^X$ be a set-valued maximal monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by Algorithm 1.4. If the scalar sequences $\{\varepsilon_k\}$, $\{\rho_k\}$, and $\{c_k\}$ satisfy

$$\sum_{k=0}^{\infty} \varepsilon_k < \infty, \qquad 0 < \inf_k \rho_k \le \sup_k \rho_k < 2, \qquad \inf_k c_k > 0, \tag{1.20}$$

then the sequence $\{x^k\}$ converges weakly to a zero of $M$.

Convergence analysis for Algorithm 1.4 is achieved using the notion of the firm nonexpansiveness of the resolvent operator $(I + cM)^{-1}$. Somehow, they did not consider applying Algorithm 1.4 to Theorem 1.3 for the case of linear convergence. The plain nonexpansiveness of the resolvent operator poses the prime difficulty for algorithmic convergence, and this, perhaps, was the real steering for Rockafellar toward the Lipschitz continuity of $M^{-1}$ instead. That is why the Yosida approximation turned out to be more effective in this scenario, because the Yosida approximation

$$M_c = \frac{1}{c}\left[I - (I + cM)^{-1}\right] \tag{1.21}$$

takes care of the Lipschitz continuity issue.
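The next sketch (ours) evaluates the Yosida approximation (1.21) through the resolvent and numerically checks the $1/c$-Lipschitz bound of (1.25) below for $M = \partial|\cdot|$ on a grid; the grid check is a heuristic, not a proof.

```python
import numpy as np

def yosida(resolvent, x, c):
    """Yosida approximation M_c(x) = (x - (I + cM)^{-1}(x)) / c, as in (1.21)."""
    return (x - resolvent(x, c)) / c

# M = subdifferential of |x|; its resolvent is the soft-thresholding map.
resolvent = lambda x, c: np.sign(x) * max(abs(x) - c, 0.0)

c = 0.5
xs = np.linspace(-2.0, 2.0, 41)
vals = [yosida(resolvent, x, c) for x in xs]
slopes = [abs(vals[i + 1] - vals[i]) / (xs[i + 1] - xs[i])
          for i in range(len(xs) - 1)]
print(max(slopes) <= 1.0 / c + 1e-9)  # True: Lipschitz modulus at most 1/c
```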

As we look back into the literature, general maximal monotonicity has played a central role in studying convex programming as well as variational inequalities/inclusions. Later it turned out that one of the most fundamental algorithms applied to solve these problems is the proximal point algorithm. In [2], Eckstein and Bertsekas have shown that much of the theory of the relaxed proximal point algorithm and related algorithms can be carried over to the Douglas-Rachford splitting method and its specializations, for instance, the alternating direction method of multipliers.

Just recently, Verma [3] generalized the relaxed proximal point algorithm and applied it to the approximation solvability of variational inclusion problems of the form (1.1). Recently, a great deal of research on the solvability of inclusion problems has been carried out using resolvent operator techniques, which have applications to other problems, such as equilibrium problems in economics, optimization and control theory, operations research, and mathematical programming.

In this survey, we first discuss in detail the history of proximal point algorithms and their applications to general nonlinear variational inclusion problems, and then we recall some significant developments, especially the relaxation of proximal point algorithms with applications to the Douglas-Rachford splitting method. At the second stage, we turn our attention to over-relaxed proximal point algorithms and their contribution to linear convergence. We start with some introductory material on the over-relaxed (η)-proximal point algorithm based on the notion of maximal (η)-monotonicity, and recall some investigations on the approximation solvability of a general class of nonlinear inclusion problems involving maximal (η)-monotone mappings in a Hilbert space setting. As a matter of fact, we examine the convergence analysis of the over-relaxed (η)-proximal point algorithm for solving a class of nonlinear inclusions. Also, several results on the generalized firm nonexpansiveness and the generalized resolvent mapping are given. Furthermore, we explore the real impact of recently obtained results on the celebrated work of Rockafellar, most importantly in the case of over-relaxed (or super-relaxed) proximal point algorithms. For more details, we refer the reader to [1–55].

We note that the solution set for (1.1) turns out to be the same as that of the Yosida inclusion

$$0 = M_\rho(x), \tag{1.22}$$

where $M_\rho = \rho^{-1}\left[I - (I + \rho M)^{-1}\right]$ is the Yosida regularization of $M$, while there is an equivalent form $M_\rho = (M^{-1} + \rho I)^{-1}$, which is characterized as the Yosida approximation of $M$ with parameter $\rho > 0$. It seems in certain ways that it is easier to solve the Yosida inclusion than (1.1). In other words, $M_\rho$ provides better solvability conditions under the right choice of $\rho$ than $M$ itself. To prove this assertion, let us recall the following existence theorem.

Theorem 1.6.

Let $M: X \to 2^X$ be a set-valued maximal monotone mapping on $X$. Then the following statements are equivalent.

(i) An element $u \in X$ is a solution to $0 \in M(u)$.

(ii) $0 = M_\rho(u)$ for $\rho > 0$.

Assume that $u$ is a solution to $0 \in M(u)$. Then we have

$$0 \in M(u) \iff u = (I + \rho M)^{-1}(u) \iff M_\rho(u) = \rho^{-1}\left[u - (I + \rho M)^{-1}(u)\right] = 0. \tag{1.23}$$
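As a quick numerical illustration of Theorem 1.6 (our example), the zero set of $M = \partial|\cdot|$ is $\{0\}$, and the Yosida approximation $M_\rho$ vanishes exactly there:

```python
import numpy as np

resolvent = lambda x, rho: np.sign(x) * max(abs(x) - rho, 0.0)
yosida = lambda x, rho: (x - resolvent(x, rho)) / rho

rho = 0.7
# M_rho(x) = 0 exactly at the zero x = 0 of M, and nowhere else on this grid.
zeros = [x for x in np.linspace(-2.0, 2.0, 81) if abs(yosida(x, rho)) < 1e-12]
print(zeros)  # [0.0] (up to floating point), matching Theorem 1.6
```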

On the other hand, $M_\rho$ has also been applied to first-order evolution equations/inclusions in Hilbert space as well as in Banach space settings. As in our present situation the resolvent operator is empowered by (η)-maximal monotonicity, the Yosida approximation can be generalized to the context of solving first-order evolution equations/inclusions. In Zeidler [52, Lemma 31.7], it is shown that the Yosida approximation is $(2/\rho)$-Lipschitz continuous, that is,

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{2}{\rho}\,\|u - v\| \quad \forall\, u, v \in X, \tag{1.24}$$

where this inequality is based on the nonexpansiveness of the resolvent operator $(I + \rho M)^{-1}$, though the result does not seem to be much application oriented; if instead we apply the firm nonexpansiveness of the resolvent operator $(I + \rho M)^{-1}$, we can achieve, as applied in [5], a more application-oriented result as follows:

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{1}{\rho}\,\|u - v\| \quad \forall\, u, v \in X, \tag{1.25}$$

where the Lipschitz constant is $1/\rho$.

Proof.

For any $u, v \in X$, we have

$$M_\rho(u) - M_\rho(v) = \rho^{-1}\left[(u - v) - \left(J^M_\rho(u) - J^M_\rho(v)\right)\right]. \tag{1.26}$$

Based on this equality and the firm nonexpansiveness of $J^M_\rho = (I + \rho M)^{-1}$, we derive

$$\begin{aligned} \|M_\rho(u) - M_\rho(v)\|^2 &= \rho^{-2}\left[\|u - v\|^2 - 2\left\langle J^M_\rho(u) - J^M_\rho(v),\, u - v\right\rangle + \left\|J^M_\rho(u) - J^M_\rho(v)\right\|^2\right] \\ &\le \rho^{-2}\left[\|u - v\|^2 - \left\|J^M_\rho(u) - J^M_\rho(v)\right\|^2\right] \le \rho^{-2}\,\|u - v\|^2. \end{aligned} \tag{1.27}$$

Thus, we have

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{1}{\rho}\,\|u - v\|. \tag{1.28}$$

This completes the proof.

We note that, from the applications' point of view, it seems that the result

$$\left\langle M_\rho(u) - M_\rho(v),\, u - v \right\rangle \ge \rho\,\|M_\rho(u) - M_\rho(v)\|^2, \tag{1.29}$$

that is, that $M_\rho$ is $(\rho)$-cocoercive, is relatively more useful than the nonexpansive form

$$\|M_\rho(u) - M_\rho(v)\| \le \frac{1}{\rho}\,\|u - v\|. \tag{1.30}$$

It is well known that when $M$ is maximal monotone, the resolvent operator $J^M_\rho = (I + \rho M)^{-1}$ is single valued and globally Lipschitz continuous with the best constant 1. Furthermore, the inverse resolvent identity is satisfied:

$$(I + \rho M)^{-1}(u) + \rho\left(I + \rho^{-1} M^{-1}\right)^{-1}\!\left(\rho^{-1} u\right) = u \quad \forall\, u \in X. \tag{1.31}$$

Indeed, the Yosida approximation $M_\rho$ and its equivalent form $(M^{-1} + \rho I)^{-1}$ are related to this identity. Let us consider

$$v = J^M_\rho(u) = (I + \rho M)^{-1}(u). \tag{1.32}$$

Suppose that $w = \rho^{-1}(u - v) = M_\rho(u)$; then $w \in M(v)$, so $v \in M^{-1}(w)$, and we have

$$u = v + \rho w \in \left(M^{-1} + \rho I\right)(w), \quad \text{that is,} \quad M_\rho(u) = \left(M^{-1} + \rho I\right)^{-1}(u). \tag{1.33}$$

On the other hand, we have the inverse resolvent identity that lays the foundation of the Yosida approximation.

Lemma 1.7 (see [26, Lemma  12.14]).

All mappings $M: X \to 2^X$ satisfy

$$(I + M)^{-1} + \left(I + M^{-1}\right)^{-1} = I. \tag{1.34}$$

Proof.

We include the proof, though it is similar to that of the above identity. Assume that $v \in (I + M)^{-1}(u)$; then $u - v \in M(v)$, that is, $v \in M^{-1}(u - v)$, and hence

$$u \in \left(I + M^{-1}\right)(u - v), \quad \text{that is,} \quad u - v \in \left(I + M^{-1}\right)^{-1}(u), \tag{1.35}$$

which is the required assertion.
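Lemma 1.7 can be sanity-checked numerically: for the linear map $M(x) = ax$ with $a > 0$, we have $M^{-1}(y) = y/a$ and both resolvents in closed form. The snippet below is a simple verification of ours:

```python
# Check (I + M)^{-1} + (I + M^{-1})^{-1} = I for M(x) = a*x, M^{-1}(y) = y/a.
a = 3.0
res_M = lambda x: x / (1.0 + a)              # (I + M)^{-1}
res_M_inv = lambda x: x / (1.0 + 1.0 / a)    # (I + M^{-1})^{-1}

for x in (-2.0, 0.5, 7.0):
    print(abs(res_M(x) + res_M_inv(x) - x) < 1e-12)  # True for every x
```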

Note that when $M$ is maximal monotone, the mappings

$$(I + M)^{-1} \quad \text{and} \quad \left(I + M^{-1}\right)^{-1} \tag{1.36}$$

are single valued, in fact maximal monotone and nonexpansive.

The contents of the paper are organized as follows. Section 1 deals with the general historical development of the relaxed proximal point algorithm and its variants in conjunction with maximal (η)-monotonicity, and with the approximation solvability of a class of nonlinear inclusion problems using the convergence analysis for the proximal point algorithm as well as for the relaxed proximal point algorithm. Section 2 introduces and derives some results on unifying maximal (η)-monotonicity and the generalized firm nonexpansiveness of the generalized resolvent operator. In Section 3, the role of the over-relaxed (η)-proximal point algorithm is examined in detail in terms of its applications to approximating the solution of the inclusion problem (1.1). Finally, Section 4 deals with some important specializations that connect the results on general maximal monotonicity, especially to several aspects of linear convergence.

2. General Maximal η-Monotonicity

In this section we discuss some results based on basic properties of maximal (η)-monotonicity, and then we derive some results involving (η)-monotonicity and the generalized firm nonexpansiveness. Let $X$ denote a real Hilbert space with the norm $\|\cdot\|$ and inner product $\langle \cdot, \cdot \rangle$. Let $M: X \to 2^X$ be a multivalued mapping on $X$. We will denote both the map $M$ and its graph by $M$, that is, the set $\{(x, y) : y \in M(x)\}$. This is equivalent to stating that a mapping is any subset $M$ of $X \times X$, with $M(x) = \{y : (x, y) \in M\}$. If $M$ is single valued, we will still use $M(x)$ to represent the unique $y$ such that $(x, y) \in M$ rather than the singleton set $\{y\}$. This interpretation will much depend on the context. The domain of a map $M$ is defined (as its projection onto the first argument) by

$$\operatorname{dom}(M) = \{x \in X : \exists\, y \in X : (x, y) \in M\}. \tag{2.1}$$

$\operatorname{dom}(M) = X$ will denote the full domain of $M$, and the range of $M$ is defined by

$$\operatorname{range}(M) = \{y \in X : \exists\, x \in X : (x, y) \in M\}. \tag{2.2}$$

The inverse $M^{-1}$ of $M$ is $\{(y, x) : (x, y) \in M\}$. For a real number $\rho$ and a mapping $M$, let $\rho M = \{(x, \rho y) : (x, y) \in M\}$. If $L$ and $M$ are any mappings, we define

$$L + M = \{(x, y + z) : (x, y) \in L,\ (x, z) \in M\}. \tag{2.3}$$

Definition 2.1.

Let $M: X \to 2^X$ be a multivalued mapping on $X$. The map $M$ is said to be:

(i) monotone if

$$\langle u^* - v^*, u - v \rangle \ge 0 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.4}$$

(ii) (r)-strongly monotone if there exists a positive constant $r$ such that

$$\langle u^* - v^*, u - v \rangle \ge r\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.5}$$

(iii) strongly monotone if

$$\langle u^* - v^*, u - v \rangle \ge \|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.6}$$

(iv) (r)-strongly pseudomonotone if

$$\langle v^*, u - v \rangle \ge 0 \tag{2.7}$$

implies

$$\langle u^*, u - v \rangle \ge r\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.8}$$

(v) pseudomonotone if

$$\langle v^*, u - v \rangle \ge 0 \tag{2.9}$$

implies

$$\langle u^*, u - v \rangle \ge 0 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.10}$$

(vi) (m)-relaxed monotone if there exists a positive constant $m$ such that

$$\langle u^* - v^*, u - v \rangle \ge -m\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.11}$$

(vii) cocoercive if

$$\langle u^* - v^*, u - v \rangle \ge \|u^* - v^*\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.12}$$

(viii) (γ)-cocoercive if there is a positive constant $\gamma$ such that

$$\langle u^* - v^*, u - v \rangle \ge \gamma\,\|u^* - v^*\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M. \tag{2.13}$$

Definition 2.2.

Let $T: X \to X$ be a mapping on $X$. The map $T$ is said to be:

(i) nonexpansive if

$$\|T(u) - T(v)\| \le \|u - v\| \quad \forall\, u, v \in X, \tag{2.14}$$

(ii) firmly nonexpansive if

$$\|T(u) - T(v)\|^2 \le \langle T(u) - T(v),\, u - v \rangle \quad \forall\, u, v \in X, \tag{2.15}$$

(iii) (c)-firmly nonexpansive if there exists a constant $c > 0$ such that

$$\|T(u) - T(v)\|^2 \le c\,\langle T(u) - T(v),\, u - v \rangle \quad \forall\, u, v \in X. \tag{2.16}$$

In light of Definitions 2.1(vii) and 2.2(ii), the notions of cocoerciveness and firm nonexpansiveness coincide, but they differ in applications much depending on the context.
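As a concrete instance of Definition 2.2(ii) (our illustration), the classical resolvent $J = (I + M)^{-1}$ of the monotone map $M(x) = x - 3$ satisfies the firm nonexpansiveness inequality (2.15):

```python
import numpy as np

rng = np.random.default_rng(0)

# J(x) = (I + M)^{-1}(x) for M(x) = x - 3, obtained by solving y + (y - 3) = x.
J = lambda x: (x + 3.0) / 2.0

ok = True
for _ in range(1000):
    x, y = rng.normal(size=2) * 10.0
    lhs = (J(x) - J(y)) ** 2          # ||J(x) - J(y)||^2
    rhs = (J(x) - J(y)) * (x - y)     # <J(x) - J(y), x - y>
    ok = ok and lhs <= rhs + 1e-12
print(ok)  # True: (2.15) holds for the classical resolvent
```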

Definition 2.3.

A map $\eta: X \times X \to X$ is said to be:

(i) monotone if

$$\langle \eta(u, v),\, u - v \rangle \ge 0 \quad \forall\, u, v \in X, \tag{2.17}$$

(ii) (t)-strongly monotone if there exists a positive constant $t$ such that

$$\langle \eta(u, v),\, u - v \rangle \ge t\,\|u - v\|^2 \quad \forall\, u, v \in X, \tag{2.18}$$

(iii) strongly monotone if

$$\langle \eta(u, v),\, u - v \rangle \ge \|u - v\|^2 \quad \forall\, u, v \in X, \tag{2.19}$$

(iv) (τ)-Lipschitz continuous if there exists a positive constant $\tau$ such that

$$\|\eta(u, v)\| \le \tau\,\|u - v\| \quad \forall\, u, v \in X. \tag{2.20}$$

Definition 2.4.

Let $M: X \to 2^X$ be a multivalued mapping on $X$, and let $\eta: X \times X \to X$ be another mapping. The map $M$ is said to be:

(i) (η)-monotone if

$$\langle u^* - v^*,\, \eta(u, v) \rangle \ge 0 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.21}$$

(ii) (r,η)-strongly monotone if there exists a positive constant $r$ such that

$$\langle u^* - v^*,\, \eta(u, v) \rangle \ge r\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.22}$$

(iii) (η)-strongly monotone if

$$\langle u^* - v^*,\, \eta(u, v) \rangle \ge \|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.23}$$

(iv) (r,η)-strongly pseudomonotone if

$$\langle v^*,\, \eta(u, v) \rangle \ge 0 \tag{2.24}$$

implies

$$\langle u^*,\, \eta(u, v) \rangle \ge r\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.25}$$

(v) (η)-pseudomonotone if

$$\langle v^*,\, \eta(u, v) \rangle \ge 0 \tag{2.26}$$

implies

$$\langle u^*,\, \eta(u, v) \rangle \ge 0 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.27}$$

(vi) (m,η)-relaxed monotone if there exists a positive constant $m$ such that

$$\langle u^* - v^*,\, \eta(u, v) \rangle \ge -m\,\|u - v\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M, \tag{2.28}$$

(vii) (γ,η)-cocoercive if there is a positive constant $\gamma$ such that

$$\langle u^* - v^*,\, \eta(u, v) \rangle \ge \gamma\,\|u^* - v^*\|^2 \quad \forall\, (u, u^*), (v, v^*) \in M. \tag{2.29}$$

Definition 2.5.

A map $M: X \to 2^X$ is said to be maximal (η)-monotone if

(1) $M$ is (η)-monotone,

(2) $R(I + \rho M) = X$ for $\rho > 0$.

Proposition 2.6.

Let $\eta: X \times X \to X$ be an (r)-strongly monotone mapping, and let $M: X \to 2^X$ be a maximal (η)-monotone mapping. Then $I + \rho M$ is maximal (η)-monotone for $\rho > 0$, where $I$ is the identity mapping.

Proof.

The proof follows on applying Definition 2.5.

Proposition 2.7 (see [4]).

Let $\eta: X \times X \to X$ be (r)-strongly monotone, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the generalized resolvent operator $(I + \rho M)^{-1}$ is single valued, where $I$ is the identity mapping.

Proof.

For a given $u \in X$, consider $x, y \in (I + \rho M)^{-1}(u)$ for $\rho > 0$. Since $M$ is maximal (η)-monotone, we have

$$\rho^{-1}(u - x) \in M(x), \qquad \rho^{-1}(u - y) \in M(y). \tag{2.30}$$

Now using the (η)-monotonicity of $M$, it follows that

$$\left\langle \rho^{-1}(u - x) - \rho^{-1}(u - y),\, \eta(x, y) \right\rangle = \rho^{-1}\,\langle y - x,\, \eta(x, y) \rangle \ge 0. \tag{2.31}$$

Since $\eta$ is (r)-strongly monotone, it implies $x = y$. Thus, $(I + \rho M)^{-1}$ is single valued.

Definition 2.8.

Let $\eta: X \times X \to X$ be (r)-strongly monotone, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the generalized resolvent operator $J^M_\rho: X \to X$ is defined by

$$J^M_\rho(u) = (I + \rho M)^{-1}(u). \tag{2.32}$$

Proposition 2.9 (see [4]).

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be (r)-strongly monotone. Then the resolvent operator associated with $M$ and defined by

$$J^M_\rho(u) = (I + \rho M)^{-1}(u) \quad \forall\, u \in X \tag{2.33}$$

satisfies the following:

$$\left\langle u - v,\, \eta\!\left(J^M_\rho(u), J^M_\rho(v)\right) \right\rangle \ge r\,\left\| J^M_\rho(u) - J^M_\rho(v) \right\|^2. \tag{2.34}$$

Proof.

For any $u, v \in X$, it follows from the definition of the resolvent operator $J^M_\rho$ that

$$\rho^{-1}\left(u - J^M_\rho(u)\right) \in M\!\left(J^M_\rho(u)\right), \qquad \rho^{-1}\left(v - J^M_\rho(v)\right) \in M\!\left(J^M_\rho(v)\right). \tag{2.35}$$

Since $M$ is (η)-monotone, we have

$$\left\langle \left(u - J^M_\rho(u)\right) - \left(v - J^M_\rho(v)\right),\, \eta\!\left(J^M_\rho(u), J^M_\rho(v)\right) \right\rangle \ge 0. \tag{2.36}$$

In light of (2.36) and the (r)-strong monotonicity of $\eta$, we have

$$\left\langle u - v,\, \eta\!\left(J^M_\rho(u), J^M_\rho(v)\right) \right\rangle \ge \left\langle J^M_\rho(u) - J^M_\rho(v),\, \eta\!\left(J^M_\rho(u), J^M_\rho(v)\right) \right\rangle \ge r\,\left\|J^M_\rho(u) - J^M_\rho(v)\right\|^2. \tag{2.37}$$

Proposition 2.10 (see [4]).

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be (r)-strongly monotone.

If, in addition,

(2.38)

then, for $\rho > 0$, one has

(2.39)

where

(2.40)

Proof.

We include the proof for the sake of completeness. To prove (2.39), we apply (2.38) to Proposition 2.9, and we get

(2.41)

It further follows that

(2.42)

Specializing the constants in Proposition 2.10, we have the following.

Proposition 2.11.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta: X \times X \to X$ be (r)-strongly monotone.

If, in addition, one supposes that

(2.43)

then, for $\rho > 0$, one has

(2.44)

where

(2.45)

For another choice of the constants in Proposition 2.10, we find a result of interest as follows.

Proposition 2.12.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta$ be strongly monotone.

If, in addition, one supposes that

(2.46)

then, for $\rho > 0$, one has

(2.47)

where

(2.48)

For a further specialization in Proposition 2.10, we have the following result.

Proposition 2.13.

Let $X$ be a real Hilbert space, let $M: X \to 2^X$ be maximal (η)-monotone, and let $\eta$ be strongly monotone.

If, in addition, one assumes that

(2.49)

then, for $\rho > 0$, one has

(2.50)

where

(2.51)

3. The Over-Relaxed (η)-Proximal Point Algorithm

This section deals with the over-relaxed (η)-proximal point algorithm and its application to the approximation solvability of the inclusion problem (1.1) based on maximal (η)-monotonicity. Furthermore, some results connecting (η)-monotonicity and the corresponding resolvent operator are established, which generalize the results on firm nonexpansiveness [2], while auxiliary results on maximal (η)-monotonicity and general maximal monotonicity are obtained.

Theorem 3.1.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal (η)-monotone. Then the following statements are mutually equivalent.

(i) An element $u \in X$ is a solution to (1.1).

(ii) For a $\rho > 0$, one has

$$u = J^M_\rho(u), \tag{3.1}$$

where

$$J^M_\rho(u) = (I + \rho M)^{-1}(u). \tag{3.2}$$

Proof.

It follows from the definition of the generalized resolvent operator $J^M_\rho$ corresponding to $M$.

Note that Theorem 3.1 generalizes [2, Lemma 2] to the case of a maximal (η)-monotone mapping.

Next, we present a generalization of the relaxed proximal point algorithm [3] based on maximal (η)-monotonicity.

Algorithm 3.2 (see [4]).

Let $M: X \to 2^X$ be a set-valued maximal (η)-monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

(3.3)

and satisfies

(3.4)

where , and

(3.5)

Here

(3.6)

are scalar sequences such that .
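To convey the shape of the scheme, here is a minimal sketch of ours that specializes $\eta(u, v) = u - v$, so the generalized resolvent reduces to the classical $(I + \rho M)^{-1}$, takes the inexactness to zero, and allows a relaxation factor beyond the classical bound 2; all parameter choices are illustrative assumptions, not prescriptions from Algorithm 3.2.

```python
def super_relaxed_ppa(resolvent, x0, c, lam, n_iters=60):
    """Super-relaxed step x_{k+1} = (1 - lam_k) x_k + lam_k J_{c_k}(x_k),
    where J_c = (I + cM)^{-1} and lam_k may exceed the classical bound 2."""
    x = x0
    for k in range(n_iters):
        x = (1.0 - lam[k]) * x + lam[k] * resolvent(x, c[k])
    return x

# M(x) = 2x is strongly monotone, with resolvent (I + cM)^{-1}(x) = x/(1 + 2c).
resolvent = lambda x, c: x / (1.0 + 2.0 * c)
# lam_k = 2.2 > 2 still converges here thanks to strong monotonicity; the
# nonexpansive analysis behind Theorem 1.5 only covers lam_k < 2.
print(super_relaxed_ppa(resolvent, 4.0, [1.0] * 60, [2.2] * 60))  # ~0.0
```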

Algorithm 3.3.

Let $M: X \to 2^X$ be a set-valued maximal (η)-monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

(3.7)

and satisfies

(3.8)

where , and

(3.9)

are scalar sequences such that .

For a special choice of the scalar sequences in Algorithm 3.2, we have the following.

Algorithm 3.4.

Let $M: X \to 2^X$ be a set-valued maximal (η)-monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

(3.10)

and satisfies

(3.11)

where , and

(3.12)

Here

(3.13)

are scalar sequences.

In the following result [4], we observe that Theorems 1.1 and 1.3 are unified and generalized to the case of (η)-maximal monotonicity and the super-relaxed proximal point algorithm. Also, we notice that this result in certain respects demonstrates the importance of the firm nonexpansiveness rather than of the nonexpansiveness.

Theorem 3.5 (see [4]).

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal (η)-monotone, and let $x^*$ be a zero of $M$. Let $\eta: X \times X \to X$ be (r)-strongly monotone. Furthermore, assume

(3.14)

Let the sequence $\{x^k\}$ be generated by the iterative procedure

(3.15)

and satisfies

(3.16)

where , , , , and .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

(3.17)

where and

(3.18)

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{3.19}$$

Here

(3.20)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(3.21)

where , , and sequences and satisfy , , , and .

Proof.

Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

(3.22)

Therefore, . Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of , and hence a zero of .

Next, the proof of (3.17) follows from a regular manipulation, and the following equality:

(3.23)

Before we start establishing linear convergence of the sequence , we express in light of Algorithm 3.2 as

(3.24)

Now we begin verifying the boundedness of the sequence leading to .

Next, we estimate using Proposition 2.10

(3.25)

Since under the assumptions , it follows that

(3.26)

where .

Moreover,

(3.27)

Now we find the estimate leading to the boundedness of the sequence ,

(3.28)

Thus, the sequence is bounded.

We further examine the estimate

(3.29)

where .

Since is summable, so is , and hence . As , we have that

(3.30)

that is, .

Now we turn our attention (using the previous argument) to linear convergence of the sequence . Since , it implies for large that . Moreover, for and . Therefore, in light of (3.19), by taking and , we have

(3.31)

Applying (3.17), we arrive at

(3.32)

where .

Since , we estimate using (3.32) and that

(3.33)

where .

Hence, we have

(3.34)

where

(3.35)

for and .

Since Algorithm 3.2 ensures

(3.36)

we have

(3.37)

It follows that

(3.38)

where

(3.39)

for setting .
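For intuition on the linear rate (our worked example, not part of the proof), take the strongly monotone one-dimensional operator $M(x) = ax$ with unique zero $x^* = 0$; each exact over-relaxed step contracts the error by the explicit factor $|1 - \lambda + \lambda/(1 + ac)|$, which the computation below confirms:

```python
# M(x) = a*x, J_c(x) = x/(1 + a*c); over-relaxed step with factor lam.
a, c, lam = 2.0, 1.0, 1.8
predicted = abs(1.0 - lam + lam / (1.0 + a * c))  # per-step contraction

x, errors = 5.0, []
for _ in range(8):
    x = (1.0 - lam) * x + lam * x / (1.0 + a * c)
    errors.append(abs(x))
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(predicted, ratios)  # every ratio equals the predicted factor 0.2
```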

Theorem 3.6.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal (η)-monotone. Let $\eta: X \times X \to X$ be (r)-strongly monotone. For an arbitrarily chosen initial point $x^0$, let the sequence $\{x^k\}$ be bounded (in the sense that there exists at least one solution to $0 \in M(x)$) and generated by Algorithm 3.3 as

(3.40)

with

(3.41)

where , and sequences

(3.42)

satisfy , , , and .

In addition, one assumes

(3.43)

Then the sequence $\{x^k\}$ converges weakly to a solution of (1.1).

Proof.

The proof is similar to that of the first part of Theorem 3.5 on applying the generalized representation lemma.

Theorem 3.7.

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal (η)-monotone, and let $x^*$ be a zero of $M$. Let $\eta: X \times X \to X$ be (r)-strongly monotone. Let the sequence $\{x^k\}$ be generated by the iterative procedure

(3.44)

and satisfies

(3.45)

where , , , , and .

Furthermore, assume

(3.46)

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then

(3.47)

where

(3.48)

In addition, assume that the sequence $\{x^k\}$ is generated by Algorithm 3.4 as well, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{3.49}$$

Here

(3.50)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(3.51)

where , , and sequences and satisfy , , and .

Proof.

The proof is similar to that of Theorem 3.5.

4. Some Specializations

Finally, we examine some significant specializations of Theorem 3.5 in this section. Let us start with the case corresponding to Proposition 2.11.

Theorem 4.1.

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal (η)-monotone, and let $x^*$ be a zero of $M$. Let $\eta: X \times X \to X$ be (r)-strongly monotone. Furthermore, assume

(4.1)

Let the sequence $\{x^k\}$ be generated by the iterative procedure

(4.2)

and satisfies

(4.3)

where , , , , and .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

(4.4)

where and

(4.5)

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{4.6}$$

Here

(4.7)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(4.8)

where , , and sequences and satisfy , , and .

Proof.

We include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

(4.9)

Therefore, . Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of , and hence a zero of .

Next, the proof of (4.4) follows from a regular manipulation, and the following equality:

(4.10)

Before we start establishing linear convergence of the sequence , we express in light of Algorithm 3.2 as

(4.11)

Now we begin verifying the boundedness of the sequence leading to .

Next, we estimate using Proposition 2.10

(4.12)

Since under the assumptions , it follows that

(4.13)

where .

Moreover,

(4.14)

Now we find the estimate leading to the boundedness of the sequence ,

(4.15)

Thus, the sequence is bounded.

We further examine the estimate

(4.16)

where .

Since is summable, so is , and hence . As , we have that

(4.17)

that is, .

Now we turn our attention (using the previous argument) to linear convergence of the sequence . Since , it implies for large that . Moreover, for and . Therefore, in light of (4.6), by taking and , we have

(4.18)

Applying (4.4), we arrive at

(4.19)

where .

Since , we estimate using (4.1) and that

(4.20)

where .

Hence, we have

(4.21)

where

(4.22)

for and .

Since Algorithm 3.2 ensures

(4.23)

we have

(4.24)

It follows that

(4.25)

where

(4.26)

for setting .

Second, we examine Theorem 3.5 in the special case corresponding to Proposition 2.12; in this case there is no need to include the proof.

Theorem 4.2.

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal (η)-monotone, and let $x^*$ be a zero of $M$. Let $\eta$ be strongly monotone. Furthermore, assume

(4.27)

Let the sequence $\{x^k\}$ be generated by the iterative procedure

(4.28)

and satisfies

(4.29)

where , , , , and .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

(4.30)

where and

(4.31)

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{4.32}$$

Here

(4.33)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(4.34)

where , , and sequences and satisfy , , and .

Finally, we consider the case of Theorem 3.5 that uses Proposition 2.13. In this situation, the inclusion of the complete proof seems to be appropriate.

Theorem 4.3.

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal (η)-monotone, and let $x^*$ be a zero of $M$. Let $\eta$ be strongly monotone. Furthermore, assume

(4.35)

Let the sequence $\{x^k\}$ be generated by the iterative procedure

(4.36)

and satisfies

(4.37)

where , , , , and .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

(4.38)

where

(4.39)

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 3.2 as well, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{4.40}$$

Here

(4.41)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(4.42)

where , , and sequences and satisfy , , , and .

Proof.

We include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

(4.43)

Therefore, . Then, in light of Theorem 3.1, any solution to (1.1) is a fixed point of , and hence a zero of .

Next, the proof of (4.38) follows from a regular manipulation, and the following equality:

(4.44)

Before we start establishing linear convergence of the sequence , we express in light of Algorithm 3.2 as

(4.45)

Now we begin examining the boundedness of the sequence leading to .

Next, we estimate using Proposition 2.13 that

(4.46)

Since under the assumptions , it follows that

(4.47)

Moreover,

(4.48)

Now we find the estimate leading to the boundedness of the sequence ,

(4.49)

Thus, the sequence is bounded.

We further examine the estimate

(4.50)

where .

Since is summable, so is , and hence . As , we have that

(4.51)

that is, . Now we find the estimate leading to the boundedness of the sequence ,

(4.52)

Thus, the sequence is bounded.

We further examine the estimate

(4.53)

where .

Since is summable, so is , and hence . As , we have that

(4.54)

that is, .

Now we turn our attention (using the previous argument) to linear convergence of the sequence . Since , it implies for large that . Moreover, for and . Therefore, in light of (4.40), by taking and , we have

(4.55)

Applying (4.38), we arrive at

(4.56)

where .

Since , we estimate using (4.35) and that

(4.57)

Hence, we have

(4.58)

where

(4.59)

for and .

Since Algorithm 3.2 ensures

(4.60)

we have

(4.61)

It follows that

(4.62)

where

(4.63)

for setting .

Note that if we set $\eta(u, v) = u - v$ in Theorem 4.3, we get a result connecting [2] to the case of a linear convergence setting, but the algorithm remains over-relaxed (or super-relaxed). In this context, we state the following results before we start examining Theorem 4.7, the main result on linear convergence in the maximal monotone setting. Note that, based on Proposition 4.6, the notions of cocoercivity and firm nonexpansiveness coincide, though it is well known that they may differ in usage much depending on the context.

Theorem 4.4.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal monotone. Then the following statements are mutually equivalent.

(i) An element $u \in X$ is a solution to (1.1).

(ii) For a $\rho > 0$, one has

$$u = J^M_\rho(u), \tag{4.64}$$

where

$$J^M_\rho(u) = (I + \rho M)^{-1}(u). \tag{4.65}$$

Proof.

It follows from the definition of the resolvent operator $J^M_\rho$ corresponding to $M$.

Next, we present the super-relaxed proximal point algorithm based on maximal monotonicity.

Algorithm 4.5.

Let $M: X \to 2^X$ be a set-valued maximal monotone mapping on $X$ with $M^{-1}(0) \ne \emptyset$, and let the sequence $\{x^k\}$ be generated by the iterative procedure

(4.66)

and satisfies

(4.67)

where , and

(4.68)

Here

(4.69)

are scalar sequences such that .

Proposition 4.6.

Let $X$ be a real Hilbert space, and let $M: X \to 2^X$ be maximal monotone. Then, for $\rho > 0$, one has

$$\left\| J^M_\rho(u) - J^M_\rho(v) \right\|^2 \le \left\langle J^M_\rho(u) - J^M_\rho(v),\, u - v \right\rangle \quad \forall\, u, v \in X, \tag{4.70}$$

where

$$J^M_\rho(u) = (I + \rho M)^{-1}(u). \tag{4.71}$$

Theorem 4.7.

Let $X$ be a real Hilbert space. Let $M: X \to 2^X$ be maximal monotone, and let $x^*$ be a zero of $M$. Let the sequence $\{x^k\}$ be generated by the iterative procedure

(4.72)

and satisfies

(4.73)

where , , , , , and .

Suppose that the sequence $\{x^k\}$ is bounded in the sense that there exists at least one solution to $0 \in M(x)$.

Then one has

(4.74)

where

(4.75)

In addition, suppose that the sequence $\{x^k\}$ is generated by Algorithm 4.5, and that $M^{-1}$ is (a)-Lipschitz continuous at 0, that is, there exists a unique solution $x^*$ to $0 \in M(x)$ (equivalently, $M^{-1}(0) = \{x^*\}$), and for constants $a \ge 0$ and $b > 0$, one has

$$\|x - x^*\| \le a\,\|w\| \quad \text{whenever } x \in M^{-1}(w),\ \|w\| \le b. \tag{4.76}$$

Here

(4.77)

are scalar sequences such that and .

Then the sequence $\{x^k\}$ converges linearly to the unique solution $x^*$ with rate

(4.78)

where , , and sequences and satisfy , , , and .

Proof.

We include the proof for the sake of completeness. Suppose that $x^*$ is a zero of $M$. For all $k \ge 0$, we set

(4.79)

Therefore, . Then, in light of Theorem 4.4, any solution to (1.1) is a fixed point of , and hence a zero of .

Next, the proof of (4.74) follows from applying the regular manipulation, and the following equality:

(4.80)

Before we start establishing linear convergence of the sequence , we express in light of Algorithm 4.5 as

(4.81)

Now we begin examining the boundedness of the sequence leading to .

Next, we estimate using Proposition 4.6 that

(4.82)

Since under the assumptions , it follows that

(4.83)

Moreover,

(4.84)

Now we find the estimate leading to the boundedness of the sequence ,

(4.85)

Therefore, the sequence is bounded.

We further examine the estimate

(4.86)

where .

Since is summable, so is , and hence . As , we have that

(4.87)

that is, . Now we find the estimate leading to the boundedness of the sequence ,

(4.88)

Thus, the sequence is bounded.

We further examine the estimate

(4.89)

where .

Since is summable, so is , and hence . As , we have that

(4.90)

that is, .

Now we turn our attention (using the previous argument) to linear convergence of the sequence . Since , it implies for large that . Moreover, for and . Therefore, in light of (4.76), by taking and , we have

(4.91)

Applying (4.74), we arrive at

(4.92)

where .

Since , we estimate that

(4.93)

Hence, we have

(4.94)

where

(4.95)

for and .

Since Algorithm 4.5 ensures

(4.96)

we have

(4.97)

It follows that

(4.98)

where

(4.99)

for setting .

References

  1. Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization. 14(5), 877–898 (1976)

  2. Eckstein, J, Bertsekas, DP: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming. 55(3), 293–318 (1992)

  3. Verma, RU: On the generalized proximal point algorithm with applications to inclusion problems. Journal of Industrial and Management Optimization. 5(2), 381–390 (2009)

  4. Agarwal, RP, Verma, RU: The over-relaxed proximal point algorithm and nonlinear variational inclusion problems. Nonlinear Functional Analysis and Applications. 14(4), (2009)

  5. Barbu, V: Nonlinear Semigroups and Differential Equations in Banach Spaces, p. 352. Noordhoff, Leyden, The Netherlands (1976)

  6. Boikanyo, OA, Morosanu, G: Modified Rockafellar's algorithms. Mathematical Sciences Research Journal. In press

  7. Bertsekas, DP: Necessary and sufficient condition for a penalty method to be exact. Mathematical Programming. 9(1), 87–99 (1975)

  8. Bertsekas, DP: Constrained Optimization and Lagrange Multiplier Methods, Computer Science and Applied Mathematics, p. xiii+395. Academic Press, New York, NY, USA (1982)

  9. Douglas, J Jr., Rachford, HH Jr.: On the numerical solution of heat conduction problems in two and three space variables. Transactions of the American Mathematical Society. 82, 421–439 (1956)

  10. Eckstein, J: Splitting methods for monotone operators with applications to parallel optimization, Doctoral dissertation, Department of Civil Engineering, Massachusetts Institute of Technology, Cambridge, Mass, USA (1989)

  11. Eckstein, J: Nonlinear proximal point algorithms using Bregman functions, with applications to convex programming. Mathematics of Operations Research. 18(1), 202–226 (1993)

  12. Eckstein, J: Approximate iterations in Bregman-function-based proximal algorithms. Mathematical Programming. 83(1), 113–123 (1998)

  13. Eckstein, J, Ferris, MC: Smooth methods of multipliers for complementarity problems. Mathematical Programming. 86(1), 65–90 (1999)

  14. Ferris, MC: Finite termination of the proximal point algorithm. Mathematical Programming. 50(3), 359–366 (1991)

  15. Güler, O: On the convergence of the proximal point algorithm for convex minimization. SIAM Journal on Control and Optimization. 29(2), 403–419 (1991)

  16. Martinet, B: Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle, Série Rouge. 4(3), 154–158 (1970)

  17. Minty, GJ: Monotone (nonlinear) operators in Hilbert space. Duke Mathematical Journal. 29, 341–346 (1962)

  18. Moroşanu, G: Nonlinear Evolution Equations and Applications, Mathematics and Its Applications (East European Series), p. xii+340. D. Reidel, Dordrecht, The Netherlands (1988)

  19. Moudafi, A: Mixed equilibrium problems: sensitivity analysis and algorithmic aspect. Computers & Mathematics with Applications. 44(8-9), 1099–1108 (2002)

  20. Moudafi, A, Théra, M: Finding a zero of the sum of two maximal monotone operators. Journal of Optimization Theory and Applications. 94(2), 425–448 (1997)

  21. Pang, J-S: Complementarity problems. In: Horst R, Pardalos P (eds.) Handbook of Global Optimization, Nonconvex Optimization and Its Applications, vol. 2, pp. 271–338. Kluwer Academic Publishers, Dordrecht, The Netherlands (1995)

  22. Robinson, SM: Composition duality and maximal monotonicity. Mathematical Programming. 85(1), 1–13 (1999)

  23. Robinson, SM: Linear convergence of epsilon-subgradient descent methods for a class of convex functions. Mathematical Programming. 86, 41–50 (1999)

  24. Rockafellar, RT: On the maximal monotonicity of subdifferential mappings. Pacific Journal of Mathematics. 33, 209–216 (1970)

  25. Rockafellar, RT: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Mathematics of Operations Research. 1(2), 97–116 (1976)

  26. Rockafellar, RT, Wets, RJ-B: Variational Analysis, Springer, Berlin, Germany (2004)

  27. Solodov, MV, Svaiter, BF: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Mathematics of Operations Research. 25(2), 214–230 (2000)

  28. Solodov, MV, Svaiter, BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Mathematical Programming. 87(1), 189–202 (2000)

  29. Takahashi, W: Approximating solutions of accretive operators by viscosity approximation methods in Banach spaces. Applied Functional Analysis, pp. 225–243. Yokohama Publishers, Yokohama, Japan (2007)

  30. Tossings, P: The perturbed proximal point algorithm and some of its applications. Applied Mathematics and Optimization. 29(2), 125–159 (1994)

  31. Tseng, P: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM Journal on Control and Optimization. 29(1), 119–138 (1991)

  32. Tseng, P: Alternating projection-proximal methods for convex programming and variational inequalities. SIAM Journal on Optimization. 7(4), 951–965 (1997)

  33. Tseng, P: A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization. 38(2), 431–446 (2000)

  34. Verma, RU: A fixed-point theorem involving Lipschitzian generalised pseudo-contractions. Proceedings of the Royal Irish Academy. Section A. 97(1), 83–86 (1997)

  35. Verma, RU: New class of nonlinear A-monotone mixed variational inclusion problems and resolvent operator technique. Journal of Computational Analysis and Applications. 8(3), 275–285 (2006)

  36. Verma, RU: Nonlinear A-monotone variational inclusion systems and the resolvent operator technique. Journal of Applied Functional Analysis. 1(2), 183–189 (2006)

  37. Verma, RU: A-monotonicity and its role in nonlinear variational inclusions. Journal of Optimization Theory and Applications. 129(3), 457–467 (2006)

  38. Verma, RU: A-monotone nonlinear relaxed cocoercive variational inclusions. Central European Journal of Mathematics. 5(2), 386–396 (2007)

  39. Verma, RU: Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings. Journal of Mathematical Analysis and Applications. 337(2), 969–975 (2008)

  40. Verma, RU: Nonlinear Approximation Solvability Involving Regular and Demiregular Convergence, International Publications (USA), Orlando, Fla, USA (1994)

  41. Verma, RU: General projection systems and relaxed cocoercive nonlinear variational inequalities. The ANZIAM Journal. 49(2), 205–212 (2007)

  42. Verma, RU: General proximal point algorithmic models and nonlinear variational inclusions involving RMM mappings. Accepted to Journal of Informatics and Mathematical Sciences

  43. Verma, RU: General proximal point algorithm involving η-maximal accretiveness framework in Banach spaces. Positivity. 13(4), 771–782 (2009)

  44. Verma, RU: The generalized relaxed proximal point algorithm involving A-maximal-relaxed accretive mappings with applications to Banach spaces. Mathematical and Computer Modelling. 50(7-8), 1026–1032 (2009)

  45. Yosida, K: Functional Analysis, Springer, Berlin, Germany (1965)

  46. Yosida, K: On the differentiability and representation of one-parameter semigroups of linear operators. Journal of the Mathematical Society of Japan. 1, 15–21 (1948)

  47. Xu, H-K: Iterative algorithms for nonlinear operators. Journal of the London Mathematical Society. 66(1), 240–256 (2002)

  48. Zeidler, E: The Ljusternik-Schnirelman theory for indefinite and not necessarily odd nonlinear operators and its applications. Nonlinear Analysis: Theory, Methods & Applications. 4(3), 451–489 (1980)

  49. Zeidler, E: Ljusternik-Schnirelman theory on general level sets. Mathematische Nachrichten. 129, 235–259 (1986)

  50. Zeidler, E: Nonlinear Functional Analysis and Its Applications—Part 1: Fixed-Point Theorems, p. xxi+897. Springer, New York, NY, USA (1986)

  51. Zeidler, E: Nonlinear Functional Analysis and Its Applications—Part 2 A: Linear Monotone Operators, p. xviii+467. Springer, New York, NY, USA (1990)

  52. Zeidler, E: Nonlinear Functional Analysis and Its Applications—Part 2 B: Nonlinear Monotone Operators, Springer, New York, NY, USA (1990)

  53. Zeidler, E: Nonlinear Functional Analysis and Its Applications—Part 3: Variational Methods and Optimization, p. xxii+662. Springer, New York, NY, USA (1985)

  54. Zolezzi, T: Continuity of generalized gradients and multipliers under perturbations. Mathematics of Operations Research. 10(4), 664–673 (1985)

  55. Zoretti, L: Un théorème de la théorie des ensembles. Bulletin de la Société Mathématique de France. 37, 116–119 (1909)