Zero-Order Stochastic Optimization: Kiefer-Wolfowitz

We want to optimize the expected value of some random function. This is the problem we solved with Stochastic Gradient Descent. However, we now assume that we no longer have access to an unbiased estimate of the gradient; we can only obtain noisy estimates of the function itself. In this case we can apply the Kiefer-Wolfowitz procedure.

The idea here is to replace the random gradient estimate used in stochastic gradient descent with a finite difference computed from noisy function evaluations. If the increments used for these finite differences are taken sufficiently small, then convergence can still be achieved over time. The approximation error of the finite difference does, however, affect the rate of convergence.
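To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not from the original derivation; the helper names sgd_step and kw_step are hypothetical). The only change from a stochastic gradient step is that the sampled gradient is replaced by a two-sided finite difference of noisy function values.

import numpy as np

# Toy illustration (not from the original post).

def sgd_step(theta, grad_sample, alpha):
    # Stochastic gradient descent: assumes an unbiased gradient sample is available.
    return theta - alpha * grad_sample(theta)

def kw_step(theta, f_sample, alpha, gamma):
    # Kiefer-Wolfowitz-style step: uses only noisy evaluations of the objective,
    # forming a two-sided finite difference of width gamma in each coordinate.
    p = len(theta)
    g_hat = np.zeros(p)
    for i in range(p):
        e_i = np.zeros(p)
        e_i[i] = 1.0
        g_hat[i] = (f_sample(theta + gamma * e_i)
                    - f_sample(theta - gamma * e_i)) / (2.0 * gamma)
    return theta - alpha * g_hat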

The Problem Setting

Suppose that we have some function F : \mathbb R^p \rightarrow \mathbb R of the form

F(\theta) = \mathbb E\big[ f(\theta; U) \big]

that we wish to minimize over \theta \in \mathbb R^p. Here U is some random variable.

We suppose that we do not have direct access to the distribution of U, nor to the function f(\theta;U) itself. However, we can draw samples U_1, U_2, \dots and thus observe the values f(\theta_1;U_1), f(\theta_2;U_2), \dots for different choices of \theta_t.

We can think of f(\theta; U) as representing the random reward from a simulator configured with a set of parameters \theta. We want to maximize the expected reward from the simulator (equivalently, minimize its negative) but can only use the rewards observed on different simulation runs.
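As a toy stand-in for such a simulator (my own example; theta_star and f_sample are illustrative names, and the expected value is a simple quadratic written as a cost to minimize rather than a reward to maximize):

import numpy as np

# Toy illustration (not from the original post).
rng = np.random.default_rng(42)
theta_star = np.array([1.0, -2.0, 0.5])   # the (unknown) optimal configuration

def f_sample(theta):
    # One noisy "simulation run": the expected value is ||theta - theta_star||^2,
    # but each call only returns a single noisy observation of it.
    noise = rng.normal(scale=1.0)          # plays the role of U
    return np.sum((theta - theta_star) ** 2) + noise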

The Kiefer-Wolfowitz Algorithm

If the problem is reasonably smooth then we can use the Kiefer-Wolfowitz procedure to optimize F(\theta). Here we use a finite difference approximation to the gradient of F(\theta).

A finite difference approximation. We want to approximate the gradient of F(\theta), which we denote by G(\theta). That is,

G(\theta) = \nabla F(\theta)\,.

We define, for \gamma \in \mathbb R and i = 1,\dots,p,

\hat G_i(\theta; \gamma) = \frac{ F(\theta + \gamma e_i) - F(\theta - \gamma e_i) }{ 2 \gamma },

where e_i is the ith unit vector. We can then take \hat G(\theta;\gamma) = \big( \hat G_1(\theta;\gamma), \dots, \hat G_p(\theta;\gamma) \big) as the finite difference approximation to the gradient G(\theta).

It is not hard to show that for a well-behaved function F we have that

\| \hat G(\theta; \gamma) - G(\theta) \| = O(\gamma^2) \quad \text{as } \gamma \rightarrow 0 .

(Note that higher-order finite difference approximations can also be used here for an improved rate of convergence.)
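As a quick numerical sanity check of this quadratic error behaviour (my own example; F, grad_F and fd_gradient are illustrative names, and the check assumes the two-sided difference defined above):

import numpy as np

# Toy illustration (not from the original post).

def F(theta):
    # A smooth deterministic test function with a known gradient.
    return np.sum(theta ** 4) + np.sum(theta ** 2)

def grad_F(theta):
    return 4 * theta ** 3 + 2 * theta

def fd_gradient(F, theta, gamma):
    # Two-sided finite difference approximation of the gradient of F.
    p = len(theta)
    g = np.zeros(p)
    for i in range(p):
        e_i = np.zeros(p)
        e_i[i] = 1.0
        g[i] = (F(theta + gamma * e_i) - F(theta - gamma * e_i)) / (2.0 * gamma)
    return g

theta = np.array([0.7, -1.3])
for gamma in [0.1, 0.05, 0.025]:
    err = np.max(np.abs(fd_gradient(F, theta, gamma) - grad_F(theta)))
    print(f"gamma = {gamma:6.3f}   max abs error = {err:.2e}")
# Halving gamma reduces the error by roughly a factor of four, as expected for O(gamma^2).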

The Algorithm

The Kiefer-Wolfowitz Algorithm performs the update

\theta_{t+1} = \theta_t - \alpha_t \hat g_t, \qquad \hat g_{t,i} = \frac{ f(\theta_t + \gamma_t e_i ; U^+_{t,i}) - f(\theta_t - \gamma_t e_i ; U^-_{t,i}) }{ 2 \gamma_t }, \quad i = 1,\dots,p.

That is, we replace the finite difference estimate of the gradient of F(\theta) with the analogous random finite difference computed from noisy samples of f(\theta; U); here U^+_{t,i} and U^-_{t,i} denote independent samples of U.

We need to choose \alpha_t and \gamma_t so that we converge to the optimum.
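Putting the pieces together, here is a compact sketch of the whole procedure on the toy noisy quadratic from earlier (again my own illustration; the specific choices alpha_t = 1/t and gamma_t = t^{-1/6} anticipate the step sizes used in the Theorem below, and fresh noisy samples are drawn for every function evaluation):

import numpy as np

# Toy illustration (not from the original post).
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])

def f_sample(theta):
    # Noisy observation of F(theta) = ||theta - theta_star||^2; the noise plays the role of U.
    return np.sum((theta - theta_star) ** 2) + rng.normal(scale=1.0)

def kiefer_wolfowitz(f_sample, theta0, n_steps):
    theta = np.array(theta0, dtype=float)
    p = len(theta)
    for t in range(1, n_steps + 1):
        alpha_t = 1.0 / t            # step size
        gamma_t = t ** (-1.0 / 6.0)  # finite-difference width
        g_hat = np.zeros(p)
        for i in range(p):
            e_i = np.zeros(p)
            e_i[i] = 1.0
            # Each evaluation below draws an independent noisy sample of the objective.
            g_hat[i] = (f_sample(theta + gamma_t * e_i)
                        - f_sample(theta - gamma_t * e_i)) / (2.0 * gamma_t)
        theta = theta - alpha_t * g_hat
    return theta

theta_hat = kiefer_wolfowitz(f_sample, theta0=np.zeros(3), n_steps=20_000)
print("estimate:", np.round(theta_hat, 2), "  target:", theta_star)
# With these step sizes the estimate typically lands close to theta_star.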

Assumptions

We make the following assumptions on G(\theta) = \nabla F(\theta). There exist \theta_\star and constants c and f_{\max} such that

[Equation image: the assumed conditions on G(\theta) and f(\theta; u).]

Notice that if F(\theta) is strongly convex with its optimum at \theta_\star then the two statements above follow.
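Although the displayed conditions are not reproduced above, the strong convexity remark can be checked with a standard argument (this derivation is my own addition, and it assumes the relevant condition is a strong-monotonicity bound on G with some constant \kappa > 0). If F is \kappa-strongly convex with minimiser \theta_\star, so that G(\theta_\star) = 0, then for every \theta

F(\theta_\star) \geq F(\theta) + G(\theta)^\top (\theta_\star - \theta) + \tfrac{\kappa}{2} \| \theta_\star - \theta \|^2 \qquad \text{and} \qquad F(\theta) \geq F(\theta_\star) + \tfrac{\kappa}{2} \| \theta - \theta_\star \|^2 ,

and adding the two inequalities gives

(\theta - \theta_\star)^\top G(\theta) \geq \kappa \, \| \theta - \theta_\star \|^2 .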

We also place assumptions on the sequences \alpha_n and \gamma_n; specifically, we assume that

[Equation image: the conditions on the sequences \alpha_n and \gamma_n.]

Main Result

THEOREM. For the Kiefer-Wolfowitz procedure, if (2)-(7) hold then

\mathbb E\big[ \| \theta_t - \theta_\star \|^2 \big] = O\Big( \gamma_t^4 + \frac{\alpha_t}{\gamma_t^2} \Big) .

Specifically, if we take \alpha_t = 1/t and \gamma_t = t^{-1/6} then

\mathbb E\big[ \| \theta_t - \theta_\star \|^2 \big] = O\big( t^{-2/3} \big) .

The proof of the Theorem relies on the following proposition.

PROPOSITION. Given \xi_n is a positive sequence, b_n and e_n are positive non-increasing sequences and a_n are such that if

then

We prove this Proposition at the end of this section. Notice that we proved similar bounds to the above for Robbins-Monro.

Proof of Theorem. We can rewrite the Kiefer-Wolfowitz recursion as

where the update has been split into the true gradient G(\theta_t), the bias of the finite difference approximation, and a noise term \epsilon_t with \mathbb E[ \epsilon_t \mid \theta_t ] = 0. Setting \xi_t = \mathbb E\big[ \| \theta_t - \theta_\star \|^2 \big], notice that

[Equation image: expansion of \xi_{t+1} into terms (10)-(13).]

We now bound the four terms above.

We can bound (10) as follows

[Equation image: bound (14) on term (10).]

For term (11), by Cauchy-Schwarz,

[Equation image: bound (15) on term (11).]

In the final inequality above we note from Assumption (4) that:

[Equation image: the bound implied by Assumption (4).]

For term (12), we have

[Equation image: bound (16) on term (12).]

because \mathbb E[ \epsilon_t | \theta_{t} ]=0.

For term (13)

[Equation image: bound (17) on term (13).]

Applying (14), (15), (16) and (17) to (10), (11), (12) and (13) gives

[Equation image: the resulting recursive bound on \xi_t.]

By the Proposition above, with a_t = 2 \kappa \alpha_t, b_t = c \gamma_t^2 / \kappa and e_t = \frac{f_{\max}^2}{4} \frac{\alpha_t}{\gamma_t^2}, we see that

[Equation image: the bound on \xi_t obtained from the Proposition.]

From this we see that the required result holds.

For \alpha_t = 1/t and \gamma_t = t^{-1/6}, we have b_t^2 = (c^2/\kappa^2)\, t^{-2/3} and e_t = (f_{\max}^2/4)\, t^{-2/3}, which gives the final required expression. QED.


We now prove the Proposition. We do so by first proving Lemmas 1 and 2.

LEMMA 1. If \xi_n is a positive sequence such that

and

then

PROOF. Rearranging gives

If \xi_n > B/A + \epsilon for some \epsilon>0 then

So \xi_n is decreasing when \xi_n > B/A + \epsilon holds and, since \sum_n \alpha_n = \infty, there exists N s.t. \xi_N \leq B/A +\epsilon. Let N_0 be the first value of N where \xi_N \leq B/A +\epsilon occurs.

Notice that \xi_n can only increase when \xi_n \leq B/A + \epsilon, and, since \xi_n is positive, we then have

Thus, we see that

Therefore

Since \epsilon is arbitrary, the result holds. QED.
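As a quick numerical illustration of the limit B/A (my own addition, which assumes a recursion of the form \xi_{n+1} = (1 - A\alpha_n)\xi_n + B\alpha_n, consistent with the rearrangement used in the proof above):

# Toy illustration (not from the original post); assumes the recursion form stated above.
A, B = 0.5, 2.0
xi = 10.0                      # arbitrary positive starting value
for n in range(1, 100_000):
    alpha_n = 1.0 / n          # satisfies sum alpha_n = infinity and alpha_n -> 0
    xi = (1.0 - A * alpha_n) * xi + B * alpha_n
print(xi)                      # approaches B / A = 4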

LEMMA 2. If \xi_n is a positive sequence such that

and

with A>C then

PROOF. Fix \delta > 0. Since \lim_{n\rightarrow\infty} \alpha_n = 0, we can take N such that \alpha_n < \delta for all n \geq N.

Now defining \xi'_n = \xi_n/\beta_n for n\geq N gives

where we define A' = A-C+\delta and B'=(1+C\delta)B. Applying Lemma 1 gives

which, recalling the definitions of \xi_n', A', B' and that \delta is arbitrary, gives the result. QED.

Proof of Proposition. By the inequality

we have

Now the result follows by applying Lemma 2 with \beta_n = b_n^2 + e_n, A = 1/2, B = 1 and C = 0. (Notice that we can take C = 0 because b_n and e_n are decreasing.) QED.
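To see where these constants could come from (this reconstruction is my own, and it assumes the Proposition's recursion has the form \xi_{n+1} \leq (1 - a_n)\xi_n + a_n b_n \sqrt{\xi_n} + a_n e_n): by the inequality b_n \sqrt{\xi_n} \leq \tfrac{1}{2}(b_n^2 + \xi_n) we have

\xi_{n+1} \leq \big(1 - \tfrac{a_n}{2}\big)\xi_n + a_n\big(\tfrac{1}{2} b_n^2 + e_n\big) \leq \xi_n - a_n\Big( \tfrac{1}{2}\xi_n - (b_n^2 + e_n) \Big),

which is a recursion of the type in Lemma 2 with \beta_n = b_n^2 + e_n, A = 1/2 and B = 1.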

References

The Kiefer-Wolfowitz procedure was first proposed by Kiefer and Wolfowitz (1952). Finite-time bounds similar to those in the Theorem above are given by Broadie et al. (2011). The analysis here largely follows the arguments in Fabian (1967). Both Fabian and Broadie et al. use results from Chung (1954), which is the basis for Lemmas 1 and 2 above.

Kiefer, J., & Wolfowitz, J. (1952). Stochastic estimation of the maximum of a regression function. The Annals of Mathematical Statistics, 462-466.

Broadie, M., Cicek, D., & Zeevi, A. (2011). General bounds and finite-time improvement for the Kiefer-Wolfowitz stochastic approximation algorithm. Operations Research, 59(5), 1211-1224.

Fabian, V. (1967). Stochastic approximation of minima with improved asymptotic speed. The Annals of Mathematical Statistics, 191-200.

Chung, K. L. (1954). On a stochastic approximation method. The Annals of Mathematical Statistics, 463-483.
