An Optimal Stopping Problem is a Markov Decision Process where there are two actions: $a=0$, meaning to stop, and $a=1$, meaning to continue. Here there are two types of costs: a cost $k(x)$ incurred if we stop in state $x$, and a cost $c(x)$ incurred each time we continue from state $x$, after which the state moves to a new state $\hat X$ according to the transition probabilities $P_{x\hat x}$ of a Markov chain $(X_t : t \ge 0)$. This defines a stopping problem.
Assuming that time is finite, the Bellman equation is

$$W_s(x) = \min\Big\{ k(x),\; c(x) + \sum_{\hat x} P_{x\hat x} W_{s-1}(\hat x) \Big\}$$

for $s = 1, \dots, T$ and

$$W_0(x) = k(x).$$

Here $W_s(x)$ is the minimal expected cost when the current state is $x$ and $s$ steps remain.
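To make the recursion concrete, here is a minimal Python sketch of the backward induction (the three-state chain, its transition matrix `P`, and the costs `k` and `c` are hypothetical numbers chosen for illustration, not taken from the notes above):

```python
import numpy as np

def finite_horizon_stopping(P, k, c, T):
    """Backward induction for a finite time optimal stopping problem.

    P : (n, n) transition matrix of the underlying Markov chain
    k : (n,) cost of stopping in each state
    c : (n,) cost of continuing for one more step from each state
    T : time horizon (number of steps available at the start)

    Returns W, with W[s] the optimal cost when s steps remain (W[0] = k),
    and stop, with stop[s][x] True when stopping is optimal in state x.
    """
    n = len(k)
    W = np.zeros((T + 1, n))
    stop = np.zeros((T + 1, n), dtype=bool)
    W[0] = k                      # with no steps remaining we must stop
    stop[0] = True
    for s in range(1, T + 1):
        cont = c + P @ W[s - 1]   # cost of continuing one step, then acting optimally
        W[s] = np.minimum(k, cont)
        stop[s] = k <= cont       # stop whenever stopping is no worse than continuing
    return W, stop

# Hypothetical three-state example.
P = np.array([[0.7, 0.0, 0.3],
              [0.3, 0.4, 0.3],
              [0.2, 0.0, 0.8]])
k = np.array([0.5, 2.0, 0.4])
c = np.array([0.1, 0.1, 0.1])
W, stop = finite_horizon_stopping(P, k, c, T=10)
print(W[10], stop[10])
```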
Def [OSLA rule] In the one step lookahead (OSLA) rule we stop whenever $x \in S$ where

$$S = \Big\{ x \;:\; k(x) \le c(x) + \sum_{\hat x} P_{x\hat x} k(\hat x) \Big\}.$$

We call $S$ the stopping set. In words, you stop whenever it is better to stop now rather than continue one step further and then stop.
Def [Closed Stopping Set] We say the set $S$ is closed if, once inside that set, you cannot leave, i.e.

$$P_{x\hat x} = 0 \qquad \text{for all } x \in S,\ \hat x \notin S.$$
Prop 1. If, for the finite time stopping problem, the set $S$ given by the one step lookahead rule is closed, then the one step lookahead rule is an optimal policy.
Proof. Given the set $S$ is closed, we argue that if $W_s(x) = k(x)$ for $x \in S$ then $W_{s+1}(x) = k(x)$: if $x \in S$ then, since $S$ is closed, the next state $\hat X$ belongs to $S$. In other words $W_s(\hat X) = k(\hat X)$. Therefore, in this case, Bellman's equation becomes

$$W_{s+1}(x) = \min\Big\{ k(x),\; c(x) + \sum_{\hat x} P_{x\hat x} W_s(\hat x) \Big\} = \min\Big\{ k(x),\; c(x) + \sum_{\hat x} P_{x\hat x} k(\hat x) \Big\} = k(x).$$

The last equality above follows by the definition of $S$: for $x \in S$ we have $k(x) \le c(x) + \sum_{\hat x} P_{x\hat x} k(\hat x)$, so the minimum is attained by stopping.
We now proceed by induction. The OSLA rule is optimal for $1$ step, since OSLA is exactly the optimal policy for one step. Suppose that the result holds for up to $s$ steps. Now consider the Optimal Stopping Problem with $s+1$ steps. If $x \in S$ then, by the argument above, $W_{s+1}(x) = k(x)$. So it is better to stop. If $x \notin S$ then, since $W_s \le k$,

$$k(x) > c(x) + \sum_{\hat x} P_{x\hat x} k(\hat x) \ge c(x) + \sum_{\hat x} P_{x\hat x} W_s(\hat x),$$

so clearly it is better to continue. In both cases the OSLA decision agrees with the optimal decision, which completes the induction. $\square$
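To illustrate Prop 1 numerically, the sketch below (continuing the hypothetical three-state example and the `finite_horizon_stopping` function from the earlier snippet) computes the OSLA stopping set $S$, checks that it is closed, and confirms that the OSLA decision coincides with the backward-induction decision at every horizon:

```python
def osla_set(P, k, c):
    """States where stopping now is no worse than continuing one step and then stopping."""
    return k <= c + P @ k

def is_closed(P, S, tol=1e-12):
    """S is closed if, starting inside S, the chain cannot move outside S."""
    return np.all(P[np.ix_(S, ~S)] <= tol)

S = osla_set(P, k, c)
print("OSLA set:", S, "closed:", is_closed(P, S))

# When S is closed, the OSLA decision matches the optimal decision at every horizon.
W, stop = finite_horizon_stopping(P, k, c, T=10)
print(np.array_equal(stop[1:], np.tile(S, (10, 1))))   # True for this example
```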
Ex. [The Secretary Problem]. There are $N$ candidates for a secretary job. You interview candidates sequentially. After each interview, you must either accept or reject the candidate. We assume the candidates can be ranked from best to worst with no ties, and that they arrive for interview in uniformly random order. Find the policy that maximises the probability that you hire the best candidate.
Ans. All that matters at each time is whether the current candidate is the best so far. At time $t$ let

$$X_t = \begin{cases} 1 & \text{if the } t\text{-th candidate is the best seen so far,}\\ 0 & \text{otherwise.}\end{cases}$$

Since the position of the best candidate is uniformly random, the probability that the best candidate seen so far is the best overall is

$$\mathbb{P}\big(\text{best overall} \,\big|\, X_t = 1\big) = \frac{t}{N}.$$
Under the chosen policy, we let

$$R_t = \mathbb{P}\big(\text{the best candidate is hired} \,\big|\, \text{the first } t \text{ candidates were rejected}\big)$$

be our reward function. Now

$$\mathbb{P}(X_{t+1} = 1) = \frac{1}{t+1},$$

and when $X_{t+1} = 1$ we may either accept (winning with probability $\tfrac{t+1}{N}$) or continue, while a candidate who is not the best so far can never be the best overall, so then we continue. Thus the Bellman equation for the above problem is

$$R_t = \frac{1}{t+1}\max\Big\{\frac{t+1}{N},\; R_{t+1}\Big\} + \frac{t}{t+1}R_{t+1}, \qquad R_N = 0.$$
Notice that $R_t$ is non-increasing in $t$ while the stopping reward $t/N$ is increasing in $t$. Let $t^*$ be the smallest $t$ such that $\frac{t}{N} \ge R_t$; by the Bellman equation, it is optimal to accept a best-so-far candidate at time $t$ exactly when this holds. Starting from $R_N = 0$ and working backwards, note that so long as $R_{t+1} \le \frac{t+1}{N}$, so that the first term attains the maximum in the expression above, we have that

$$R_t = \frac{1}{N} + \frac{t}{t+1}R_{t+1}.$$

Thus, for $t \ge t^*$,

$$R_t = \frac{t}{N}\left(\frac{1}{t} + \frac{1}{t+1} + \cdots + \frac{1}{N-1}\right).$$

Thus our condition for the optimal $t^*$ is to take the smallest $t$ such that

$$\frac{1}{t} + \frac{1}{t+1} + \cdots + \frac{1}{N-1} \le 1.$$

In other words, the optimal policy is to interview the first $t^* - 1$ candidates without accepting anyone, and then accept the next candidate who is the best seen so far.
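A short sketch of this calculation (function and variable names are my own, not from the notes): it finds the threshold $t^*$ from the harmonic-sum condition and cross-checks it against the Bellman recursion for $R_t$.

```python
def secretary_threshold(N):
    """Smallest t with 1/t + 1/(t+1) + ... + 1/(N-1) <= 1."""
    for t in range(1, N):
        if sum(1.0 / s for s in range(t, N)) <= 1.0:
            return t
    return N

def secretary_values(N):
    """Bellman recursion: R[t] = P(hire the best | first t candidates rejected)."""
    R = [0.0] * (N + 1)            # R[N] = 0: rejecting everyone never wins
    for t in range(N - 1, 0, -1):
        R[t] = max((t + 1) / N, R[t + 1]) / (t + 1) + t / (t + 1) * R[t + 1]
    return R

N = 10
t_star = secretary_threshold(N)
R = secretary_values(N)
print(t_star)                                          # 4: reject the first 3, then take the next best
print(all(t / N >= R[t] for t in range(t_star, N)))    # stopping condition holds from t* onwards
```

For $N = 10$ this gives $t^* = 4$, i.e. the classical answer of letting the first three candidates go by.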
Ex [The Secretary Problem, continued] Argue that as $N \to \infty$, the optimal policy is to interview a proportion $1/e$ (roughly $37\%$) of the candidates and then to accept the next best candidate.
Ans. From [[OS:Secretary]], the optimal condition is

$$\frac{1}{t} + \frac{1}{t+1} + \cdots + \frac{1}{N-1} \le 1.$$

We know that, as $N \to \infty$,

$$\frac{1}{t} + \frac{1}{t+1} + \cdots + \frac{1}{N-1} \approx \int_t^N \frac{\mathrm{d}s}{s} = \log\frac{N}{t}.$$

Thus the condition becomes $\log\frac{N}{t} \le 1$, which holds for $t \ge N/e$. Hence $t^* \approx N/e$, i.e. we interview about a fraction $1/e$ of the candidates before being prepared to accept.
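A quick numerical check of this limit (self-contained; the helper below just evaluates the harmonic-sum condition by a single backward sweep):

```python
import math

def secretary_threshold_fast(N):
    """Smallest t with 1/t + ... + 1/(N-1) <= 1, via one backward pass."""
    h, t = 0.0, N
    while t > 1 and h + 1.0 / (t - 1) <= 1.0:
        t -= 1
        h += 1.0 / t
    return t

for N in (10, 100, 1000, 10000):
    print(N, secretary_threshold_fast(N) / N)   # ratios approach 1/e
print(1 / math.e)                               # 0.36787...
```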
Optimal stopping in infinite time
We now give conditions for the one step look ahead rule to be optimal for infinite time stopping problems.
Prop 2. If the following two conditions hold:

1. the set $S$ given by the one step lookahead rule is closed;
2. the continuation costs are non-negative, the stopping cost $k$ is bounded, and the optimal policy stops in finite time;

then the One-Step-Lookahead-Rule is optimal.
Proof. Suppose that the optimal policy for the infinite time problem stops at time $\tau$; then its cost is

$$W_\infty(x) = \mathbb{E}_x\Big[\sum_{t < \tau} c(X_t) + k(X_\tau)\Big].$$

Therefore, if we follow the optimal policy but for the $s$ time horizon problem, and stop at time $s$ if $\tau > s$, then

$$W_s(x) \le \mathbb{E}_x\Big[\sum_{t < \tau \wedge s} c(X_t) + k(X_{\tau \wedge s})\Big] \xrightarrow[s \to \infty]{} W_\infty(x),$$

where the convergence uses the assumed conditions on the costs. Since also $W_s(x) \ge W_\infty(x)$ (any policy for the $s$ step problem is a policy for the infinite time problem), we have $W_s(x) \to W_\infty(x)$.

As before (for the finite time problem), it is not optimal to stop if $x \notin S$, and for the finite time problem $W_s(x) = k(x)$ for all $x \in S$. Therefore, since $W_s(x) \to W_\infty(x)$, we have that $W_\infty(x) = k(x)$ for all $x \in S$, and therefore it is optimal to stop for $x \in S$. $\square$
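As a rough numerical illustration of the infinite time case (again using the hypothetical three-state chain `P`, `k`, `c` and the OSLA set `S` from the earlier sketches, none of which come from the notes), we can iterate the Bellman update until it stabilises and check that, on the closed set $S$, the value function equals the stopping cost:

```python
# Infinite time value iteration on the toy example (hypothetical data).
V = k.copy()
for _ in range(1000):
    V = np.minimum(k, c + P @ V)           # Bellman update for the stopping problem
print(np.allclose(V[S], k[S]))             # on the closed OSLA set S, the value equals k
print(np.array_equal(k <= c + P @ V, S))   # and it is optimal to stop exactly on S
```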
The one step lookahead rule is not always the correct solution to an optimal stopping problem.
Def. [Concave Majorant] For a function $r$, a concave majorant is a concave function $G$ such that

$$G(x) \ge r(x) \qquad \text{for all } x.$$
Prop 3 [Stopping a Random Walk] Let $(X_t : t \ge 0)$ be a symmetric random walk on $\{0, 1, \dots, N\}$ where the process is automatically stopped at $0$ and $N$. For each $x \in \{0, \dots, N\}$, there is a positive reward of $r(x)$ for stopping. We are asked to maximize

$$\mathbb{E}\big[r(X_\tau)\big]$$

where $\tau$ is our chosen stopping time. The optimal value function $V(x)$ is the minimal concave majorant of $r$, and it is optimal to stop whenever $V(x) = r(x)$.
Proof. The Bellman equation is

$$V(x) = \max\Big\{ r(x),\; \tfrac{1}{2}V(x-1) + \tfrac{1}{2}V(x+1) \Big\}, \qquad 0 < x < N,$$

with $V(0) = r(0)$ and $V(N) = r(N)$. In particular $V(x) \ge r(x)$ and $V(x) \ge \tfrac{1}{2}V(x-1) + \tfrac{1}{2}V(x+1)$. Thus the optimal value function is a concave majorant of $r$.
We will show that the optimal value function is the minimal concave majorant of $r$. We do so by, essentially, applying induction on value iteration. First,

$$V_0(x) := r(x) \le G(x)$$

for any concave majorant $G$ of $r$. Now suppose that $V_n$, the function reached after $n$ value iterations, satisfies $V_n(x) \le G(x)$ for all $x$; then

$$V_{n+1}(x) = \max\Big\{ r(x),\; \tfrac{1}{2}V_n(x-1) + \tfrac{1}{2}V_n(x+1) \Big\} \le \max\Big\{ G(x),\; \tfrac{1}{2}G(x-1) + \tfrac{1}{2}G(x+1) \Big\} \le G(x),$$

where the first inequality uses $r \le G$ together with the induction hypothesis, and the second uses the concavity of $G$. Since value iteration converges, $V_n \to V$, where $V$ therefore satisfies $V(x) \le G(x)$, as required.
Finally observe that, from the Bellman equation, the optimal stopping rule is to stop whenever $r(x) = V(x)$, where $V$ is the minimal concave majorant. $\square$
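The following sketch (my own toy reward vector, not from the notes) runs value iteration for this random-walk stopping problem and compares the limit with the minimal concave majorant of $r$, computed directly as the upper concave hull of the points $(x, r(x))$:

```python
import numpy as np

def value_iteration_rw(r, iters=10_000):
    """Value iteration for the symmetric random walk stopped at 0 and N."""
    V = r.astype(float)
    for _ in range(iters):
        W = V.copy()
        W[1:-1] = np.maximum(r[1:-1], 0.5 * (V[:-2] + V[2:]))   # stop vs. continue
        V = W
    return V

def minimal_concave_majorant(r):
    """Upper concave hull of the points (x, r(x)), evaluated at each integer x."""
    n = len(r)
    hull = [0]                      # indices of hull vertices (monotone chain)
    for i in range(1, n):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or below the chord from i0 to the new point i
            if (r[i1] - r[i0]) * (i - i1) <= (r[i] - r[i1]) * (i1 - i0):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(np.arange(n), hull, r[hull])   # linear interpolation between vertices

r = np.array([0.0, 1.0, 0.5, 3.0, 1.0, 0.0])        # hypothetical rewards on {0,...,5}
V = value_iteration_rw(r)
print(np.allclose(V, minimal_concave_majorant(r)))  # True: V is the minimal concave majorant
print(np.isclose(V, r))                             # stop exactly where V(x) = r(x)
```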