Discrete-time dynamic programming was covered in the post Dynamic Programming. We now consider the continuous-time analogue.
Time is continuous, $t \in [0, T]$; $x_t \in \mathcal{X}$ is the state at time $t$; $a_t \in \mathcal{A}$ is the action at time $t$. Given a function $f$, the state evolves according to the differential equation

$$\dot{x}_t = f(x_t, a_t, t).$$

This is called the Plant Equation. A policy $\pi$ chooses an action $a_t$ at each time $t$. The (instantaneous) reward for taking action $a$ in state $x$ at time $t$ is $r(x, a, t)$, and $r_T(x)$ is the reward for terminating in state $x$ at time $T$.
Def [Dynamic Program] Given an initial state $x_0$, a dynamic program is the optimization

$$V(x_0, 0) = \max_{\pi} \left\{ \int_0^T r(x_t, a_t, t)\, dt + r_T(x_T) \right\} \quad \text{subject to} \quad \dot{x}_t = f(x_t, a_t, t).$$

Further, let $V(x, \tau, \pi)$ (resp. $V(x, \tau)$) be the objective (resp. optimal objective) when the integral is started from time $\tau$ with $x_\tau = x$, rather than from time $0$.
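To make the objective concrete, here is a minimal Python sketch (not from the original post) that Euler-discretises the plant equation and approximates the reward integral by a Riemann sum; the particular choices of $f$, $r$, $r_T$ and the policy are illustrative placeholders.

```python
# Hypothetical one-dimensional example: f, r, r_T and the policy below are
# illustrative choices, not taken from the post.
def f(x, a, t):        # plant equation dynamics, dx/dt = f(x, a, t)
    return a

def r(x, a, t):        # instantaneous reward
    return -(x**2 + a**2)

def r_T(x):            # terminal reward
    return -x**2

def policy(x, t):      # an arbitrary (not necessarily optimal) policy
    return -x

def objective(x0, T=1.0, n_steps=1000):
    """Euler-discretise the plant equation and Riemann-sum the reward."""
    delta = T / n_steps
    x, total = x0, 0.0
    for k in range(n_steps):
        t = k * delta
        a = policy(x, t)
        total += r(x, a, t) * delta   # Riemann sum for the integral of r
        x = x + f(x, a, t) * delta    # Euler step of the plant equation
    return total + r_T(x)             # add the terminal reward

print(objective(x0=1.0))
```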
The maximization problem above, where we maximize winnings given the rewards received, is sometimes replaced with a minimization problem, where we minimize the loss given the costs incurred. In that case the functions $r$, $r_T$ and $V$ are replaced with the notation $c$, $c_T$ and $L$.
Def [Hamilton-Jacobi-Bellman Equation] For a continuous-time dynamic program, the equation

$$\frac{\partial V}{\partial t}(x, t) + \max_{a} \left\{ r(x, a, t) + f(x, a, t) \cdot \frac{\partial V}{\partial x}(x, t) \right\} = 0, \qquad V(x, T) = r_T(x),$$

is called the Hamilton-Jacobi-Bellman (HJB) equation. It is the continuous-time analogue of the Bellman equation.
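As a quick illustration (an example of mine, not from the original post), take scalar dynamics $f(x, a, t) = a$ and reward $r(x, a, t) = -(x^2 + a^2)$. The maximum inside the HJB equation is attained at $a = \tfrac{1}{2} \frac{\partial V}{\partial x}(x, t)$, and the equation becomes

$$\frac{\partial V}{\partial t}(x, t) - x^2 + \frac{1}{4} \left( \frac{\partial V}{\partial x}(x, t) \right)^2 = 0, \qquad V(x, T) = r_T(x).$$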
A Heuristic Derivation of the HJB Equation
We now argue why the Hamilton-Jacobi-Bellman equation is a good candidate for the Bellman equation in continuous time.
A good approximation to the plant equation is

$$x_{t + \delta} = x_t + f(x_t, a_t, t)\, \delta + o(\delta)$$

for $\delta$ small, and a good approximation to the above objective is

$$\sum_{k = 0}^{T/\delta - 1} r(x_{k\delta}, a_{k\delta}, k\delta)\, \delta + r_T(x_T).$$

This follows from the definition of the Riemann integral, and we further use the fact that $o(\delta)/\delta \rightarrow 0$ as $\delta \rightarrow 0$.
The Bellman equation for the discrete-time dynamic program with this objective and plant equation is

$$V(x, t) = \max_{a} \left\{ r(x, a, t)\, \delta + V\big(x + f(x, a, t)\delta,\, t + \delta\big) \right\} + o(\delta).$$

If we subtract $V(x, t)$ from each side of this Bellman equation, divide by $\delta$ and let $\delta \rightarrow 0$, we get that

$$0 = \frac{\partial V}{\partial t}(x, t) + \max_{a} \left\{ r(x, a, t) + f(x, a, t) \cdot \frac{\partial V}{\partial x}(x, t) \right\},$$

where here we note that, by the chain rule,

$$\lim_{\delta \rightarrow 0} \frac{V\big(x + f(x, a, t)\delta,\, t + \delta\big) - V(x, t)}{\delta} = \frac{\partial V}{\partial t}(x, t) + f(x, a, t) \cdot \frac{\partial V}{\partial x}(x, t).$$
Thus we derive the HJB equation as described above.
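The derivation above also suggests a numerical scheme: apply the discrete Bellman approximation backwards in time on a grid of states. The following Python sketch does this for the scalar example used earlier; the grid, action set and step sizes are hypothetical choices, not anything prescribed by the post.

```python
import numpy as np

# A minimal sketch: solve the HJB equation numerically by applying the
# discrete-time Bellman approximation
#   V(x, t) ~= max_a { r(x, a, t) * delta + V(x + f(x, a, t) * delta, t + delta) }
# backwards in time on a grid of states and a finite set of candidate actions.
def solve_hjb_on_grid(f, r, r_T, xs, actions, T=1.0, n_steps=200):
    delta = T / n_steps
    V = r_T(xs)                                   # boundary condition V(x, T) = r_T(x)
    for k in reversed(range(n_steps)):
        t = k * delta
        best = np.full_like(V, -np.inf)
        for a in actions:
            x_next = xs + f(xs, a, t) * delta     # Euler step of the plant equation
            V_next = np.interp(x_next, xs, V)     # interpolate V(x_next, t + delta)
            best = np.maximum(best, r(xs, a, t) * delta + V_next)
        V = best
    return V                                      # approximation of V(x, 0) on the grid

# Illustrative scalar example with f(x, a, t) = a and r(x, a, t) = -(x^2 + a^2).
xs = np.linspace(-2.0, 2.0, 201)
actions = np.linspace(-2.0, 2.0, 41)
V0 = solve_hjb_on_grid(lambda x, a, t: a,
                       lambda x, a, t: -(x**2 + a**2),
                       lambda x: -x**2,
                       xs, actions)
print(V0[len(xs) // 2])                           # approximate value at x = 0
```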
The following result shows that if we solve the HJB equation then we have an optimal policy.
Thrm 1 [Optimality of HJB] Suppose that a policy $\pi^*$ has a value function $V^{\pi^*}(x, t) := V(x, t, \pi^*)$ that satisfies the HJB equation for all $x$ and $t$; then $\pi^*$ is an optimal policy.
Proof. Using the shorthand $x_t$, $a_t$ for the state and action at time $t$ under an arbitrary policy $\pi$, writing $V$ for $V^{\pi^*}$, and recalling the boundary condition $V(x_T, T) = r_T(x_T)$:

$$
\begin{aligned}
V(x_0, 0, \pi) &= \int_0^T r(x_t, a_t, t)\, dt + V(x_T, T) \\
&= V(x_0, 0) + \int_0^T \left[ r(x_t, a_t, t) + \frac{d}{dt} V(x_t, t) \right] dt \\
&= V(x_0, 0) + \int_0^T \left[ r(x_t, a_t, t) + \frac{\partial V}{\partial t}(x_t, t) + f(x_t, a_t, t) \cdot \frac{\partial V}{\partial x}(x_t, t) \right] dt \\
&\leq V(x_0, 0).
\end{aligned}
$$

The inequality holds since the term in the square brackets is the objective of the maximization in the HJB equation, which is not necessarily maximized by $a_t$, and so is at most zero. Thus $V(x_0, 0, \pi) \leq V^{\pi^*}(x_0, 0)$ for every policy $\pi$, and $\pi^*$ is optimal. $\square$
Linear Quadratic Regulation
Def. [LQ problem] We consider a dynamic program of the form

$$\min_{\pi} \int_0^T \left( x_t^\top Q x_t + a_t^\top R a_t \right) dt + x_T^\top \Pi_T x_T \quad \text{subject to} \quad \dot{x}_t = A x_t + B a_t.$$

Here $x_t \in \mathbb{R}^n$ and $a_t \in \mathbb{R}^m$; $A$ and $B$ are matrices of the appropriate dimensions; $Q$ and $R$ are symmetric positive definite matrices, and $\Pi_T$ is a symmetric matrix giving the terminal cost. This is a Linear-Quadratic problem (LQ problem).
Def [Riccati Equation] The differential equation

$$-\frac{d \Pi_t}{dt} = A^\top \Pi_t + \Pi_t A - \Pi_t B R^{-1} B^\top \Pi_t + Q,$$

with terminal condition $\Pi_T$, is called the Riccati equation.
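As a numerical illustration (not part of the original post), the Riccati equation can be integrated backwards from its terminal condition with a simple Euler scheme; the matrices $A$, $B$, $Q$, $R$ and the horizon below are placeholder choices.

```python
import numpy as np

# A minimal sketch: integrate the Riccati equation
#   -dPi/dt = A^T Pi + Pi A - Pi B R^{-1} B^T Pi + Q,   Pi(T) given,
# backwards from the terminal condition with an Euler scheme.
def solve_riccati(A, B, Q, R, Pi_T, T=1.0, n_steps=1000):
    delta = T / n_steps
    R_inv = np.linalg.inv(R)
    Pi = Pi_T.copy()
    Pis = [Pi]
    for _ in range(n_steps):
        dPi = A.T @ Pi + Pi @ A - Pi @ B @ R_inv @ B.T @ Pi + Q
        Pi = Pi + delta * dPi          # step backwards in time to Pi(t - delta)
        Pis.append(Pi)
    return Pis[::-1]                   # Pis[k] approximates Pi(k * delta)

# Hypothetical two-dimensional system chosen only for illustration.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
Pis = solve_riccati(A, B, Q, R, Pi_T=np.eye(2))
print(Pis[0])                          # approximate Pi(0)
```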
Thrm 2. For each time $t$, the optimal action for the LQ problem is

$$a_t = -R^{-1} B^\top \Pi_t x_t,$$

where $\Pi_t$ is the solution to the Riccati equation.
Proof. The HJB equation for the LQ problem is

$$\frac{\partial L}{\partial t}(x, t) + \min_{a} \left\{ x^\top Q x + a^\top R a + \frac{\partial L}{\partial x}(x, t)^\top \left( A x + B a \right) \right\} = 0.$$

We now “guess” that the solution to the above HJB equation is of the form $L(x, t) = x^\top \Pi_t x$ for some symmetric matrix $\Pi_t$. Therefore

$$\frac{\partial L}{\partial t}(x, t) = x^\top \dot{\Pi}_t x, \qquad \frac{\partial L}{\partial x}(x, t) = 2 \Pi_t x.$$

Substituting into the Bellman equation gives

$$x^\top \dot{\Pi}_t x + \min_{a} \left\{ x^\top Q x + a^\top R a + 2 x^\top \Pi_t \left( A x + B a \right) \right\} = 0.$$

Differentiating with respect to $a$ gives the optimality condition

$$2 R a + 2 B^\top \Pi_t x = 0,$$

which implies

$$a = -R^{-1} B^\top \Pi_t x.$$

Finally, substituting this back into the Bellman equation above gives the expression

$$x^\top \left( \dot{\Pi}_t + Q + \Pi_t A + A^\top \Pi_t - \Pi_t B R^{-1} B^\top \Pi_t \right) x = 0,$$

which holds for every $x$ exactly when $\Pi_t$ solves the Riccati equation. Thus the solution to the Riccati equation gives a cost function that solves the Bellman equation, and so by Theorem 1 the policy is optimal. $\square$
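Continuing the sketch above (and reusing its `solve_riccati` helper and illustrative matrices), the feedback law of Theorem 2 can be applied along an Euler discretisation of the plant equation to simulate the closed loop; all numerical choices here are my own assumptions.

```python
import numpy as np

# A sketch of the closed loop: apply the feedback law from Theorem 2,
#   a_t = -R^{-1} B^T Pi_t x_t,
# along an Euler discretisation of dx/dt = A x + B a, and accumulate the cost.
# Assumes solve_riccati from the previous sketch is in scope.
def simulate_lq(A, B, Q, R, Pi_T, x0, T=1.0, n_steps=1000):
    delta = T / n_steps
    Pis = solve_riccati(A, B, Q, R, Pi_T, T, n_steps)
    R_inv = np.linalg.inv(R)
    x, cost = x0.copy(), 0.0
    for k in range(n_steps):
        a = -R_inv @ B.T @ Pis[k] @ x            # optimal feedback action
        cost += (x @ Q @ x + a @ R @ a) * delta  # accumulate the running cost
        x = x + (A @ x + B @ a) * delta          # Euler step of the plant equation
    return cost + x @ Pi_T @ x                   # add the terminal cost

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
x0 = np.array([1.0, 0.0])
print(simulate_lq(A, B, Q, R, np.eye(2), x0))    # total cost from x0 under the LQ policy
```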