Stochastic Control 2020

Another year of MATH69122! — aka Stochastic Control.

This year, I will try to keep the PDF of slides and notes updated after each lecture. The notes live under the “PDF” tab above, and are also here:

Stochastic Control 2020 [pdf]

Here is a rough week-by-week plan for the lectures:

  1. Dynamic Programming (DP)
  2. DP examples & Markov Chains
  3. Markov Decision Processes (MDP)
  4. Infinite Time MDP
  5. Algorithms for MDPs
  6. Optimal Stopping and/or Kalman Filter
  7. Continuous Time Control and LQR
  8. Diffusion Control & Merton Portfolios
  9. More Merton Portfolios
  10. Stochastic Approximation and Linear Regression
  11. Reinforcement Learning, Q-learning and TD methods
  12. Linear Function Approximation for TD and Optimal Stopping

These notes are something of a never-ending work in progress, and I’m always looking to improve them, so typos, comments and corrections are always welcome. Nonetheless, I’d recommend supplementing these notes with some textbook references:

  • Bertsekas, D. P. (2018). Dynamic programming and optimal control (Vol. 1). Athena Scientific.
  • Puterman, M. L. (1994). Markov decision processes: Discrete stochastic dynamic programming. Wiley.
  • Rogers, L. C. G. (2013). Optimal investment. Springer.
  • Kushner, H. J., & Yin, G. G. (2003). Stochastic approximation and recursive algorithms and applications. Springer.
  • Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
  • Bertsekas, D. P., & Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Athena Scientific.

I’m also planning to code up more of the algorithms from the notes, and I’ll add a link here once that’s ready.
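
In the meantime, here is a minimal sketch of the kind of thing I have in mind: value iteration for a small finite MDP (the topic of week 5). Everything below, from the randomly generated transition matrix to the tolerance, is an illustrative toy setup rather than anything taken from the notes.

```python
# Value iteration on a small random finite MDP.
# The states, transitions and rewards are a toy example, not from the notes.
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)

# P[a, s, s'] = probability of moving from s to s' under action a.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)   # normalise each row to sum to 1

# r[a, s] = expected one-step reward for taking action a in state s.
r = rng.random((n_actions, n_states))

gamma = 0.9            # discount factor
V = np.zeros(n_states)

# Repeatedly apply the Bellman optimality operator:
#   V(s) <- max_a [ r(a, s) + gamma * sum_{s'} P(a, s, s') V(s') ]
for _ in range(1000):
    Q = r + gamma * (P @ V)   # Q[a, s]; P @ V broadcasts over actions
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)    # greedy policy with respect to the final values
print("values:", np.round(V, 3))
print("policy:", policy)
```

Since the Bellman operator is a contraction in the sup norm (with modulus gamma), the loop above converges geometrically to the optimal value function, which is exactly the argument covered in the MDP weeks of the course.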
