Here are the notes for the Stochastic Control course for 2022:
Stochastic_Control_Jan29
Here is a rough plan for each week of lectures:
- Dynamic Programming (DP), illustrated in the short sketch after this list
- DP examples & Markov Chains
- Markov Decision Processes (MDP)
- Infinite Time MDP
- Algorithms for MDPs
- Optimal Stopping and/or Kalman Filter
- Continuous Time Control and LQR
- Diffusion Control & Merton Portfolios
- More Merton Portfolios
- Stochastic Approximation and Linear Regression
- Reinforcement Learning, Q-learning and TD methods
- Linear Function Approximation for TD and Optimal Stopping
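As a small taste of the first week's material, here is a minimal backward-induction (finite-horizon dynamic programming) sketch in Python. The two-state, two-action problem, its transition probabilities, rewards, and horizon are all illustrative assumptions rather than an example from the notes:

```python
import numpy as np

# Toy two-state, two-action problem (illustrative assumptions, not from the notes).
# P[s, a, s'] = transition probability, R[s, a] = one-step reward.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
T = 10                      # horizon

V = np.zeros(2)             # terminal value V_T(s) = 0
policy = np.zeros((T, 2), dtype=int)
for t in reversed(range(T)):
    # Bellman recursion: V_t(s) = max_a [ R(s, a) + sum_{s'} P(s'|s, a) V_{t+1}(s') ]
    Q = R + P @ V           # Q[s, a], contracting P against the next-stage value
    policy[t] = np.argmax(Q, axis=1)
    V = np.max(Q, axis=1)

print(V)       # optimal values at time 0
print(policy)  # optimal action for each (t, s)
```

The recursion runs backwards from the terminal time, so each stage only needs the value function of the stage after it.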
These notes are something of a never-ending work in progress. Typos, comments, and corrections are always welcome; I’m always looking to improve them. Nonetheless, I’d recommend supplementing these notes with some textbook references:
- Bertsekas, D. P. (2018). Dynamic programming and optimal control (Vol. 1). Athena Scientific.
- Puterman, M. L. (1994). Markov decision processes: Discrete stochastic dynamic programming. Wiley.
- Rogers, L. C. G. (2013). Optimal investment. Springer.
- Kushner, H. J., & Yin, G. G. (2003). Stochastic approximation and recursive algorithms and applications (2nd ed.). Springer.
- Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (2nd ed.). MIT Press.
- Bertsekas, D. P., & Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Athena Scientific.
I’m currently looking to code up many more of the algorithms in the notes; I’ll add a link here once that’s ready.
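In the meantime, to give a flavour of what that code might look like, here is a minimal sketch of tabular Q-learning (the week 10 topic) on the same kind of toy MDP. The environment, step size, and run length are illustrative assumptions, not an implementation from the notes:

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative assumptions, not from the notes).
# P[s, a, s'] = transition probability, R[s, a] = expected one-step reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

gamma = 0.9     # discount factor
alpha = 0.1     # constant learning rate
epsilon = 0.1   # exploration probability
rng = np.random.default_rng(0)

Q = np.zeros((2, 2))  # Q[s, a] estimates, initialised at zero
s = 0
for _ in range(50_000):
    # Epsilon-greedy action selection.
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
    s_next = rng.choice(2, p=P[s, a])
    # Q-learning update: nudge Q[s, a] towards the one-step Bellman target.
    Q[s, a] += alpha * (R[s, a] + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print(Q)  # approximates the optimal action-value function
```

With a constant learning rate the estimates hover around, rather than converge exactly to, the optimal action values; decaying alpha appropriately, in the spirit of stochastic approximation (week 9), recovers convergence.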