Tuan Anh Le

Planning by dynamic programming for reinforcement learning

08 November 2016

These are notes on lecture 3 of David Silver's course on reinforcement learning.

Dynamic programming (DP) methods for reinforcement learning (RL) are rarely useful in practice, since they require a full model of the MDP and sweeps over the entire state space, but they provide a foundation for understanding the state-of-the-art methods.

We use this notation and represent functions with a finite domain as vectors (e.g. a value function $v: \mathcal S \to \mathbb R$ is represented by a $|\mathcal S|$-dimensional vector).

Policy evaluation

Goal: Given an MDP $\langle \mathcal S, \mathcal A, \mathcal P, \mathcal R, \gamma \rangle$ and a policy $\pi$, find the value function $v_\pi$.

Algorithm:

  1. Initialize $v_0$ (e.g. to the zero vector).
  2. For $k = 1, 2, \dotsc$: \begin{align} v_k \leftarrow \mathcal R^{\pi} + \gamma \mathcal P^{\pi} v_{k - 1}. \end{align}

This comes from the Bellman expectation equation and can be proved to converge to $v_\pi$.
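To make the update concrete, here is a minimal NumPy sketch of this iteration in vector form; the function name, the toy 2-state MDP, and the arrays `R_pi` and `P_pi` are made up for illustration, not taken from the lecture.

```python
import numpy as np

def iterative_policy_evaluation(R_pi, P_pi, gamma, num_iters=1000, tol=1e-8):
    """Evaluate a fixed policy by iterating the Bellman expectation equation.

    R_pi  : (|S|,)    expected immediate reward under the policy, R^pi
    P_pi  : (|S|,|S|) state-transition matrix under the policy, P^pi
    gamma : discount factor
    """
    v = np.zeros(len(R_pi))                    # v_0 = 0
    for _ in range(num_iters):
        v_new = R_pi + gamma * P_pi @ v        # v_k <- R^pi + gamma P^pi v_{k-1}
        if np.max(np.abs(v_new - v)) < tol:    # stop once the update barely changes v
            return v_new
        v = v_new
    return v

# Toy 2-state MDP under some fixed policy (made-up numbers).
R_pi = np.array([1.0, 0.0])
P_pi = np.array([[0.5, 0.5],
                 [0.0, 1.0]])
print(iterative_policy_evaluation(R_pi, P_pi, gamma=0.9))
```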

Policy iteration

Goal: Given an MDP $\langle \mathcal S, \mathcal A, \mathcal P, \mathcal R, \gamma \rangle$, find the optimal policy $\pi_*$.

Algorithm:

  1. Initialize $\pi_0$ and $v_0$.
  2. For $k = 1, 2, \dotsc$: \begin{align} v_k &\leftarrow \mathrm{IterativePolicyEvaluation}(\pi_{k - 1}, v_{k - 1}) = v_{\pi_{k - 1}} \\ \pi_k &\leftarrow \mathrm{Greedy}(v_k). \end{align}

$\mathrm{IterativePolicyEvaluation}(\pi_{k - 1}, v_{k - 1})$ is an algorithm (like the one from the previous section) that computes the value function $v_{\pi_{k - 1}}$ of the policy $\pi_{k - 1}$ from the previous iteration; the previous estimate $v_{k - 1}$ only serves as an initialization. $\mathrm{Greedy}(v_k)$ means \begin{align} \pi_k(a \mid s) = \begin{cases} 1 & \text{if } a = \argmax_{a'} q_k(s, a'), \text{ where } q_k(s, a') = \mathcal R_s^{a'} + \gamma \sum_{s' \in \mathcal S} \mathcal P_{ss'}^{a'} v_k(s') \\ 0 & \text{otherwise.} \end{cases} \end{align}

Proven to converge to $v_*$ and $\pi_*$: the evaluation step converges via the Contraction Mapping Theorem, and the greedy improvement step is justified in the next section.
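A possible NumPy sketch of this loop, assuming the MDP is given as hypothetical arrays `R` (shape $|\mathcal S| \times |\mathcal A|$, holding $\mathcal R_s^a$) and `P` (shape $|\mathcal A| \times |\mathcal S| \times |\mathcal S|$, holding $\mathcal P_{ss'}^a$). For the evaluation step it solves the linear Bellman expectation equation exactly rather than iterating, a choice that is only feasible for small state spaces.

```python
import numpy as np

def policy_iteration(R, P, gamma, num_iters=100):
    """Alternate policy evaluation and greedy policy improvement.

    R : (|S|, |A|)       expected rewards R_s^a
    P : (|A|, |S|, |S|)  transition probabilities P_{ss'}^a
    """
    num_states, num_actions = R.shape
    pi = np.zeros(num_states, dtype=int)      # arbitrary initial deterministic policy
    for _ in range(num_iters):
        # Policy evaluation: solve v = R^pi + gamma P^pi v exactly.
        R_pi = R[np.arange(num_states), pi]
        P_pi = P[pi, np.arange(num_states), :]
        v = np.linalg.solve(np.eye(num_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to q(s, a).
        q = R + gamma * np.einsum('ast,t->sa', P, v)
        pi_new = np.argmax(q, axis=1)
        if np.array_equal(pi_new, pi):         # policy is stable, hence optimal
            break
        pi = pi_new
    return pi, v
```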

Policy improvement

Goal: Show that the value function monotonically improves in policy iteration, i.e. $v_{\pi_k}(s) \geq v_{\pi_{k - 1}}(s)$ for all $s$.

Acting greedily with respect to $v_k = v_{\pi_{k - 1}}$ improves the value for one step, since $q_{\pi_{k - 1}}(s, \pi_k(s)) = \max_a q_{\pi_{k - 1}}(s, a) \geq q_{\pi_{k - 1}}(s, \pi_{k - 1}(s)) = v_{\pi_{k - 1}}(s)$; unrolling this argument over successive time steps gives

\begin{align} v_k(s) = v_{\pi_{k - 1}}(s) \leq q_{\pi_{k - 1}}(s, \pi_k(s)) \leq \mathbb E_{\pi_k}\left[ R_{t + 1} + \gamma q_{\pi_{k - 1}}(S_{t + 1}, \pi_k(S_{t + 1})) \,\middle\vert\, S_t = s \right] \leq \cdots \leq v_{\pi_k}(s) = v_{k + 1}(s). \end{align}

Value iteration

Goal: Given an MDP $\langle \mathcal S, \mathcal A, \mathcal P, \mathcal R, \gamma \rangle$, find the optimal policy $\pi_*$.

Algorithm:

  1. Initialize $v_0$ (e.g. to the zero vector).
  2. For $k = 1, 2, \dotsc$: \begin{align} v_k \leftarrow \max_{a \in \mathcal A} \left(\mathcal R^a + \gamma \mathcal P^a v_{k - 1}\right). \end{align}

This comes from the Bellman optimality equation and converges to $v_*$; an optimal policy is then obtained by acting greedily with respect to the final value function.
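A corresponding NumPy sketch of value iteration, using the same hypothetical `R` and `P` arrays as in the policy iteration sketch above; the greedy policy is extracted at the end.

```python
import numpy as np

def value_iteration(R, P, gamma, num_iters=1000, tol=1e-8):
    """Iterate the Bellman optimality backup v_k = max_a (R^a + gamma P^a v_{k-1}).

    R : (|S|, |A|)       expected rewards R_s^a
    P : (|A|, |S|, |S|)  transition probabilities P_{ss'}^a
    """
    num_states = R.shape[0]
    v = np.zeros(num_states)                           # v_0 = 0
    for _ in range(num_iters):
        q = R + gamma * np.einsum('ast,t->sa', P, v)   # q_k(s, a)
        v_new = q.max(axis=1)                          # maximize over actions
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    pi = q.argmax(axis=1)   # greedy policy with respect to the final value function
    return v, pi
```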

Summary

It's proven that policy iteration gives us an optimal policy $\pi_*$. These DP methods don't work in general state spaces: they require a finite, enumerable state space and a known model of the MDP. Also, we are only finding one out of possibly many optimal policies for the given reward function.
