Markov Decision Process
A Markov Decision Process (MDP) is a mathematical abstraction for sequential decision-making. Specifically, a Finite Markov Decision Process effectively characterizes the mathematical properties of reinforcement learning, where "finite" refers to the state, action, and reward sets all being finite. We say a system possesses the Markov property if its next state depends solely on the current state; all historical information is implicitly contained within the current state.
In reinforcement learning, we distinguish between the agent and the environment. The agent is the decision-maker, and everything else constitutes the environment. The agent receives a state from the environment and makes a decision (an action). Consequently, it receives a reward and the next state.
Definition of Markov Decision Process
In a Finite Markov Decision Process, the state and reward follow a fixed probability distribution based on the previous state and action:
$$p(s', r \mid s, a) \doteq \Pr\{S_t = s', R_t = r \mid S_{t-1} = s, A_{t-1} = a\}$$

where $s', s \in \mathcal{S}$, $r \in \mathcal{R}$, and $a \in \mathcal{A}(s)$. Here, $p$ characterizes the dynamics of this Markov Decision Process.
From the four-argument dynamics function above, other variables describing the environment can be derived.
For example, the state transition probability distribution is:

$$p(s' \mid s, a) \doteq \Pr\{S_t = s' \mid S_{t-1} = s, A_{t-1} = a\} = \sum_{r \in \mathcal{R}} p(s', r \mid s, a)$$
The expected reward for a state-action pair is:

$$r(s, a) \doteq \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a] = \sum_{r \in \mathcal{R}} r \sum_{s' \in \mathcal{S}} p(s', r \mid s, a)$$
The expected reward for a state-action pair given the resulting state is:

$$r(s, a, s') \doteq \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a, S_t = s'] = \sum_{r \in \mathcal{R}} r \, \frac{p(s', r \mid s, a)}{p(s' \mid s, a)}$$
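To make the four-argument dynamics function concrete, here is a minimal sketch (not taken from the original text or any library) of a toy finite MDP stored as a table, together with the derived quantities above; the state and action names are made up for illustration.

```python
# Toy dynamics table: p[(s, a)] maps each (next_state, reward) pair to its
# probability, mirroring the four-argument function p(s', r | s, a).
p = {
    ("s0", "go"):   {("s1", 1.0): 0.8, ("s0", 0.0): 0.2},
    ("s0", "stay"): {("s0", 0.0): 1.0},
    ("s1", "go"):   {("s1", 0.0): 1.0},
    ("s1", "stay"): {("s0", 1.0): 0.5, ("s1", 0.0): 0.5},
}

def transition_prob(s, a, s_next):
    """p(s' | s, a): marginalize the reward out of p(s', r | s, a)."""
    return sum(prob for (sp, _), prob in p[(s, a)].items() if sp == s_next)

def expected_reward(s, a):
    """r(s, a): sum of r * p(s', r | s, a) over all next states and rewards."""
    return sum(r * prob for (_, r), prob in p[(s, a)].items())

print(transition_prob("s0", "go", "s1"))  # 0.8
print(expected_reward("s0", "go"))        # 0.8
```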
Reward and Return
In reinforcement learning, the agent's objective is to maximize cumulative reward. We define cumulative reward as the return. In its simplest form:

$$G_t \doteq R_{t+1} + R_{t+2} + R_{t+3} + \cdots + R_T$$
However, the return defined above is only suitable for episodic tasks, i.e., tasks with a natural terminal state. It is not applicable to continuing tasks. For continuing tasks, a discount factor is generally introduced, so that rewards further in the future are discounted more heavily:

$$G_t \doteq R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$

where $\gamma \in [0, 1]$ is the discount rate. The above equation can be written in a recursive form:

$$G_t = R_{t+1} + \gamma G_{t+1}$$

Both forms can be unified into one equation (allowing either $T = \infty$ for continuing tasks or $\gamma = 1$ for episodic tasks ending at step $T$, but not both):

$$G_t \doteq \sum_{k=t+1}^{T} \gamma^{k-t-1} R_k$$
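As a quick illustration, the sketch below computes the discounted return of a short episode both from the definition and with the recursion $G_t = R_{t+1} + \gamma G_{t+1}$; the reward values are made up.

```python
def discounted_return(rewards, gamma=0.9):
    """G_t from the definition: sum of gamma^k * R_{t+k+1}."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

def all_returns(rewards, gamma=0.9):
    """G_t for every step of the episode via G_t = R_{t+1} + gamma * G_{t+1}."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

rewards = [1.0, 0.0, 0.0, 2.0]       # R_1, ..., R_4 of one episode
print(discounted_return(rewards))    # 2.458
print(all_returns(rewards)[0])       # same value, computed recursively
```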
Policy and Value Function
Almost all reinforcement learning algorithms involve estimating value functions. A value function measures the expected return from a given state or a given state-action pair. Because the expected return depends on how the agent acts, a value function is always defined with respect to a particular way of selecting actions, called the policy. A policy $\pi(a \mid s)$ can be viewed as a probability distribution over actions given a state.
The definition of the state-value function is given as:

$$v_\pi(s) \doteq \mathbb{E}_\pi[G_t \mid S_t = s] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;\middle|\; S_t = s\right]$$
The definition of the action-value function is similar:

$$q_\pi(s, a) \doteq \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;\middle|\; S_t = s, A_t = a\right]$$
State-value and action-value functions can be estimated from experience: one simply maintains the average of the returns observed from each state (or state-action pair) while following a specific policy. By the law of large numbers, this average converges to the expected value. This is the main idea behind Monte Carlo methods. However, when the number of states is very large, keeping a separate average per state is no longer feasible. In such cases, we use function approximation to fit the value function, and the quality of the estimate depends on the choice of approximator; deep learning methods fall into this category.
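As a sketch of this Monte Carlo idea, the function below performs first-visit Monte Carlo estimation of $v_\pi$, assuming episodes collected under a fixed policy are given as lists of (state, reward) pairs; this representation is an assumption for illustration only.

```python
from collections import defaultdict

def mc_state_values(episodes, gamma=0.9):
    """First-visit Monte Carlo estimate of v_pi from sampled episodes.
    Each episode is a list of (S_t, R_{t+1}) pairs."""
    returns = defaultdict(list)                    # state -> observed returns
    for episode in episodes:
        G = 0.0
        for t in range(len(episode) - 1, -1, -1):  # walk backward through time
            s, r = episode[t]
            G = r + gamma * G                      # G_t = R_{t+1} + gamma * G_{t+1}
            if all(prev != s for prev, _ in episode[:t]):  # first visit of s only
                returns[s].append(G)
    # by the law of large numbers the sample average converges to v_pi(s)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

episodes = [[("s0", 1.0), ("s1", 0.0), ("s1", 2.0)],
            [("s0", 0.0), ("s1", 1.0)]]
print(mc_state_values(episodes, gamma=0.9))
```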
Corresponding to the recursive nature of the return, the state-value function can also be written in a recursive form, known as the Bellman equation:

$$v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a) \big[ r + \gamma v_\pi(s') \big]$$
A similar Bellman equation can be derived for the action-value function:

$$q_\pi(s, a) = \sum_{s', r} p(s', r \mid s, a) \Big[ r + \gamma \sum_{a'} \pi(a' \mid s')\, q_\pi(s', a') \Big]$$
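One common way to use these equations is iterative policy evaluation: treat the Bellman equation for $v_\pi$ as an update rule and sweep over the states until the values stop changing. The sketch below assumes the toy dynamics table format from the earlier sketch and a tabular policy `pi[s] -> {action: probability}`; the names are illustrative.

```python
def policy_evaluation(p, pi, states, gamma=0.9, tol=1e-8):
    """Iteratively apply the Bellman equation for v_pi until convergence."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = sum(
                pa * prob * (r + gamma * v[sp])   # pi(a|s) p(s',r|s,a) [r + gamma v(s')]
                for a, pa in pi[s].items()
                for (sp, r), prob in p[(s, a)].items()
            )
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < tol:
            return v

# Usage with the same toy dynamics table as before and a uniform random policy.
p = {
    ("s0", "go"):   {("s1", 1.0): 0.8, ("s0", 0.0): 0.2},
    ("s0", "stay"): {("s0", 0.0): 1.0},
    ("s1", "go"):   {("s1", 0.0): 1.0},
    ("s1", "stay"): {("s0", 1.0): 0.5, ("s1", 0.0): 0.5},
}
pi = {s: {"go": 0.5, "stay": 0.5} for s in ("s0", "s1")}
print(policy_evaluation(p, pi, states=("s0", "s1")))
```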
Matrix-vector Form
If we define

$$r_\pi(s) \doteq \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\, r, \qquad p_\pi(s' \mid s) \doteq \sum_{a} \pi(a \mid s)\, p(s' \mid s, a),$$

then we have:

$$v_\pi(s) = r_\pi(s) + \gamma \sum_{s'} p_\pi(s' \mid s)\, v_\pi(s')$$

Suppose that the states are indexed as $s_i$ with $i = 1, 2, \ldots, n$, where $n = |\mathcal{S}|$.

Let $v_\pi = [v_\pi(s_1), \ldots, v_\pi(s_n)]^\top$, $r_\pi = [r_\pi(s_1), \ldots, r_\pi(s_n)]^\top$, and $P_\pi \in \mathbb{R}^{n \times n}$ with $[P_\pi]_{ij} = p_\pi(s_j \mid s_i)$. We get the following matrix-vector form:

$$v_\pi = r_\pi + \gamma P_\pi v_\pi$$

where $v_\pi$ is the unknown to be solved, and $r_\pi$ and $P_\pi$ are known.
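Because this is a linear system, it can be rearranged to $(I - \gamma P_\pi)\, v_\pi = r_\pi$ and solved directly. A small NumPy sketch with a made-up two-state example:

```python
import numpy as np

gamma = 0.9
P_pi = np.array([[0.5, 0.5],     # row i holds p_pi(s_j | s_i) under the policy pi
                 [0.1, 0.9]])
r_pi = np.array([1.0, 0.0])      # expected one-step reward under pi in each state

# Solve (I - gamma * P_pi) v_pi = r_pi rather than forming the inverse explicitly.
v_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
print(v_pi)
```

Using `np.linalg.solve` instead of an explicit matrix inverse is the standard, numerically safer choice for small linear systems like this one.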
Optimal Policy and Optimal Value Function
If a policy yields an expected return at least as high as that of every other policy in all states, we consider it an optimal policy, denoted $\pi_*$. Its value function is the optimal state-value function, defined as follows, for all $s \in \mathcal{S}$:

$$v_*(s) \doteq \max_\pi v_\pi(s)$$
There might be multiple optimal policies, but there is only one optimal value function (otherwise, it would violate the definition of an optimal policy).
Correspondingly, there is also an optimal action-value function:

$$q_*(s, a) \doteq \max_\pi q_\pi(s, a)$$
They have the following relationship:

$$q_*(s, a) = \mathbb{E}\big[R_{t+1} + \gamma v_*(S_{t+1}) \mid S_t = s, A_t = a\big]$$
Based on the properties of the optimal policy, the Bellman optimality equation can be derived:

$$v_*(s) = \max_{a \in \mathcal{A}(s)} \sum_{s', r} p(s', r \mid s, a) \big[ r + \gamma v_*(s') \big]$$
Similarly, the Bellman optimality equation for the action-value function can be obtained:

$$q_*(s, a) = \sum_{s', r} p(s', r \mid s, a) \Big[ r + \gamma \max_{a'} q_*(s', a') \Big]$$
Mathematically, it can be shown that the Bellman optimality equation has a unique solution. It amounts to one equation for each state: if there are $n$ states, there are $n$ equations in $n$ unknowns. When the dynamics function $p$ is known, we can in principle use any method for solving systems of non-linear equations.
Deriving the optimal policy from the optimal state-value function requires only a one-step search, i.e., finding the action that maximizes the expression in the Bellman optimality equation. For the optimal action-value function, it is even more convenient: the action corresponding to the maximum value for a given state is the best action.
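As a sketch of how this works in the finite case, the function below runs value iteration (a standard way to solve the Bellman optimality equation numerically, not described in the text above) and then extracts a greedy policy with the one-step search just described. It assumes the same toy dynamics table format `p[(s, a)] -> {(s', r): probability}` used earlier.

```python
def value_iteration(p, states, actions, gamma=0.9, tol=1e-8):
    """Apply the Bellman optimality backup until convergence, then read off
    an optimal policy by a one-step search over actions."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = max(
                sum(prob * (r + gamma * v[sp]) for (sp, r), prob in p[(s, a)].items())
                for a in actions
            )
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < tol:
            break
    # one-step search: the action maximizing the optimality backup is greedy/optimal
    greedy = {
        s: max(actions, key=lambda a: sum(
            prob * (r + gamma * v[sp]) for (sp, r), prob in p[(s, a)].items()))
        for s in states
    }
    return v, greedy

# e.g. v_star, pi_star = value_iteration(p, states=("s0", "s1"), actions=("go", "stay"))
```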
Although the Bellman equation has a theoretical solution, it can rarely be computed directly. Direct computation relies on the following three conditions:
- Accurate knowledge of the environment's dynamics function.
- Sufficient computational resources.
- Satisfaction of the Markov property.
Since these conditions rarely all hold, many reinforcement learning methods instead provide corresponding approximate solutions.
Partially Observable MDPs
MDPs assume that the agent always knows exactly what state it is in: the problem is fully observable. However, this is not valid for many tasks, in which the agent cannot perceive all aspects of the environment's state and instead receives only a partial observation.
Partially observable MDPs (POMDPs) relax the assumption of full observability.
The sensor model allows the agent to observe the environment: if the agent executes an action $a$ and the environment transitions to state $s'$, the agent receives an observation $o$ with probability $O(s', a, o) = P(o \mid s', a)$.
Solving POMDPs is similar to solving MDPs; in fact, the same algorithms can be applied. The only difference is that we cast the POMDP as a standard MDP with a new state space: each state of this new MDP is a probability distribution over the set $\mathcal{S}$. Thus, each state is a belief state, which defines the probability of being in each underlying state $s \in \mathcal{S}$. This leads to a vastly larger (in fact continuous) state space, so POMDPs are typically much harder problems to solve.
Like MDPs, solutions are policies, here mapping belief states to actions. Optimal policies maximize the expected return.
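As a sketch of how a belief state is maintained, the Bayes-filter update below assumes a transition model `T(s, a, s_next)` and a sensor model `O(s_next, a, o)` in the spirit of the one above; all function and variable names are illustrative, not from the original text.

```python
def update_belief(belief, a, o, states, T, O):
    """Return the new belief after executing action a and receiving observation o.
    belief: dict mapping each state s to the probability of being in s.
    T(s, a, s_next): transition probability; O(s_next, a, o): observation probability."""
    new_belief = {}
    for s_next in states:
        # prediction step: propagate the old belief through the transition model
        predicted = sum(belief[s] * T(s, a, s_next) for s in states)
        # correction step: weight by how likely the observation is in s_next
        new_belief[s_next] = O(s_next, a, o) * predicted
    total = sum(new_belief.values())
    return {s: b / total for s, b in new_belief.items()}
```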