Why use 2 networks, training once every episode and updating the target network every N episodes, when we could use 1 network and train it ONCE every N episodes? There is literally no difference!
What you are describing is not Double DQN. The periodically updated target network is a core feature of the original DQN algorithm (and all of its derivatives). DeepMind's classic paper explains why it is crucial to have two networks:
The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the targets $y_j$ in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network $\hat{Q}$ and use $\hat{Q}$ for generating the Q-learning targets $y_j$ for the following C updates to Q. This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases $Q(s_t, a_t)$ often also increases $Q(s_{t+1}, a)$ for all $a$ and hence also increases the target $y_j$, possibly leading to oscillations or divergence of the policy. Generating the targets using an older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets $y_j$, making divergence or oscillations much more unlikely.
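To make the mechanics concrete, here is a minimal PyTorch sketch of the two-network setup the paper describes (the network sizes, hyperparameters, and dummy minibatch are illustrative assumptions, not from the paper): the online network is trained on every update, while the frozen clone generates the targets $y_j$ and is only synchronised every C updates.

```python
import copy
import torch
import torch.nn as nn

# Illustrative sizes: a 4-dim state, 2 actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)           # the clone used to generate targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
GAMMA, C = 0.99, 1000

for update in range(10_000):
    # Dummy minibatch standing in for 32 transitions sampled from replay.
    s = torch.randn(32, 4)
    a = torch.randint(0, 2, (32,))
    r = torch.randn(32)
    s2 = torch.randn(32, 4)
    done = torch.zeros(32)

    with torch.no_grad():                    # targets come from the FROZEN net
        y = r + GAMMA * (1 - done) * target_net(s2).max(dim=1).values

    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if update % C == 0:                      # every C updates, sync the clone
        target_net.load_state_dict(q_net.state_dict())
```

The `torch.no_grad()` block is exactly the delay the paper describes: updates to `q_net` do not affect the targets until the next synchronisation.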
I want to know about the convergence time of Deep Q-learning vs Q-learning when run on the same problem. Can anyone give me an idea of the pattern between them? It would be better if it were explained with a graph.
In short, the more complicated the state space is, the bigger DQN's advantage over Q-learning (by "complicated", I mean the number of all possible states). When the state space is too complicated, it becomes nearly impossible for Q-learning to converge due to time and hardware limitations.

Note that DQN is in fact a kind of Q-learning: it uses a neural network to act like a Q-table, and both the Q-network and the Q-table output a Q-value given the state as input. Below, I use "Q-learning" to refer to the simple version with a Q-table, and "DQN" for the neural-network version.

You can't predict convergence time without specifying a particular problem, because it really depends on what you are doing:
For example, if you are working with a simple environment like FrozenLake: https://gym.openai.com/envs/FrozenLake-v0/

Q-learning will converge faster than DQN, as long as you have a reasonable reward function.

This is because FrozenLake has only 16 states; Q-learning is very simple and efficient, so it runs much faster than training a neural network.
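For reference, here is a minimal tabular Q-learning sketch using the classic Gym API (FrozenLake-v0; newer Gym/Gymnasium releases rename the environment and change the `reset`/`step` signatures, so treat this as illustrative):

```python
import numpy as np
import gym

env = gym.make("FrozenLake-v0")                 # 16 states, 4 actions
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.99, 0.1              # illustrative hyperparameters

for episode in range(5000):
    s = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection over the table row for s
        a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
        s2, r, done, _ = env.step(a)
        # exact tabular update: only the single entry Q[s, a] changes
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
```

The entire "model" here is a 16 x 4 array, which is why it trains so much faster than a neural network on a problem this small.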
However, if you are working with something like an Atari game: https://gym.openai.com/envs/Assault-v0/

there are millions of states (note that even a single-pixel difference is considered a totally new state), and Q-learning would have to enumerate all of them in its Q-table to actually converge (so it would require a very large memory plus a very long training time to enumerate and explore all possible states). In fact, I am not sure it would ever converge in the more complicated games, simply because there are so many states.
This is where DQN becomes useful. Neural networks can generalize across states and learn a function from state to action (or, more precisely, from state to Q-value). The network no longer needs to enumerate states; instead, it learns the information implied by them. Even if you have never explored a certain state in training, as long as the neural network has learned the relationship on other, similar states, it can still generalize and output a Q-value. This makes convergence much easier.
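To see the generalization point concretely, here is a toy sketch (the sizes and states are made up for illustration): a Q-network produces Q-value estimates even for states it has never been trained on, because nearby inputs flow through the same learned weights, whereas a Q-table would simply have no entry.

```python
import torch
import torch.nn as nn

# A toy Q-network: 4-dim state in, one Q-value per action out.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

seen   = torch.tensor([0.10, 0.20, 0.30, 0.40])   # a state visited in training
unseen = torch.tensor([0.11, 0.19, 0.31, 0.39])   # a nearby, never-visited state

# A Q-table lookup would fail for `unseen`; the network still outputs
# Q-values because both states pass through the same weights.
print(q_net(seen), q_net(unseen))
```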
I'm using RLlib for the first time and trying to train a custom multi-agent RL environment, and I would like to train a couple of PPO agents on it. The implementation hiccup I need to figure out is how to alter the training for one special agent so that it only takes an action every X timesteps. Is it best to only call compute_action() every X timesteps? Or, on the other steps, to mask the policy selection so that it has to re-sample an action until a No-Op is chosen? Or to modify the action that gets fed into the environment (plus the previous actions in the training batches) to be No-Ops?

What's the easiest way to implement this while still taking advantage of RLlib's training features? Do I need to create a custom training loop, or is there a way to configure PPOTrainer to do this?
Thanks
Let t := the number of timesteps so far. Give the special agent the feature t (mod X), and don't process its actions in the environment when t (mod X) != 0. This accomplishes two things:

1) the agent, in effect, only takes an action every X timesteps, because you are ignoring all the others

2) the agent can learn that only the actions taken every X timesteps will affect future rewards
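Here is a minimal environment-side sketch of that idea (not RLlib-specific; `special_agent`, `NO_OP`, and the dict-based multi-agent API are illustrative assumptions you would adapt to your MultiAgentEnv):

```python
X = 5        # the special agent acts every X timesteps (assumption)
NO_OP = 0    # assumption: action 0 is a no-op in your environment

class EveryXStepsWrapper:
    """Expose t mod X to the special agent and ignore its off-step actions."""

    def __init__(self, base_env):
        self.base_env = base_env
        self.t = 0

    def reset(self):
        self.t = 0
        return self._augment(self.base_env.reset())

    def step(self, action_dict):
        if self.t % X != 0:
            # Off-step: overwrite the special agent's action with a no-op,
            # so whatever the policy sampled has no effect.
            action_dict = {**action_dict, "special_agent": NO_OP}
        obs, rewards, dones, infos = self.base_env.step(action_dict)
        self.t += 1
        return self._augment(obs), rewards, dones, infos

    def _augment(self, obs):
        # Append the phase feature t mod X to the special agent's observation,
        # so its policy can learn which steps actually matter.
        obs["special_agent"] = (obs["special_agent"], self.t % X)
        return obs
```

This keeps the standard training loop intact: every agent still produces an action every step, but only the on-phase actions of the special agent have any effect.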
I am confused about why the DQN-with-experience-replay algorithm performs a gradient descent step at every step of a given episode. That fits only one step at a time, right? Wouldn't this make it extremely slow? Why not update after each episode ends, or every time the model is cloned?
In the original paper, the authors push one transition to the experience replay buffer at each step and randomly sample 32 transitions to train the model in minibatch fashion. The samples collected by interacting with the environment are not fed directly to the model. To increase training speed, the authors store a sample every step but update the model only every four steps.
Using OpenAI's Baselines project, this single-process method can master easy games like Atari Pong (Pong-v4) in about 2.5 hours on a single GPU. Of course, training in this single-process way leaves the resources of a multi-core, multi-GPU (or single-GPU) system underutilised. Newer publications have therefore decoupled action selection from model optimisation: they use multiple "Actors" to interact with environments simultaneously and a single GPU "Learner" to optimise the model, or multiple Learners with multiple models on various GPUs. The multi-actor-single-learner setup is described in DeepMind's Ape-X DQN (Distributed Prioritized Experience Replay, Horgan et al., 2018), and the multi-actor-multi-learner setup in Accelerated Methods for Deep Reinforcement Learning (Stooke and Abbeel, 2018). When using multiple learners, parameter sharing across processes becomes essential. An earlier attempt is described in DeepMind's parallel DQN (Massively Parallel Methods for Deep Reinforcement Learning, Nair et al., 2015), proposed in the period between DQN and A3C. However, that work ran entirely on CPUs, so despite using massive resources its results are easily outperformed by PAAC's batched action-selection-on-GPU method.
You can't optimise only at the end of each episode, because episode length isn't fixed: a better model usually produces longer episodes, so the model would be updated less often just as it starts performing better, making learning progress unstable.
We also don't train the model only when the target model is cloned, because the target network exists to stabilise training by keeping an older set of parameters. If you updated only at cloning time, the target network's parameters would always match the online network's, which causes instability: with the same parameters, one model update raises the value estimate of the next state as well.
DeepMind's 2015 Nature paper states:
The second modification to online Q-learning aimed at further improving the stability of our method with neural networks is to use a separate network for generating the targets $y_j$ in the Q-learning update. More precisely, every C updates we clone the network Q to obtain a target network $Q'$ and use $Q'$ for generating the Q-learning targets $y_j$ for the following C updates to Q.

This modification makes the algorithm more stable compared to standard online Q-learning, where an update that increases $Q(s_t, a_t)$ often also increases $Q(s_{t+1}, a)$ for all $a$ and hence also increases the target $y_j$, possibly leading to oscillations or divergence of the policy. Generating the targets using the older set of parameters adds a delay between the time an update to Q is made and the time the update affects the targets $y_j$, making divergence or oscillations much more unlikely.
Human-level control through deep reinforcement learning, Mnih et al., 2015
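A schematic sketch of that cadence (the helpers here are stand-ins, not real library calls): store a transition every step, train on a 32-sample minibatch every 4th step, and clone the target network every C training updates.

```python
import random
from collections import deque

buffer = deque(maxlen=100_000)              # experience replay buffer
TRAIN_EVERY, CLONE_EVERY, BATCH = 4, 10_000, 32

def env_step(t):
    """Stand-in for one environment interaction: returns (s, a, r, s', done)."""
    return (t, 0, 0.0, t + 1, False)

def train_on(minibatch):
    """Stand-in for one minibatch gradient update on the online Q-network."""
    pass

def clone_target():
    """Stand-in for target_net.load_state_dict(q_net.state_dict())."""
    pass

updates = 0
for step in range(100_000):
    buffer.append(env_step(step))                 # store EVERY step
    if step % TRAIN_EVERY == 0 and len(buffer) >= BATCH:
        minibatch = random.sample(buffer, BATCH)  # sample 32 transitions
        train_on(minibatch)                       # one gradient update
        updates += 1
        if updates % CLONE_EVERY == 0:
            clone_target()                        # sync target every C updates
```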
I have trouble understanding why a target network is necessary in DQN. I'm reading the paper "Human-level control through deep reinforcement learning".

I understand Q-learning: it is a value-based reinforcement learning algorithm that learns the "optimal" action-value function over state-action pairs, the one that maximizes the expected long-term discounted reward over a sequence of timesteps.
The Q-values are updated using the Bellman equation, and a single step of the Q-learning update is given by

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$

where $\alpha$ and $\gamma$ are the learning rate and the discount factor.
I can understand the claim that the reinforcement learning algorithm can become unstable and diverge when a neural network is used as the function approximator.
The experience replay buffer is used so that we do not forget past experiences and to de-correlate the data we learn from.
This is where I get stuck. Let me break the paragraph from the paper down here for discussion:
"The fact that small updates to $Q$ may significantly change the policy and therefore change the data distribution": I understood this part. Small periodic changes to the Q-network may lead to instability and to changes in the distribution of the data we collect; for example, the agent might suddenly start always taking a left turn, or something like that.
"and the correlations between the action-values ($Q$) and the target values $r + \gamma \max_{a'} Q(s', a')$": this says that the target is the reward plus my prediction of the return, given that I take what I think is the best action in the current state and follow my policy from then on.
We used an iterative update that adjusts the action-values (Q) towards target values that are only periodically updated, thereby reducing correlations with the target.
So, in summary, a target network is required because the network keeps changing at each timestep and the "target values" would otherwise be updated at each timestep?

But I do not understand how this actually solves the problem.
So, in summary, a target network is required because the network keeps changing at each timestep and the "target values" are being updated at each timestep?
The difference between Q-learning and DQN is that you have replaced an exact value function with a function approximator. With Q-learning you are updating exactly one state/action value at each timestep, whereas with DQN you are updating many, which you understand. The problem this causes is that you can affect the action values for the very next state you will be in instead of guaranteeing them to be stable as they are in Q-learning.
This happens basically all the time with DQN when using a standard deep network (a bunch of fully connected layers of the same size). The effect you typically see is referred to as "catastrophic forgetting", and it can be quite spectacular. If you are doing something like Lunar Lander with this sort of network (the simple one, not the pixel one) and track the rolling average score over the last 100 games or so, you will likely see a nice curve up in score, then all of a sudden it completely craps out and starts making awful decisions again, even as your alpha gets small. This cycle continues endlessly, regardless of how long you let it run.
Using a stable target network as your error measure is one way of combating this effect. Conceptually it's like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better" as opposed to saying "I'm going to retrain myself how to play this entire game after every move". By giving your network more time to consider many actions that have taken place recently instead of updating all the time, it hopefully finds a more robust model before you start using it to make actions.
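Here is a toy demonstration of the interference described in the first paragraph (the network, states, and target value are all made up for illustration): a single gradient step that pushes $Q(s, a)$ up also moves the values at a similar next state $s'$, because the two estimates share weights.

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(q_net.parameters(), lr=0.1)

s  = torch.tensor([0.5, 0.1, 0.3, 0.2])   # current state
s2 = torch.tensor([0.6, 0.1, 0.3, 0.2])   # similar next state

before = q_net(s2).detach().clone()

# Push Q(s, a=0) toward a higher target, as a TD update would.
loss = (q_net(s)[0] - 5.0) ** 2
opt.zero_grad()
loss.backward()
opt.step()

after = q_net(s2).detach()
print(before, after)   # Q(s') moved too, though we never trained on s2
```

With a frozen target network, the targets computed at $s'$ do not chase these shifting estimates between synchronisations.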
On a side note, DQN is essentially obsolete at this point, but the themes from that paper were the fuse leading up to the RL explosion of the last few years.
I am learning about the approach employed in Reinforcement Learning for robotics and I came across the concept of Evolutionary Strategies. But I couldn't understand how RL and ES are different. Can anyone please explain?
To my understanding, there are two main differences.

1) Reinforcement learning uses the concept of a single agent, and the agent learns by interacting with the environment in different ways. Evolutionary algorithms usually start with many "agents", and only the "strong ones survive" (the agents with characteristics that yield the lowest loss).

2) A reinforcement learning agent learns from both positive and negative actions, whereas evolutionary algorithms only keep the optimal solutions; information about negative or suboptimal solutions is discarded and lost.
Example
You want to build an algorithm to regulate the temperature in the room.
The room is 15 °C, and you want it to be 23 °C.
Using Reinforcement learning, the agent will try a bunch of different actions to increase and decrease the temperature. Eventually, it learns that increasing the temperature yields a good reward. But it also learns that reducing the temperature will yield a bad reward.
An evolutionary algorithm starts with a bunch of random agents, each with a preprogrammed set of actions it is going to perform. The agents that have the "increase temperature" action survive and move on to the next generation. Eventually, only agents that increase the temperature survive and are deemed the best solution. However, the algorithm never learns what happens if you decrease the temperature.
TL;DR: RL is usually one agent trying different actions, learning and remembering all information (positive or negative). Evolutionary algorithms use many agents that guess many actions, and only the agents with the optimal actions survive. It is basically a brute-force way to solve a problem.
I think the biggest difference between Evolutionary Strategies and Reinforcement Learning is that ES is a global optimization technique while RL is a local optimization technique. So RL converges faster, but possibly to a local optimum, while ES converges more slowly toward a global optimum.
Evolution Strategies optimization happens at the population level. An evolution strategy algorithm iteratively (i) samples a batch of candidate solutions from the search space, (ii) evaluates them, and (iii) discards the ones with low fitness values. The sampling for a new iteration (or generation) happens around the mean of the best-scoring candidate solutions from the previous iteration. Doing so enables evolution strategies to direct the search towards a promising region of the search space.
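A minimal sketch of that loop on a toy objective (the population size, elite fraction, and fixed step size are illustrative; real ES implementations also adapt the sampling distribution):

```python
import numpy as np

def fitness(x):
    """Toy objective: higher is better, optimum at x = [3, -2]."""
    return -np.sum((x - np.array([3.0, -2.0])) ** 2)

rng = np.random.default_rng(0)
mean, sigma = np.zeros(2), 1.0
POP, ELITE = 50, 10

for generation in range(100):
    # (i) sample a batch of candidate solutions around the current mean
    pop = mean + sigma * rng.standard_normal((POP, 2))
    # (ii) evaluate their fitness
    scores = np.array([fitness(x) for x in pop])
    # (iii) keep the best candidates and recentre the search on their mean
    elite = pop[np.argsort(scores)[-ELITE:]]
    mean = elite.mean(axis=0)

print(mean)   # drifts toward the optimum [3, -2]
```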
Reinforcement learning requires the problem to be formulated as a Markov decision process (MDP). An RL agent optimizes its behavior (or policy) by maximizing a cumulative reward signal received on transitions from one state to another. Since the problem is abstracted as an MDP, learning can happen at the step or episode level. Learning per step (or per N steps) is done via temporal-difference (TD) learning, and learning per episode is done via Monte Carlo methods. So far I have been talking about learning via action-value functions (learning the values of actions). Another way of learning is to optimize the parameters of a neural network representing the policy of the agent directly via gradient ascent. This approach was introduced in the REINFORCE algorithm, and the general approach is known as policy-based RL.
For a comprehensive comparison, check out this paper: https://arxiv.org/pdf/2110.01411.pdf