I have trouble understanding why a target network is necessary in DQN. I'm reading the paper "Human-level control through deep reinforcement learning".
I understand Q-learning. Q-learning is a value-based reinforcement learning algorithm that learns an "optimal" action-value function over state-action pairs, which it uses to maximize its long-term discounted reward over a sequence of timesteps.
Q-learning is updated using the Bellman equation, and a single step of the Q-learning update is given by
$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ R_{t+1} + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$
where $\alpha$ and $\gamma$ are the learning rate and the discount factor, respectively.
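To make that concrete, here is a minimal tabular sketch of a single update step (the environment size and hyperparameters are just illustrative):

```python
import numpy as np

# Tabular Q-learning: one update step (sizes and hyperparameters are illustrative)
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next):
    # TD target bootstraps from the greedy value of the next state
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
```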
I can understand that the reinforcement learning algorithm can become unstable and diverge when a nonlinear function approximator is used.
The experience replay buffer is used so that we do not forget past experiences and so that the samples used for training are de-correlated.
This is where I fail.
Let me break the paragraph from the paper down here for discussion
The fact that small updates to $Q$ may significantly change the policy and therefore change the data distribution — I understood this part. Periodic changes to the Q-network may lead to instability and to shifts in the data distribution; for example, the policy might suddenly start always taking a left turn, or something like that.
and the correlations between the action-values ($Q$) and the target values $r + \gamma \max_{a'} Q(s', a')$ — this says the target is the reward plus $\gamma$ times my prediction of the return, given that I take what I think is the best action in the next state and follow my policy from then on.
We used an iterative update that adjusts the action-values (Q) towards target values that are only periodically updated, thereby reducing correlations with the target.
So, in summary, a target network is required because the network keeps changing at each timestep and the "target values" would otherwise be updated at every timestep?
But I do not understand how a target network is going to solve this.
The difference between Q-learning and DQN is that you have replaced an exact value function with a function approximator. With Q-learning you are updating exactly one state/action value at each timestep, whereas with DQN you are updating many, which you understand. The problem this causes is that you can affect the action values for the very next state you will be in instead of guaranteeing them to be stable as they are in Q-learning.
This happens basically all the time with DQN when using a standard deep network (a bunch of fully connected layers of the same size). The effect you typically see is referred to as "catastrophic forgetting", and it can be quite spectacular. If you are doing something like Lunar Lander with this sort of network (the simple one, not the pixel one) and track the rolling average score over the last 100 games or so, you will likely see a nice curve up in score, then all of a sudden it completely craps out and starts making awful decisions again, even as your alpha gets small. This cycle will continue endlessly regardless of how long you let it run.
Using a stable target network as your error measure is one way of combating this effect. Conceptually it's like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better" as opposed to saying "I'm going to retrain myself how to play this entire game after every move". By giving your network more time to consider many actions that have taken place recently instead of updating all the time, it hopefully finds a more robust model before you start using it to make actions.
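To illustrate the mechanics, here is a rough PyTorch-style sketch (not the paper's code): q_net, optimizer, replay_batches, gamma and sync_every are assumed to exist.

```python
import copy
import torch
import torch.nn.functional as F

# Sketch only: q_net (a torch.nn.Module), optimizer, gamma, sync_every and
# replay_batches() (yielding s, a, r, s_next, done tensors) are assumed.
target_net = copy.deepcopy(q_net)  # frozen copy used only to compute targets

for step, (s, a, r, s_next, done) in enumerate(replay_batches()):
    with torch.no_grad():
        # Bootstrap from the *target* network, which changes only periodically
        next_q = target_net(s_next).max(dim=1).values
        td_target = r + gamma * (1.0 - done) * next_q
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = F.smooth_l1_loss(q_sa, td_target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % sync_every == 0:
        # Only now does the "idea of how to play well" get refreshed
        target_net.load_state_dict(q_net.state_dict())
```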
On a side note, DQN is essentially obsolete at this point, but the themes from that paper were the fuse leading up to the RL explosion of the last few years.
I am trying to train a DQN agent to solve OpenAI Gym's CartPole-v0 environment. I started with this person's implementation just to get some hands-on experience. What I noticed is that during training, after many episodes, the agent finds the solution and is able to keep the pole upright for the maximum number of timesteps. However, after further training, the policy looks like it becomes more stochastic; it can't keep the pole upright anymore and goes in and out of a good policy. I'm pretty confused by this: why wouldn't further training and experience help the agent? By these episodes my epsilon for random actions has become very low, so the agent should be acting almost entirely on its own predictions. So why does it fail to keep the pole upright on some training episodes and succeed on others?
Here is a picture of my reward-episode curve during the training process of the above linked implementation.
This actually looks fairly normal to me, in fact I guessed your results were from CartPole before reading the whole question.
I have a few suggestions:
When you're plotting results, you should plot averages over a few random seeds (see the sketch after these suggestions). Not only is this generally good practice (it shows how sensitive your algorithm is to the seed), it'll also smooth out your graphs and give you a better understanding of the "skill" of your agent. Don't forget, the environment and the policy are stochastic, so it's not completely crazy that your agent exhibits this type of behavior.
Assuming you're implementing e-greedy exploration, what's your epsilon value? Are you decaying it over time? The issue could also be that your agent is still exploring a lot even after it found a good policy.
Have you played around with hyperparameters, like learning rate, epsilon, network size, replay buffer size, etc? Those can also be the culprit.
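As a rough sketch of the seed-averaging suggestion, where train_one_seed is a placeholder for your own training loop returning per-episode rewards:

```python
import numpy as np

# Sketch: average reward curves over several seeds. train_one_seed is a
# placeholder for your own training function; it should return a list of
# per-episode rewards of length n_episodes.
def averaged_curve(train_one_seed, seeds=(0, 1, 2, 3, 4), n_episodes=500):
    curves = []
    for seed in seeds:
        np.random.seed(seed)
        curves.append(train_one_seed(seed, n_episodes))
    return np.mean(np.stack(curves), axis=0)  # mean reward per episode across seeds
```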
Call for experts in deep learning.
Hey, I am currently working on training a tone-mapping network on images using TensorFlow in Python. To get better results, I focused on using the perceptual loss introduced in this paper by Justin Johnson.
In my implementation, I made use of all three parts of the loss: a feature loss extracted from VGG16, an L2 pixel-level loss between the transferred image and the ground-truth image, and the total variation loss. I summed them up as the loss for backpropagation.
From the function
$$\hat{y} = \arg\min_{y} \; \lambda_c \, \ell_{content}(y, y_c) + \lambda_s \, \ell_{style}(y, y_s) + \lambda_{TV} \, \ell_{TV}(y)$$
in the paper, we can see that there are three weights on the losses, the $\lambda$'s, to balance them. The values of the three $\lambda$'s are presumably fixed throughout training.
My question is: does it make sense to dynamically change the $\lambda$'s every epoch (or every few epochs) to adjust the relative importance of these losses?
For instance, the perceptual loss converges drastically in the first several epochs, yet the pixel-level L2 loss converges fairly slowly. So maybe the weight should initially be higher for the content loss, say 0.9, and lower for the others. As training goes on, the pixel-level loss becomes increasingly important for smoothing the image and minimizing artifacts, so it might be better to raise its weight a bit, much like changing the learning rate across epochs.
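To be clear about what I mean, here is a minimal TensorFlow-style sketch of the idea; feature_loss, pixel_loss and tv_loss stand in for the losses I already compute:

```python
import tensorflow as tf

# Sketch only: feature_loss, pixel_loss and tv_loss are assumed to be tensors
# that are already computed elsewhere. Making the lambdas non-trainable
# Variables is what would let them be reassigned between epochs.
lambda_feat = tf.Variable(0.9, trainable=False)
lambda_pix = tf.Variable(0.05, trainable=False)
lambda_tv = tf.Variable(0.05, trainable=False)

def total_loss(feature_loss, pixel_loss, tv_loss):
    return (lambda_feat * feature_loss
            + lambda_pix * pixel_loss
            + lambda_tv * tv_loss)

# e.g. after a few epochs: lambda_pix.assign(0.3)
```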
The postdoc who supervises me strongly opposes my idea. He thinks it amounts to dynamically changing the training objective and could make the training inconsistent.
So, pros and cons, I need some ideas...
Thanks!
It's hard to answer this without knowing more about the data you're using, but in short, dynamically weighting the losses should not really have that much effect and may even have the opposite effect.
If you are using Keras, you could simply run a hyperparameter tuner similar to the one in the following article to see if there is any effect (changing the loss weights accordingly):
https://towardsdatascience.com/hyperparameter-optimization-with-keras-b82e6364ca53
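If you'd rather not pull in a tuning library, a crude grid search over the loss weights would also do; in the sketch below, train_and_evaluate is a placeholder for a run that trains with fixed weights and returns a validation score:

```python
import itertools

# Sketch only: train_and_evaluate is a placeholder that trains the model with
# the given fixed loss weights and returns a validation score (higher = better).
candidate_weights = [0.1, 0.5, 1.0, 5.0]
best = None
for lam_feat, lam_pix, lam_tv in itertools.product(candidate_weights, repeat=3):
    score = train_and_evaluate({"feature": lam_feat, "pixel": lam_pix, "tv": lam_tv})
    if best is None or score > best[0]:
        best = (score, (lam_feat, lam_pix, lam_tv))
print("best weights:", best[1])
```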
I've only done this on smaller models (it's way too time-consuming otherwise), but in essence it's best to keep the weights constant, and it also avoids angering your supervisor :D
If you are using a different ML or DL library, there are hyperparameter optimizers for each; just Google them. It may be best to run them on a cluster overnight, but they usually give you a well-enough optimized version of your model.
Hope that helps and good luck!
In Reinforcement Learning, why should we select actions according to an ϵ-greedy approach rather than always selecting the optimal action?
We use an epsilon-greedy method for exploration during training. This means that when an action is selected during training, it is either the action with the highest Q-value or a random action, chosen with some probability (epsilon).
Choosing between these two is random and based on the value of epsilon. Initially, lots of random actions are taken, which means we start by exploring the space, but as training progresses, more actions with the maximum Q-value are taken and we gradually pay less attention to actions with low Q-values.
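A minimal sketch of that selection rule with a simple linear decay (all names and values are illustrative):

```python
import numpy as np

# Sketch: epsilon-greedy selection with a linear decay schedule.
# Q is a (n_states, n_actions) table; all names and values are illustrative.
def select_action(Q, state, step, n_actions,
                  eps_start=1.0, eps_end=0.05, decay_steps=10_000):
    eps = max(eps_end, eps_start - (eps_start - eps_end) * step / decay_steps)
    if np.random.rand() < eps:
        return np.random.randint(n_actions)  # explore
    return int(np.argmax(Q[state]))          # exploit
```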
During testing, we use this epsilon-greedy method, but with epsilon at a very low value, such that there is a strong bias towards exploitation over exploration, favoring choosing the action with the highest q-value over a random action. However, random actions are still sometimes chosen.
All this is because we want to eliminate the negative effects of over-fitting or under-fitting.
Using an epsilon of 0 (always choosing the optimal action) is a fully exploitative choice. For example, consider a labyrinth game where the agent's current Q-estimates have converged to the optimal policy everywhere except one grid cell, where it greedily chooses to move toward a boundary (which its current estimates say is optimal), leaving it stuck in the same cell. If the agent reaches such a state and always picks the maximum-Q action, it will be stuck there. Keeping a small epsilon in its policy allows it to get out of such states.
There wouldn't be much learning happening if you already knew what the best action was, right? :)
With "on-policy" learning you learn the value of the ϵ-greedy policy you are actually following, while exploring with that same ϵ-greedy policy. You can also learn "off-policy" by selecting moves that are not aligned with the policy you are learning; an example is exploring completely at random (the same as ϵ = 1).
I know this can be confusing at first, how can you learn anything if you just move randomly? The key bit of knowledge here is that the policy that you learn is not defined by how you explore, but by how you calculate the sum of future rewards (in the case of regular Q-Learning it's the max(Q[next_state]) piece in the Q-Value update).
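To make that concrete, here is a tiny sketch contrasting the two kinds of update target, assuming Q[state] is an array of action values:

```python
# Sketch: the *target*, not the exploration scheme, defines which policy you
# end up learning. Q is assumed to be a table where Q[state] is a list/array
# of action values.
def q_learning_target(Q, r, s_next, gamma=0.99):
    return r + gamma * max(Q[s_next])        # off-policy: bootstrap from the greedy action

def sarsa_target(Q, r, s_next, a_next, gamma=0.99):
    return r + gamma * Q[s_next][a_next]     # on-policy: value of the action actually taken
```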
This all works assuming you are exploring enough; if you don't try out new actions, the agent will never be able to figure out which ones are the best in the first place.
I have implemented a custom OpenAI Gym environment for a game similar to http://curvefever.io/, but with discrete actions instead of continuous ones. So at each step my agent can go in one of four directions: left, up, right, or down. However, one of these actions will always lead to the agent crashing into itself, since it can't "reverse".
Currently I just let the agent take any move and let it die if it makes an invalid move, hoping that it will eventually learn not to take that action in that state. I have, however, read that one can set the probability of making an illegal move to zero and then sample an action (roughly as in the sketch below). Is there any other way to tackle this problem?
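For reference, here is roughly what I mean by that masking idea (the names are just illustrative):

```python
import numpy as np

# Roughly the masking idea: zero out the probability of the invalid
# ("reverse") action before sampling. logits is a 1-D float array of
# network outputs; names are illustrative.
def sample_masked_action(logits, invalid_action):
    probs = np.exp(logits - np.max(logits))   # softmax numerator, stabilized
    probs[invalid_action] = 0.0               # forbid the reversing move
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```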
You can try to solve this with two changes:
1: Give the current direction as an input, and give a reward of around +0.1 when the agent takes a move that does not make it crash and around -0.7 when it makes the backward move that directly makes it crash.
2: If you are using a neural network with a softmax as the activation function of the last layer, multiply all outputs of the network by a positive factor ("confidence") before giving them to the softmax; see the sketch after this list. In my experience it can be in the range of 0 to 100, and going above 100 does not change much. The larger the factor, the more confidently the agent will take its preferred action for a given state.
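Here is a rough sketch of suggestion 2; the confidence factor acts like an inverse temperature, and the values are just illustrative:

```python
import numpy as np

# Sketch of suggestion 2: scale the network outputs by a "confidence" factor
# before the softmax; larger factors make the agent commit harder to its
# highest-valued action.
def softmax_with_confidence(outputs, confidence=10.0):
    z = confidence * np.asarray(outputs, dtype=float)
    z -= z.max()               # numerical stability
    probs = np.exp(z)
    return probs / probs.sum()
```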
If you are not using a neural network, or deep learning in general, I suggest you learn the concepts of deep learning, as your game environment seems complex and a neural network will give the best results.
Note: it will take a huge amount of time, so you have to wait long enough for the algorithm to train. I suggest you don't hurry and let it train. And I played the game, it's really interesting :) my best wishes for making an AI for the game :)
I am busy coding reinforcement learning agents for the game Pac-Man and came across Berkeley's CS course's Pac-Man Projects, specifically the reinforcement learning section.
For the approximate Q-learning agent, feature approximation is used. A simple extractor is implemented in this code. What I am curious about is why, before the features are returned, they are scaled down by 10? By running the solution without the factor of 10 you can notice that Pac-Man does significantly worse, but why?
After running multiple tests it turns out that the learned Q-values can diverge wildly. In fact, the feature weights can all become negative, even the one which would usually incline Pac-Man to eat pills. So he just stands there, eventually tries to run from ghosts, but never tries to finish a level.
I speculate that this happens when he loses during training: the negative reward is propagated through the system, and since the number of ghosts can be greater than one, this has a heavy bearing on the weights, causing everything to become very negative, so the system can't "recover" from this.
I confirmed this by adjusting the feature extractor to scale only the #-of-ghosts-one-step-away feature, and then Pac-Man manages to get a much better result.
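For reference, the approximate Q-learning update being discussed looks roughly like the sketch below (the names are illustrative). Since each weight moves by alpha * diff * f_i(s, a), large feature values act like a larger effective learning rate, which is presumably why dividing the features by 10 tames the updates:

```python
# Sketch of a linear approximate Q-learning update (names are illustrative):
# Q(s, a) = w · f(s, a), and each weight moves by alpha * diff * f_i(s, a).
def update_weights(w, features, reward, max_q_next, q_sa, alpha=0.05, gamma=0.9):
    diff = (reward + gamma * max_q_next) - q_sa
    for name, f_value in features.items():
        # large f_value => large step on this weight for the same TD error
        w[name] = w.get(name, 0.0) + alpha * diff * f_value
    return w
```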
In retrospect this question is now more mathsy and might fit better on another Stack Exchange.