Caffe loss layer - backward function - deep-learning

I have a question about backward function in Caffe's loss layer.
I have seen an implementation of a Euclidean loss layer at:
https://github.com/BVLC/caffe/blob/master/examples/pycaffe/layers/pyloss.py
At the end, the backward function is defined as follows:
def backward(self, top, propagate_down, bottom):
    for i in range(2):
        if not propagate_down[i]:
            continue
        if i == 0:
            sign = 1
        else:
            sign = -1
        bottom[i].diff[...] = sign * self.diff / bottom[i].num
I know that bottom[0] holds the predicted values and bottom[1] the target values (ground truth).
Can you help me understand why sign is +1 for the predictions and -1 for the targets?
I thought we didn't need to assign any values to bottom[1].diff for the targets.
If I have a multi-label problem, how can I update the code?

When bottom[1] is the ground truth, you should expect propagate_down[1] to be False, so no gradient is propagated to the labels.
However, if you use this loss layer to compare two trainable blobs, you would want to propagate gradients to both bottoms.
The sign is only due to the derivative of this particular loss; write out the derivative and you'll see where it comes from.
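To make the sign concrete: in the linked pyloss.py, forward() stores self.diff = bottom[0].data - bottom[1].data and computes the loss

    L = 1/(2N) * sum((p - t)^2)

where p = bottom[0] (predictions), t = bottom[1] (targets), and N = bottom[0].num. Differentiating gives

    dL/dp = (p - t) / N      (sign = +1)
    dL/dt = -(p - t) / N     (sign = -1)

which is exactly sign * self.diff / bottom[i].num in the code above.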
The one thing I am missing here is top[0].diff. If I understand correctly it should hold the loss_weight defined for this layer...

PyTorch find keypoints: output nodes to be in a range and negative loss

I am a beginner in deep learning.
I am using this dataset and I want my network to detect the keypoints of a hand.
How can I make my output layer's nodes fall in the range [-1, 1] (the range of normalized 2D points)?
Another problem is that when I train for more than 1 epoch, the loss takes negative values.
The criterion is torch.nn.MultiLabelSoftMarginLoss() and the optimizer is torch.optim.SGD().
Here you can find my repo:
import torch
import torch.nn as nn
import torch.optim as optim
import nnModel  # my network definition

net = nnModel.Net()
net = net.to(device)
criterion = nn.MultiLabelSoftMarginLoss()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decay_rate)
You can use the Tanh activation function, since the image of the function lies in [-1, 1].
The problem of predicting keypoints in an image is more of a regression problem than a classification problem (especially if your model's outputs and targets fall within a continuous interval). Therefore, I suggest you use the L2 loss.
In fact, it could be a good exercise for you to determine, using cross-validation, which of the loss functions appropriate for regression problems gives the lowest expected generalization error. Several such functions are available in PyTorch.
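For instance, here is a minimal sketch of that combination (a Tanh output head trained with the L2 loss; the layer sizes and keypoint count are placeholders, not from your repo):

import torch
import torch.nn as nn

# hypothetical regression head: 128 features -> 21 keypoints * 2 coordinates
model = nn.Sequential(
    nn.Linear(128, 42),
    nn.Tanh(),  # squashes every output into [-1, 1]
)
criterion = nn.MSELoss()  # the L2 loss, suited to regression

x = torch.randn(8, 128)                      # dummy batch of features
target = torch.empty(8, 42).uniform_(-1, 1)  # normalized 2D keypoints
loss = criterion(model(x), target)
loss.backward()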
One way I can think of is to use torch.nn.Sigmoid, which produces outputs in the [0, 1] range, and scale the outputs to [-1, 1] using the transformation 2*x - 1.
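In code, that scaling is a one-liner (a sketch; logits stands for the raw outputs of the final layer):

import torch

logits = torch.randn(8, 42)           # raw outputs of the final layer
out = 2 * torch.sigmoid(logits) - 1   # maps [0, 1] onto [-1, 1]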

In DQN, how to perform gradient descent when each record in the experience buffer corresponds to only one action?

[The DQN algorithm was shown here as an image, with a link to its source.]
At the gradient descent line, there's something I don't quite understand.
For example, if I have 8 actions, then the output Q is a vector of 8 components, right?
But for each record in D, the return y_i is only a scalar with respect to one given action. How can I perform gradient descent on (y_i - Q)^2? It's not guaranteed that within a minibatch I have all actions' returns for a state.
You need to calculate the loss only on the Q-value whose action was selected. In your example, assume that for a given row in your mini-batch the chosen action is 3. Then you take the corresponding target y_3, the loss is (Q(s,3) - y_3)^2, and you effectively set the loss contribution of all the other actions to zero. You can implement this with gather_nd in TensorFlow, or by building a one-hot encoding of the actions and multiplying it with the Q-value vector. Using a one-hot vector you can write:
action_input = tf.placeholder("float", [None, action_len])
QValue_batch = tf.reduce_sum(tf.multiply(T_Q_value, action_input), reduction_indices=1)

in which action_input is the one-hot encoding of the action, e.g. np.eye(action_len)[3] for action 3. The same procedure can be followed with gather_nd:
https://www.tensorflow.org/api_docs/python/tf/gather_nd
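For example, here is a minimal sketch of both variants (the variable names and the action count of 8 are my assumptions, in TF 1.x style):

import tensorflow as tf

q_values = tf.placeholder(tf.float32, [None, 8])  # network outputs, one per action
actions = tf.placeholder(tf.int32, [None])        # index of the chosen action per row
targets = tf.placeholder(tf.float32, [None])      # the y_i values

# variant 1: mask the Q-values with a one-hot encoding of the actions
mask = tf.one_hot(actions, 8)
q_selected = tf.reduce_sum(q_values * mask, axis=1)

# variant 2: gather the chosen entries directly with (row, action) index pairs
indices = tf.stack([tf.range(tf.shape(q_values)[0]), actions], axis=1)
q_selected_alt = tf.gather_nd(q_values, indices)

# mean squared error computed only over the selected Q-values
loss = tf.reduce_mean(tf.square(targets - q_selected))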
I hope this resolves your confusion.

Backpropagation on Two Layered Networks

I have been following the CS231n lectures from Stanford and trying to complete the assignments on my own, sharing my solutions both on GitHub and my blog. But I'm having a hard time understanding how to model backpropagation. I mean, I can code modular forward and backward passes, but what bothers me is the model below: a two-layered neural network (shown as an image in the original post).
Let's assume that our loss function here is the softmax loss. In my modular softmax_loss() function I am calculating the loss and the gradient with respect to the scores (dSoft = dL/dY). After that, when I'm going backwards, say for b2, db2 would be equal to dSoft * 1, and dW2 would be equal to dSoft * dX2 (the outputs of the ReLU gate). What's the chain rule here? Why isn't dSoft equal to 1? Because dL/dL would be 1?
The softmax function outputs a number given an input x.
What dSoft means is that you're computing the derivative of the function softmax(x) with respect to the input x. Then, to calculate the derivative with respect to W of the last layer, you use the chain rule, i.e. dL/dW = dsoftmax/dx * dx/dW. Note that x = W*x_prev + b, where x_prev is the input to the last node. Therefore dx/dW is just x_prev and dx/db is just 1, which means that dL/dW (or simply dW) is dsoftmax/dx * x_prev and dL/db (or simply db) is dsoftmax/dx * 1. Note that here dsoftmax/dx is the dSoft we defined earlier.
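As a minimal numpy sketch of that last-layer backward step (the names dSoft, x2, and W2 follow the question's notation; the shapes are my assumptions):

import numpy as np

def last_layer_backward(dSoft, x2, W2):
    # dSoft: (N, C) gradient of the loss w.r.t. the scores
    # x2:    (N, H) outputs of the ReLU gate; W2: (H, C); scores = x2.dot(W2) + b2
    dW2 = x2.T.dot(dSoft)    # chain rule: dL/dW2 = x_prev^T * dL/dscores
    db2 = dSoft.sum(axis=0)  # dL/db2 = dL/dscores * 1, summed over the batch
    dx2 = dSoft.dot(W2.T)    # gradient flowing back into the ReLU gate
    return dW2, db2, dx2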

caffe softmax with loss layer for semantic segmentation loss calculation

The caffe documentation on the softmax_loss_layer.hpp file seems to be targeted towards classification tasks and not semantic segmentation. However, I have seen this layer being used for the latter.
What would be the dimensions of the input blobs and output blob in the case where you're classifying each pixel (semantic segmentation)?
More importantly, how are the equations for calculating the loss applied to these blobs? That is, in what form are the matrices/blobs arranged, and what is the equation for the eventual "loss value" that is output?
Thank you.
Edit:
I have referenced this page for understanding concepts of loss equation, just don't know how it's applied to the blobs, which axis, etc.: http://cs231n.github.io/linear-classify/
Here is the documentation from caffe (quoted as an image in the original post):
Firstly, the input blobs should be of the form data NxKxHxW and label Nx1xHxW, where each value in the label blob is an integer in [0, K-1]. I think there's an error in the caffe documentation where it doesn't consider the case of semantic segmentation, and I'm not sure what K = CHW means. The output blob has shape 1x1x1x1, which is the loss.
Secondly, the loss function is as follows, from softmax_loss_layer.cpp:
loss -= log(std::max(prob_data[i * dim + label_value * inner_num_ + j], Dtype(FLT_MIN)));
Breaking that line down (for semantic segmentation):
std::max just guards against invalid input such as nan, and against taking the log of 0
prob_data is the output of the softmax; as explained in the caffe tutorials, the softmax-with-loss layer can be decomposed into a softmax layer followed by a multinomial logistic loss
i * dim selects the i-th image in your batch, where the batch shape is NxKxHxW and K is the number of classes
label_value * inner_num_ selects the probability map of the ground-truth class, because at this stage each of your classes has its own "image" of probabilities, so to speak
Finally, j is the index of each pixel
Basically, you want prob_data[i * dim + label_value * inner_num_ + j] for each pixel to be as close to 1 as possible, so that its negative log is close to 0. The log here is base e. Then you do stochastic gradient descent on that loss.
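As a minimal numpy sketch of that indexing for segmentation (the shapes follow the blob layout above; normalizing over all pixels is my assumption, since caffe supports several normalization modes):

import numpy as np

def softmax_seg_loss(prob, label, eps=np.finfo(np.float32).tiny):
    # prob:  (N, K, H, W) softmax probabilities
    # label: (N, H, W) integer ground-truth class per pixel, in [0, K-1]
    N, K, H, W = prob.shape
    loss = 0.0
    for i in range(N):
        for y in range(H):
            for x in range(W):
                # probability the network assigned to the true class of this pixel
                p = prob[i, label[i, y, x], y, x]
                loss -= np.log(max(p, eps))  # clamp, like std::max(..., FLT_MIN)
    return loss / (N * H * W)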

Keras LSTM input - Predicting a parabolic trajectory

I want to predict the trajectory of a ball falling. That trajectory is parabolic. I know that LSTM may be too much for this (i.e. a simpler method could suffice).
I thought that we can do this with 2 LSTM layers and a Dense layer at the end.
The end result that I want is to give the model 3 heights h0,h1,h2 and let it predict h3. Then, I want to give it h1, h2, and the h3 it outputted previously to predict h4, and so on, until I can predict the whole trajectory.
Firstly, what would the input shape be for the first LSTM layer?
Would it be input_shape = (3,1) ?
Secondly, would the LSTM be able to predict a parabolic path ?
I am getting almost a flat line, not a parabola, and I want to rule out the possibility that I am misunderstanding how to feed and shape input.
Thank you
The input shape is in the form (samples, timeSteps, features).
Your only feature is "height", so features = 1.
And since you're going to input sequences with different lengths, you can use timeSteps = None.
So, your input_shape could be (None, 1).
Since we're going to use a stateful=True layer below, we can use batch_input_shape=(1, None, 1). Choose the number of "samples" you want.
Your model can indeed predict the trajectory, but maybe it will need more than one layer. (The exact answer about how many layers and cells depends on knowing how the math inside the LSTM works.)
Training:
Now, first you need to train your network (only then will it be able to start predicting good things).
For training, suppose you have a sequence [h1, h2, h3, h4, h5, h6, ...] of true values in the correct order. (I suggest you actually have many sequences (samples), so your model learns better.)
For this sequence, you want an output predicting the next step, so your target would be [h2, h3, h4, h5, h6, h7, ...].
So, suppose you have a data array with shape (manySequences, steps, 1), you make:
x_train = data[:,:-1,:]
y_train = data[:,1:,:]
Now, your layers should be using return_sequences=True (every input step produces an output step), and you train the model with this data, for instance as sketched below.
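A minimal sketch of such a model (the layer size, optimizer, and epoch count are my assumptions, not from the answer):

from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
# free sequence length, 1 feature (the height), fixed batch of 1 for stateful mode
model.add(LSTM(32, return_sequences=True, stateful=True,
               batch_input_shape=(1, None, 1)))
model.add(Dense(1))  # one predicted height per time step
model.compile(loss='mse', optimizer='adam')

# x_train/y_train shaped (manySequences, steps, 1) as above
for epoch in range(100):
    for i in range(len(x_train)):
        model.reset_states()  # "speed = 0" before each sequence
        model.train_on_batch(x_train[i:i+1], y_train[i:i+1])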
At this point, whether you're using stateful=True or stateful=False is not very relevant. (But if True, you always need model.reset_states() before every single epoch and sequence.)
Predicting:
For predicting, you can use stateful=True in the model. This means that when you input h1, it will produce h2. And when you input h2 it will remember the "current speed" (the state of the model) to predict the correct h3.
(In the training phase, it's not important to have this, because you're inputting the entire sequences at once. So the speed will be understood between steps of the long sequences).
You can see the method reset_states() as set_current_speed_to(0). You will use it whenever the step you're going to input is the first step of a sequence.
Then you can do a loop like this:
model.reset_states()  # make speed = 0
nextH = someValueWithShape((1,1,1))
predictions = [nextH]
for i in range(steps):
    nextH = model.predict(nextH)
    predictions.append(nextH)
There is an example here, but it uses two features. There is a difference in that I use two models, one for training and one for predicting, but you can use only one with return_sequences=True and stateful=True (don't forget to call reset_states() at the beginning of every epoch in training).