At the end of every YOLOv5 training epoch, you get an output like this:
Class Images Instances P R mAP50 mAP50-95: 100%|██████████| 9/9 [00:04<00:00, 1.93it/s]
all 262 175 0.861 0.686 0.735 0.317
I assume that P and R mean precision and recall. But my question is, at what IoU threshold and what confidence threshold? Don't these parameters have to be set to properly define precision and recall?
The answer can be found in this GitHub issue: https://github.com/ultralytics/yolov5/issues/9904
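For intuition on why the question matters, here is a toy sketch (my own illustration, not YOLOv5's validation code) of single-class precision/recall: the confidence threshold decides which predictions count at all, and the IoU threshold decides whether a kept prediction matches a ground-truth box.

# Toy sketch, not YOLOv5's actual evaluation code: precision and recall for one
# class are only defined once you fix a confidence threshold (which predictions
# to keep) and an IoU threshold (when a kept prediction matches a ground truth).

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, conf_thres=0.25, iou_thres=0.5):
    """preds: list of (box, confidence); gts: list of ground-truth boxes."""
    kept = sorted((p for p in preds if p[1] >= conf_thres), key=lambda p: -p[1])
    matched, tp = set(), 0
    for box, _ in kept:
        best = max(range(len(gts)), key=lambda j: iou(box, gts[j]), default=None)
        if best is not None and best not in matched and iou(box, gts[best]) >= iou_thres:
            tp += 1
            matched.add(best)              # each ground truth can match only once
    fp, fn = len(kept) - tp, len(gts) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

Sweeping the confidence threshold at a fixed IoU threshold traces out the precision-recall curve from which the mAP columns are computed.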
I want to create a custom deep learning layer that takes as input a 1x1 neuron and uses it to scale a constant, predefined NxN matrix. I do not understand how to calculate the gradient for this layer.
I understand that in this case dLdZ is NxN and dLdX should be 1x1, but I don't understand what dZdX should be to satisfy that. It's obviously not a simple chained derivative where dLdX = dLdZ*dZdX, since the dimensions don't match.
The question is not really language dependent; I write it here in Matlab.
%M is the constant NXN matrix
%X is 1X1X1Xb
Z = zeros(N,N,1,b);
for i = 1:b
    Z(:,:,:,i) = squeeze(X(:,:,1,i))*M;
end
==============================
Edit: the answer I got was very helpful. I now perform the calculation as follows:
dLdX = zeros(1,1,1,b);
for i = 1:b
    dLdX(:,:,:,i) = sum(sum(dLdZ(:,:,:,i).*M));
end
This works perfectly. Thanks!!
I think your question is a little unclear. I will assume your goal is to propagate the gradients through the layer defined above back to the batch of scalar values; let me answer according to that understanding.
You have a parameter X, which is a scalar per batch element, i.e. of dimension b (b: batch size). It is used to scale the constant matrix M, which is of dimension NxN. Let's assume you compute some scalar loss L directly from the scaled matrix Z = X*M, where Z is of dimension bxNxN.
Then you can calculate the gradient with respect to X according to:
dL/dX = dL/dZ * dZ/dX. Note that the dimensions of this product do match (unlike your initial impression), since dL/dZ is bxNxN and dZ/dX is bxNxN (b copies of M). Summing over the NxN indices yields dL/dX, which is of dimension b.
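If it helps, here is a quick numpy check of that summation (my own sketch with made-up shapes and names, not your Matlab code): the analytic gradient obtained by summing dL/dZ .* M over the NxN indices matches a finite-difference estimate.

import numpy as np

N, b = 4, 3
M = np.random.randn(N, N)            # the constant N x N matrix
x = np.random.randn(b)               # one scalar per batch element
W = np.random.randn(b, N, N)         # stand-in for dL/dZ (here L = sum(W * Z))

def loss(x):
    Z = x[:, None, None] * M         # forward pass: Z[i] = x[i] * M
    return np.sum(W * Z)             # toy scalar loss

# Analytic gradient: dL/dx_i = sum_{j,k} W[i, j, k] * M[j, k]
grad_analytic = np.einsum('ijk,jk->i', W, M)

# Central finite-difference check
eps = 1e-6
grad_fd = np.array([
    (loss(x + eps * np.eye(b)[i]) - loss(x - eps * np.eye(b)[i])) / (2 * eps)
    for i in range(b)
])
print(np.allclose(grad_analytic, grad_fd))   # True (up to numerical error)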
Did I understand you correctly?
Cheers
UPDATE: It was a mistake in the logic that generates new characters. See the answer below.
ORIGINAL QUESTION: I built an LSTM for character-level text generation with Pytorch. The model trains well (loss decreases reasonably etc.) but the trained model ends up outputting the last handful of words of the input repeated over and over again (e.g. Input: "She told her to come back later, but she never did"; Output: ", but she never did, but she never did, but she never did" and so on).
I have played around with the hyperparameters a bit, and the problem persists. I'm currently using:
Loss function: BCE
Optimizer: Adam
Learning rate: 0.001
Sequence length: 64
Batch size: 32
Embedding dim: 128
Hidden dim: 512
LSTM layers: 2
I also tried not always choosing the top choice, but this only introduces incorrect words and doesn't break the loop. I've been looking at countless tutorials, and I can't quite figure out what I'm doing differently/wrong.
The following is the code for training the model. training_data is one long string and I'm looping over it predicting the next character for each substring of length SEQ_LEN. I'm not sure if my mistake is here or elsewhere but any comment or direction is highly appreciated!
loss_dict = dict()
for e in range(EPOCHS):
    print("------ EPOCH {} OF {} ------".format(e+1, EPOCHS))
    lstm.reset_cell()
    for i in range(0, DATA_LEN, BATCH_SIZE):
        if i % 50000 == 0:
            print(i/float(DATA_LEN))
        optimizer.zero_grad()
        # batch of character-index windows, each of length SEQ_LEN
        input_vector = torch.tensor([[
            vocab.get(char, len(vocab))
            for char in training_data[i+b:i+b+SEQ_LEN]
        ] for b in range(BATCH_SIZE)])
        if USE_CUDA and torch.cuda.is_available():
            input_vector = input_vector.cuda()
        output_vector = lstm(input_vector)
        # one-hot target: the single character following each input window
        target_vector = torch.zeros(output_vector.shape)
        if USE_CUDA and torch.cuda.is_available():
            target_vector = target_vector.cuda()
        for b in range(BATCH_SIZE):
            target_vector[b][vocab.get(training_data[i+b+SEQ_LEN])] = 1
        error = loss(output_vector, target_vector)
        error.backward()
        optimizer.step()
        loss_dict[(e, int(i/BATCH_SIZE))] = error.detach().item()
ANSWER: I had made a stupid mistake when producing characters with the trained model: I got confused by the batch size and assumed that at each step the network would predict an entire batch of new characters, when in fact it only predicts a single one. That's why it simply repeated the end of the input. Yikes!
Anyway, if you run into this problem, DOUBLE CHECK that you have the right logic for producing new output with the trained model, especially if you're using batches (a rough sketch of such a generation loop follows the list below). If it's not that and the problem persists, you can try fine-tuning the following:
sequence length
greediness (e.g. probabilistic choice vs. top choice for next character)
batch size
epochs
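For reference, here is a rough sketch of greedy single-character generation. It assumes the same lstm, vocab and SEQ_LEN as in the question, plus a hypothetical idx_to_char mapping from indices back to characters.

# Rough sketch of greedy single-character generation; lstm, vocab and SEQ_LEN are
# from the training code above, idx_to_char is an assumed inverse of vocab.
import torch

def generate(seed_text, n_chars=200):
    lstm.eval()
    lstm.reset_cell()
    text = seed_text
    with torch.no_grad():
        for _ in range(n_chars):
            window = text[-SEQ_LEN:]                      # only the most recent SEQ_LEN chars
            input_vector = torch.tensor(
                [[vocab.get(c, len(vocab)) for c in window]])
            output_vector = lstm(input_vector)            # one distribution over the vocab
            next_idx = int(output_vector[0].argmax())     # greedy; sampling also possible
            text += idx_to_char[next_idx]                 # append exactly ONE new character
    return text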
During backpropagation, will these cases have different effects?
1. Sum the loss over all pixels, then backpropagate.
2. Average the loss over all pixels, then backpropagate.
3. Backpropagate individually over all pixels.
My main doubt is not so much about the numerical values as about the effect each of these would have.
The difference between options 1 and 2 is basically this: since the sum will be larger than the mean, the magnitude of the gradients from the sum operation will be larger, but the direction will be the same.
Here's a little demonstration. Let's first declare the necessary variables:
x = torch.tensor([4,1,3,7],dtype=torch.float32,requires_grad=True)
target = torch.tensor([4,2,5,4],dtype=torch.float32)
Now lets compute gradient for x using L2 loss with sum:
loss = ((x-target)**2).sum()
loss.backward()
print(x.grad)
This outputs: tensor([ 0., -2., -4., 6.])
Now using mean (after resetting x.grad):
loss = ((x-target)**2).mean()
loss.backward()
print(x.grad)
And this outputs: tensor([ 0.0000, -0.5000, -1.0000, 1.5000])
Notice how the latter gradients are exactly 1/4 of those from the sum; that's because the tensors here contain 4 elements.
About the third option: if I understand you correctly, that's not possible. You cannot backpropagate before aggregating the individual pixel errors into a scalar, whether by sum, mean, or anything else.
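To illustrate that last point with the same x and target as above: calling backward() on a per-element loss tensor fails unless you pass an explicit gradient argument, and passing ones as that argument is just the sum case again.

# Illustration of the third point, reusing x and target from the snippets above:
# backward() needs a scalar, so a per-pixel loss tensor cannot be backpropagated as-is.
x.grad = None                      # reset accumulated gradients
loss = (x - target) ** 2           # shape (4,), one loss value per element
try:
    loss.backward()                # no implicit reduction to a scalar
except RuntimeError as err:
    print(err)                     # "grad can be implicitly created only for scalar outputs"

# Passing an explicit "seed" gradient of ones is allowed, but it is equivalent to sum():
loss = (x - target) ** 2
loss.backward(gradient=torch.ones_like(loss))
print(x.grad)                      # tensor([ 0., -2., -4., 6.])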
I am using Pylearn2 or Caffe to build a deep network. My target is ordinal (ordered nominal). I am trying to find a proper loss function but cannot find any in Pylearn2 or Caffe.
I read the paper "Loss Functions for Preference Levels: Regression with Discrete Ordered Labels". I get the general idea, but I am not sure I understand what the thresholds would be if my final layer is a softmax over logistic regression units (outputting probabilities).
Can someone help me by pointing to an implementation of such a loss function?
Thanks
Regards
For both Pylearn2 and Caffe, your labels will need to be 0-4 instead of 1-5... it's just the way they work. The output layer will be 5 units, each essentially a logistic unit... and the softmax can be thought of as an adaptor that normalizes the final outputs. But "softmax" is commonly used as an output type. When training, the value of any individual unit is rarely ever exactly 0.0 or 1.0... it's always a distribution across your units, on which log-loss can be calculated. This loss is compared against the "perfect" case and the error is back-propagated to update your network weights. Note that a raw output from Pylearn2 or Caffe is not a specific digit 0, 1, 2, 3, or 4... it's 5 numbers, each associated with the likelihood of one of the 5 classes. When classifying, one just takes the class with the highest value as the 'winner'.
I'll try to give an example. Say I have a 3-class problem and I train a network with a 3-unit softmax.
The first unit represents the first class, the second unit the second class, and the third unit the third class.
Say I feed a test case through and get 0.25, 0.5, 0.25. Since 0.5 is the highest, a classifier would say "2". This is the softmax output... it makes sure the sum of the output units is one.
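A tiny numpy sketch of that example (the logit values are made up to produce roughly 0.25 / 0.5 / 0.25):

# Softmax normalizes raw outputs so they sum to one, and the predicted class is
# the unit with the largest value.
import numpy as np

logits = np.array([0.0, 0.6931, 0.0])          # made-up raw outputs (log 2 ~= 0.6931)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax
print(probs.round(4))                          # ~[0.25 0.5  0.25]
print(probs.argmax() + 1)                      # 2 -> the second unit "wins"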
You should have a look at ordinal (logistic) regression. This is the formal solution to the problem setup you describe (do not use plain regression, as the distance measure on the errors would be wrong).
https://stats.stackexchange.com/questions/140061/how-to-set-up-neural-network-to-output-ordinal-data
In particular, I recommend looking at the CORAL ordinal regression implementation at
https://github.com/ck37/coral-ordinal/issues.
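For a concrete sense of what these ordinal approaches do, here is a minimal sketch (my own illustration, not the coral-ordinal API) of the cumulative "is the label greater than k?" encoding that threshold-based ordinal regression losses are built on:

# Minimal sketch of the cumulative encoding used by ordinal regression methods
# such as CORAL; an illustration only, not the coral-ordinal package's API.
import numpy as np

def ordinal_encode(labels, num_classes):
    """Map label y in {0, ..., K-1} to K-1 binary targets [y > 0, y > 1, ...]."""
    thresholds = np.arange(num_classes - 1)
    return (np.asarray(labels)[:, None] > thresholds).astype(float)

print(ordinal_encode([0, 2, 4], num_classes=5))
# [[0. 0. 0. 0.]
#  [1. 1. 0. 0.]
#  [1. 1. 1. 1.]]

Each of the K-1 binary targets then gets a logistic loss, and at prediction time the class is recovered by counting how many thresholds are exceeded.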
I'm trying to develop a forecaster for electric consumption, so I want to perform a regression using daily data for an entire year. My dataset has several features. Googling around, I've found that my problem is a multiple regression problem (please correct me if I am mistaken).
What I want to do is train an SVM for regression with several independent variables and one dependent variable, using n lagged days. Here's a sample of my independent variables; I actually have around 10. (We used PCA to determine which variables had some correlation to our problem.)
Day Indep1 Indep2 Indep3
1 1.53 2.33 3.81
2 1.71 2.36 3.76
3 1.83 2.81 3.64
... ... ... ...
363 1.5 2.65 3.25
364 1.46 2.46 3.27
365 1.61 2.72 3.13
And independent variable 1 is actually my dependent variable in the future. So for example, with p=2 (lagged days), I would expect my SVM to train on the first 2 time steps of all three independent variables.
Indep1 Indep2 Indep3
1.53 2.33 3.81
1.71 2.36 3.76
And the output value of the dependent variable would be "1.83" (Indep1 at time 3).
My main problem is that I don't know how to train properly. What I was doing is just putting all the features for the p lagged days into one array for my "x" variables, and for my "y" variables I'm putting my independent variable at time p+1, since I want to predict the next day's power consumption.
Example of training (p = 2, 3 independent variables):
x                                        y (next day)
[1.53, 2.33, 3.81, 1.71, 2.36, 3.76]     [1.83]
I tried making x a two-dimensional array, but when you stack it for several days it becomes a 3D array, and libsvm says it can't handle that.
Perhaps I should switch from libsvm to another tool, or maybe I'm just training incorrectly.
Thanks for your help,
Aldo.
Let me answer using Python / numpy notation.
Assume the original time series data matrix with columns (Indep1, Indep2, Indep3, ...) is a numpy array data with shape (n_samples, n_variables). Let's generate it randomly for this example:
>>> import numpy as np
>>> n_samples, n_variables = 100, 5
>>> data = np.random.randn(n_samples, n_variables)
>>> data.shape
(100, 5)
If you want to use a window size of 2 time-steps, then the training set can be built as follows:
>>> targets = data[2:, 0] # shape is (n_samples - 2,)
>>> targets.shape
(98,)
>>> features = np.hstack([data[0:-2, :], data[1:-1, :]]) # shape is (n_samples - 2, n_variables * 2)
>>> features.shape
(98, 10)
Now you have your 2D input array + 1D targets that you can feed to libsvm or scikit-learn.
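As a sketch of that last step (using scikit-learn's SVR rather than raw libsvm; the data layout is the same):

# Hypothetical next step: fit a support vector regressor on the arrays built above.
from sklearn.svm import SVR

model = SVR(kernel='rbf', C=1.0)
model.fit(features, targets)                  # features: (98, 10), targets: (98,)

# Forecast the next (unseen) day from the two most recent rows of `data`:
latest_window = np.hstack([data[-2, :], data[-1, :]]).reshape(1, -1)
print(model.predict(latest_window))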
Edit: it might very well be the case that extracting more time-series-oriented features, such as moving averages, moving min, moving max, moving differences (time-based derivatives of the signal), or the STFT, would help your SVM model make better predictions.
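A hedged example of such features using pandas rolling windows (the 7-day window size and column names are arbitrary choices for illustration):

# Extra time-series features via pandas rolling windows on the first variable.
import pandas as pd

df = pd.DataFrame(data, columns=[f"indep{k + 1}" for k in range(n_variables)])
df["indep1_ma7"] = df["indep1"].rolling(7).mean()   # moving average
df["indep1_min7"] = df["indep1"].rolling(7).min()   # moving minimum
df["indep1_max7"] = df["indep1"].rolling(7).max()   # moving maximum
df["indep1_diff"] = df["indep1"].diff()             # day-to-day difference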