I have an idea to use a CNN to learn a Belote bidder.
There are 32 cards, of which the bidder holds only 5. I encode them as a 4x8 "image": arr[s][r] = 1 if he holds the card of suit s and rank r, and 0 if he doesn't.
There are only 5 ones in the image.
Then I run N Monte Carlo simulations of the possible bids and build a vector with the probability of each bid; the probabilities sum to 1.
After this I try to train the CNN, but no matter what I do, it doesn't learn well and it doesn't generalize. Maybe the input data is not suitable for a CNN and I need something else, but the most important thing is that it generalizes well: computing the output is very expensive, and I want to do it only once, save the CNN, and then reuse it.
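For concreteness, here is a minimal sketch of the setup described above (the bid count, the representation of a hand as `(suit, rank)` pairs, and the `simulate_bids` helper are all hypothetical, purely for illustration):

```python
import numpy as np

N_SUITS, N_RANKS, N_BIDS = 4, 8, 6  # bid count is an assumed placeholder

def encode_hand(hand):
    """hand: list of (suit, rank) pairs -> 4x8 binary 'image' of the cards held."""
    img = np.zeros((N_SUITS, N_RANKS), dtype=np.float32)
    for suit, rank in hand:
        img[suit, rank] = 1.0
    return img

def bid_target(hand, n_sims=1000):
    """Monte Carlo label: normalized counts of the best bid over n_sims simulated deals."""
    counts = np.zeros(N_BIDS)
    for _ in range(n_sims):
        counts[simulate_bids(hand)] += 1  # simulate_bids: hypothetical helper returning a bid index
    return counts / counts.sum()  # probabilities that sum to 1
```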
I have trained a CNN model whose forward pass looks like this:
*Part1*: learnable preprocessing
*Part2*: Mixup, which does not need to calculate gradients
*Part3*: CNN backbone and classifier head
Both Part1 and Part3 need to calculate gradients and have their weights updated during back-prop, but Part2 is just a simple mixup and doesn't need gradients, so I wrapped this Mixup in torch.no_grad() to save computational resources and speed up training. It did indeed speed up my training a lot, but the model's prediction accuracy dropped a lot.
I'm wondering: if Mixup does not need to calculate gradients, why does wrapping it in torch.no_grad() hurt the model's accuracy so much? Is it due to losing the learned weights of Part1, or something like breaking the chain between Part1 and Part2?
Edit:
Thanks @Ivan for your reply; it sounds reasonable. I had the same thought but didn't know how to prove it.
In my experiment, when I apply torch.no_grad() to Part2, the GPU memory consumption drops a lot and training is much faster, so I guess this Part2 still needs gradients even though it has no learnable parameters.
So can we conclude that torch.no_grad() should not be applied between two or more learnable blocks, otherwise it will break the learning of the blocks that come before the no_grad() part?
> but Part2 is just a simple mixup and doesn't need gradients
It actually does! In order to backpropagate successfully to Part1 of your model (which is learnable, according to you), you need to compute the gradients through Part2 as well, even though Part2 has no learnable parameters of its own.
What I'm assuming happened when you applied torch.no_grad() to Part2 is that only Part3 of your model was able to learn, while Part1 stayed untouched.
Edit
> So can we conclude that torch.no_grad() should not be applied between two or more learnable blocks, otherwise it will break the learning of the blocks that come before the no_grad() part?
The reasoning is simple: to compute the gradient on Part1 you need to compute the gradients of the intermediate results, regardless of the fact that you won't use those gradients to update the tensors in Part2. So indeed, you are correct.
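Here is a minimal sketch that demonstrates the effect (the three parts are simple stand-ins, not the asker's actual model): wrapping the parameter-free middle step in torch.no_grad() cuts the autograd graph, so Part1 never receives a gradient.

```python
import torch
import torch.nn as nn

# Stand-ins: two learnable Linear blocks with a parameter-free
# mixup-like step in between (all names are illustrative).
part1 = nn.Linear(4, 4)
part3 = nn.Linear(4, 1)
x = torch.randn(8, 4)

def forward(wrap_part2_in_no_grad):
    h = part1(x)
    if wrap_part2_in_no_grad:
        with torch.no_grad():
            h = 0.5 * h + 0.5 * h.flip(0)  # "Part2": mixup, no parameters
    else:
        h = 0.5 * h + 0.5 * h.flip(0)
    return part3(h).mean()

for wrap in (False, True):
    for p in list(part1.parameters()) + list(part3.parameters()):
        p.grad = None
    forward(wrap).backward()
    print(f"no_grad on Part2: {wrap} -> Part1 received a gradient: {part1.weight.grad is not None}")
# Without no_grad, Part1 gets a gradient; with no_grad the graph is cut
# between Part1 and Part3, so Part1 can no longer learn.
```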
I have been working on a super-resolution task and have a question about choosing the loss function. For this task I went with SSIM as the loss to train my model, and I got a good set of results. Recently I came across the perceptual loss, where we compare how a pretrained model "sees" the ground-truth (GT) image and the super-resolution (SR) image generated by the model. My question is: I am thinking of using the combined loss (1 - SSIM(SR, GT)) + PerceptualLoss(SR, GT) for backpropagation, so should I use a trade-off parameter between these two losses? If so, how can I set it up, or should I just add the losses with equal weights?
PS: the perceptual loss is calculated by taking SSIM between the feature maps of the GT and SR images from the pretrained model.
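One common way to frame this (a sketch, not a prescription) is to introduce a single trade-off weight and tune it on a validation set; `ssim` and `perceptual_loss` below are hypothetical helpers standing in for whatever implementations are already in use:

```python
def combined_loss(sr, gt, alpha=1.0):
    """Weighted sum of an SSIM term and a perceptual term.

    alpha is the trade-off parameter (alpha=1.0 means equal weights).
    ssim() is assumed to return a similarity in [0, 1], and
    perceptual_loss() a value where lower is better.
    """
    ssim_term = 1.0 - ssim(sr, gt)
    perceptual_term = perceptual_loss(sr, gt)
    return ssim_term + alpha * perceptual_term
```

A typical starting point is to pick alpha so that both terms have roughly the same magnitude early in training, then adjust it based on validation results.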
I'm trying to develop a model to recognize new gestures with the Myo Armband (an armband with 8 electrical sensors that can recognize 5 hand gestures). I'd like to record the sensors' raw data for a new gesture and feed it to a model so it can recognize it.
I'm new to machine/deep learning and I'm using CNTK. I'm wondering what the best way to do this would be.
I'm struggling to understand how to create the trainer. For the input data, I'm thinking of using 20 sets of these 8 raw values (each between -127 and 127), so one label is the output for 20 sets of values.
I don't really know how to do that. I've seen tutorials where images are linked with their labels, but it's not the same idea. And even after training is done, how can I prevent the model from recognizing this one gesture no matter what I do, since it's the only one it has been trained on?
An easy way to get started would be to create 161 columns (8 columns for each of the 20 time steps, plus the designated label). You would arrange the columns like this:
emg1_t01, emg2_t01, emg3_t01, ..., emg8_t20, gesture_id
This gives you the right 2D format to use various algorithms in sklearn as well as a feed-forward neural network in CNTK. You would use the first 160 columns to predict the 161st.
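A quick sketch of that flattening step (with random placeholder data instead of real Myo recordings), using numpy and an sklearn classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: n_samples recordings of 20 time steps x 8 sensors,
# plus one gesture id per recording (values are random, for illustration only).
n_samples = 100
recordings = np.random.randint(-127, 128, size=(n_samples, 20, 8))
gesture_ids = np.random.randint(0, 5, size=n_samples)

# Flatten each recording into 160 columns: emg1_t01, emg2_t01, ..., emg8_t20
X = recordings.reshape(n_samples, 20 * 8)

clf = RandomForestClassifier()
clf.fit(X, gesture_ids)
print(clf.predict(X[:3]))
```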
Once you have that working you can model your data to better represent the natural time series order it contains. You would move away from a 2D shape and instead create a 3D array to represent your data.
The first axis is the number of samples
The second axis is the number of time steps (20)
The third axis is the number of sensors (8)
With this shape you're all set to use a 1D convolutional model (CNN) in CNTK that traverses the time axis to learn local patterns from one step to the next.
You might also want to look into RNNs which are often used to work with time series data. However, RNNs are sometimes hard to train and a recent paper suggests that CNNs should be the natural starting point to work with sequence data.
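To illustrate the 3D layout and a small 1D-convolutional model, here is a sketch using Keras rather than CNTK (chosen purely to keep the example short; the shapes are what matter, and the data is again a random placeholder):

```python
import numpy as np
from tensorflow.keras import layers, models

n_samples, n_steps, n_sensors, n_gestures = 100, 20, 8, 5
X = np.random.randint(-127, 128, size=(n_samples, n_steps, n_sensors)).astype("float32")
y = np.random.randint(0, n_gestures, size=n_samples)

# Conv1D slides along the time axis and learns local patterns across steps.
model = models.Sequential([
    layers.Conv1D(16, kernel_size=3, activation="relu", input_shape=(n_steps, n_sensors)),
    layers.GlobalMaxPooling1D(),
    layers.Dense(n_gestures, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
```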
I have data with an integer target class in the range 1-5, where one is the lowest and five the highest. In this case, should I consider it a regression problem and have one node in the output layer?
My way of handling it is:
1- first I convert the labels to a binary class matrix
labels = to_categorical(np.asarray(labels))
2- in the output layer, I have five nodes
main_output = Dense(5, activation='sigmoid', name='main_output')(x)
3- I use categorical_crossentropy as the loss, with mean_squared_error as a metric, when compiling
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['mean_squared_error'], loss_weights=[0.2])
Also, can anyone tell me: what is the difference between using categorical_accuracy and mean_squared_error in this case?
Regression and classification are vastly different things. If you reframe this as a regression task, then predicting 2 when the ground truth is 4 is penalized more heavily than predicting 3 instead of 4. If you have classes like car, animal, and person, you do not care about any ranking between those classes: predicting car is just as wrong as predicting animal if the image shows a person.
Metrics do not impact your learning at all. They are just computed in addition to the loss to show the performance of the model. Here accuracy makes sense, because it is usually the metric we care about. Mean squared error does not tell you how well your model performs: a value like 0.0015 sounds good, but it is hard to tell just how well the model is really doing. In contrast, achieving, say, 95% accuracy is immediately meaningful.
One last thing: you should use softmax instead of sigmoid as your final activation to get a probability distribution out of your final layer. Softmax outputs a value for every class, and those values sum to 1. Cross-entropy then measures the difference between the probability distribution your network outputs and the ground truth.
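A sketch of what that change might look like in the asker's snippet (keeping their variable names, and using accuracy as the metric instead of mean squared error):

```python
main_output = Dense(5, activation='softmax', name='main_output')(x)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
```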
I have to do some work with Q-learning, about a guy who has to move furniture around a house (it's basically that). If the house is small enough, I can just have a matrix that represents actions/rewards, but as the house grows bigger that will not be enough, so I have to use some kind of generalizing function approximation instead. My teacher suggests I use not just one but several, so I can compare them. What do you recommend?
I've heard that people use Support Vector Machines and also Neural Networks for this situation. I'm not really in the field, so I can't tell. I have some past experience with Neural Networks, but SVMs seem like a much harder subject to grasp. Are there any other methods I should look at? I know there must be a zillion of them, but I need something just to get started.
Thanks
Just as a refresher on terminology: in Q-learning, you are trying to learn the Q-function, which depends on the state and action:
Q(S,A) = ????
The standard version of Q-learning, as taught in most classes, tells you that for each S and A you need to learn a separate value in a table, and it tells you how to perform Bellman updates in order to converge to the optimal values.
Now, let's say that instead of a table you use a different function approximator. For example, let's try linear functions. Take your (S,A) pair and think of a bunch of features you can extract from it. One example of a feature is "Am I next to a wall?", another is "Will the action place the object next to a wall?", etc. Number these features f1(S,A), f2(S,A), ...
Now, try to learn the Q function as a linear function of those features
Q(S,A) = w1*f1(S,A) + w2*f2(S,A) + ... + wN*fN(S,A)
How should you learn the weights w? Well, since this is homework, I'll let you think about it on your own.
However, as a hint, let's say that you have K possible states and M possible actions in each state, and that you define K*M features, each of which is an indicator of whether you are in a particular state and are going to take a particular action. So:
Q(S,A) = w11*(S==1 && A==1) + w12*(S==1 && A==2) + ... + w21*(S==2 && A==1) + w22*(S==2 && A==2) + ...
Now, notice that for any state/action pair only one feature will be 1 and the rest will be 0, so Q(S,A) will equal the corresponding w, and you are essentially learning a table. You can therefore think of standard tabular Q-learning as a special case of learning with these linear functions. So think about what the normal Q-learning algorithm does, and what you should do here.
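To make the indicator-feature construction concrete, here is a small sketch (hypothetical numbers for K and M, and deliberately leaving out the weight-update rule, since that is the homework part):

```python
import numpy as np

K, M = 3, 2                 # hypothetical: 3 states, 2 actions per state
w = np.random.rand(K * M)   # one weight per (state, action) indicator feature

def features(s, a):
    """K*M indicator features: exactly one of them is 1 for a given (s, a)."""
    f = np.zeros(K * M)
    f[s * M + a] = 1.0
    return f

def Q(s, a):
    # Linear form w . f(s, a); with indicator features this just reads out
    # the single weight for (s, a), i.e. it behaves like a table lookup.
    return np.dot(w, features(s, a))

print(np.isclose(Q(1, 0), w[1 * M + 0]))   # True: equivalent to a table entry
```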
Hopefully you can find a small basis of features, much fewer than K*M, that will allow you to represent your space well.