I have a simple encoder-decoder network. The encoder has several Conv1d layers with a Linear layer at the end and ReLU activations between them; the decoder consists of Conv1d layers with ReLU activations between them (no batch norm or dropout).
Using this model, I try to overfit a single example: I work with batch size 1 and always feed the same input and the same desired output, but with no success. The loss goes down until it reaches some threshold, but no matter what I do I can't push it below that bound, and the output is useless. I have tried more sophisticated encoders/decoders, changed hyperparameters, and applied different preprocessing to my data, but I can never get the loss below that threshold.
For the record: if I feed the desired output as the input (so the network only has to learn the identity function), it works, but that doesn't help me.
I would appreciate any help or any idea about what the problem might be.
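For reference, a minimal sketch of the kind of single-example overfitting test described above, assuming a PyTorch Conv1d encoder-decoder; the shapes, layer sizes, and hyperparameters are placeholders, not the asker's actual code:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 1 sample, 8 channels, length 128 -- placeholders only.
x = torch.randn(1, 8, 128)
y = torch.randn(1, 8, 128)

encoder = nn.Sequential(
    nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 128, 32 * 128),  # Linear layer at the end of the encoder
)
decoder = nn.Sequential(
    nn.Unflatten(1, (32, 128)),
    nn.Conv1d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 8, kernel_size=3, padding=1),
)
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# A single (input, target) pair repeated every step should be driven to ~0 loss
# if the architecture and training loop are set up correctly.
for step in range(5000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 500 == 0:
        print(step, loss.item())
```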
Try more epochs with a lower learning rate.
Try increasing the size of your Dense layers.
Try avoiding any Dropout layers.
Dropout makes the model harder to overfit, and overfitting a single example is exactly what you want here.
I have used a Transformer model to train on a time series dataset, but there is always a gap between training and validation in my loss curve. I have tried different learning rates, batch sizes, dropout rates, numbers of heads, dim_feedforward values, and numbers of layers, but none of them help. Can anyone give me some ideas on reducing the gap between them?
I also asked the question on the PyTorch forum but didn't get any reply.
How should I design a decoder for time series regression in a Transformer?
Since you are overfitting your model here:

1. Try using more data.
2. Try adding dropout layers.
3. Try using L1 (lasso) or L2 (ridge) regularization; a sketch of the last two suggestions is shown below.
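A minimal sketch of suggestions 2 and 3, assuming a PyTorch model (the tiny model, layer sizes, and coefficients below are illustrative placeholders only):

```python
import torch
import torch.nn as nn

# Hypothetical regressor with dropout between layers (suggestion 2).
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(128, 1),
)

# Suggestion 3: L2 ("ridge") regularization via weight decay in the optimizer.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# L1 ("lasso") regularization can instead be added to the loss by hand:
def l1_penalty(model, lam=1e-5):
    return lam * sum(p.abs().sum() for p in model.parameters())
```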
I'm currently working on a personal implementation of the Transformer architecture. The code I've written is here.
The problem I'm facing is that I believe my model isn't training properly, and I'm not sure what measures I should take to fix that. I came to this conclusion after using Weights & Biases to visualize the model's gradient histograms, which look something like this:
The gradients seem to converge to zero quickly. One portion of the code contains a feed-forward network that uses ReLU activations, and I changed these to Leaky ReLU on the suspicion that dying ReLUs might be the problem. However, using Leaky ReLUs doesn't help and only delays the convergence to zero.
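For context, here is a minimal sketch of that kind of swap and of printing per-layer gradient norms to see where the gradients vanish, assuming a PyTorch implementation (the layer sizes and dummy loss are placeholders, not the actual code):

```python
import torch
import torch.nn as nn

# Hypothetical Transformer feed-forward block with the ReLU -> LeakyReLU swap.
ffn = nn.Sequential(
    nn.Linear(512, 2048),
    nn.LeakyReLU(negative_slope=0.01),  # was nn.ReLU()
    nn.Linear(2048, 512),
)

x = torch.randn(4, 10, 512)          # (batch, sequence, d_model) placeholder
loss = ffn(x).pow(2).mean()          # dummy loss just to produce gradients
loss.backward()

# Per-layer gradient norms: if these shrink toward zero layer by layer,
# the problem looks like vanishing gradients rather than dying ReLUs alone.
for name, p in ffn.named_parameters():
    print(name, p.grad.norm().item())
```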
Any feedback on what else I may try is appreciated.
I am asking this question because I noticed that in competitions people tend to drive the loss to 0. I have an image binary classification problem and have already brought the binary_crossentropy loss down to 0.003 with a "train from scratch" transfer learning model. How can I reduce it further, toward 0? Should I fine-tune the model again, or should I go back and do image feature engineering?
Additionally, judging from the picture here, I suspect I am running into "vanishing gradients" rather than "overfitting". If so, what should my next step be?
Thank you!
Since you are doing image binary classification: if you can drive both your training and validation loss to 0, that basically means your network is 'perfectly' trained to recognize all the validation images using just the training images. When this happens, I think it's better to get 'harder' data for your network to learn from.
From your image, I think you should continue training your model for more epochs, since val_loss does not seem to have converged yet; so far there are no indications of overfitting.
Regarding the 'vanishing gradient', it's not possible to tell from your picture, since the usual sign of vanishing gradients is gradients (and hence weight updates) shrinking toward 0 in the earlier layers. To check for this, I think you should track the weight and gradient distributions of your model in addition to the losses.
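If the model is in Keras (as the loss name suggests), one way to do this is TensorBoard's histogram logging; a minimal sketch under that assumption, with x_train, y_train, etc. as placeholder names:

```python
import tensorflow as tf

# Write histograms of layer weights every epoch; inspect them in TensorBoard
# to see whether the distributions collapse toward zero over training.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir="logs/vanishing_check",
    histogram_freq=1,
)

# model.fit(x_train, y_train,
#           validation_data=(x_val, y_val),
#           epochs=50,
#           callbacks=[tensorboard_cb])
```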
I have a dataset of around 6K chemical formulas which I am preprocessing via Keras' tokenization to perform binary classification. I am currently using a 1D convolutional neural network with dropout and am obtaining an accuracy of 82% and a validation accuracy of 80% after only two epochs. No matter what I try, the model just plateaus there and doesn't seem to improve at all. The exact same accuracies are reached with a vanilla LSTM too. What else can I try to improve my accuracy? The losses differ by only 0.04... Anyone have any ideas? Both models use an embedding layer, and changing its output dimension isn't having an effect either.
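For reference, a minimal sketch of this kind of setup, assuming character-level Keras tokenization of the formulas and an Embedding + Conv1D classifier (the data, sizes, and layer choices are placeholders, not the asker's code):

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

formulas = ["C6H12O6", "NaCl", "H2SO4"]          # placeholder data
labels = [1, 0, 1]

tokenizer = Tokenizer(char_level=True)           # tokenize formulas character by character
tokenizer.fit_on_texts(formulas)
seqs = pad_sequences(tokenizer.texts_to_sequences(formulas), maxlen=32)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(tokenizer.word_index) + 1, output_dim=16),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(seqs, tf.convert_to_tensor(labels), epochs=20, validation_split=0.2)
```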
Based on your description, I believe your model has high bias and low variance (see this link for further details). In other words, your model is not fitting the data very well; it is underfitting. So I suggest three things:
Train your model a little longer: I believe two epochs are too few to give your model a chance to learn the patterns in the data. Try lowering the learning rate and increasing the number of epochs.
Try a different architecture: you can change the number of convolutions, filters, and layers. You can also use different activation functions and other layers such as max pooling.
Do an error analysis: once you have finished training, apply your model to the test set and look at the errors (see the sketch after this list). How many false positives and false negatives do you have? Is your model better at classifying one class than the other? Can you see a pattern in the errors that might be related to your data?
Finally, if none of these suggestions helps, you could also try increasing the number of features, if possible.
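A minimal sketch of the error-analysis step with scikit-learn, assuming the model's predicted probabilities and the true labels are available as arrays (the variable names and values are placeholders):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# y_true: ground-truth labels, y_prob: model outputs in [0, 1] -- placeholders.
y_true = np.array([0, 1, 1, 0, 1, 0])
y_prob = np.array([0.2, 0.8, 0.4, 0.1, 0.9, 0.7])
y_pred = (y_prob >= 0.5).astype(int)

# Rows are true classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=3))
```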
I am trying to build an 11-class image classifier with 13,000 training images and 3,000 validation images. I am using a deep neural network trained with MXNet. Training accuracy keeps increasing and has passed 80%, but validation accuracy stays in the range of 54-57% and is not increasing.
What could be the issue here? Should I increase the number of images?
The issue here is that your network stops learning useful general features at some point and starts adapting to the peculiarities of your training set (overfitting it as a result). You want to 'force' your network to keep learning useful features, and you have a few options here:
Use weight regularization. It keeps the weights small, which very often leads to better generalization. Experiment with different regularization coefficients: try 0.1, 0.01, and 0.001 and see what impact they have on accuracy (a sketch of this and the next option follows after this list).
Corrupt your input (e.g., randomly replace some pixels with black or white). This removes information from the input and 'forces' the network to pick up on important general features. Experiment with the noising coefficient, which determines how much of the input is corrupted. Research shows that anything in the range of 15%-45% works well.
Expand your training set. Since you're dealing with images, you can augment your set by rotating, scaling, etc. your existing images (as suggested). You could also experiment with pre-processing your images (e.g., mapping them to black and white or grayscale), but the effectiveness of this will depend on your exact images and classes.
Pre-train your layers with a denoising criterion. Here you pre-train each layer of your network individually before fine-tuning the entire network. Pre-training 'forces' layers to pick up on important general features that are useful for reconstructing the input signal. Look into autoencoders, for example (they have been applied to image classification in the past).
Experiment with the network architecture. Your network might not have sufficient learning capacity. Experiment with different neuron types, numbers of layers, and numbers of hidden neurons. Make sure to try compressing architectures (fewer neurons than inputs) and sparse architectures (more neurons than inputs).
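A minimal sketch of the first two options (weight regularization via weight decay, plus random pixel corruption), shown in PyTorch for concreteness since the answer is framework-agnostic; the toy model, image sizes, coefficients, and corruption rate are placeholders, and the same ideas carry over to an MXNet setup:

```python
import torch
import torch.nn as nn

# Placeholder classifier for 11 classes on 64x64 RGB images.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 11),
)

# Option 1: weight regularization (L2) via the optimizer's weight_decay.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.001)

def corrupt(images, rate=0.25):
    """Option 2: randomly replace a fraction of pixels with black or white."""
    mask = torch.rand_like(images) < rate
    noise = (torch.rand_like(images) < 0.5).float()   # 0 = black, 1 = white
    return torch.where(mask, noise, images)

images = torch.rand(8, 3, 64, 64)      # placeholder batch
labels = torch.randint(0, 11, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(corrupt(images)), labels)
loss.backward()
optimizer.step()
```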
Unfortunately, training a network that generalizes well involves a lot of experimentation and an almost brute-force exploration of the parameter space, with a bit of human supervision (you'll see many research works employing this approach). It's good to try 3-5 values for each parameter and see whether they lead you anywhere.
When you experiment, plot accuracy / cost / F1 as a function of the number of iterations and see how it behaves. Often you'll notice a peak in test-set accuracy followed by a continuous drop. So apart from a good architecture, regularization, corruption, etc., you're also looking for the number of iterations that yields the best results.
One more hint: make sure each training epoch randomizes the order of the images.
This clearly looks like a case where the model is overfitting the training set, as the validation accuracy improved step by step until it got stuck at a particular value. If the learning rate were a bit higher, you would end up seeing validation accuracy decrease while training accuracy keeps increasing.
Increasing the size of the training set is the best solution to this problem. You could also try applying different transformations (flipping, cropping random portions from a slightly larger image) to the existing images and see whether the model learns better, as in the sketch below.
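A minimal sketch of that kind of augmentation with torchvision, assuming roughly 64x64 images (the sizes and dataset path are placeholders; MXNet offers analogous image augmenters if you stay in that framework):

```python
from torchvision import transforms

# Random flips plus random crops taken from a slightly larger (resized) image,
# applied on the fly so every epoch sees a slightly different version of each image.
train_transform = transforms.Compose([
    transforms.Resize(72),                 # make the image slightly bigger...
    transforms.RandomCrop(64),             # ...then crop a random 64x64 portion
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Typical usage (hypothetical dataset path):
# dataset = torchvision.datasets.ImageFolder("data/train", transform=train_transform)
```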