Can a flat validation loss and a decreasing training loss be considered a symptom of overfitting? - deep-learning

My questions are about underfitting/overfitting and relate to the following results (here).
In this scenario, can a flat validation loss combined with a decreasing training loss be considered a symptom of overfitting? I would have expected a validation loss that starts to increase.
Moreover, towards the end the training loss was also flattening out, so is it correct to say that the model can't learn any more with these hyperparameters? Is that, instead, a symptom of underfitting?
I'm working on this dataset (here).
I implemented a convolutional neural network with 7 conv layers and 2 FC layers (similar to VGG: 64-P-128-128-P-256-256-P-512-512, a hidden FC layer of 256 neurons, and a final layer for classification), clearly not aiming for a state-of-the-art score (currently about 75%).
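For reference, a minimal PyTorch sketch of the architecture described above; the 3-channel input, the global average pooling before the FC layers, and the number of classes are assumptions, since they are not stated in the question:

import torch.nn as nn

# Sketch of the 64-P-128-128-P-256-256-P-512-512 stack with 2 FC layers.
# num_classes=10 and 3-channel input are illustrative assumptions.
class SmallVGG(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        def block(c_in, c_out):
            return [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True)]
        self.features = nn.Sequential(
            *block(3, 64), nn.MaxPool2d(2),                       # 64-P
            *block(64, 128), *block(128, 128), nn.MaxPool2d(2),   # 128-128-P
            *block(128, 256), *block(256, 256), nn.MaxPool2d(2),  # 256-256-P
            *block(256, 512), *block(512, 512),                   # 512-512
            nn.AdaptiveAvgPool2d(1),                              # assumed pooling before FC
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512, 256), nn.ReLU(inplace=True),   # hidden FC of 256 neurons
            nn.Linear(256, num_classes),                  # final classification layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))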
It seems strange to me to talk about underfitting and overfitting in the same training process, so I'm pretty sure there's something I'm missing. Could you help me to understand these results?
Thanks for your attention
I've found a similar question but it didn't help (here).

Related

NN converges very quickly but performs well. Should I be concerned?

I have an LSTM model I'm using for time series predictions. In training it converges after only 3 epochs. The model performs quite well on the test data, but should I still be concerned about the fast convergence, or should performance on the test set be the deciding factor for whether a model is good or not?
There are plenty of data points (100k) and two hidden layers with 124 and 64 nodes, so I don't think the model lacks complexity or data.
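For reference, a minimal Keras sketch of the kind of stacked LSTM described above; the window length, single input feature, and training settings are assumptions:

from tensorflow import keras
from tensorflow.keras import layers

# Stacked LSTM with 124 and 64 units, as described above.
# The window length (30) and single feature/target are assumptions.
model = keras.Sequential([
    layers.Input(shape=(30, 1)),              # (timesteps, features)
    layers.LSTM(124, return_sequences=True),  # first hidden layer, 124 units
    layers.LSTM(64),                          # second hidden layer, 64 units
    layers.Dense(1),                          # one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, epochs=3, batch_size=64, validation_split=0.2)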

Data augmentation stops overfitting by preventing learning entirely?

I am training a network to classify psychosis (binary classification as either healthy or psychosis) given an MRI scan of a subject. My dataset is 500 items, where I am using 350 for training and 150 for validation. Around 44% of the dataset is healthy, and ~56% has psychosis.
When I train the network without data augmentation, the training loss begins decreasing immediately while validation loss never changes. The red line in the accuracy graph below is the dominant class percentage (56%).
When I re-train using data augmentation 80% of the time (random affine, blur, noise, flip), overfitting is prevented, but now nothing is learned at all.
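For reference, one way to apply such augmentations 80% of the time with torchvision, assuming 2D slices are fed to the network; the transform parameters below are illustrative assumptions, not the settings used in the question:

import torch
from torchvision import transforms

# Apply the whole augmentation group with probability 0.8, as described above.
# All parameter values are illustrative assumptions.
augment = transforms.RandomApply([
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.95, 1.05)),
    transforms.GaussianBlur(kernel_size=3),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),  # additive noise
], p=0.8)

train_transform = transforms.Compose([
    transforms.ToTensor(),   # convert the slice to a tensor before augmenting
    augment,
])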
So I suppose my question is: What are some ideas for how to get the validation accuracy to increase? i.e. get the network to learn things without overfitting...

Training a small amount of data on a large-capacity network

Currently I am using convolutional neural networks to solve a binary classification problem. The data are 2D images, and I only have about 20,000-30,000 training samples. In deep learning, it is generally known that overfitting can arise if the model is too complex relative to the amount of training data. So, to prevent overfitting, a simplified model or transfer learning is used.
Previous developers in the same field did not use high-capacity models (high-capacity means a large number of model parameters) due to the small amount of training data. Most of them used small-capacity models and transfer learning.
But when I trained high-capacity models (based on ResNet50, InceptionV3, DenseNet101) from scratch, which have about 10 million to 20 million parameters, I got high accuracy on the test set.
(Note that the training and test sets were strictly separated, and I used early stopping to prevent overfitting.)
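For reference, a minimal Keras sketch of training one of these models from scratch with early stopping; the input size, patience, and dataset objects are assumptions:

from tensorflow import keras
from tensorflow.keras import layers

# ResNet50 trained from scratch (weights=None) on a binary task, as in the question.
# Image size, patience, and the train_ds/val_ds dataset names are assumptions.
base = keras.applications.ResNet50(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),   # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping on validation loss, keeping the best weights.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])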
In the ImageNet image classification task, the training data is about 10 million. So, I also think that the amount of my training data is very small compared to the model capacity.
Here I have two questions.
1) Even though I got high accuracy, is there any reason why I should not use a small amount of data with a high-capacity model?
2) Why does it perform well? Even with a (very) large gap between the amount of data and the number of model parameters, do techniques like early stopping overcome the problem?
1) You're completely right that small amounts of training data can be problematic when working with a large model. Given that your ultimate goal is to achieve a "high accuracy", this theoretical limitation shouldn't bother you too much if the practical performance is satisfactory for you. Of course, you might always be able to do better, but I don't see a problem with your workflow if the score on the test data is legitimate and you're happy with it.
2) First of all, I believe ImageNet consists of 1.X million images, so that puts you a little closer in terms of data. Here are a few ideas I can think of:
Your problem is easier to solve than ImageNet
You use image augmentation to synthetically increase your image data
Your test data is very similar to the training data
Also, don't forget that 30,000 samples means (30,000 * 224 * 224 * 3 =) 4.5 billion values. That should make it quite hard for a 10 million parameter network to simply memorize your data.
3) Welcome to StackOverflow

Predicting rare events and their strength with LSTM autoencoder

I'm currently creating an LSTM to predict rare events. I've seen this paper, which suggests first using an LSTM autoencoder to extract features and then feeding the embeddings into a second LSTM that makes the actual prediction. According to them, the autoencoder extracts features (this is usually true) which are then useful for the prediction layers.
In my case, I need to predict whether or not there will be an extreme event (this is the most important thing) and then how strong it is going to be. Following their advice, I've created the model, but instead of adding one LSTM from the embeddings to the predictions, I add two: one for the binary prediction (it happens, or it doesn't), ending with a sigmoid layer, and a second one for predicting how strong it will be. That gives me three losses: the reconstruction loss (MSE), the strength prediction loss (MSE), and the binary loss (binary cross-entropy).
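For reference, a minimal Keras sketch of such a three-headed setup; the sequence length, feature count, layer sizes, and loss weights are assumptions:

from tensorflow import keras
from tensorflow.keras import layers

# Shared LSTM encoder with three heads: reconstruction, event probability,
# and event strength. Sequence length 30, 1 feature, and all sizes/weights
# are illustrative assumptions.
inputs = keras.Input(shape=(30, 1))
encoded = layers.LSTM(32)(inputs)                      # the "embedding"

# Reconstruction head (autoencoder part)
decoded = layers.RepeatVector(30)(encoded)
decoded = layers.LSTM(32, return_sequences=True)(decoded)
reconstruction = layers.TimeDistributed(layers.Dense(1), name="recon")(decoded)

# Binary head: will there be an extreme event?
event = layers.LSTM(16)(layers.RepeatVector(30)(encoded))
event = layers.Dense(1, activation="sigmoid", name="event")(event)

# Strength head: how strong will it be?
strength = layers.LSTM(16)(layers.RepeatVector(30)(encoded))
strength = layers.Dense(1, name="strength")(strength)

model = keras.Model(inputs, [reconstruction, event, strength])
model.compile(
    optimizer="adam",
    loss={"recon": "mse", "event": "binary_crossentropy", "strength": "mse"},
    loss_weights={"recon": 1.0, "event": 1.0, "strength": 1.0},
)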
The thing is that I'm not sure it is learning anything… the binary loss stays at 0.5, and even the reconstruction loss is not really good. And of course, the bad thing is that the time series is mostly zeros, with some values from 1 to 10, so MSE is probably not a good metric.
What do you think about this approach?
Is this a good architecture for predicting rare events? Which one would be better?
Should I add some CNN or FC layers between the embeddings and the two prediction LSTMs, to extract 1D patterns from the embedding, or feed the embeddings directly into the prediction LSTMs?
Should there be just one prediction LSTM, using only an MSE loss?
Would it be a good idea to multiply the two predictions, to force the predicted no-event days to coincide in both cases?
Thanks,

Large number of training steps results in poor performance in transfer learning

I have a question. I have used transfer learning to retrain GoogLeNet on my image classification problem. I have 80,000 images belonging to 14 categories. I set the number of training steps to 200,000. I think the code provided by TensorFlow has dropout implemented, and it trains with random shuffling of the dataset and a cross-validation approach. I do not see any overfitting in the training and classification curves, and I get high cross-validation accuracy and high test accuracy, but when I apply my model to a new dataset I get poor classification results. Does anybody know what is going on? Thanks!
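For reference, a minimal Keras sketch of the usual transfer-learning setup with a frozen ImageNet base and a retrained head; InceptionV3 stands in for GoogLeNet here, the 14 classes come from the question, and everything else is an illustrative assumption:

from tensorflow import keras
from tensorflow.keras import layers

# Frozen ImageNet feature extractor with a retrained classification head.
# InceptionV3 stands in for GoogLeNet; 14 classes as in the question,
# remaining settings (image size, dropout, datasets) are assumptions.
base = keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False   # only the new head is trained

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(14, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                           restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[early_stop])

Stopping on validation loss, rather than running a fixed 200,000 steps, limits how long the new head keeps fitting the training distribution.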