Predicting rare events and their strength with an LSTM autoencoder

I’m currently building an LSTM to predict rare events. I’ve seen this paper, which suggests first training an LSTM autoencoder to extract features, and then feeding the embeddings to a second LSTM that makes the actual prediction. According to the authors, the autoencoder extracts features (this is usually true) which are then useful for the prediction layers.
In my case, I need to predict whether there will be an extreme event (this is the most important thing) and then how strong it is going to be. Following their advice, I’ve created the model, but instead of adding one LSTM from the embeddings to the predictions I add two: one for the binary prediction (it happens or it doesn’t), ending with a sigmoid layer, and a second one for predicting how strong it will be. So I have three losses: the reconstruction loss (MSE), the prediction loss (MSE), and the binary loss (binary cross-entropy).
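Roughly, the model looks like this (a simplified Keras sketch; the layer sizes and names are placeholders, not my real configuration):

from tensorflow.keras import layers, Model

# Simplified sketch of the architecture (sizes are placeholders).
T, F = 30, 1          # timesteps and features of the input window
LATENT = 16           # size of the embedding produced by the encoder

inputs = layers.Input(shape=(T, F))

# LSTM autoencoder: encoder -> embedding -> decoder (reconstruction head)
encoded = layers.LSTM(LATENT, name="encoder")(inputs)
repeated = layers.RepeatVector(T)(encoded)
decoded = layers.LSTM(LATENT, return_sequences=True)(repeated)
reconstruction = layers.TimeDistributed(layers.Dense(F), name="reconstruction")(decoded)

# Two prediction heads fed from the same embedding
emb_seq = layers.RepeatVector(1)(encoded)  # give the heads a (1, LATENT) sequence

event_branch = layers.LSTM(8)(emb_seq)
event = layers.Dense(1, activation="sigmoid", name="event")(event_branch)        # is it an extreme event?

strength_branch = layers.LSTM(8)(emb_seq)
strength = layers.Dense(1, activation="relu", name="strength")(strength_branch)  # how strong it will be

model = Model(inputs, [reconstruction, event, strength])
model.compile(
    optimizer="adam",
    loss={"reconstruction": "mse", "event": "binary_crossentropy", "strength": "mse"},
    loss_weights={"reconstruction": 1.0, "event": 1.0, "strength": 1.0},
)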
The thing is that I’m not sure it is learning anything: the binary loss stays at 0.5, and even the reconstruction loss is not very good. And of course, the bad part is that the time series is mostly zeros with some values between 1 and 10, so MSE is probably not a good loss for it.
What do you think about this approach?
Is this a good architecture for predicting rare events? If not, which one would be better?
Should I add a CNN or fully connected layers between the embeddings and the two prediction LSTMs, to extract 1D patterns from the embedding, or feed the embeddings directly into the prediction layers?
Should there be just one prediction LSTM, using only an MSE loss?
Would it be a good idea to multiply the two predictions, to force the days predicted as having no event to coincide in both outputs? (Sketched below.)
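For that last question, what I have in mind is something like this (continuing the sketch above, just gating the strength by the event probability):

# Gate the strength output by the event probability, so that days where
# the model predicts "no event" automatically get a strength near zero.
gated_strength = layers.Multiply(name="gated_strength")([event, strength])

gated_model = Model(inputs, [reconstruction, event, gated_strength])
gated_model.compile(
    optimizer="adam",
    loss={"reconstruction": "mse",
          "event": "binary_crossentropy",
          "gated_strength": "mse"},
)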
Thanks,

Related

Ways to prevent underfitting and overfitting when using data augmentation to train a transposed CNN

I'm training a CNN (one using a series of ConvTranspose2d layers in PyTorch) that takes input data from JSON and generates an image. Unlike natural language, the input data can be in any order, as it contains info about the various sprites in a scene.
In my first attempts to train the model, I didn't change the order of the input data (meaning that on each epoch, each sprite appeared in the same place in the input). The model learned for about 10 epochs, but then the training loss (which continued to go down) and the test loss started to diverge. So, classic overfitting.
I tried to solve this with a form of data augmentation where the output (in this case an image) stays the same but the order of the input data is shuffled. As I have around 400 sprites, there are 400! possible orderings, so theoretically this vastly expands the amount of training data: instead of 100k JSON documents corresponding to 100k images, shuffling the order of sprites in the input gives you 400! * 100,000 training data points. In practice that amount of data is impractical, so I went with around 2M data points for an initial test. The issue I ran into here was that the model was not learning at all: after reaching a certain loss very quickly (within the first few mini-batches), it didn't improve for around 4 epochs. So, classic underfitting.
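The shuffling itself is simple; roughly, my Dataset looks like this (a simplified sketch with made-up names):

import torch
from torch.utils.data import Dataset

class SpriteSceneDataset(Dataset):
    """Simplified sketch: each sample is a tensor of per-sprite feature
    vectors (parsed from the JSON) plus the target image. The sprite order
    is re-shuffled on every access, so each epoch sees a new permutation."""

    def __init__(self, scenes, images, shuffle_sprites=True):
        self.scenes = scenes                  # list of (num_sprites, sprite_dim) tensors
        self.images = images                  # list of target images (C, H, W)
        self.shuffle_sprites = shuffle_sprites

    def __len__(self):
        return len(self.scenes)

    def __getitem__(self, idx):
        sprites = self.scenes[idx]
        if self.shuffle_sprites:
            perm = torch.randperm(sprites.size(0))
            sprites = sprites[perm]           # new sprite order every time
        return sprites, self.images[idx]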
Like Goldilocks, I'd like to find the "just right" point between the initial overfitting and the subsequent underfitting, and I'm wondering what other strategies I could try. One idea I had was letting the model train on a predetermined order of sprites (the overfitting case) and then, once overfitting starts (i.e., two straight epochs of divergence between the test and training loss), shuffling the data. I can also play with changing the model, although it can only be so big because of hardware constraints and the fact that inference needs to happen in under 20 ms.
Are there any papers or techniques recommended for this scenario, where data augmentation can produce vastly more data points but causes the model to stop learning? Thanks in advance for any tips!

NN converges very quickly but performs well. Should I be concerned?

I have an LSTM model that I'm using for time-series predictions. In training it converges after only 3 epochs. The model performs quite well on the test data, but should I still be concerned about the fast convergence, or should performance on the test set be the overruling factor in deciding whether a model is good or not?
There are plenty of data points (100k) and two hidden layers with 124 and 64 nodes, so I don't think the model lacks complexity or data.

Is normalization of word embeddings important?

I am doing actor-critic reinforcement learning for an environment that is best represented as a "bag of words". For this reason, I have opted for a single-body, multi-head net architecture. I use a linear pre-processing layer to generate n word embeddings of dimension d, then run the (batch, n, d) words through a stack of two nn.TransformerEncoder layers for the body; each head is another encoder layer followed by a linear logit layer.
Since this is RL and I have limited compute, it is also difficult to evaluate training as it's happening. I decided to try looking at the mean cosine similarity of the latent words after the encoder body. My intuition tells me that if the net is learning a proper latent representation of the environment, then dissimilar words should have low cosine similarity.
However, even though the net is clearly improving somewhat, the mean cosine similarity remains very high, > 0.99.
Thinking about it more, I don't think there's any reason to believe my first intuition, especially since I am not even normalizing the words after the encoder body stack. But even if I did normalize, I'm not sure that would encourage lower cosine similarity, as I am using 256 dimensions per word. All normalizing does is reduce the dimension of the output space by 1, which should hardly matter here.
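For reference, this is roughly how I compute the diagnostic, given the (batch, n, d) latents from the encoder body (and, as noted, per-vector normalization would not change it, since cosine similarity normalizes each vector anyway):

import torch
import torch.nn.functional as F

def mean_pairwise_cosine_sim(latents: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity over all distinct pairs of latent word vectors.
    latents: (batch, n, d) output of the encoder body."""
    normed = F.normalize(latents, dim=-1)        # unit-length vectors
    sims = normed @ normed.transpose(1, 2)       # (batch, n, n) cosine similarities
    n = sims.size(-1)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=sims.device)
    return sims[:, off_diag].mean()              # ignore each vector's similarity with itself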
Does this make sense? Any general advice about my net is also welcome.

What do activation layers learn?

I am trying to figure out what a CNN learns after each activation layer. To that end, I wrote some code to visualize a few activation layers in my model; I use LeakyReLU as my activation layer. This is the figure: [LeakyReLU after Conv2d + BatchNorm].
As can be seen from the figure, there are quite a few purple frames that show nothing. So my question is: what does this mean? Is my model learning anything?
Generally speaking, activation layers (AL) don't learn.
The purpose of activation layers is to add non-linearity to the model. They apply a certain fixed function regardless of the data, without adapting to it. For example:
Max Pool: takes the highest number in the region.
Sigmoid/Tanh: puts all the numbers through a fixed squashing function.
ReLU: takes the maximum of each number and 0.
I tried to simplify the math, so pardon my inaccuracies.
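One quick way to see this, for example in PyTorch: activation (and pooling) layers have no trainable parameters, while the convolution and batch-norm layers around them hold all the weights.

import torch.nn as nn

# Activation and pooling layers: nothing to learn.
for layer in [nn.LeakyReLU(0.1), nn.Sigmoid(), nn.Tanh(), nn.MaxPool2d(2)]:
    print(type(layer).__name__, sum(p.numel() for p in layer.parameters()))
# LeakyReLU 0, Sigmoid 0, Tanh 0, MaxPool2d 0

# The surrounding layers are the ones that learn.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
bn = nn.BatchNorm2d(16)
print("Conv2d", sum(p.numel() for p in conv.parameters()))       # 3*3*3*16 + 16 = 448
print("BatchNorm2d", sum(p.numel() for p in bn.parameters()))    # 16 weights + 16 biases = 32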
To close: your purple frames are probably filters that just haven't learned anything yet. Train the model to convergence, and unless your model is highly bloated (too big for your data), you will see 'structures' in your filters.

Loss function in an LSTM neural network

I do not understand what is being minimized in these networks.
Can someone please explain what is going on mathematically when the loss gets smaller in an LSTM network?
model.compile(loss='categorical_crossentropy', optimizer='adam')
From the Keras documentation, categorical_crossentropy is just the multiclass log loss. Math and a theoretical explanation of log loss can be found here.
Basically, the LSTM is assigning labels to words (or characters, depending on your model) and optimizing the model by penalizing incorrect labels in the word (or character) sequences. The model takes an input word or character vector and tries to guess the next "best" word, based on the training examples. Categorical cross-entropy is a quantitative way of measuring how good that guess is. As the model iterates over the training set, it makes fewer mistakes in guessing the next best word (or character).
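As a small numeric illustration (made-up numbers, not from your model): with a one-hot target, the cross-entropy for one step is just the negative log of the probability the model gave to the correct next word, so more confident correct guesses mean lower loss.

import numpy as np

# Vocabulary of 4 "words"; the true next word is index 2 (one-hot target).
target = np.array([0.0, 0.0, 1.0, 0.0])

# Softmax output of the LSTM for this step: its guess for the next word.
predicted = np.array([0.1, 0.2, 0.6, 0.1])

# Categorical cross-entropy = -sum(target * log(predicted)).
# With a one-hot target this reduces to -log(prob of the correct word).
print(-np.sum(target * np.log(predicted)))   # -log(0.6) ≈ 0.51

# A more confident (and correct) prediction gives a smaller loss:
better = np.array([0.02, 0.03, 0.90, 0.05])
print(-np.sum(target * np.log(better)))      # -log(0.9) ≈ 0.11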