I'm using a 1D CNN on temporal data. Let's say that I have two features A and B. The ratio between A and B (i.e. A/B) is important - let's call this feature C. I'm wondering if I need to explicitly calculate and include feature C, or can the CNN theoretically infer feature C from the given features A and B?
I understand that in deep learning, it's best to exclude highly-correlated features (such as feature C), but I don't understand why.
The short answer is no. Standard DNN layers will not automatically capture this A/B relationship, because layers such as Conv/Dense only perform affine operations (matrix multiplications plus a bias).
To simplify the discussion, let us assume that your input feature is two-dimensional, where the first dimension is A and the second is B. Applying a Conv layer to this feature simply learns a weight vector w and a bias b:
y = w * [f_A, f_B] + b = w_A * f_A + w_B * f_B + b
As you can see, there is no way for this linear representation to mimic, or even closely approximate, the ratio of A and B.
You don't have to use feature C in the same way as features A and B. Instead, it may be a better idea to keep feature C as an individual input, because its dynamic range may be very different from those of A and B. This means that you can have a multiple-input network, where each input has its own feature extraction layers and the resulting features from both inputs are concatenated together to predict your target.
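A minimal sketch of such a multiple-input network in Keras could look like the following (the layer types, sizes, and names here are illustrative assumptions, not a prescription):

from keras.layers import Input, Conv1D, GlobalAveragePooling1D, Dense, concatenate
from keras.models import Model

timesteps = 100                      # assumed window length of the temporal data

# branch for the raw features A and B (two channels)
ab_input = Input(shape=(timesteps, 2), name='ab_input')
ab = Conv1D(filters=32, kernel_size=3, activation='relu')(ab_input)
ab = GlobalAveragePooling1D()(ab)

# separate branch for the precomputed ratio C = A/B (one channel, with its own scale)
c_input = Input(shape=(timesteps, 1), name='c_input')
c = Conv1D(filters=8, kernel_size=3, activation='relu')(c_input)
c = GlobalAveragePooling1D()(c)

# concatenate the extracted features and predict the target
merged = concatenate([ab, c])
output = Dense(1, name='target')(merged)
model = Model(inputs=[ab_input, c_input], outputs=output)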
Loosely speaking, I would write an unconditional model as y = f(z), with y in R^n (or R^(n,m) or R^(n,m,d)) and z in R^q, where z is drawn from some probability distribution. For Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) I was able to find examples of how to design a conditional variant of these types of models. The conditional model can then be written as y = f(u, z), with y and z as above and u in R^p. Remark: I am interested in continuous conditions. But how can such a conditional model be designed for diffusion, normalizing-flow, or score-based models? Let's take diffusion models as an example. Would we gradually add Gaussian noise to the input u at each step and "forget" the condition more and more this way, or would we keep the full condition?
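For reference, a minimal sketch of the second option (keeping the full, un-noised condition and feeding it to the denoiser at every step) could look like this in PyTorch; all names, shapes, and layer sizes are placeholder assumptions:

import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    # predicts the noise in y_t given the noisy sample y_t, the timestep t and the clean condition u
    def __init__(self, y_dim, u_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(y_dim + u_dim + 1, hidden),   # +1 for the (scalar) timestep, shape (batch, 1)
            nn.ReLU(),
            nn.Linear(hidden, y_dim),
        )

    def forward(self, y_t, u, t):
        # only y_t has gone through the forward diffusion; u is passed in unchanged
        return self.net(torch.cat([y_t, u, t], dim=-1))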
I am trying to implement nn.MultiheadAttention in my network. According to the docs,
embed_dim – total dimension of the model.
However, according to the source file,
embed_dim must be divisible by num_heads
and
self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
If I understand correctly, this means each head takes only a part of the features of each query, since the projection matrix is square (embed_dim × embed_dim). Is this a bug in the implementation, or is my understanding wrong?
Each head uses a different part of the projected query vector. You can imagine it as if the query gets split into num_heads vectors that are independently used to compute the scaled dot-product attention. So, each head operates on a different linear combination of the features in queries (and keys and values, too). This linear projection is done using the self.q_proj_weight matrix and the projected queries are passed to F.multi_head_attention_forward function.
In F.multi_head_attention_forward, it is implemented by reshaping and transposing the query vector, so that the independent attentions for individual heads can be computed efficiently by matrix multiplication.
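A stripped-down sketch of that projection-and-split step (the shapes follow PyTorch's convention, but the code below is an illustration, not the actual F.multi_head_attention_forward source):

import torch

seq_len, batch, embed_dim, num_heads = 10, 4, 512, 8
head_dim = embed_dim // num_heads                    # embed_dim must be divisible by num_heads

query = torch.randn(seq_len, batch, embed_dim)
q_proj_weight = torch.randn(embed_dim, embed_dim)    # plays the role of self.q_proj_weight

q = query @ q_proj_weight.t()                        # one big projection for all heads at once
q = q.contiguous().view(seq_len, batch * num_heads, head_dim).transpose(0, 1)
# q now has shape (batch * num_heads, seq_len, head_dim):
# each head attends over its own head_dim-sized slice of the projected queries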
The attention head size of embed_dim / num_heads is a design decision of PyTorch. In theory, you could use a different head size, in which case the projection matrix would have a shape of embed_dim × (num_heads * head_dim). Some implementations of Transformers (such as the C++-based Marian for machine translation, or Hugging Face's Transformers) allow that.
I am exploring deep learning methods, especially LSTMs, to predict the next word. Suppose my data set looks like this: each data point consists of 7 features (7 different words, A-G here) of different lengths.
Group1  Group2  ...  Group38
A       B            F
E       C            A
B       E            G
C       D            G
C       F            F
D       G            G
...     ...          ...
I used one-hot encoding for the input layer. Here is the model:
from keras.layers import Input, LSTM, Dropout, Dense, Activation
from keras.models import Model

# action_count (the one-hot dimension) is defined elsewhere in my code
main_input = Input(shape=(None, action_count), name='main_input')
lstm_out = LSTM(units=64, activation='tanh')(main_input)
lstm_out = Dropout(0.2)(lstm_out)
lstm_out = Dense(action_count)(lstm_out)
main_output = Activation('softmax')(lstm_out)
model = Model(inputs=[main_input], outputs=main_output)
print(model.summary())
Using this model, I got an accuracy of about 60%.
My question is: how can I use an embedding layer for my problem? Actually, I do not know much about embeddings (why, when, and how they work); I only know that a one-hot vector does not carry much information. I am wondering if an embedding can improve accuracy. If someone can provide guidance in this regard, it will be greatly beneficial for me (at least whether using an embedding is logical for my case).
What are Embedding layers?
They are layers which convert positive integers (for example, word indices) into fixed-size dense vectors. They learn the so-called embeddings for a particular text dataset (in NLP tasks).
Why are they useful?
Embedding layers gradually learn the relationships between words. Hence, if you have a large enough corpus (one which probably contains all possible English words), then the vectors for words like "king" and "queen" will show some similarity in the multidimensional space of the embedding.
How are they used in Keras?
The keras.layers.Embedding layer has the following configuration:
keras.layers.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None)
Turns positive integers (indexes) into dense vectors of fixed size. eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
This layer can only be used as the first layer in a model.
The input_dim is the vocabulary size + 1, where the vocabulary is the set of all words used in the dataset. The input_length is the length of the input sequences, whereas output_dim is the dimensionality of the output vectors (the dimension of the vector for a particular word).
The layer can also be used with pretrained word embeddings such as Word2Vec or GloVe.
Are they suitable for my use case?
Absolutely, yes. If we can learn a context (an embedding) for each word in your sequences, then we can very likely improve the model's performance.
How can I use them in my use case?
Follow these steps (a rough sketch in Keras follows the list):
You need to tokenize the sentences. Maybe with keras.preprocessing.text.Tokenizer.
Pad the sequences to a fixed length using keras.preprocessing.sequence.pad_sequences. This will be the input_length parameter for the Embedding layer.
Build the model with the Embedding layer as the first layer.
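Here is what those steps could look like; the vocabulary size, sequence length, and embedding dimension below are placeholder assumptions for your 7-word sequences:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Embedding, LSTM, Dropout, Dense, Activation
from keras.models import Model

texts = ["A E B C C D", "B C E D F G"]              # your word sequences as space-separated strings

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)      # lists of integer word indices
max_len = 7                                          # assumed fixed sequence length after padding
padded = pad_sequences(sequences, maxlen=max_len)

vocab_size = len(tokenizer.word_index) + 1           # input_dim = vocabulary size + 1

main_input = Input(shape=(max_len,), name='main_input')
x = Embedding(input_dim=vocab_size, output_dim=32, input_length=max_len)(main_input)
x = LSTM(units=64, activation='tanh')(x)
x = Dropout(0.2)(x)
x = Dense(vocab_size)(x)                             # analogous to your Dense(action_count)
main_output = Activation('softmax')(x)
model = Model(inputs=[main_input], outputs=main_output)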
Hope this helps.
I have been following the CS231n lectures from Stanford and trying to complete the assignments on my own, sharing the solutions both on GitHub and on my blog. But I'm having a hard time understanding how to structure backpropagation. I mean, I can code modular forward and backward passes, but what bothers me is when I have the model below: a two-layer neural network.
Let's assume that our loss function here is a softmax loss. In my modular softmax_loss() function I am calculating the loss and the gradient with respect to the scores (dSoft = dL/dY). After that, when I'm going backwards, say for b2, db2 would be equal to dSoft * 1, and dW2 would be equal to dSoft * dX2 (the outputs of the ReLU gate). What is the chain rule here? Why isn't dSoft equal to 1? Because dL/dL would be 1?
The softmax loss function outputs a number (the loss) given an input x.
What dSoft means is that you're computing the derivative of the softmax loss with respect to its input x. To calculate the derivative with respect to W of the last layer, you use the chain rule, i.e. dL/dW = dsoftmax/dx * dx/dW. Note that x = W*x_prev + b, where x_prev is the input to the last node. Therefore dx/dW is just x_prev and dx/db is just 1, which means that dL/dW (or simply dW) is dsoftmax/dx * x_prev and dL/db (or simply db) is dsoftmax/dx * 1. Note that dsoftmax/dx here is the dSoft we defined earlier.
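A small numpy sketch of that chain rule for the last affine layer (the names x_prev, W2, b2 follow the discussion above; the gradient dSoft is assumed to come from your softmax_loss(), so placeholder values are used here):

import numpy as np

N, H, C = 4, 5, 3                  # batch size, hidden units, number of classes
x_prev = np.random.randn(N, H)     # output of the ReLU gate (input to the last layer)
W2 = np.random.randn(H, C)
b2 = np.random.randn(C)

scores = x_prev @ W2 + b2          # forward pass of the last affine layer

dSoft = np.random.randn(N, C)      # dL/dscores from softmax_loss(); placeholder values here

dW2 = x_prev.T @ dSoft             # dL/dW2 = x_prev^T * dSoft, same shape as W2
db2 = dSoft.sum(axis=0)            # the "dSoft * 1" term, summed over the batch
dx_prev = dSoft @ W2.T             # gradient flowing back into the ReLU gate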
I have a good (or at least a self-consistent) calibration set and have applied PCA and, more recently, PLS regression to NIR spectra of known mixtures of water and additive to predict the percentage of additive by volume. So far I have done self-calibration and now want to predict the concentration from an NIR spectrum blindly. Octave returns XLOADINGS, YLOADINGS, XSCORES, YSCORES, COEFFICIENTS, and FITTED from the plsregress command; FITTED is the estimate of concentration. Octave uses the SIMPLS approach.
How do I use these returned variables to predict the concentration given a new sample's spectrum?
Scores are usually denoted by T and loadings by P and X=TP'+E where E is the residual. I am stuck.
Note that T and P are X scores and loadings, respectively. Unlike PCA, PLS has scores and loadings for Y as well (usually denoted U and Q).
While the documentation of plsregress is sketchy at best, the paper it refers to (Sijmen de Jong: "SIMPLS: an alternative approach to partial least squares regression", Chemom. Intell. Lab. Syst., 1993, 18, 251-263, DOI: 10.1016/0169-7439(93)85002-X) discusses prediction in equations (36) and (37), which give:
Yhat0 = X0 B
Note that this uses the centered data X0 to predict centered y-values; B is the COEFFICIENTS matrix.
I recommend that as a first step you predict your training spectra and make sure you get the correct results (FITTED).
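Translating that equation into a numpy-style sketch (the centering conventions are assumptions; check the dimensions of COEFFICIENTS first, since some plsregress implementations include an intercept row, and all data below are placeholders):

import numpy as np

# X_cal: calibration spectra (n_samples x n_wavelengths), y_cal: known concentrations,
# B: the COEFFICIENTS returned by plsregress, X_new: new spectra to predict
X_cal = np.random.randn(20, 100)   # placeholder data
y_cal = np.random.randn(20, 1)
B = np.random.randn(100, 1)        # placeholder for COEFFICIENTS
X_new = np.random.randn(5, 100)

x_mean = X_cal.mean(axis=0)
y_mean = y_cal.mean(axis=0)

X0_new = X_new - x_mean            # center the new spectra with the *calibration* mean
yhat0 = X0_new @ B                 # Yhat0 = X0 B  (de Jong, eqs. 36/37)
yhat = yhat0 + y_mean              # undo the centering of y

# sanity check: (X_cal - x_mean) @ B + y_mean should reproduce FITTED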