I am implementing Algorithm 1 from this paper. The core of the algorithm is a deep learning optimization procedure (the loss is not important). Each of my data points is a matrix of dimension 2 by N, with n such data points. I have two options for the architecture of the Neural Network:
Make the NN a function from R^2, i.e. 2 input neurons. When I apply it to a 2 by N matrix, I simply apply it to each individual column of the matrix. This makes sense from the physical point of view underlying the problem (each data point is a collection of N interacting particles in 2-dimensional space).
Make the NN a function from R^{2 x N}, i.e. 2 x N input neurons. When I apply it to a 2 by N matrix, I flatten the matrix, and apply the NN to the resulting vector. This makes sense from the deep learning point of view, since the data is "really" 2xN-dimensional.
I have implemented the former architecture. But is the second architecture more "correct"? Can I expect better accuracy from the second approach? Mathematically speaking, it can approximate a strictly larger family of functions (functions with interdependence between the columns of the 2xN input matrix).
I know that in image classification an image matrix gets flattened and fed into the NN as a vector. This suggests using the latter approach over the former. However, images lack the physical structure that my problem possesses: the columns of each 2xN matrix are the positions of N physical particles in 2-dimensional space.
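To make the two options concrete, here is a rough PyTorch sketch of what I mean (the hidden sizes, output dimension, and value of N are just placeholders):

import torch
import torch.nn as nn

N = 16  # number of particles per data point (placeholder value)

# Option 1: the NN maps R^2 -> R and is applied to each column separately.
per_particle_net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Option 2: the NN maps R^{2N} -> R and sees the flattened matrix at once.
flattened_net = nn.Sequential(
    nn.Linear(2 * N, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.randn(2, N)                # one data point: a 2 x N matrix

out1 = per_particle_net(x.T)         # shape (N, 1): one output per column/particle
out2 = flattened_net(x.reshape(-1))  # shape (1,): one output for the whole matrix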
I am trying to implement nn.MultiheadAttention in my network. According to the docs,
embed_dim – total dimension of the model.
However, according to the source file,
embed_dim must be divisible by num_heads
and
self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
If I understand properly, this means each head uses only a part of the features of each query, since the projection matrix is square. Is this a bug in the implementation, or is my understanding wrong?
Each head uses a different part of the projected query vector. You can imagine it as if the query gets split into num_heads vectors that are independently used to compute the scaled dot-product attention. So, each head operates on a different linear combination of the features in the queries (and of the keys and values, too). This linear projection is done using the self.q_proj_weight matrix, and the projected queries are passed to the F.multi_head_attention_forward function.
In F.multi_head_attention_forward, it is implemented by reshaping and transposing the query vector, so that the independent attentions for individual heads can be computed efficiently by matrix multiplication.
The attention head sizes are a design decision of PyTorch. In theory, you could have a different head size, so the projection matrix would have a shape of embed_dim × (num_heads * head_dim). Some implementations of transformers (such as the C++-based Marian for machine translation, or Huggingface's Transformers) allow that.
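As a minimal sketch of that splitting step (batch dimension omitted, and the key/value projections, scaling, masking, and output projection left out), assuming embed_dim = 512 and num_heads = 8:

import torch

embed_dim, num_heads = 512, 8
head_dim = embed_dim // num_heads   # embed_dim must be divisible by num_heads

q = torch.randn(10, embed_dim)                     # 10 query positions
q_proj_weight = torch.randn(embed_dim, embed_dim)  # stand-in for self.q_proj_weight

q_proj = q @ q_proj_weight.T                       # projected queries, still (10, embed_dim)

# Split the *projected* queries into num_heads chunks of size head_dim:
# each head attends using a different slice of the projected features.
q_heads = q_proj.reshape(10, num_heads, head_dim).transpose(0, 1)  # (num_heads, 10, head_dim)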
Suppose we have a dataset X (a 2D array), and we divide it into batches X_1, ..., X_k.
Then for each batch we normalize it, and afterwards we multiply the i-th component of each batch element by a parameter gamma_i and add beta_i to it.
A batch normalization layer can be repeated several times in a network, and I haven't found anything about how it is implemented deeper in the network.
In the later BN layers, do we use the same division into batches as at the beginning (i.e. the same rows of X as in the first BN layer), just with new gamma and beta parameters, or do we recompute everything from scratch for every layer's input?
I hope my question is clear.
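To make the transformation I mean explicit, here is a rough NumPy sketch of what I understand a single BN layer to do to one batch X_i (ignoring the running statistics used at test time):

import numpy as np

def batch_norm(X, gamma, beta, eps=1e-5):
    # X: one batch of shape (batch_size, num_features)
    mean = X.mean(axis=0)                    # per-feature mean over the batch
    var = X.var(axis=0)                      # per-feature variance over the batch
    X_hat = (X - mean) / np.sqrt(var + eps)  # normalize each feature
    return gamma * X_hat + beta              # scale by gamma_i, shift by beta_i

X_i = np.random.randn(32, 4)   # one batch with 4 features
gamma = np.ones(4)             # learned parameters, placeholder initial values
beta = np.zeros(4)
out = batch_norm(X_i, gamma, beta)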
I am getting started with deep learning and have a basic question on CNN's.
I understand how gradients are adjusted using backpropagation according to a loss function.
But I thought the values of the convolving filter matrices (in CNNs) need to be determined by us.
I'm using Keras and this is how (from a tutorial) the convolution layer was defined:
from keras.models import Sequential
from keras.layers import Conv2D

classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
Here, 32 filter matrices of dimension 3x3 are used.
But how are the values of these 32 3x3 matrices determined?
It's not the gradients that are adjusted: the gradient calculated with the backpropagation algorithm is just the group of partial derivatives with respect to each weight in the network, and these components are in turn used to adjust the network weights in order to minimize the loss.
Take a look at this introductory guide.
The weights in the convolution layer in your example will be initialized to random values (according to a specific method), and then tweaked during training, using the gradient at each iteration to adjust each individual weight. Same goes for weights in a fully connected layer, or any other layer with weights.
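For example, you can inspect those random initial values in Keras before any training has happened (if I recall the defaults correctly, Conv2D initializes its kernel with Glorot-uniform unless you pass a different kernel_initializer):

from keras.models import Sequential
from keras.layers import Conv2D

classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape=(64, 64, 3), activation='relu'))

# The kernel weights already exist and are random; their shape is
# (3, 3, 3, 32): 3x3 spatial extent, 3 input channels, 32 filters.
weights, biases = classifier.layers[0].get_weights()
print(weights.shape, weights.mean())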
EDIT: I'm adding some more details about the answer above.
Let's say you have a neural network with a single layer, which has some weights W. Now, during the forward pass, you calculate your output yHat for your network, compare it with your expected output y for your training samples, and compute some cost C (for example, using the quadratic cost function).
Now, you're interested in making the network more accurate, i.e. you'd like to minimize C as much as possible. Imagine you want to find the minimum value of a simple function like f(x) = x^2. You can start at some random point (as you did with your network), then compute the slope of the function at that point (i.e. the derivative) and move in the downhill direction, until you reach a minimum value (at least a local minimum).
With a neural network it's the same idea, with the difference that your inputs are fixed (the training samples), and you can see your cost function C as having n variables, where n is the number of weights in your network. To minimize C, you need the slope of the cost function C in each direction (i.e. with respect to each variable, each weight w), and that vector of partial derivatives is the gradient.
Once you have the gradient, the part where you "move a bit following the slope" is the weights update part, where you update each network weight according to its partial derivative (in general, you subtract some learning rate multiplied by the partial derivative with respect to that weight).
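As a toy illustration of that update rule on the f(x) = x^2 example from above (the learning rate and starting point are arbitrary):

learning_rate = 0.1
x = 5.0                            # random starting point

for _ in range(100):
    grad = 2 * x                   # derivative of f(x) = x^2 at the current x
    x = x - learning_rate * grad   # move a bit downhill

print(x)  # very close to 0, the minimum of f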
A trained network is just a network whose weights have been adjusted over many iterations in such a way that the value of the cost function C over the training dataset is as small as possible.
This is the same for a convolutional layer too: you first initialize the weights at random (ie. you place yourself on a random position on the plot for the cost function C), then compute the gradients, then "move downhill", ie. you adjust each weight following the gradient in order to minimize C.
The only difference between a fully connected layer and a convolutional layer is how they calculate their outputs, and how the gradient is in turn computed, but the part where you update each weight with the gradient is the same for every weight in the network.
So, to answer your question, those filters in the convolutional kernels are initially random and are later adjusted with the backpropagation algorithm, as described above.
Hope this helps!
Sergio0694 states, "The weights in the convolution layer in your example will be initialized to random values". So if they are random and, say, I want 10 filters, every run of the algorithm could find different filters. Also, say I have the MNIST data set. Digits are formed of edges and curves. Is it guaranteed that there will be an edge filter or a curve filter among those 10?
I mean, are the first 10 filters the most meaningful, most distinctive filters we can find?
Best
As I understand it, the simple word2vec approach uses two matrices, like the following:
Assume the vocabulary consists of N words.
Weighted input matrix (WI) with dimensions NxF (F is the number of features).
Weighted output matrix (WO) with dimensions FxN.
We multiply a one-hot vector 1xN with WI and get a hidden-layer vector 1xF.
Then we multiply that vector with WO and get an output vector 1xN.
We apply the softmax function and choose the highest entry (probability) in the vector.
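In code, my understanding of this plain-softmax forward pass is roughly the following (N and F are small placeholder values):

import numpy as np

N, F = 10, 4                       # vocabulary size and number of features
WI = np.random.randn(N, F) * 0.01  # weighted input matrix, N x F
WO = np.random.randn(F, N) * 0.01  # weighted output matrix, F x N

x = np.zeros(N)
x[3] = 1.0                         # one-hot vector for the input word

h = x @ WI                         # hidden layer, length F
scores = h @ WO                    # output scores, length N
probs = np.exp(scores) / np.exp(scores).sum()   # softmax
predicted = probs.argmax()         # index of the highest-probability word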
Question: how is this illustrated when using the Hierarchical Softmax model?
What gets multiplied with which matrix to obtain the two-dimensional vector (left/right probabilities) that decides whether to branch left or right?
P.S. I do understand the idea of the Hierarchical Softmax model using a binary tree and so on, but I don't know how the multiplications are done mathematically.
Thanks
To make things easy, assume that N is a power of 2. The binary tree will then have N-1 inner nodes. These nodes connect to WO, which now has dimensions Fx(N-1).
Once you have computed a value for each inner node, use something like a sigmoid function to assign a probability to (say) the left branch. The probability of the right branch is just 1 minus that of the left.
To predict, find the maximum probability path starting from the root to a leaf.
To train, identify the correct leaf and identify the path of inner nodes to the root. Backpropagate starting with those log(N) nodes.
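A rough sketch of how those multiplications could look for a single root-to-leaf path (the tree layout, node indices, and sign convention are placeholders, not the exact word2vec code):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N, F = 8, 4                            # N a power of 2, so there are N - 1 inner nodes
WI = np.random.randn(N, F) * 0.01      # input matrix, N x F
WO = np.random.randn(F, N - 1) * 0.01  # one column of WO per inner node, F x (N - 1)

h = WI[3]                              # hidden vector for the input word, length F

# Path from the root to one leaf: (inner node index, 1 if we branch left else 0).
path = [(0, 1), (1, 0), (4, 1)]        # log2(N) = 3 inner nodes; indices are made up

prob = 1.0
for node, go_left in path:
    p_left = sigmoid(h @ WO[:, node])  # left-branch probability at this inner node
    prob *= p_left if go_left else (1.0 - p_left)

print(prob)   # probability the model assigns to the word at that leaf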
Link to paper
I'm trying to understand the region proposal network in Faster R-CNN. I understand what it's doing, but I still don't understand how exactly the training works, especially the details.
Let's assume we're using VGG16's last conv layer, with shape 14x14x512 (before max-pooling and with 228x228 images), and k = 9 different anchors. At inference time I want to predict 9*2 class labels and 9*4 bounding-box coordinates. My intermediate layer is a 512-dimensional vector.
(The figure shows 256 because it is for the ZF network.)
In the paper they write:
"we randomly sample 256 anchors in an image to compute the loss function of a mini-batch, where the sampled positive and negative anchors have a ratio of up to 1:1"
That's the part I'm not sure about. Does this mean that for each of the 9 (= k) anchor types, the corresponding classifier and regressor are trained with mini-batches that only contain positive and negative anchors of that type?
So that I basically train k different networks with shared weights in the intermediate layer? Each mini-batch would then consist of training data x = the 3x3x512 sliding window of the conv feature map and y = the ground truth for that specific anchor type.
And at inference time I put them all together.
I appreciate your help.
Not exactly. From what I understand, the RPN predicts W*H*k bounding boxes per feature map, and then 256 of them are randomly sampled according to the 1:1 criterion and used in the loss function of that particular mini-batch. You're still only training one network, not k, since the 256 random samples are not restricted to any particular anchor type.
Disclaimer: I only started learning about CNNs a month ago, so I may not understand what I think I understand.
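To make that sampling step concrete, here is a rough sketch as I understand it (the anchor labels are made up; the point is only that the 256 samples mix all anchor types and respect an up-to-1:1 positive/negative ratio):

import numpy as np

num_anchors = 14 * 14 * 9    # W * H * k anchors for the whole feature map

# Placeholder labels: 1 = positive anchor, 0 = negative, -1 = ignored.
labels = np.random.choice([1, 0, -1], size=num_anchors)

batch_size = 256
pos_idx = np.flatnonzero(labels == 1)
neg_idx = np.flatnonzero(labels == 0)

# Take up to half the mini-batch from positives, fill the rest with negatives.
n_pos = min(len(pos_idx), batch_size // 2)
n_neg = batch_size - n_pos

sampled = np.concatenate([
    np.random.choice(pos_idx, n_pos, replace=False),
    np.random.choice(neg_idx, n_neg, replace=False),
])
# `sampled` indexes the anchors (of all types, mixed together) that enter the loss.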