How to normalise/standardise multiple training images for CNN?

I'm building an image translation network that takes a single image and outputs multiple translated images. The training data would therefore be pairs (x, Y), where x, of shape (3, H, W), is the input image and Y, of shape (3*N, H, W), is the channel-concatenated tensor of the N target images y.
I'm wondering how to normalise the data. Should I normalise x and each y individually, or should I use the overall mean/standard deviation to normalise them collectively?
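To make the two options concrete, here is a minimal sketch of what each would look like; the tensor sizes, N = 5, and the use of per-tensor global statistics are illustrative assumptions only, not a recommendation of either option.
import torch

x = torch.randn(3, 256, 256)        # input image
Y = torch.randn(3 * 5, 256, 256)    # N = 5 target images, channel-concatenated

# Option A: normalise x and each y with its own statistics
x_a = (x - x.mean()) / x.std()
ys = Y.view(5, 3, 256, 256)
ys_a = (ys - ys.mean(dim=(1, 2, 3), keepdim=True)) / ys.std(dim=(1, 2, 3), keepdim=True)

# Option B: one shared mean/std computed over x and all ys together
joint = torch.cat([x, Y], dim=0)
mu, sigma = joint.mean(), joint.std()
x_b, Y_b = (x - mu) / sigma, (Y - mu) / sigma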

Related

What should be the input to torch.nn.MultiheadAttention if I have an RGB image?

I have a PyTorch Tensor that's a batch of B images of dimensions 3xHxW. So the Tensor's shape is (B, 3, H, W).
I would like to reshape this tensor to be an input to the nn.MultiheadAttention module from the torch library.
In the official documentation for torch.nn.MultiheadAttention, the input and output tensors' shapes are determined according to batch_first:
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).
What do seq and feature mean exactly here? And how can I get them from my image?
(This will also help me determine the parameters of nn.MultiheadAttention: embed_dim and num_heads.)
This is my current initialization:
self.attention = torch.nn.MultiheadAttention(embed_dim=256 * 4, num_heads=4)
And in my forward function:
x = self.attention(x, x, x)
What should I reshape x to?
For each image, extract image patches and flatten them. Your batch size is the number of images, your sequence length is the number of patches per image, and your feature size is the length of a flattened patch.
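Below is a minimal sketch of that recipe, assuming 64x64 RGB images and a patch size of 16 (both purely illustrative choices, as is the head count), with batch_first left at its default of False:
import torch

B, C, H, W = 4, 3, 64, 64
P = 16                                               # patch size (illustrative)
x = torch.randn(B, C, H, W)

# (B, C, H, W) -> (B, num_patches, C*P*P): one flattened patch per row
patches = x.unfold(2, P, P).unfold(3, P, P)          # (B, C, H/P, W/P, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * P * P)

# with batch_first=False (the default), the module expects (seq, batch, feature)
seq = patches.transpose(0, 1)                        # (num_patches, B, C*P*P)
attention = torch.nn.MultiheadAttention(embed_dim=C * P * P, num_heads=4)
out, _ = attention(seq, seq, seq)
print(out.shape)                                     # torch.Size([16, 4, 768])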

Questions about convolution (in CNN)

I suddenly came up with a question about convolution and just wanted to check whether I'm missing something. The question is whether the two operations below are identical.
Case1)
Suppose we have a feature map of shape C^2 x H x W and a K x K x C^2 conv weight with stride S. (To be clear, C^2 is the channel dimension; I just wanted to make it a square number. K is the kernel size.)
Case2)
Suppose we have a feature map of shape 1 x CH x CW and a CK x CK x 1 conv weight with stride CS.
So basically, Case 2 is a pixel-upshuffled version of Case 1 (both the feature map and the conv weight). Since a convolution is just element-wise multiplications and sums, both operations seem identical to me.
# given a feature map and a conv weight, namely f_map, conv_weight
# case 1)
convLayer = Conv(conv_weight)                     # K x K kernel, C^2 input channels
result = convLayer(f_map, stride=S)
# case 2)
f_map = pixelshuffle(f_map, scale=C)              # 1 x CH x CW
conv_weight = pixelshuffle(conv_weight, scale=C)  # CK x CK x 1
convLayer = Conv(conv_weight)                     # rebuild the layer with the shuffled weight
result = convLayer(f_map, stride=C * S)
But this would mean that, for example, applying a 3x3 conv to a 256 x H x W feature map (as in many deep learning models) is the same as applying a huge 48x48 conv to a 1 x 16H x 16W feature map.
But this doesn't match my basic intuition about CNNs: stacking many layers of small 3x3 convs to obtain a reasonably large receptive field, with each channel carrying different (possibly redundant) information.
You can, in a sense, think of this as "folding" spatial information into the channel dimension. This is the rationale behind ResNet's trade-off between spatial resolution and feature dimension: whenever ResNet downsamples by 2 in space, it doubles the number of feature channels. However, since there are two spatial dimensions and both are downsampled by 2, the "volume" of the feature map is still effectively halved.
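As a shape-level illustration of that "folding" (a sketch only; it makes no claim that the two convolutions in the question are numerically identical, and it assumes a PyTorch version that provides F.pixel_unshuffle):
import torch
import torch.nn.functional as F

C, H, W, r = 4, 8, 8, 2
f_map = torch.randn(1, C, H, W)

folded = F.pixel_unshuffle(f_map, downscale_factor=r)   # (1, C*r*r, H/r, W/r)
unfolded = F.pixel_shuffle(folded, upscale_factor=r)    # back to (1, C, H, W)

print(folded.shape)                  # torch.Size([1, 16, 4, 4])
print(torch.equal(unfolded, f_map))  # True: it is just a rearrangement of elements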

Why do we do batch matrix-matrix product?

I'm following the PyTorch seq2seq tutorial, and the torch.bmm method is used like below:
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                         encoder_outputs.unsqueeze(0))
I understand why we need to multiply attention weight and encoder outputs.
What I don't quite understand is the reason why we need bmm method here.
The torch.bmm documentation says:
Performs a batch matrix-matrix product of matrices stored in batch1 and batch2.
batch1 and batch2 must be 3-D tensors each containing the same number of matrices.
If batch1 is a (b×n×m) tensor, batch2 is a (b×m×p) tensor, out will be a (b×n×p) tensor.
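A quick way to see that documented shape rule concretely (arbitrary sizes, purely illustrative):
import torch

batch1 = torch.randn(4, 2, 3)        # (b, n, m)
batch2 = torch.randn(4, 3, 5)        # (b, m, p)
out = torch.bmm(batch1, batch2)
print(out.shape)                     # torch.Size([4, 2, 5]) == (b, n, p)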
In the seq2seq model, the encoder encodes the input sequences given in as mini-batches. Say for example, the input is B x S x d where B is the batch size, S is the maximum sequence length and d is the word embedding dimension. Then the encoder's output is B x S x h where h is the hidden state size of the encoder (which is an RNN).
Now while decoding (during training)
the input sequences are given one at a time, so the input is B x 1 x d and the decoder produces a tensor of shape B x 1 x h. Now to compute the context vector, we need to compare this decoder hidden state with the encoder's encoded states.
So, consider you have two tensors of shapes T1 = B x S x h and T2 = B x 1 x h. Then you can do batch matrix multiplication as follows.
out = torch.bmm(T1, T2.transpose(1, 2))
Essentially you are multiplying a tensor of shape B x S x h with a tensor of shape B x h x 1 and it will result in B x S x 1 which is the attention weight for each batch.
Here, the attention weights B x S x 1 represent a similarity score between the decoder's current hidden state and all of the encoder's hidden states. Now you can multiply the attention weights with the encoder's hidden states B x S x h (transposing the latter first), which results in a tensor of shape B x h x 1. If you squeeze at dim=2, you get a tensor of shape B x h, which is your context vector.
This context vector (B x h) is usually concatenated with the decoder's hidden state (B x 1 x h, squeezed at dim=1) to predict the next token.
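Here is a minimal sketch of those shape manipulations, with random tensors standing in for real encoder/decoder states (B, S, h are arbitrary, and the softmax is just the usual normalisation step, not spelled out above):
import torch

B, S, h = 2, 5, 8
encoder_states = torch.randn(B, S, h)     # T1: all encoder hidden states
decoder_state = torch.randn(B, 1, h)      # T2: current decoder hidden state

# similarity of the decoder state to every encoder state: (B, S, 1)
attn_weights = torch.bmm(encoder_states, decoder_state.transpose(1, 2))
attn_weights = torch.softmax(attn_weights, dim=1)    # normalise over the S positions

# weighted sum of encoder states -> context vector
context = torch.bmm(encoder_states.transpose(1, 2), attn_weights)  # (B, h, 1)
context = context.squeeze(2)                                       # (B, h)
print(context.shape)                      # torch.Size([2, 8])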
The operations described above happen on the decoder side of the seq2seq model, meaning that the encoder outputs are already batched (with mini-batch-size samples). Consequently, the attn_weights tensor should also be in batch form.
Thus, in essence, the first dimension (the zeroth axis in NumPy terminology) of the tensors attn_weights and encoder_outputs is the number of samples in the mini-batch, which is why we need torch.bmm on these two tensors.
While @wasiahmad is right about the general implementation of seq2seq, in the mentioned tutorial there is no batching (B = 1), and the bmm is over-engineered: it can be safely replaced with matmul with exactly the same model quality and performance. See for yourself; replace this:
attn_applied = torch.bmm(attn_weights.unsqueeze(0),
                         encoder_outputs.unsqueeze(0))
output = torch.cat((embedded[0], attn_applied[0]), 1)
with this:
attn_applied = torch.matmul(attn_weights,
                            encoder_outputs)
output = torch.cat((embedded[0], attn_applied), 1)
and run the notebook.
Also, note that while @wasiahmad describes the encoder input as B x S x d, in PyTorch 1.7.0 the GRU, which is the main engine of the encoder, expects an input of shape (seq_len, batch, input_size) by default. If you want to work with @wasiahmad's format, pass the batch_first=True flag.
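A tiny illustration of the two layouts (the sizes are made up, not taken from the tutorial):
import torch

gru = torch.nn.GRU(input_size=10, hidden_size=16)            # default layout: (seq_len, batch, input_size)
out, h = gru(torch.randn(7, 3, 10))                          # seq_len=7, batch=3
print(out.shape)                                             # torch.Size([7, 3, 16])

gru_bf = torch.nn.GRU(input_size=10, hidden_size=16, batch_first=True)
out_bf, h_bf = gru_bf(torch.randn(3, 7, 10))                 # batch=3, seq_len=7
print(out_bf.shape)                                          # torch.Size([3, 7, 16])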

How can I get D averages from a HxWxD tensor

How can I create a graph element in deeplearn.js which turns my [h, w, d] shape tensor into one of shape [d], where each element is the max of that layer? If h and w are the same, this can be done with the maxpool function. I'd like the same for the mean. The mean can be achieved using conv2d, but only if w and h are equal.
I need this in a graph so I can apply training.
You can do dl.mean(your_tensor, [0, 1]) or your_tensor.mean([0, 1]) to get the mean along the h and w dimensions. Either one will return a tensor with shape [d]. This also works in training because deeplearn.js has moved to an eager execution mode and a gradient is defined for the mean reduction op. You can see the mnist_eager demo for an example of training without a Graph.

Why are my Keras Conv2D kernels 3-dimensional?

In a typical CNN, a conv layer will have Y filters of size NxM, and thus it has N x M x Y trainable parameters (not including bias).
Accordingly, in the following simple keras model, I expect the second conv layer to have 16 kernels of size (7x7), and thus kernel weights of size (7x7x16). Why then are its weights actually size (7x7x8x16)?
I understand the mechanics of what is happening: the Conv2D layers are actually doing a 3D convolution, treating the output maps of the previous layer as channels. It has 16 3D kernels of size (7x7x8). What I don't understand is:
why is this Keras's default behavior?
how do I get a "traditional" convolutional layer without dropping down to the low-level API (avoiding that is my reason for using Keras in the first place)?
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((101, 101, 1)),
    Conv2D(8, (11, 11)),
    Conv2D(16, (7, 7)),
])
model.weights
Q1: "...and thus kernel weights of size (7x7x16). Why then are its weights actually size (7x7x8x16)?"
No, the kernel weights are not of size (7x7x16).
From cs231n:
Example 2. Suppose an input volume had size [16x16x20]. Then using an example receptive field size of 3x3, every neuron in the Conv Layer would now have a total of 3*3*20 = 180 connections to the input volume. Notice that, again, the connectivity is local in space (e.g. 3x3), but full along the input depth (20).
Pay attention to the word 'every'.
In your model, 7x7 is the size of a single filter, and each filter connects to every channel of the previous conv layer's output, so a single filter has 7x7x8 parameters. You have 16 such filters, so the total number of parameters is 7x7x8x16.
Q2: "Why is this Keras's default behavior?"
See Q1.
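As a quick sanity check, printing the weight shapes of the model from the question shows the extra depth dimension directly (this assumes a TF2-era Keras where model.weights exposes variables with .name and .shape; the exact variable names may differ):
for w in model.weights:
    print(w.name, tuple(w.shape))
# conv2d/kernel:0    (11, 11, 1, 8)   -> 11*11*1*8  =  968 kernel parameters
# conv2d/bias:0      (8,)
# conv2d_1/kernel:0  (7, 7, 8, 16)    -> 7*7*8*16   = 6272 kernel parameters
# conv2d_1/bias:0    (16,)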
In the typical jargon, when someone refers to a conv layer with N kernels of size (x, y), it is implied that the kernels actually have size (x, y, z), where z is the depth of the input volume to that layer.
Imagine what happens when the input image to the network has R, G, and B channels: each of the initial kernels itself has 3 channels. Subsequent layers are the same, treating the input volume as a multi-channel image, where the channels are now maps of some other feature.
The motion of that 3D kernel as it "sweeps" across the input is only 2D, so it is still referred to as a 2D convolution, and the output of that convolution is a 2D feature map.
Edit:
I found a good quote about this in a recent paper: https://arxiv.org/pdf/1809.02601v1.pdf
"In a convolutional layer, the input feature map X is a W1 × H1 × D1 cube, with W1, H1 and D1 indicating its width, height and depth (also referred to as the number of channels), respectively. The output feature map, similarly, is a cube Z with W2 × H2 × D2 entries. The convolution Z = f(X) is parameterized by D2 convolutional kernels, each of which is a S × S × D1 cube."