Interconnection between two Convolutional Layers - deep-learning

I have a question about the interconnection between two convolutional layers in a CNN. For example, suppose I have an architecture like this:
input: 28 x 28
conv1: 3 x 3 filter, no. of filters: 16
conv2: 3 x 3 filter, no. of filters: 32
After conv1 we get an output of 16 x 28 x 28, assuming the spatial dimensions are not reduced, so we have 16 feature maps. In the next layer, each feature map is connected to the next layer: if we consider each feature map (28 x 28) as a unit, then each unit is connected to all 32 filters, giving a total of (3 x 3 x 16) x 32 parameters. How are these two layers stacked or interconnected? In the case of an artificial neural network we have weights between two layers. Is there something like this in a CNN as well? How is the output of one convolutional layer fed to the next convolutional layer?

The number of parameters of a convolutional layer with n filters of size k×k that comes after f input feature maps is
n ⋅ (f ⋅ k ⋅ k + 1)
where the +1 comes from the bias.
Hence each of the n filters is not of shape k×k×1 but of shape k×k×f.
How is the output of one convolutional layer fed to the next convolutional layer?
Just like the input is fed to the first convolutional layer. There is no difference (except the number of feature maps).
Convolution on one input feature map
[Animation: a single kernel sliding over one input feature map. Image source: https://github.com/vdumoulin/conv_arithmetic]
Multiple input feature maps
It works the same way (see the sketch below):
The filter has the same depth as the input; before it was 1, now it is larger.
You still slide the filter over all (x, y) positions, and each position gives one output value.
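To make "local in space, full along the depth" concrete, here is a minimal NumPy sketch (the array names are mine, not from the question) that computes what one of conv2's filters produces at a single (x, y) position:
import numpy as np

f, k = 16, 3                        # input feature maps, kernel size
fmaps = np.random.randn(f, 28, 28)  # output of conv1: 16 feature maps of 28 x 28
w = np.random.randn(f, k, k)        # ONE of conv2's filters: covers all 16 input maps, not just one
b = 0.1                             # its bias

y, x = 10, 10                       # one spatial position
patch = fmaps[:, y:y+k, x:x+k]      # shape (16, 3, 3): local in space, full along depth
out_yx = np.sum(patch * w) + b      # a single number; conv2's 32 filters give 32 such maps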
Your example
First conv layer: 160 = 16*(3*3+1)
Second conv layer: 4640 = 32*(16*3*3+1)
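You can verify these numbers with a small Keras sketch (padding='same' is my assumption, matching the question's "dimension of image is not reduced"):
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((28, 28, 1)),
    Conv2D(16, (3, 3), padding='same'),  # 16 * (3*3*1  + 1) = 160 parameters
    Conv2D(32, (3, 3), padding='same')   # 32 * (3*3*16 + 1) = 4640 parameters
])
model.summary()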

Related

Questions about convolution (in CNN)

I suddenly came up with a question about convolution and just want to check whether I'm missing something. The question is whether the two operations below are identical.
Case 1)
Suppose we have a feature map of shape C^2 x H x W, and a K x K x C^2 conv weight with stride S. (To be clear, C^2 is the channel dimension; I just wanted to make it a square number. K is the kernel size.)
Case 2)
Suppose we have a feature map of shape 1 x CH x CW, and a CK x CK x 1 conv weight with stride CS.
So, basically, Case 2 is a pixel-upshuffled version of Case 1 (both the feature map and the conv weight). Since a convolution is just element-wise multiplication followed by a sum, both operations seem identical to me.
# given a feature map and a conv weight, namely f_map, conv_weight
# case 1)
convLayer = Conv(conv_weight)
result1 = convLayer(f_map, stride=S)
# case 2) pixel-shuffle both the feature map and the conv weight
f_map2 = pixelshuffle(f_map, scale=C)              # 1 x CH x CW
conv_weight2 = pixelshuffle(conv_weight, scale=C)  # CK x CK x 1
convLayer2 = Conv(conv_weight2)
result2 = convLayer2(f_map2, stride=C*S)
But this would mean that, for example, given a 256 x H x W feature map with a 3x3 conv (as in many deep learning models), performing the convolution is simply applying a huge 48x48 conv to a 1 x 16H x 16W feature map.
That doesn't match my basic intuition about CNNs: stacking many layers with small 3x3 convs to build up a large receptive field, with each channel carrying different (possibly redundant) information.
You can, in a sense, think of "folding" spatial information into the channel dimension. This is the rationale behind ResNet's trade-off between spatial resolution and feature dimension: whenever ResNet downsamples by 2 in space, it doubles the number of channels. However, since there are two spatial dimensions and both are downsampled by 2, the "volume" of the feature map is effectively halved.
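As a rough illustration of that folding (a sketch of the general idea, not the asker's exact setup; it assumes PyTorch's pixel_unshuffle, available since PyTorch 1.8):
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 32, 32)                 # 4 channels x 32 x 32 -> "volume" 4096
y = F.pixel_unshuffle(x, downscale_factor=2)  # fold each 2x2 spatial block into channels
print(y.shape)                                # torch.Size([1, 16, 16, 16]) -> same volume

# A ResNet-style stride-2 stage instead halves the volume:
# space shrinks by 4 (2x in each of two dimensions) while channels only double.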

2D convolution along three orthogonal axes for a 3D volumetric image

Since 3D convolution is computationally too expensive, I prefer to use 2D convs. My motivation is to use 2D convs on volumetric images to reduce this cost.
I want to apply 2D convolutions along the three orthogonal planes and get 3 results, each belonging to one of these planes. More clearly: suppose I have a 3D volumetric image. Instead of applying a 3D conv, I want to apply 2D convs in the xy, xz, and yz planes, and I expect 3 different volumetric results, one per plane.
Is there a way to do that? Thanks for the help.
You can permute your images. (Some frameworks, such as NumPy, call it transpose.)
Assume we use a 3 x 3 convolutional kernel.
import torch

# A batch of 16 three-channel images (channels first)
a = torch.randn(16, 3, 1920, 1080)

# 2D conv slides over a `1920 x 1080` image; each kernel is `3 x 3 x 3`
print(a.shape)      # torch.Size([16, 3, 1920, 1080])

# 2D conv slides over a `3 x 1080` image; each kernel is `1920 x 3 x 3`
b = a.permute(0, 2, 1, 3)
print(b.shape)      # torch.Size([16, 1920, 3, 1080])

# 2D conv slides over a `1920 x 3` image; each kernel is `1080 x 3 x 3`
c = a.permute(0, 3, 2, 1)
print(c.shape)      # torch.Size([16, 1080, 1920, 3])
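Putting it together for an actual volume, a minimal PyTorch sketch (the shapes, padding, and layer names here are my own assumptions) runs one Conv2d per orientation on a permuted view:
import torch
import torch.nn as nn

vol = torch.randn(1, 64, 128, 128)   # one volume as (N, D, H, W)

conv_xy = nn.Conv2d(64, 64, kernel_size=3, padding=1)    # slides over H x W
conv_xz = nn.Conv2d(128, 128, kernel_size=3, padding=1)  # slides over D x W
conv_yz = nn.Conv2d(128, 128, kernel_size=3, padding=1)  # slides over D x H

out_xy = conv_xy(vol)                      # (1, 64, 128, 128)
out_xz = conv_xz(vol.permute(0, 2, 1, 3))  # (1, 128, 64, 128)
out_yz = conv_yz(vol.permute(0, 3, 1, 2))  # (1, 128, 64, 128)
# permute each result back to (N, D, H, W) to get three volumetric outputs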

Question on the kernel dimensions for convolutions on mel filter bank features

I am currently trying to understand the following paper: https://arxiv.org/pdf/1703.08581.pdf. I am struggling to understand a part about how a convolution is performed on an input of log mel filterbank features:
We train seq2seq models for both end-to-end speech translation, and a baseline model for speech recognition. We found that the same architecture, a variation of that from [10], works well for both tasks. We use 80 channel log mel filterbank features extracted from 25ms windows with a hop size of 10ms, stacked with delta and delta-delta features. The output softmax of all models predicts one of 90 symbols, described in detail in Section 4, that includes English and Spanish lowercase letters.
The encoder is composed of a total of 8 layers. The input features are organized as a T × 80 × 3 tensor, i.e. raw features, deltas, and delta-deltas are concatenated along the 'depth' dimension. This is passed into a stack of two convolutional layers with ReLU activations, each consisting of 32 kernels with shape 3 × 3 × depth in time × frequency. These are both strided by 2 × 2, downsampling the sequence in time by a total factor of 4, decreasing the computation performed in the following layers. Batch normalization [26] is applied after each layer.
As I understand it, the input to the convolutional layer is 3-dimensional: number of 25 ms windows (T) x 80 (features per window) x 3 (features, delta features and delta-delta features). However, the kernels used on those inputs seem to have 4 dimensions, and I do not understand why. Wouldn't a 4-dimensional kernel need a 4-dimensional input? In my head, the input has the same dimensions as an RGB picture: width (time) x height (frequency) x color channels (features, delta features and delta-delta features). Therefore I would think of a kernel for a 2D convolution as a filter of size a (filter width) x b (filter height) x 3 (depth of the input). Am I missing something here? What is wrong about my idea, or what is done differently in this paper?
Thanks in advance for your answer!
I figured it out; it turns out it was just a misunderstanding on my side: the authors are using 32 kernels of spatial shape 3x3 (each spanning the full input depth), which results, after two layers with 2x2 striding, in an output of shape T/4 x 20 x 32, where T is the time dimension.
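A quick way to sanity-check that shape is to build just the two strided conv layers from the quote in Keras (the value of T and the 'same' padding are my assumptions, not the paper's code):
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

T = 100  # an arbitrary number of frames
model = Sequential([
    InputLayer((T, 80, 3)),  # time x frequency x (raw, delta, delta-delta)
    Conv2D(32, (3, 3), strides=(2, 2), padding='same', activation='relu'),
    Conv2D(32, (3, 3), strides=(2, 2), padding='same', activation='relu')
])
model.summary()  # final feature map: (T/4, 20, 32)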

Find how many parameters I have to train

Suppose you have a 10x10x3 colour image input and you want to stack two convolutional layers with kernel size 3x3 with 10 and 20 filters respectively.
How many parameters do you have to train for these two layers?
Don't forget bias terms!
I've tried (3*3*3+1) * (10+20) but it's apparently not right.
How to calculate the number of parameters in the CNN?
For each layer do:
n: kernel width
m: kernel length
l: no. input feature maps
k: no. output feature maps
no. parameters = (n*m*l+1)*k
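Applied to the example above: the first layer sees l = 3 input maps, so it has (3*3*3+1)*10 = 280 parameters; the second layer sees l = 10 input maps, so it has (3*3*10+1)*20 = 1820 parameters, i.e. 2100 in total. The attempt (3*3*3+1)*(10+20) fails because the second layer does not see 3 input maps. A minimal Keras sketch to confirm:
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((10, 10, 3)),
    Conv2D(10, (3, 3)),  # (3*3*3  + 1) * 10 = 280 parameters
    Conv2D(20, (3, 3))   # (3*3*10 + 1) * 20 = 1820 parameters
])
model.summary()          # 2100 trainable parameters in total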

Why are my Keras Conv2D kernels 3-dimensional?

In a typical CNN, a conv layer will have Y filters of size NxM, and thus it has N x M x Y trainable parameters (not including bias).
Accordingly, in the following simple Keras model, I expect the second conv layer to have 16 kernels of size (7x7), and thus kernel weights of size (7x7x16). Why then are its weights actually of size (7x7x8x16)?
I understand the mechanics of what is happening: the Conv2D layers are actually doing a 3D convolution, treating the output maps of the previous layer as channels. It has 16 3D kernels of size (7x7x8). What I don't understand is:
why is this Keras's default behavior?
how do I get a "traditional" convolutional layer without dropping down to the low-level API (avoiding that is my reason for using Keras in the first place)?
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((101, 101, 1)),
    Conv2D(8, (11, 11)),
    Conv2D(16, (7, 7))
])
model.weights
Q1: "...and thus kernel weights of size (7x7x16). Why then are its weights actually size (7x7x8x16)?"
No, the kernel weights are not of size (7x7x16).
From cs231n:
Example 2. Suppose an input volume had size [16x16x20]. Then using an example receptive field size of 3x3, every neuron in the Conv Layer would now have a total of 3*3*20 = 180 connections to the input volume. Notice that, again, the connectivity is local in space (e.g. 3x3), but full along the input depth (20).
Note the word 'every'.
In your model, 7x7 is the spatial size of a single filter, and that filter connects to all 8 feature maps of the previous conv layer, so a single filter has 7x7x8 parameters. You have 16 of them, so the total is 7x7x8x16.
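You can see this directly by printing the weight shapes of the model from the question (the exact variable names may differ by Keras version; the shapes are the point):
from keras.models import Sequential
from keras.layers import InputLayer, Conv2D

model = Sequential([
    InputLayer((101, 101, 1)),
    Conv2D(8, (11, 11)),
    Conv2D(16, (7, 7))
])
for w in model.weights:
    print(w.name, w.shape)
# conv2d/kernel    (11, 11, 1, 8)   <- depth 1 (grayscale input), 8 filters
# conv2d/bias      (8,)
# conv2d_1/kernel  (7, 7, 8, 16)    <- depth 8 (previous layer's maps), 16 filters
# conv2d_1/bias    (16,)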
Q2: "Why is this Keras's default behavior?"
See Q1.
In the typical jargon, when someone refers to a conv layer with N kernels of size (x, y), it is implied that the kernels actually have size (x, y, z), where z is the depth of the input volume to that layer.
Imagine what happens when the input image to the network has R, G, and B channels: each of the initial kernels itself has 3 channels. Subsequent layers are the same, treating the input volume as a multi-channel image, where the channels are now maps of some other feature.
The motion of that 3D kernel as it "sweeps" across the input is only 2D, so it is still referred to as a 2D convolution, and the output of that convolution is a 2D feature map.
Edit:
I found a good quote about this in a recent paper, https://arxiv.org/pdf/1809.02601v1.pdf
"In a convolutional layer, the input feature map X is a W1 × H1 × D1 cube, with W1, H1 and D1 indicating its width, height and depth (also referred to as the number of channels), respectively. The output feature map, similarly, is a cube Z with W2 × H2 × D2 entries. The convolution Z = f(X) is parameterized by D2 convolutional kernels, each of which is a S × S × D1 cube."