Suppose I have a neural network with the following structure:
Input layer: 10 neurons
Hidden layer 1: 20 neurons with ReLU activation
Batch normalization
Hidden layer 2: 30 neurons with ReLU activation
Batch normalization
Hidden layer 3: 40 neurons with ReLU activation
Batch normalization
Output layer: 4 neurons with a logistic (sigmoid) activation
How do I calculate the big-O complexity of online training for this network? Training is assumed to use backpropagation.
Thank you for your help.
O(2^n) to converge to the optimal solution.
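Separately from convergence, the cost of a single online update (one forward plus one backward pass) can be counted directly from the layer sizes in the question; a rough sketch of the arithmetic:

# Multiply-accumulates per training example for the 10 -> 20 -> 30 -> 40 -> 4
# network above; the weight matrices dominate, while biases and batch
# normalization only add lower-order terms.
sizes = [10, 20, 30, 40, 4]
macs = sum(a * b for a, b in zip(sizes, sizes[1:]))
print(macs)  # 10*20 + 20*30 + 30*40 + 40*4 = 2160

# One forward pass is O(sum of n_i * n_{i+1}); backpropagation costs a constant
# factor more, so a single online update is also O(sum of n_i * n_{i+1}).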
I am trying to reproduce a neural network trained to detect whether a 0-3 digit is present in an image alongside another, confounding image. The paper I am following describes the architecture as follows:
A neural network with 28×56 input neurons and one output neuron is
trained on this task. The input values are coded between −0.5 (black)
and +1.5 (white). The neural network is composed of a first detection
pooling layer with 400 detection neurons sum-pooled into 100 units
(i.e. we sum-pool non-overlapping groups of 4 detection units). A
second detection-pooling layer with 400 detection neurons is applied
to the 100-dimensional output of the previous layer, and activities
are sum-pooled onto a single unit representing the deep network
output. Positive examples (0-3 digit in the image) are assigned target
value 100 and negative examples are assigned target value 0. The
neural network is trained to minimize the mean-square error between
the target values and its output.
My main doubt is what they mean by "detection neurons" in this context: whether they mean filters or single standard ReLU neurons. Also, if they mean filters, how could they be applied in the second layer to a 100-dimensional output when they are designed to operate on 2×2 matrices?
Reference:
Montavon, G., Bach, S., Binder, A., Samek, W., & Müller, K. (2015).
Explaining NonLinear Classification Decisions with Deep Taylor
Decomposition. arXiv. https://doi.org/10.1016/j.patcog.2016.11.008.
Specifically section 4.C
Thanks a lot for the help!
My best guess at this is something like (code not tested - just rough PyTorch):
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Sequential(
            nn.Flatten(),             # Flatten each 28x56 image row-wise into a 1D vector.
            nn.Linear(28 * 56, 400),  # First "detection" layer with 400 outputs.
            nn.AvgPool1d(4, 4),       # Pool non-overlapping groups of 4 down to 100 outputs
                                      # (average pooling, i.e. sum pooling up to a factor of 4).
        )
        self.layer2 = nn.Sequential(
            nn.Linear(100, 400),      # Second "detection" layer with 400 outputs.
            nn.AdaptiveAvgPool1d(1),  # Pool down to a single output.
        )

    def forward(self, x):
        return self.layer2(self.layer1(x))
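A quick sanity check of the shapes with dummy data (again untested, just a sketch):

import torch
# A batch of two random 28x56 "images"; the model should map them to shape (2, 1).
x = torch.randn(2, 28, 56)
model = Model()
print(model(x).shape)  # expected: torch.Size([2, 1])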
But overall I would agree with the commenter on your post that there are some issues with the wording here.
I have two groups of images (cracked and uncracked concrete), so this is a binary classification problem. I am classifying them using VGG19.
When I used one neuron in the output layer with softmax, the accuracy stayed fixed at 0.5 for 250 epochs, while with two neurons and softmax the accuracy increased above 0.9.
So should I use one or two output neurons for VGG19 with binary classification?
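For concreteness, here is a rough Keras sketch of the two output-head setups being compared (the input size, optimizer, and loss choices are assumptions, not my exact code):

import tensorflow as tf
from tensorflow.keras.applications import VGG19

def build_model(num_outputs, activation, loss):
    # VGG19 backbone with a small classification head on top.
    base = VGG19(weights='imagenet', include_top=False,
                 pooling='avg', input_shape=(224, 224, 3))
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_outputs, activation=activation),
    ])
    model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
    return model

# One output neuron: sigmoid + binary cross-entropy (labels 0/1).
one_neuron = build_model(1, 'sigmoid', 'binary_crossentropy')

# Two output neurons: softmax + sparse categorical cross-entropy (labels 0/1).
two_neurons = build_model(2, 'softmax', 'sparse_categorical_crossentropy')

# Note: softmax over a single neuron always normalizes to 1.0, so a
# 1-neuron softmax head cannot learn anything.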
Knowing that the total number of layers in EfficientNet-B0 is 237 and in EfficientNet-B7 the total comes out to 813, what is the total number of layers in EfficientNetB2 ?
If you print len(model.layers) on the EfficientNetB2 model in Keras, you get 342 layers.
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB2
model = EfficientNetB2(weights='imagenet')
print(len(model.layers))
You can do this with all other versions of EfficientNetBx if you wish.
But as Pradyut said here, normally not all layers are taken into account when counting them:
While counting the number of layers in a Neural Network we usually
only count convolutional layers and fully connected layers. Pooling
Layer is taken together with the Convolutional Layer and counted as
one layer and Dropout is a regularization technique so it will also
not be counted as a separate layer.
For reference, the VGG16 model is defined as a 16-layer model. Those 16
layers are only the convolutional layers and fully connected dense
layers. If you count all the pooling and activation layers it would
change to a 41-layer model, which it is not. Reference: VGG16, VGG16
Paper
So as per your code you have 3 layers (1 Convolutional Layer with 28
Neurons, 1 Fully Connected Layer with 128 Neurons and 1 Fully
Connected Layer with 10 neurons)
As for making it a 10 layer network you can add more convolutional
layers or dense layers before the output layers, but it won't be
necessary for the MNIST dataset.
I hope I answered your question!
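Coming back to EfficientNetB2, a rough sketch of applying that counting convention programmatically (which layer types to include is a judgement call, so treat the result as approximate):

import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB2

model = EfficientNetB2(weights='imagenet')

# Keep only convolution-like and fully connected layers; whether to count
# DepthwiseConv2D (and similar) depends on the convention you follow.
counted_types = (tf.keras.layers.Conv2D,
                 tf.keras.layers.DepthwiseConv2D,
                 tf.keras.layers.Dense)
counted = [layer for layer in model.layers if isinstance(layer, counted_types)]
print(len(model.layers), len(counted))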
You can find my code here with the results:
https://github.com/shwe87/tfm-asr/blob/master/ASR-Spanish-Bi-RNN-17062020.ipynb
I tested two simple models for ASR in Spanish:
Model 1:
- Layer Normalization
- Bi-directional GRU
- Dropout
- Fully Connected layer
- Dropout
- Fully Connected layer as a classifier (classifies one of the alphabet chars)
Model 2:
- Conv Layer 1
- Conv Layer 2
- Fully Connected
- Dropout
- Bidirectional GRU
- Fully connected layer as a classifier
I trained for 30 epochs because I have limited GPU resources.
The validation and training loss for these two models:
Model 1 did not perform as well as expected.
Model 2 worked quite well; after 20 epochs it started overfitting (please see the graph in the notebook results), and in the output I could actually see some words forming that look like the labels. Although it is overfitting, it still needs more training because it does not yet predict the full output. For a start, I am happy with this model.
I tested a third complex Model.
You can find it here with the results output:
https://github.com/shwe87/tfm-asr/blob/master/ASR-DNN.ipynb
Model 3:
- Layer Normalization
- ReLU
- Bidirectional GRU
- Dropout
- Stack this block 10 times (a rough sketch of one block is below).
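Roughly, one such block looks like this in PyTorch (the hidden size and dropout rate here are placeholders, not my exact values):

import torch
from torch import nn

class GRUBlock(nn.Module):
    # One block of the stack: LayerNorm -> ReLU -> bidirectional GRU -> Dropout.
    def __init__(self, hidden_size=512, dropout=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size // 2,
                          batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):  # x: (batch, time, hidden_size)
        out, _ = self.gru(torch.relu(self.norm(x)))
        return self.dropout(out)  # still (batch, time, hidden_size)

# Stack the block 10 times.
stacked = nn.Sequential(*[GRUBlock() for _ in range(10)])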
The validation and training loss for this model:
I trained it for 30 epochs and there were no good results; in fact, all the predictions were blank...
Is this because this complex model needs more epochs for training?
Update:
I modified the model by adding 2 convolutional layers before the stacked GRUs, and the model seems to have improved.
I see that I applied layer normalization in both the first and the third model, and the predictions of both seem to be very bad... Does layer normalization delay learning? According to papers like https://www.arxiv-vanity.com/papers/1607.06450/, layer normalization speeds up training and helps the training loss converge faster, so I am really confused. I have limited GPU resources, and I am not sure whether I should try another run without layer normalization.
After going through the Caffe tutorial here: http://caffe.berkeleyvision.org/gathered/examples/mnist.html
I am rather confused by the different (and efficient) model used in this tutorial, which is defined here: https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt
As I understand it, a convolutional layer in Caffe simply computes the sum Wx + b for each input, without applying any activation function. If we want to add an activation function, we have to add another layer immediately after that convolutional layer, such as a Sigmoid, TanH, or ReLU layer. Every paper/tutorial I have read applies an activation function to the neuron units.
This leaves me with a big question mark, since we only see convolutional layers and pooling layers interleaved in the model. I hope someone can give me an explanation.
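For concreteness, here is roughly how an activation would be inserted as its own layer using pycaffe's NetSpec (a sketch only; the layer parameters are not the tutorial's exact values):

import caffe
from caffe import layers as L, params as P

def conv_relu_pool(lmdb, batch_size):
    n = caffe.NetSpec()
    n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB,
                             source=lmdb, transform_param=dict(scale=1. / 255),
                             ntop=2)
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20,
                            weight_filler=dict(type='xavier'))
    n.relu1 = L.ReLU(n.conv1, in_place=True)  # activation as a separate layer
    n.pool1 = L.Pooling(n.relu1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    return n.to_proto()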
As a side note, another doubt for me is the max_iter in this solver:
https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt
We have 60,000 images for training and 10,000 images for testing. So why is max_iter here only 10,000 (and it still reaches >99% accuracy)? What does Caffe do in each iteration?
Actually, I'm not even sure whether the accuracy rate is the number of correct predictions divided by the test set size.
I'm very impressed by this example, as I haven't found any other example or framework that can achieve such a high accuracy in such a short time (only 5 minutes to reach >99% accuracy). Hence, I suspect there is something I have misunderstood.
Thanks.
Caffe uses batch processing. max_iter is 10,000 because batch_size is 64. Number of epochs = (batch_size × max_iter) / number of training samples, so the number of epochs here is roughly 10. The accuracy is computed on the test data. And yes, the accuracy of the model is indeed >99%, as the dataset is not very complicated.
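Plugging in the numbers:

batch_size = 64
max_iter = 10000
train_samples = 60000

epochs = (batch_size * max_iter) / train_samples
print(epochs)  # ~10.7 passes over the training set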
For your question about the missing activation layers, you are correct. The model in the tutorial is missing activation layers. This seems to be an oversight of the tutorial. For the real LeNet-5 model, there should be activation functions following the convolution layers. For MNIST, the model still works surprisingly well without the additional activation layers.
For reference, Le Cun's 2001 paper states:
As in classical neural networks, units in layers up to F6 compute a dot product between their input vector and their weight vector, to which a bias is added. This weighted sum, denoted a_i, for unit i, is then passed through a sigmoid squashing function to produce the state of unit i ...
F6 is the "blob" between the two fully connected layers. Hence the first fully connected layer should have an activation function applied (the tutorial uses a ReLU activation instead of sigmoid).
MNIST is the "hello world" example for neural networks. It is very simple by today's standards. A single fully connected layer can solve the problem with an accuracy of about 92%. LeNet-5 is a big improvement over that baseline.
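For illustration, a minimal Keras version of that single-fully-connected-layer baseline (the optimizer and number of epochs here are arbitrary choices):

import tensorflow as tf

# Softmax regression on the raw 28x28 pixels: one Dense layer, nothing else.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)  # typically lands around 92% accuracy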