How to create different embedding layers in Keras

I recently read the paper End-To-End Memory Networks, which uses three different embedding layers for sentence embeddings, and I am now trying to reproduce this architecture in Keras.
But I am not sure how to create three different embeddings. They have exactly the same dimensions and are built over the same corpus, but they should hold different values. To implement these layers, should I just use Embedding layers with embeddings_initializer='random_uniform'?
I know about pre-trained embeddings like Word2Vec, but a pre-trained model is not what matters here, is it?
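Since each Keras Embedding layer owns its own weight matrix, three layers built with identical settings will still initialize and learn different values. A minimal sketch, assuming an illustrative vocabulary size and embedding dimension (not taken from the paper):

from tensorflow.keras.layers import Input, Embedding
from tensorflow.keras.models import Model

vocab_size, embed_dim = 10000, 64  # assumed sizes, not from the paper

sentence = Input(shape=(None,), dtype="int32")

# Three separate Embedding layers over the same vocabulary.
# Each owns its own weight matrix, so they stay different even
# though they share dimensions and initializer.
emb_A = Embedding(vocab_size, embed_dim, embeddings_initializer="random_uniform", name="embedding_A")(sentence)
emb_B = Embedding(vocab_size, embed_dim, embeddings_initializer="random_uniform", name="embedding_B")(sentence)
emb_C = Embedding(vocab_size, embed_dim, embeddings_initializer="random_uniform", name="embedding_C")(sentence)

model = Model(inputs=sentence, outputs=[emb_A, emb_B, emb_C])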

Related

Freezing certain layers in neural networks using Pytorch Image Models

I am trying to do binary classification using transfer learning with timm.
In the process, I want to experiment with freezing/unfreezing different layers of different architectures, but so far I am only able to freeze/unfreeze entire models.
Can anyone help by illustrating partial freezing with a couple of model architectures, for the sake of heterogeneity across architectures?
Below, I am illustrating the entire freezing of a couple of architectures using timm, convnext and resnet, but can anyone show me the same with any other models, using only timm (as it is more comprehensive than the PyTorch model zoo)?
import timm

# Two timm backbones with fresh 2-class heads.
convnext = timm.create_model('convnext_tiny_in22k', pretrained=True, num_classes=2)
resnet = timm.create_model('resnet50d', pretrained=True, num_classes=2)
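A sketch of both entire-model and partial (per-stage) freezing by parameter-name prefix. The prefixes used ('stem'/'stages.0' for ConvNeXt, 'conv1'/'layer1' for ResNet) are assumptions based on timm's usual naming; check model.named_parameters() for your timm version.

# Entire-model freezing: disable gradients for every parameter.
for param in convnext.parameters():
    param.requires_grad = False

# Partial freezing: only parameters whose names start with a given prefix.
def freeze_by_prefix(model, prefixes):
    for name, param in model.named_parameters():
        if name.startswith(tuple(prefixes)):
            param.requires_grad = False

# Assumed timm parameter-name prefixes; verify for your timm version.
freeze_by_prefix(convnext, ['stem', 'stages.0', 'stages.1'])    # stem + first two stages
freeze_by_prefix(resnet, ['conv1', 'bn1', 'layer1', 'layer2'])  # stem + first two blocks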

How to choose which pre-trained weights to use for my model?

I am a beginner, and I am very confused about how to choose a pre-trained model that will improve my model.
I am trying to create a cat breed classifier. If I use the pre-trained weights of a model, let's say VGG16 trained on a digits dataset, will that improve the performance of my model? Or would training just on my own dataset, without any other weights, be better? Or will both end up the same, since the pre-trained weights are just a starting point?
Also, if I use the weights of a VGG16 trained on cat-vs-dog data as the starting point of my cat breed classification model, will that help me improve the model?
Since you've mentioned that you are a beginner, I'll try to be a bit more verbose than normal, so please bear with me.
How neural models recognise images
The layers of a pre-trained model store multiple aspects of the images they were trained on, such as patterns (lines, curves) and colours, which the model uses to decide whether an image belongs to a specific class.
With each layer, the complexity of what can be stored increases: initially a layer captures lines, dots, or simple curves, but with depth the representational power grows and the model starts capturing features like cat ears, a dog's face, or the curves in a digit.
The Keras blog illustrates how initial layers learn to represent simple things like dots and lines, while deeper layers learn to represent more complex patterns.
Read more about convnet filters on the Keras blog.
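You can see this for yourself by pulling out the activations of an early layer. A minimal Keras sketch (the layer name block1_conv1 comes from the standard VGG16 naming; the input is a dummy image):

from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
import numpy as np

base = VGG16(weights="imagenet", include_top=False)
# An early layer: its 64 filters respond mostly to edges and colours.
extractor = Model(inputs=base.input, outputs=base.get_layer("block1_conv1").output)
features = extractor.predict(np.random.rand(1, 224, 224, 3))  # dummy image
print(features.shape)  # (1, 224, 224, 64)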
How does using a pretrained model give better results?
When we train a model from scratch, we spend a lot of compute and time building these representations, and getting to them requires quite a lot of data too; otherwise we might not capture all the relevant features, and the model might not be as accurate.
So when we use a pre-trained model, we want to reuse those representations. If we use a model trained on ImageNet, which contains lots of cat pictures, we can be confident it already holds the representations needed to identify the important features of a cat, and it will converge to a better point than if we started from random weights.
How to use pre-trained weights
Using pre-trained weights means keeping the layers that hold those representations, discarding the final (dense and output) layers, and adding fresh dense and output layers with random weights, so the new predictions can make use of the representations already learned.
In practice, we freeze the pre-trained weights during the initial training, because we do not want gradients from the randomly initialized head to ruin the learned representations. Only at the end, once we have a good classification accuracy, do we unfreeze them to fine-tune, and then with a very small learning rate.
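Put together, a minimal Keras sketch of this recipe (the class count, head size, and learning rates here are illustrative assumptions):

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# Keep the convolutional base; drop the original dense/output layers.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the learned representations

# Fresh, randomly initialized head for the new task (e.g. 12 cat breeds).
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(12, activation="softmax")(x)
model = Model(base.input, out)

model.compile(optimizer=Adam(1e-3), loss="categorical_crossentropy", metrics=["accuracy"])
# ... train the head first: model.fit(...)

# Then unfreeze and fine-tune with a very small learning rate.
base.trainable = True
model.compile(optimizer=Adam(1e-5), loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(...)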
Which kind of pre-trained model to use
Always choose the pretrained weights that you expect to hold the most representations useful for identifying the class you are interested in.
So, will weights trained on MNIST digits give relatively bad results compared to weights trained on ImageNet?
Yes, but given that the initial layers have already learned simple patterns like lines and curves from digits, using those weights will still put you at an advantage over starting from scratch in most cases.
Sane weight initialization
Which pre-trained weights to choose depends on the type of classes you wish to classify. Since you wish to classify cat breeds, use pre-trained weights from a classifier trained on a similar task. As the other answers mention, the initial layers learn things like edges, horizontal or vertical lines, blobs, etc. As you go deeper, the model starts learning problem-specific features. So for generic tasks you can start from, say, ImageNet weights and then fine-tune for the problem at hand.
However, having a pre-trained model that closely resembles your training data helps immensely. A while ago I participated in a Scene Classification Challenge, where we initialized our model with ResNet50 weights trained on the Places365 dataset. Since the classes in that challenge were all present in the Places365 dataset, we used the weights available here and fine-tuned our model. This gave us a great boost in accuracy, and we ended up in the top positions on the leaderboard.
You can find some more details about it in this blog.
Also, understand that one of the advantages of transfer learning is saving computation. Using a model with randomly initialized weights is like training a neural net from scratch. If you use VGG16 weights trained on a digits dataset, the model will already have learned something, so it will definitely save some training time; a model trained from scratch will eventually learn all the patterns that the pre-trained digits classifier's weights would have provided, just more slowly.
On the other hand, using weights from a dog-vs-cat classifier should give you better performance, as it has already learned features that detect, say, paws, ears, noses, or whiskers.
Could you provide more information on what exactly you want to classify? I see you wish to classify images, but which type of images (containing what?) and into which classes?
As a general remark: if you use a trained model, it must fit your needs, of course. Keep in mind that a model trained on a given dataset has learned only the information contained in that dataset, and can classify/identify only information analogous to that training data.
If you want to classify an image containing an animal with a binary (Y/N) classifier (cat or not cat), you should use a model trained on different animals, cats among them.
If you want to classify an image of a cat into classes corresponding to cat breeds, say, you should use a model trained only on cat images.
In practice, you could use a pipeline containing both steps: first the cat/not-cat classifier, followed by the breed classifier, as sketched below.
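A hypothetical sketch of that two-step pipeline (cat_detector and breed_classifier stand in for two already-trained models; both names are assumptions for illustration):

def classify_cat_breed(image, cat_detector, breed_classifier):
    # Step 1: binary cat / not-cat decision.
    if cat_detector.predict(image) < 0.5:
        return "not a cat"
    # Step 2: breed classification, run only on images that contain a cat.
    return breed_classifier.predict(image)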
It really depends on the size of the dataset you have at hand and on how related the task and data the model was pretrained on are to your own task and data. Read more about transfer learning at http://cs231n.github.io/transfer-learning/, or about domain adaptation if the task is the same.
"I am trying to create a cat breed classifier using pre-trained weights of a model, lets say VGG16 trained on digits dataset, will that improve the performance of the model?"
There are general characteristics, like edge detection, that are learned even from digits and could be useful for your target task, so the answer here is maybe. You could try training only the top layers, which is common in computer vision applications.
"Also if I use weights of the VGG16 trained for cat vs dog data as a starting point of my cat breed classification model will that help me in improving the model?"
Your chances should be better here, since the task and data are more related and similar.

Usefulness of pretrained NNs for performing binary segmentation in images

I am trying to perform binary segmentation on a custom dataset (the DAGM dataset in my case; link to the dataset).
I was just curious to know whether networks pretrained on the ImageNet dataset, like VGG or ResNet, will be of any particular use, as I am not trying to segment objects like cats or dogs but anomalies in the images.
Normally you would want to fine-tune a model on your new dataset that was previously trained and tuned on a similar problem. Neural networks extract features from samples and use those features to classify. If a network was previously trained on a biomedical dataset, then it has learned how to extract features from that kind of image. So try to find a model that was trained on a similar domain.
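For example, a common way to reuse ImageNet weights for binary segmentation is as the encoder of a U-Net. A minimal sketch using the segmentation_models_pytorch library (my choice of library for illustration, not one named in the question):

import segmentation_models_pytorch as smp

# U-Net whose encoder is a ResNet-34 pretrained on ImageNet;
# the decoder is trained from scratch on the anomaly data.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,  # binary segmentation: one logit per pixel
)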
Also, you can check the link below for more insight into the issue.
https://en.wikipedia.org/wiki/Catastrophic_interference

How to perform polynomial landmark detection with deep learning

I am trying to build a system to segment vehicles using a deep convolutional neural network. I am familiar with predicting a fixed number of points (i.e. ending a neural architecture with a Dense layer of 4 neurons to predict the (x, y) coordinates of 2 points). However, vehicles come in many different shapes and sizes, and one vehicle may require more segmentation points than another. How can I create a neural network that can have a varying number of output values? I imagine I could use an RNN of some sort, but would like a little guidance. Thank you.
For example, two vehicles in my labeled data can have different numbers of labeled keypoints.
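For reference, a minimal Keras sketch of the fixed-size approach described above (the toy backbone and input size are assumptions); the open question is how to move beyond this fixed Dense(4) head:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    # Fixed-size output: 2 keypoints -> 4 values (x1, y1, x2, y2).
    layers.Dense(4),
])
model.compile(optimizer="adam", loss="mse")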

conditional random field in semantic segmentation

Are CRFs (Conditional Random Fields) still actively used in semantic segmentation tasks, or have current deep neural networks made them unnecessary?
I've seen both answers in academic papers and, since CRFs seem quite complicated to implement and run inference with, I would like some opinions on them before trying them out.
Thank you.
CRFs are still used for image labeling and semantic image segmentation tasks alongside DNNs. In fact, CRFs and DNNs are not mutually exclusive techniques, and many recent publications use both.
CRFs are based on probabilistic graphical models, where graph nodes and edges represent random variables whose interactions are encoded by potential functions. A DNN can be used as such a potential function:
Conditional Random Fields Meet Deep Neural Networks for Semantic Segmentation
Conditional Random Fields as Recurrent Neural Networks
Brain Tumor Segmentation with Deep Neural Network (Future Work Section)
A DCNN may also be used for the feature extraction process, which is an essential step in applying CRFs:
Environmental Microorganism Classification Using Conditional Random Fields and Deep Convolutional Neural Networks
Conditional Random Field and Deep Feature Learning for Hyperspectral Image Segmentation
There are also toolkits combining both CRFs and DNNs:
Direct graphical models C++ library
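As a concrete illustration of CRF post-processing on top of a network's output, here is a minimal sketch using the pydensecrf library (a dense-CRF implementation; my choice of library, not one named above). It refines per-pixel softmax probabilities with Gaussian and bilateral pairwise potentials:

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, softmax_probs, n_iters=5):
    # image: (H, W, 3) uint8; softmax_probs: (n_labels, H, W) from the DNN.
    n_labels, h, w = softmax_probs.shape
    d = dcrf.DenseCRF2D(w, h, n_labels)
    d.setUnaryEnergy(unary_from_softmax(softmax_probs))  # DNN output as unary potential
    d.addPairwiseGaussian(sxy=3, compat=3)  # smoothness term on pixel positions
    d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(image), compat=10)  # appearance-aware term
    q = d.inference(n_iters)
    return np.argmax(np.array(q).reshape(n_labels, h, w), axis=0)  # refined label map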