Caffe - Image augmentation by cropping

The cropping strategy of Caffe is to apply random crops during training and a center crop at test time.
From experiments, I observed that recognition accuracy improves if I provide two cropped versions (random and center) of the same image during training. These training samples (100x100) were generated offline (not using Caffe) by applying random and center cropping to 115x115 images.
How can I perform this task in Caffe?
Note: I was thinking of using two data layers, each with a different crop (center and random), and then concatenating them. However, I found that Caffe does not allow center cropping during training.

The easy answer would be to prepare a second dataset from your training data, already center-cropped offline to 100x100. Mix this dataset with your original data and train: random cropping of these pre-cropped images is then a no-op, so they effectively give you center crops.
A more complex way is to hand-craft your batches using the Caffe APIs (MATLAB or Python) and feed the hand-crafted batches to the network on the fly.
You can check this link for different ways to achieve this.
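For the hand-crafted batch route, the two crops themselves are easy to produce with NumPy before handing the batch to Caffe's Python interface. This is a minimal sketch; the function names are illustrative, not part of the Caffe API:

```python
import numpy as np

def center_crop(img, size):
    # Take the central size x size patch of an H x W (x C) image.
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def random_crop(img, size, rng=np.random):
    # Take a uniformly random size x size patch.
    h, w = img.shape[:2]
    top = rng.randint(0, h - size + 1)
    left = rng.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]

# One 115x115 training image yields both crop variants at 100x100,
# which can then be stacked into the same minibatch.
img = np.zeros((115, 115, 3), dtype=np.uint8)
batch_pair = np.stack([random_crop(img, 100), center_crop(img, 100)])
print(batch_pair.shape)  # (2, 100, 100, 3)
```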

Related

Image Classification on heavy occluded and background camouflage

I am doing a project on classifying various species of bamboo.
Kaggle problems usually come with well-labeled images of a single, clearly framed subject.
The issue with bamboo is that it appears in clusters in most images, sometimes with more than one species per image, and there is a prevalence of heavy occlusion and background camouflage.
Besides, there is not much training data available for this problem, so I have been building my own dataset by collecting data from the internet and taking photos with my DSLR.
My first approach was to use a weighted Mask R-CNN for instance segmentation and then classify the instances using VGGNet and GoogLeNet.
My next approach is to try Attention U-Net, YOLOv3, and BCNet (a recent method from ICLR 2021), then classify with ResNeXt, GoogLeNet, and SENet and compare the results.
Any tips or a better approach would be much appreciated.

Which is best for object localization among R-CNN, fast R-CNN, faster R-CNN and YOLO

What is the difference between R-CNN, Fast R-CNN, Faster R-CNN, and YOLO in terms of the following:
(1) Precision on same image set
(2) Given SAME IMAGE SIZE, the run time
(3) Support for android porting
Considering these three criteria which is the best object localization technique?
R-CNN is the granddaddy of all the algorithms mentioned; it paved the way for researchers to build more complex and better algorithms on top of it.
R-CNN, or Region-based Convolutional Neural Network
R-CNN consists of 3 simple steps:
Scan the input image for possible objects using an algorithm called Selective Search, generating ~2000 region proposals
Run a convolutional neural net (CNN) on top of each of these region proposals
Take the output of each CNN and feed it into a) an SVM to classify the region and b) a linear regressor to tighten the bounding box of the object, if such an object exists.
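Schematically, the three steps amount to a loop over proposals. In this sketch the proposal generator, CNN, and SVM are stand-in stubs just to show the data flow; they are not real Selective Search or trained models:

```python
import numpy as np

def selective_search_stub(image):
    # Stand-in for Selective Search: in R-CNN this yields ~2000 boxes.
    h, w = image.shape[:2]
    return [(0, 0, w // 2, h // 2), (w // 4, h // 4, w, h)]

def cnn_features_stub(patch):
    # Stand-in for the per-region CNN forward pass.
    return np.array([patch.mean(), patch.std()])

def svm_score_stub(features):
    # Stand-in for the class SVM applied to the CNN features.
    return float(features[0])

image = np.random.rand(64, 64)
detections = []
for (x1, y1, x2, y2) in selective_search_stub(image):   # step 1: proposals
    feats = cnn_features_stub(image[y1:y2, x1:x2])      # step 2: CNN per region
    score = svm_score_stub(feats)                       # step 3a: SVM classifies
    detections.append(((x1, y1, x2, y2), score))        # 3b would refine the box
print(len(detections))  # one (box, score) pair per proposal
```

The key cost to notice is that step 2 runs once per proposal, which is what Fast R-CNN removes.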
Fast R-CNN:
Fast R-CNN immediately followed R-CNN. Fast R-CNN is faster and better by virtue of the following points:
Performing feature extraction over the image before proposing regions, thus only running one CNN over the entire image instead of 2000 CNN’s over 2000 overlapping regions
Replacing the SVM with a softmax layer, thus extending the neural network for predictions instead of creating a new model
Intuitively it makes a lot of sense: instead of running ~2000 separate CNN passes over overlapping regions, run the convolution once over the whole image and propose boxes on top of the resulting feature map.
Faster R-CNN:
One of the drawbacks of Fast R-CNN was the slow Selective Search algorithm, so Faster R-CNN introduced the Region Proposal Network (RPN).
Here is how the RPN works:
At the last layer of an initial CNN, a 3x3 sliding window moves across the feature map and maps it to a lower dimension (e.g. 256-d)
For each sliding-window location, it generates multiple possible regions based on k fixed-ratio anchor boxes (default bounding boxes)
Each region proposal consists of:
an “objectness” score for that region and
4 coordinates representing the bounding box of the region
In other words, we look at each location in our last feature map and consider k different boxes centered around it: a tall box, a wide box, a large box, etc. For each of those boxes, we output whether or not we think it contains an object, and what the coordinates for that box are. This is what it looks like at one sliding window location:
The 2k scores represent the softmax probability of each of the k bounding boxes being an “object.” Notice that although the RPN outputs bounding box coordinates, it does not try to classify any potential objects: its sole job is still proposing object regions. If an anchor box has an “objectness” score above a certain threshold, that box's coordinates get passed forward as a region proposal.
Once we have our region proposals, we feed them straight into what is essentially a Fast R-CNN. We add a pooling layer, some fully-connected layers, and finally a softmax classification layer and bounding box regressor. In a sense, Faster R-CNN = RPN + Fast R-CNN.
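The k anchor boxes at each sliding-window position can be enumerated directly. This small sketch (scales and ratios chosen arbitrarily for illustration) shows why the RPN head outputs 2k objectness scores and 4k coordinates per location:

```python
import numpy as np

def make_anchors(cx, cy, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    # k = len(scales) * len(ratios) anchor boxes centered at (cx, cy),
    # each returned as (x1, y1, x2, y2) with roughly scale**2 area.
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

anchors = make_anchors(100, 100)
k = len(anchors)
print(k, 2 * k, 4 * k)  # 9 anchors -> 18 objectness scores, 36 coordinates
```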
YOLO:
YOLO uses a single CNN for both classifying and localizing objects with bounding boxes. This is the architecture of YOLO:
In the end you get an output tensor of size 1470, i.e. 7*7*30, and the structure of the CNN output is:
The 1470-dimensional output vector is divided into three parts, giving the class probabilities, confidences, and box coordinates. Each of these three parts is further divided into 49 small regions, corresponding to predictions at the 49 grid cells (7x7) that tile the original image.
In a post-processing step, this 1470-vector is decoded into boxes, keeping only those with a probability above a certain threshold.
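The split of the 1470-vector can be made concrete. This assumes the classic YOLOv1 layout (S=7 grid, B=2 boxes per cell, C=20 classes, flattened as class probabilities, then confidences, then coordinates), which is how the original output is usually decoded:

```python
import numpy as np

S, B, C = 7, 2, 20                   # grid size, boxes per cell, classes
out = np.zeros(S * S * (B * 5 + C))  # the flat 1470-dim network output

probs = out[:S * S * C].reshape(S, S, C)                  # 980 class probabilities
conf  = out[S * S * C:S * S * (C + B)].reshape(S, S, B)   # 98 confidences
boxes = out[S * S * (C + B):].reshape(S, S, B, 4)         # 392 box coordinates
print(out.size, probs.size, conf.size, boxes.size)  # 1470 980 98 392
```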
I hope this gives you an understanding of these networks. To answer your question on how their performance differs:
On the same dataset: these networks broadly perform in the order they are listed, each improving on its predecessor, with R-CNN the weakest; note, though, that YOLO trades some precision for a large speed gain, so Faster R-CNN can still match or beat it on accuracy.
Given the SAME IMAGE SIZE, the run time: Faster R-CNN achieved much better speeds and state-of-the-art accuracy. It is worth noting that although later models did a lot to increase detection speed, few managed to outperform Faster R-CNN by a significant margin. Faster R-CNN may not be the simplest or fastest method for object detection, but it is still one of the best performing. However, researchers have used YOLO for video segmentation, and it is by far the best and fastest when it comes to that.
Support for Android porting: As far as my knowledge goes, TensorFlow has some Android APIs for porting, but I am not sure how these networks will perform, or even whether you will be able to port them at all. That again depends on the hardware and the data size. Can you please provide the hardware and the size, so that I can answer more clearly?
The YouTube video tagged by #A_Piro gives a nice explanation too.
P.S. I borrowed a lot of material from Joyce Xu's Medium blog.
If you are interested in these algorithms, you should take a look at this lesson, which goes through the algorithms you named: https://www.youtube.com/watch?v=GxZrEKZfW2o.
PS: There is also a Fast YOLO, if I remember well, haha!
I have been working with YOLO and FRCNN (Faster R-CNN) a lot. To me, YOLO has the best accuracy and speed, but if you want to do research on image processing, I suggest FRCNN, as much previous work was done with it, and for research you really want to be consistent.
For object detection, I am trying SSD + MobileNet. It has a good balance of accuracy and speed, so it can also be ported to Android devices easily with a good frame rate.
It has less accuracy than Faster R-CNN but more speed than the other algorithms.
It also has good support for Android porting.

Is there any particular reason why people pick 224x224 image size for imagenet experiments?

Is it that 224x224 gives better accuracy for some reason, or is it just a computational constraint? I would think that a bigger picture should give better accuracy, no?
Well, bigger images contain more information, which may or may not be relevant. The size of your input is important because the bigger the input, the more parameters your network has to handle. More parameters lead to several problems: first, you need more computing power; second, you may need more data to train on, since many parameters and not enough samples lead to overfitting, especially with CNNs.
The choice of 224 in AlexNet also allowed the authors to apply some data augmentation.
For instance, if you have a 512x512 image and want to recognize an object in it, it is often better to resample it to 256x256, take smaller patches of 224x224 or 200x200, do some data augmentation, and then train. You could also use 400x400 patches with data augmentation, provided you have enough data.
Don't forget to do cross-validation so you can check if there's overfitting.
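The 512 -> 256 -> 224 patch scheme above can be illustrated in a few lines. A crude stride-2 nearest-neighbour resample is used here just to keep the sketch NumPy-only; in practice you would use a proper interpolating resize:

```python
import numpy as np

def random_patch(img, size, rng=np.random):
    # Cut a random size x size patch out of an H x W x C image.
    h, w = img.shape[:2]
    top = rng.randint(0, h - size + 1)
    left = rng.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]

big = np.random.rand(512, 512, 3)
small = big[::2, ::2]                      # crude 512 -> 256 resample
patches = [random_patch(small, 224) for _ in range(8)]
flipped = [p[:, ::-1] for p in patches]    # horizontal flips double the data
print(small.shape, patches[0].shape, len(patches) + len(flipped))
```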

User defined transformation in CNTK

Problem setting
I have a dataset with N images.
A certain network (e.g., AlexNet) has to be trained from scratch on this dataset.
For each image, 10 augmented versions are to be produced. These augmentations involve resizing, cropping, and flipping. For example, an image is resized so that its minimum dimension is 256 pixels, then a random 224 x 224 crop of it is taken and also flipped. Five such random crops are taken, and their flipped versions are prepared as well.
These augmented versions have to go into the network for training instead of the original image.
What would additionally be very beneficial is if multiple images in the dataset were augmented in parallel and put into a queue (or any container) from which a batch-size number of samples are pushed to the GPU for training.
The reason is that we would ideally not like multiple augmented versions of the same image going into the network simultaneously.
Context
This is not a random feature requirement. Some papers, such as OverFeat, use exactly these augmentations. Moreover, such randomized training can be a very good way to improve the training of the network.
My understanding
To the best of my search, I could not find any framework inside CNTK that can do this.
Questions
Is it possible to achieve this in CNTK?
Please take a look at the CNTK 201 tutorial:
https://github.com/Microsoft/CNTK/blob/penhe/reasonet_tutorial/Tutorials/CNTK_201B_CIFAR-10_ImageHandsOn.ipynb
The image reader has built-in transforms that address many of your requirements. Unfortunately, they do not run on the GPU.
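If you need explicit control of the ten versions rather than the reader's built-in transforms, you can also build them yourself and feed them to CNTK as NumPy minibatches through its Python API. Here is a framework-agnostic NumPy sketch of the augmentation itself (nearest-neighbour resize for brevity; function names are illustrative):

```python
import numpy as np

def resize_min_side(img, target=256):
    # Nearest-neighbour resize so the shorter side becomes `target`.
    h, w = img.shape[:2]
    scale = target / min(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    return img[rows][:, cols]

def ten_crops(img, size=224, n=5, rng=np.random):
    # n random crops plus their horizontal flips -> 2n augmented versions.
    img = resize_min_side(img)
    h, w = img.shape[:2]
    out = []
    for _ in range(n):
        top = rng.randint(0, h - size + 1)
        left = rng.randint(0, w - size + 1)
        crop = img[top:top + size, left:left + size]
        out += [crop, crop[:, ::-1]]
    return out

versions = ten_crops(np.random.rand(300, 400, 3))
print(len(versions), versions[0].shape)  # 10 (224, 224, 3)
```

Batching augmented versions of different images together (so duplicates of one image do not share a minibatch) is then a matter of interleaving the lists before feeding them to the trainer.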

Building an Image search engine using Convolutional Neural Networks

I am trying to implement an image search engine using AlexNet: https://github.com/akrizhevsky/cuda-convnet2
The idea is to implement an image search engine by training a neural net to classify images and then using the activations of the net's last hidden layer as a similarity measure.
I am trying to figure out how to train the CNN on a new set of images to classify them. Does anyone know how to get started with this?
Thanks
You basically have two approaches to your problem:
- Either you have plenty of good training data (>1M images) and dozens of GPUs, and you retrain the network from scratch using SGD with the classes you have for your queries.
- Or you don't, and then you simply truncate a pretrained AlexNet (where exactly you truncate it is for you to choose) and run your images through it (possibly resized to fit the network: 227x227x3, if I am not mistaken).
Then from each image you get a feature vector (sometimes called a descriptor), and you use those feature vectors to train a linear SVM on your images and your specific task.
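Once every image is reduced to a descriptor, the search itself is just nearest-neighbour lookup. Here is a sketch with random stand-in descriptors and cosine similarity; in the real system the 4096-d vectors would come from the truncated AlexNet:

```python
import numpy as np

def cosine_search(query, index, top_k=3):
    # Rank the index descriptors by cosine similarity to the query descriptor.
    qn = query / np.linalg.norm(query)
    inorm = index / np.linalg.norm(index, axis=1, keepdims=True)
    sims = inorm @ qn
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

rng = np.random.RandomState(0)
index = rng.rand(1000, 4096)                 # one fc7-style descriptor per image
query = index[42] + 0.01 * rng.rand(4096)    # a near-duplicate of image 42
ranks, sims = cosine_search(query, index)
print(ranks[0])  # the most similar image is image 42
```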