Fully convolutional autoencoder for variable-sized images in keras - deep-learning

I want to build a convolutional autoencoder where the size of the input is not constant. I'm doing that by stacking up conv-pool layers until I reach an encoding layer, and then doing the reverse with upsample-conv layers. The problem is that no matter what settings I use, I can't get the exact same size in the output layer as in the input layer. The reason is that the UpSampling layer (given, say, a (2,2) size) doubles the size of its input, so I can't get odd dimensions, for instance. Is there a way to tie the output dimension of a given layer to the input dimension of a previous layer for individual samples (as I said, the input size for the max-pool layer is variable)?

Yes, there is.
You can use three methods:
Padding
Resizing
Crop or Pad
Padding only works for increasing the dimensions; it is not useful for reducing the size.
Resizing is more costly, but it is the most flexible solution for both cases (up- or downsampling). It keeps all the values and simply resamples them to the given dimensions.
Crop or Pad works like a resize and is more compute-efficient, since there is no interpolation involved. However, if you shrink to a smaller dimension, it will crop from the edges.
By using these three, you can arrange your layers' dimensions.
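As a concrete illustration, here is a minimal sketch (layer counts and sizes are assumptions) of tying the decoder output back to the encoder input's dynamic spatial size by resizing at the end; tf.image.resize_with_crop_or_pad can be swapped in if you want to avoid interpolation:

import tensorflow as tf
from tensorflow.keras import layers, Model

# Variable-sized grayscale input; the encoder/decoder stack is illustrative.
inp = layers.Input(shape=(None, None, 1))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(8, 3, padding='same', activation='relu')(x)   # encoding
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)

# Resize the reconstruction to the (dynamic) input size so output shape == input shape.
out = layers.Lambda(lambda t: tf.image.resize(t[0], tf.shape(t[1])[1:3]))([x, inp])

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')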

Related

Determining position of anchor boxes in original image using downsampled feature map

From what I have read, I understand that methods used in Faster R-CNN and SSD involve generating a set of anchor boxes. We first downsample the training image using a CNN, and for every pixel in the downsampled feature map (which will form the center of an anchor box) we project it back onto the training image. We then draw the anchor boxes centered around that pixel using our pre-determined scales and ratios. What I don't understand is why we don't directly place the centers of our anchor boxes on the training image with a suitable stride and use the CNN to output only the classification and regression values. What are we gaining by using the CNN to determine the centers of our anchor boxes which are ultimately going to be distributed evenly on the training image?
To state it more clearly:
Where will the centers of our anchor boxes be on the training image before our first prediction of the offset values and how do we decide those?
I think the confusion comes from this:
What are we gaining by using the CNN to determine the centers of our anchor boxes which are ultimately going to be distributed evenly on the training image
The network usually doesn't predict centers but corrections to a prior belief. The initial anchor centers are distributed evenly across the image and as such don't fit the objects in the scene tightly enough. Those anchors just constitute a prior in the probabilistic sense. What exactly your network outputs is implementation-dependent, but it will likely just be updates, i.e. corrections to those initial priors. This means that the centers predicted by your network are some delta_x, delta_y that adjust the bounding boxes.
Regarding this part:
why don't we directly place the centers of our anchor boxes on the training image with a suitable stride and use the CNN to output only the classification and regression values
The regression values should still contain sufficient information to determine a bounding box in a unique way. Predicting width, height and center offsets (corrections) is a straightforward way to do it, but it's certainly not the only way. For example, you could modify the network to predict for each pixel, the distance vector to its nearest object center, or you could use parametric curves. However, crude, fixed anchor centers are not a good idea since they will also cause problems in classification, as you use them to pool features that are representative of the object.
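To make the prior-plus-correction idea concrete, here is a minimal numeric sketch (the stride, anchor size, and delta values are illustrative assumptions, and only one scale/ratio is used): anchors sit on an even grid, and the regression output is decoded Faster R-CNN style as a correction to those priors.

import numpy as np

# Evenly distributed anchor centers: one per feature-map cell, projected
# back onto the input image via the stride.
feature_h, feature_w, stride = 4, 4, 16          # assumed 4x4 feature map, stride 16
anchor_w, anchor_h = 32.0, 32.0                  # single anchor scale for brevity
cy, cx = np.meshgrid(np.arange(feature_h), np.arange(feature_w), indexing='ij')
centers_x = (cx + 0.5) * stride
centers_y = (cy + 0.5) * stride

# Pretend regression output of the network: (dx, dy, dw, dh) per anchor.
deltas = np.zeros((feature_h, feature_w, 4))
deltas[2, 3] = [0.1, -0.2, 0.05, 0.1]            # one anchor receives a correction

# Standard decoding: shift the prior center, rescale the prior size.
box_cx = centers_x + deltas[..., 0] * anchor_w
box_cy = centers_y + deltas[..., 1] * anchor_h
box_w = anchor_w * np.exp(deltas[..., 2])
box_h = anchor_h * np.exp(deltas[..., 3])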

How to do object detection on high resolution images?

I have images of around 2000 x 2000 pixels. The objects that I am trying to identify are of smaller sizes (typically around 100 x 100 pixels), but there are a lot of them.
I don't want to resize the input images, apply object detection and rescale the output back to the original size. The reason for this is I have very few images to work with and I would prefer cropping (which would lead to multiple training instances per image) over resizing to smaller size (this would give me 1 input image per original image).
Is there a sophisticated way of cropping and reassembling images for object detection, especially at the time of inference on test images?
For training, I suppose I would just take random crops and use those for training. But for testing, I want to know if there is a specific way of cropping the test image, applying object detection, and combining the results back to get the output for the original large image.
One option (though I've never tried it) is to tile the image: split the 2000x2000 input into a 4x4 grid of overlapping crops of roughly (500+50) x (500+50) pixels, run the detector on each tile, and then reassemble the detections at the output stage (probably with NMS at the tile borders, since you mentioned the targets are dense). It is a somewhat clumsy approach, though.
One useful insight for detection on high-resolution images is to alter the backbone with a "U"-shaped shortcut structure, which solves some of these problems without resizing the images. See U-Net for reference.
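A minimal sketch of that tiling idea at inference time (the tile size, overlap, and the detect callback are assumptions, with boxes as [y1, x1, y2, x2] floats): crops are slid over the large image, each tile's boxes are shifted back into global coordinates, and a single NMS pass merges duplicates at the borders.

import tensorflow as tf

def detect_tiled(image, detect, tile=550, overlap=50, iou_thresh=0.5):
    # `detect` is whatever per-crop detector you use; it is assumed to return
    # (boxes, scores) with boxes as [y1, x1, y2, x2] in crop coordinates.
    h, w = image.shape[:2]
    stride = tile - overlap
    boxes, scores = [], []
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            crop = image[y:y + tile, x:x + tile]
            b, s = detect(crop)
            boxes.append(b + [y, x, y, x])       # shift back to full-image coordinates
            scores.append(s)
    boxes = tf.concat(boxes, axis=0)
    scores = tf.concat(scores, axis=0)
    # One NMS pass over all tiles removes duplicate detections at tile borders.
    keep = tf.image.non_max_suppression(boxes, scores, max_output_size=1000,
                                        iou_threshold=iou_thresh)
    return tf.gather(boxes, keep), tf.gather(scores, keep)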

How to use two different sized images as input into a deep network?

I am trying to train a deep neural network which uses information from two separate images in order to get a final image output similar to this. The difference is that my two input images don't have any spatial relation as they are completely different images with different amounts of information. How can I use a two-stream CNN or any other architecture using these kinds of input?
For reference: One image has size (5184x3456) and other has size (640x240).
First of all: it doesn't matter that you have two images. You would have exactly the same problem with a single input image whose size can vary.
There are multiple strategies to solve this problem:
Cropping and scaling: Just force the input into the size you need. The cropping is done to make sure the aspect ratio stays correct. Sometimes different parts of the same image are fed into the network and the results are combined (e.g. averaged).
Convolutions + global pooling: Convolutional layers don't care about the input size. At the point where you do care about it, you can apply global pooling. This means you have a pooling region that always covers the complete input, no matter its size.
Special layers: I don't remember the concept or name, but there are some layers which allow differently sized inputs... maybe it was one of the attention-based approaches?
Combining two inputs
Look for "merge layer" or "concatenation layer" in the framework of your choice:
Keras
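Here is a minimal two-stream sketch in Keras (layer widths and the 10-class head are assumptions) that combines the "Convolutions + global pooling" strategy with a Concatenate merge layer, so each branch can take its own variable-sized image:

import tensorflow as tf
from tensorflow.keras import layers, Model

def branch(inp):
    # Fully convolutional branch; GlobalAveragePooling2D collapses the
    # variable spatial size into a fixed-length feature vector.
    x = layers.Conv2D(32, 3, activation='relu')(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(64, 3, activation='relu')(x)
    return layers.GlobalAveragePooling2D()(x)    # shape (batch, 64) for any H, W

inp_a = layers.Input(shape=(None, None, 3))      # e.g. the 5184x3456 image
inp_b = layers.Input(shape=(None, None, 3))      # e.g. the 640x240 image

merged = layers.Concatenate()([branch(inp_a), branch(inp_b)])
out = layers.Dense(10, activation='softmax')(merged)   # assumed 10-class head

model = Model([inp_a, inp_b], out)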
See also
Keras: Variable-size image to convolutional layer
Caffe: Allow images of different sizes as inputs

How to upsample one layer to any size in keras?

I'd like to upsample a layer of size (w, h, channels) to size (w', h', channels), but the UpSampling2D layer can only upsample to double the size.
Could anybody tell me how to upsample to an arbitrary size?
The Keras UpSampling2D layer can upsample by different factors, not just double the size. From the Keras docs:
keras.layers.UpSampling2D(size=(2, 2), data_format=None)
Upsampling layer for 2D inputs.
Repeats the rows and columns of the data by size[0] and size[1] respectively.
The default size value is indeed (2, 2), so in that case the upsampling doubles the dimensions. By specifying the size you desire, you can upsample by whatever factor you need: if you want an upsampling factor of, say, 3, then use size=(3, 3), and so on.
As an alternative, you can also define your own custom layer if you want something really specific to your case. For example, here is a GitHub issue about creating a custom pooling function (the opposite of an upsampling layer, so easily comparable), which could help you in case you need such a custom layer.
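As a minimal sketch (assuming TF 2.x-style Keras, with shapes chosen purely for illustration): UpSampling2D covers integer factors, and for a target size that is not an integer multiple you can wrap tf.image.resize in a Lambda layer.

import tensorflow as tf
from tensorflow.keras import layers

x = layers.Input(shape=(25, 25, 8))

tripled = layers.UpSampling2D(size=(3, 3))(x)        # (75, 75, 8): integer factor
arbitrary = layers.Lambda(
    lambda t: tf.image.resize(t, (40, 60)))(x)       # any (h', w'), e.g. (40, 60)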

How to format the image data for training/prediction when images are different in size?

I am trying to train a model which classifies images.
The problem I have is that they have different sizes. How should I format my images and/or my model architecture?
You didn't say what architecture you're talking about. Since you said you want to classify images, I'm assuming it's a partly convolutional, partly fully connected network like AlexNet, GoogLeNet, etc. In general, the answer to your question depends on the network type you are working with.
If, for example, your network only contains convolutional units - that is to say, does not contain fully connected layers - it can be invariant to the input image's size. Such a network could process the input images and in turn return another image ("convolutional all the way"); you would have to make sure that the output matches what you expect, since you have to determine the loss in some way, of course.
If you are using fully connected units though, you're up for trouble: Here you have a fixed number of learned weights your network has to work with, so varying inputs would require a varying number of weights - and that's not possible.
If that is your problem, here's some things you can do:
Don't care about squashing the images. A network might learn to make sense of the content anyway; do scale and perspective mean anything to the content anyway?
Center-crop the images to a specific size. If you fear you're losing data, do multiple crops and use these to augment your input data, so that the original image will be split into N different images of correct size.
Pad the images with a solid color to a squared size, then resize.
Do a combination of that.
The padding option might introduce an additional error source to the network's prediction, as the network might (read: likely will) be biased to images that contain such a padded border.
If you need some ideas, have a look at the Images section of the TensorFlow documentation, there's pieces like resize_image_with_crop_or_pad that take away the bigger work.
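For instance, a minimal preprocessing sketch (the 224x224 target is an assumption; in recent TensorFlow versions the function mentioned above is named tf.image.resize_with_crop_or_pad):

import tensorflow as tf

image = tf.io.decode_jpeg(tf.io.read_file('example.jpg'), channels=3)

cropped_or_padded = tf.image.resize_with_crop_or_pad(image, 224, 224)  # no distortion
squashed = tf.image.resize(image, (224, 224))                          # may distort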
As for just not caring about squashing, here's a piece of the preprocessing pipeline of the famous Inception network:
# This resizing operation may distort the images because the aspect
# ratio is not respected. We select a resize method in a round robin
# fashion based on the thread number.
# Note that ResizeMethod contains 4 enumerated resizing methods.
# We select only 1 case for fast_mode bilinear.
num_resize_cases = 1 if fast_mode else 4
distorted_image = apply_with_random_selector(
    distorted_image,
    lambda x, method: tf.image.resize_images(x, [height, width], method=method),
    num_cases=num_resize_cases)
They're totally aware of it and do it anyway.
Depending on how far you want or need to go, there actually is a paper here called Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition that handles inputs of arbitrary sizes by processing them in a very special way.
Try adding a spatial pyramid pooling (SPP) layer after your last convolutional layer, so that the FC layers always receive a constant-dimensional vector as input. During training, train on the images from the entire dataset using one particular image size for an epoch. Then, for the next epoch, switch to a different image size and continue training.
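A minimal sketch of such a layer (the bin sizes are assumptions, and area-resizing is used as a stand-in for the adaptive max pooling of the original SPP paper): it pools the feature map at several fixed grid resolutions and concatenates the results, so the output length depends only on the channel count, not on the spatial size.

import tensorflow as tf
from tensorflow.keras import layers

class SpatialPyramidPooling(layers.Layer):
    def __init__(self, bins=(1, 2, 4), **kwargs):
        super().__init__(**kwargs)
        self.bins = bins

    def call(self, x):
        channels = x.shape[-1]                   # channel count must be static
        pooled = []
        for b in self.bins:
            # Shrinking with method='area' averages over adaptive regions,
            # yielding a fixed b x b grid whatever the input H and W are.
            grid = tf.image.resize(x, (b, b), method='area')
            pooled.append(tf.reshape(grid, (-1, b * b * channels)))
        return tf.concat(pooled, axis=-1)        # (batch, channels * sum(b*b))

# Usage sketch: place it between the last conv layer and the FC head.
inp = layers.Input(shape=(None, None, 3))
features = layers.Conv2D(64, 3, activation='relu')(inp)
vector = SpatialPyramidPooling()(features)       # fixed length: 64 * (1 + 4 + 16)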