I want to generate synthetic images of size 224x224 with a GAN generator, but I don't understand how to design a DCGAN generator for a specific image size using 2D transposed convolutions. Kindly help.
TensorFlow GAN code for DCGAN that I tried
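The original code listing is not reproduced here. As a minimal sketch of one way to reach 224x224 (not the poster's code; assuming a Keras-style model and a 100-dimensional latent vector): since 224 = 7 * 2^5, you can start from a 7x7 feature map and double the spatial size five times with strided Conv2DTranspose layers.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    # 7x7 -> 14 -> 28 -> 56 -> 112 -> 224 via five stride-2 transposed convolutions
    model = tf.keras.Sequential([
        layers.Dense(7 * 7 * 512, use_bias=False, input_shape=(latent_dim,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 512)),                                              # 7x7

        layers.Conv2DTranspose(256, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),                                                       # 14x14

        layers.Conv2DTranspose(128, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),                                                       # 28x28

        layers.Conv2DTranspose(64, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),                                                       # 56x56

        layers.Conv2DTranspose(32, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),                                                       # 112x112

        layers.Conv2DTranspose(3, 5, strides=2, padding='same', activation='tanh'),  # 224x224x3
    ])
    return model

generator = build_generator()
generator.summary()  # the final output shape should be (None, 224, 224, 3)
```

The same recipe generalizes: factor the target size as (base spatial size) * 2^(number of stride-2 upsampling layers) and reshape the dense projection accordingly.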
Related
I trained a 3D semantic segmentation model and want to test it on some test slices. Should I provide a volumetric test image with the same shape as the input volumetric training image? What is the conventional approach? Is there any way to feed 2D test images and get the predictions?
I tried feeding a volumetric test image with the same in-plane dimensions but fewer slices (17), and I was not able to apply the model since it was trained on volumes with 32 slices.
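One common workaround, if re-training on variable depth is not an option, is to pad the test volume along the slice axis up to the depth the network was trained on and discard the predictions for the padded slices afterwards. A rough NumPy sketch (shapes and the `model` variable are placeholders, not from the original post):

```python
import numpy as np

test_volume = np.random.rand(17, 128, 128)   # placeholder: 17 slices of 128x128
target_depth = 32                            # depth the model was trained on

# Pad the depth axis with reflected slices (or zeros) up to 32 slices
pad_total = target_depth - test_volume.shape[0]
padded = np.pad(test_volume, ((0, pad_total), (0, 0), (0, 0)), mode='reflect')
print(padded.shape)  # (32, 128, 128)

# predictions = model.predict(padded[np.newaxis, ..., np.newaxis])
# keep only the predictions for the first 17 slices afterwards
```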
I am trying to implement a CNN for image denoising. As the training set I use noisy image patches (size 32x32) together with the same-sized clean patches as reference images. When I applied the trained network to a noisy image, I noticed that the denoised image contains artifacts, like a 32x32-pixel grid. I visualised the feature maps produced by the convolution layers and noticed that the layers with zero-padding produce distorted edges. I found a topic in which, as a solution to the same problem, a convolution is described whose last step (e.g. for a 3x3 kernel) divides the result by 9 or by 4 when zero-padding is used.
This is not mentioned in any of the articles about convolution operations that I have read. Does anyone know where I can read more about this?
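For reference, the idea being described is often called normalized (or partial) convolution: divide the zero-padded convolution result by the number of real pixels under the kernel, so border outputs are averaged over 4 or 6 contributions instead of 9. A small NumPy/SciPy sketch of that normalization (illustrative only, not the code from the linked topic):

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(32, 32)   # placeholder patch
kernel = np.ones((3, 3))

# Zero-padded sum over each 3x3 window
summed = convolve2d(image, kernel, mode='same', boundary='fill', fillvalue=0)

# Count how many real (non-padded) pixels fell under the kernel at each position:
# 9 in the interior, 6 along the edges, 4 in the corners
counts = convolve2d(np.ones_like(image), kernel, mode='same', boundary='fill', fillvalue=0)

# Dividing by the count removes the darkening at the borders caused by zero-padding
averaged = summed / counts
```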
I am training a UNet model for image segmentation. For training I used random crops from a large image. The problem is that my images have different sizes at training and test time. Which method can I use for prediction on a large image?
I tried predicting on the full image and also predicting patch by patch, with each patch the same size as the training crops, but I still don't understand why the two methods do not give the same result.
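One common reason the two methods differ is that each patch sees zero-padding (or otherwise missing context) at its own borders, so predictions near patch edges are less reliable than interior ones; the full image only has those border effects at its outer edge. A rough sketch of overlapped patch-by-patch prediction with averaging (the `model` name, patch size, and stride are assumptions, and the image dimensions are assumed to be compatible with the stride):

```python
import numpy as np

def predict_large_image(model, image, patch_size=256, stride=128):
    # Slide an overlapping window over the image, predict each patch,
    # and average the overlapping predictions to reduce seams at patch borders.
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            pred = model.predict(patch[np.newaxis, ...])[0, ..., 0]
            out[y:y + patch_size, x:x + patch_size] += pred
            weight[y:y + patch_size, x:x + patch_size] += 1.0
    return out / np.maximum(weight, 1e-8)
```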
The Normal Image
The Anomaly Image
I have tried using a CNN autoencoder for anomaly detection. I trained it on only the normal images, then tested the model on anomaly images and used the reconstruction error to classify each image as an anomaly or not, but the autoencoder is not able to reconstruct even the normal images properly.
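For context, the usual scoring step looks roughly like the sketch below (names such as `autoencoder`, `normal_val_images`, and `test_images` are placeholders): compute a per-image reconstruction error and threshold it using errors measured on held-out normal images. If the autoencoder cannot reconstruct even normal images well, the normal and anomaly error distributions overlap and this thresholding cannot separate them.

```python
import numpy as np

def reconstruction_errors(autoencoder, images):
    # Mean squared error per image between the input and its reconstruction
    recon = autoencoder.predict(images)
    return np.mean((images - recon) ** 2, axis=(1, 2, 3))

# Placeholder usage (assumed variable names):
# normal_errors = reconstruction_errors(autoencoder, normal_val_images)
# threshold = np.percentile(normal_errors, 95)   # e.g. 95th percentile of normal errors
# is_anomaly = reconstruction_errors(autoencoder, test_images) > threshold
```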
I am trying to train deep CNN models for semantic segmentation. Because the model is large and the input images are high resolution, training runs out of memory even with batch size 1. How can I use multiple GPUs to get more available memory in this situation? (Currently I am using Caffe.)
Thanks.
Dale already gave you the basic answer: Caffe doesn't support model parallelism.
Do you have the option of reducing the input size? Does it make sense to shrink the image (losing resolution) or to break the image into pieces (losing full-image cohesion)? Would that make the input and model fit? If you already have your model coded and debugged in Caffe, it would be nice to at least smoke-test the topology on the current system.
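To make that suggestion concrete, here is a rough, framework-agnostic sketch of the two options (NumPy/OpenCV for illustration only, not Caffe-specific; the shapes are made up):

```python
import numpy as np
import cv2  # only needed for the resize option

image = np.random.rand(2048, 2048, 3).astype(np.float32)  # placeholder input

# Option 1: shrink the whole image (loses resolution)
small = cv2.resize(image, (1024, 1024))

# Option 2: break the image into four non-overlapping tiles (loses full-image cohesion)
h, w = image.shape[:2]
tiles = [image[y:y + h // 2, x:x + w // 2]
         for y in (0, h // 2) for x in (0, w // 2)]
```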