I'm new to TensorFlow and I've deployed a SavedModel to Google AI Platform. However, I'm having trouble with the format of the sample input data. Can you guide me on how to format the input data according to the requested format below? Thanks in advance.
To request an online prediction, the data instances must be supplied as a JSON object as follows:
{
"instances": [
<value>|<simple/nested list>|<object>,
...
]
}
Below is part of the output from saved_model_cli show --dir /home/.. --all.
In summary, I have 12 data inputs as string values. How should I put them together in the above request format so the model can return a prediction? Thanks!
Defined Functions:
Function Name: '__call__'
Option #1
Callable with:
Argument #1
DType: list
Value: [TensorSpec(shape=(None, 3), dtype=tf.float32, name='a_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='b_xf'), TensorSpec(shape=(None, 8), dtype=tf.float32, name='c_xf'), TensorSpec(shape=(None, 12), dtype=tf.float32, name='d_xf'), TensorSpec(shape=(None, 4), dtype=tf.float32, name='e_xf'), TensorSpec(shape=(None, 16), dtype=tf.float32, name='f_xf'), TensorSpec(shape=(None, 26), dtype=tf.float32, name='g_xf'), TensorSpec(shape=(None, 4), dtype=tf.float32, name='h_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='i_xf'), TensorSpec(shape=(None, 11), dtype=tf.float32, name='j_xf'), TensorSpec(shape=(None, 6), dtype=tf.float32, name='k_xf'), TensorSpec(shape=(None, 2), dtype=tf.float32, name='l_xf')]
Argument #2
DType: bool
Value: False
Argument #3
DType: NoneType
Value: None
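Given the signature above, each instance would typically be a JSON object keyed by input tensor name, with each list's length matching the second dimension of the corresponding TensorSpec. A sketch with placeholder values (the string inputs would first have to be transformed into these float features):
{
  "instances": [
    {
      "a_xf": [0.1, 0.2, 0.3],
      "b_xf": [0.1, 0.2],
      "c_xf": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    }
  ]
}
The remaining inputs (d_xf through l_xf) follow the same pattern, with list lengths 12, 4, 16, 26, 4, 2, 11, 6, and 2 respectively.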
[Figures: Loss Output; Epoch 996 Output]
I have been working on a deep convolutional generative adversarial network (DCGAN) that generates pictures of cats (RGB, 64x64 pixels). It seems to learn rather quickly: the images are clearly cats by around the 300th epoch. For some reason, though, even after 1000 epochs they still have a good amount of blur, which keeps them from reaching full sharpness. I am almost certain the issue is in my generator's network structure, so I have attached it below.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(8*8*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((8, 8, 256)))
assert model.output_shape == (None, 8, 8, 256)
model.add(layers.Conv2DTranspose(256, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 8, 8, 256)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 16, 16, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 32, 32, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
# activation is tanh so we don't squash the negatives we've been keeping through LeakyReLU
model.add(layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
print("model.output_shape=", model.output_shape)
assert model.output_shape == (None, 64, 64, 3)

[Images: https://i.stack.imgur.com/eICCQ.png, https://i.stack.imgur.com/pfMY5.png]
I suspect the problem results from artifacts generated by my use of Conv2DTranspose layers, but is it worth switching to upsampling followed by a convolutional layer (a sketch of that swap follows below)? I feel like the network would do less learning that way.
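For reference, the swap being considered would look roughly like the following. This is a minimal sketch, not a tested fix: the filter count mirrors the 128-filter block above, and nearest-neighbor upsampling followed by a regular convolution is the combination commonly recommended (e.g. in Odena et al., "Deconvolution and Checkerboard Artifacts") to reduce checkerboard artifacts from strided transposed convolutions:
# Replace a strided Conv2DTranspose block with upsampling + same-padding
# convolution; the output spatial shape is identical.
model.add(layers.UpSampling2D((2, 2)))
model.add(layers.Conv2D(128, (5, 5), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
Because the kernel is applied after upsampling, each output pixel is computed from evenly spaced inputs, which avoids the uneven-overlap pattern that strided deconvolutions can produce.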
I have my model (a VGG16, but the exact architecture is not important). I want to inspect only some parameters of my network, for example the first ones.
To do this I do list(model.parameters()) and it prints all the parameters.
Now, considering that a VGG has this shape:
VGG16(
(block_1): Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
(3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU()
(6): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
)
...
If I want only the weights of the first convolution, I do this: list(model.block_1[0].parameters()) and it prints this:
[Parameter containing:
tensor([[[[-0.3215, -0.0771, 0.4429],
[-0.6455, -0.0827, -0.4266],
[-0.2029, -0.2288, 0.1696]]],
[[[ 0.5323, -0.2418, -0.1031],
[ 0.5917, 0.2669, -0.5630],
[ 0.3064, -0.4984, -0.1288]]],
[[[ 0.3804, 0.0906, -0.2116],
[ 0.2659, -0.3325, -0.1873],
[-0.5044, 0.0900, 0.1386]]],
Now, these lists are always enormous. How can I print only the first values, for example, the first matrix?
[[[[-0.3215, -0.0771, 0.4429],
[-0.6455, -0.0827, -0.4266],
[-0.2029, -0.2288, 0.1696]]]
Once you get at the underlying tensor, you can index it like a NumPy array. In your example, this should work:
from torchvision import models
model = models.vgg16()
first_param = list(model.features[0].parameters())[0].data
first_param will then hold the tensor:
tensor([[[[-0.3215, -0.0771, 0.4429],
[-0.6455, -0.0827, -0.4266],
[-0.2029, -0.2288, 0.1696]]],
[[[ 0.5323, -0.2418, -0.1031],
[ 0.5917, 0.2669, -0.5630],
[ 0.3064, -0.4984, -0.1288]]],
[[[ 0.3804, 0.0906, -0.2116],
[ 0.2659, -0.3325, -0.1873],
[-0.5044, 0.0900, 0.1386]]]
Then just index it as you would a NumPy array:
print(first_param[0])
>> tensor([[[-0.3215, -0.0771, 0.4429],
[-0.6455, -0.0827, -0.4266],
[-0.2029, -0.2288, 0.1696]]])
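If you need an actual NumPy array rather than a tensor, a minimal sketch (the name first_kernel is mine; .cpu() only matters if the model lives on a GPU):
first_kernel = first_param[0].cpu().numpy()  # first filter, as a NumPy array
print(first_kernel)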
You can slice TensorFlow tensors with the same syntax as Python lists. For example:
import tensorflow as tf
tensor = tf.constant([[[[-0.3215, -0.0771, 0.4429],
[-0.6455, -0.0827, -0.4266],
[-0.2029, -0.2288, 0.1696]]],
[[[ 0.5323, -0.2418, -0.1031],
[ 0.5917, 0.2669, -0.5630],
[ 0.3064, -0.4984, -0.1288]]],
[[[ 0.3804, 0.0906, -0.2116],
[ 0.2659, -0.3325, -0.1873],
[-0.5044, 0.0900, 0.1386]]]])
print(tensor[0, :])
This will give you the first matrix from your example, together with the related shape information. If you want to get rid of this shape information, you can convert the sliced tensor into a NumPy array, e.g. print(np.array(tensor[0, :])) (after import numpy as np).
I am looking for an approach to train on hyperspectral image data in TensorFlow.
The training samples are encoded in CSV and have arbitrary x-y dimensions but a constant depth.
The data looks like this:
Sample1.csv: 50x4x220 (Row 1-50 is supposed to be aligned with row 51-100, 101-150, and 151-200)
Sample2.csv: 18x71x220 (Row 1-18 is supposed to be aligned with row 19-36, etc.)
Sample3.csv: 33x41x220 (same as above)
....
Sample100.csv: 15x8x220 (same as above)
Is there any project example that I can use? Thanks in advance.
Here is a survey on DL algorithms used to classify hyperspectral data.
Since you have data of varying sizes, you will have to create patches; you won't be able to feed differently sized inputs directly.
For example, you could feed patches of (16, 16, 220) to your network (a patch-extraction sketch follows the model code below).
I worked on a CNN with multispectral images. I had fewer bands than you have, the patch size was obviously important, and I used a U-Net for image segmentation.
Edit: here is an example using (None, None, 220) as the input shape:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Dense, GlobalMaxPooling2D
from keras.optimizers import Adam

model = Sequential()
# this applies 32 convolution filters of size 3x3 each.
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(None, None, 220)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
# model.add(Flatten())
# Flatten is replaced by a global pooling layer, which works with variable
# spatial dimensions (Flatten would require a fixed input size):
model.add(GlobalMaxPooling2D())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
adam = Adam(lr=1e-4)
model.compile(loss='categorical_crossentropy', optimizer=adam)
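To go with the patching suggestion above, here is a minimal sketch of cutting fixed-size patches out of a variable-sized cube. The function name and stride parameter are my own, and it assumes each sample has already been reshaped from its CSV layout into an (H, W, 220) NumPy array:
import numpy as np

def extract_patches(cube, size=16, stride=16):
    # cube: array of shape (H, W, bands); yields (size, size, bands) patches
    h, w, _ = cube.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            yield cube[i:i + size, j:j + size, :]

# Example with a dummy cube the size of Sample3.csv (33x41x220):
sample = np.zeros((33, 41, 220), dtype=np.float32)
patches = list(extract_patches(sample))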
I'm trying to build an autoencoder using Keras, based on this example from the docs. Because my data is large, I'd like to use a generator to avoid loading it into memory.
My model looks like:
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, UpSampling2D

model = Sequential()
model.add(Convolution2D(16, 3, 3, activation='relu', border_mode='same', input_shape=(3, 256, 256)))
model.add(MaxPooling2D((2, 2), border_mode='same'))
model.add(Convolution2D(8, 3, 3, activation='relu', border_mode='same'))
model.add(MaxPooling2D((2, 2), border_mode='same'))
model.add(Convolution2D(8, 3, 3, activation='relu', border_mode='same'))
model.add(MaxPooling2D((2, 2), border_mode='same'))
model.add(Convolution2D(8, 3, 3, activation='relu', border_mode='same'))
model.add(UpSampling2D((2, 2)))
model.add(Convolution2D(8, 3, 3, activation='relu', border_mode='same'))
model.add(UpSampling2D((2, 2)))
model.add(Convolution2D(16, 3, 3, activation='relu'))
model.add(UpSampling2D((2, 2)))
model.add(Convolution2D(1, 3, 3, activation='sigmoid', border_mode='same'))
model.compile(optimizer='adadelta', loss='binary_crossentropy')
My generator:
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory('IMAGE DIRECTORY', color_mode='rgb', class_mode='binary', batch_size=32, target_size=(256, 256))
And then fitting the model:
model.fit_generator(
train_generator,
samples_per_epoch=1,
nb_epoch=1,
verbose=1,
)
I'm getting this error:
Exception: Error when checking model target: expected convolution2d_76 to have 4 dimensions, but got array with shape (32, 1)
That looks like the size of my batch rather than a sample. What am I doing wrong?
The error is most likely due to class_mode='binary'. It makes the generator produce binary labels, so the target has shape (batch_size, 1), while your model produces a four-dimensional output (since the last layer is a convolution).
I guess that you want your label to be the image itself. Based on the source of flow_from_directory and the DirectoryIterator it uses, this is impossible to achieve by just changing class_mode. A possible solution would be along the lines of:
train_iterator_ = train_datagen.flow_from_directory('IMAGE DIRECTORY', color_mode='rgb', class_mode=None, batch_size=32, target_size=(256, 256))
def train_generator():
for x in train_iterator_:
yield x, x
Note that I set class_mode to None. It makes the iterator return just the image instead of the tuple (image, label). I then define a new generator that yields the image as both the input and the target.
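You would then pass the called generator to fit_generator, mirroring the original call:
model.fit_generator(
train_generator(),
samples_per_epoch=1,
nb_epoch=1,
verbose=1,
)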
So I am working on an Octave script (I am relatively inexperienced with the language), and I am trying to open two CSV files whose names I pass to my script as command-line arguments. Here is my script:
#!/usr/bin/env octave
function plotregs(fig, regs)
figure(fig);
title('Foo');
xlabel('Value');
ylabel('Cycle #');
grid on;
plot(rows(regs(:, 1)), regs(:, 1),
rows(regs(:, 2)), regs(:, 2),
rows(regs(:, 3)), regs(:, 3),
rows(regs(:, 4)), regs(:, 4),
rows(regs(:, 5)), regs(:, 5),
rows(regs(:, 6)), regs(:, 6),
rows(regs(:, 7)), regs(:, 7),
rows(regs(:, 8)), regs(:, 8));
legend('A', 'B', 'C', 'D', 'E', 'F', 'H', 'L');
endfunction
args = argv ();
filename = strcat(cellstr(args(1)));
typeinfo filename
regs = csvread(filename);
graphics_toolkit("gnuplot");
plotregs(1, regs);
filename = strcat(cellstr(args(2)));
regs = csvread(filename);
plotregs(2, regs);
pause
And here is the output I get when I run the script:
ans = sq_string
error: dlmread: FILE argument must be a string or file id
error: called from:
error: /usr/share/octave/3.4.3/m/io/csvread.m at line 34, column 5
error: /home/tnecniv/Code/Octave/regigraph/regigraph.m at line 25, column 6
Any advice would be appreciated
The problem is that you have created an executable Octave script that expects arguments, yet you are not providing any.
First of all I would start the file as
#!/usr/bin/octave -qf
Then one could run the script as
$ ./myscript.sh datafile1.csv datafile2.csv
But in my opinion argv() behaves a bit strangely: when no arguments are given to, say, myscript.sh, it returns the filename of the executing script, but when one or more arguments are given, it contains only the arguments.
You can refer to Section 2.6 of the documentation for "Executable Octave Programs".
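Separately, a minimal sketch of safer argument handling (the usage message is my own). Indexing the cell array with curly braces yields the string directly, which avoids the cellstr/strcat construction in the original script that ends up handing a cell array to csvread:
args = argv ();
if numel (args) < 2
  error ("usage: regigraph.m FILE1 FILE2");
endif
filename = args{1};  % curly braces extract the string from the cell
regs = csvread (filename);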