I am looking for a way to numerically evaluate the results of my U-Net-like CNN.
The CNN is trained to remove artifacts from grayscale images. It receives a "9-channel" grayscale image containing artifacts in each channel (9 grayscale images with partially redundant data but different artifacts, concatenated along the channel axis; dimensions [numTrainInputs, 512, 512, 9]) as input, and should output a single artifact-free grayscale image with dimensions [numTrainInputs, 512, 512, 1]. The CNN is trained in Keras with MSE as the loss function and Adam as the optimizer. So far, so good.
Visually the CNN produces good results when compared to an artifact-free "ground truth" image (dimensions [numTrainInputs, 512, 512, 1]), but the accuracy during training remains at 0%. I think this is because none of the result images matches the ground truth pixel for pixel, right?
But how can I numerically evaluate the results? I searched for numerical evaluation methods in the field of autoencoders but couldn't find a proper approach. Can someone give me a hint?
The CNN looks like this:
input_1 = Input((X_train.shape[1],X_train.shape[2], X_train.shape[3]))
conv1 = Conv2D(16, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(input_1)
conv2 = Conv2D(32, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(conv1)
conv3 = Conv2D(64, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(conv2)
conv4 = Conv2D(128, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(conv3)
conv5 = Conv2D(256, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(conv4)
conv6 = Conv2D(512, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(conv5)
upconv1 = Conv2DTranspose(256, (3,3), strides=(1,1), activation='elu', use_bias=True, padding='same')(conv6)
upconv2 = Conv2DTranspose(128, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(upconv1)
upconv3 = Conv2DTranspose(64, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(upconv2)
upconv3_1 = concatenate([upconv3, conv4], axis=3)
upconv4 = Conv2DTranspose(32, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(upconv3_1)
upconv4_1 = concatenate([upconv4, conv3], axis=3)
upconv5 = Conv2DTranspose(16, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(upconv4_1)
upconv5_1 = concatenate([upconv5,conv2], axis=3)
upconv6 = Conv2DTranspose(8, (3,3), strides=(2,2), activation='elu', use_bias=True, padding='same')(upconv5_1)
upconv6_1 = concatenate([upconv6,conv1], axis=3)
upconv7 = Conv2DTranspose(1, (3,3), strides=(2,2), activation='linear', use_bias=True, padding='same')(upconv6_1)
model = Model(outputs=upconv7, inputs=input_1)
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=1, epochs=100, shuffle=True, validation_split=0.01, callbacks=[tbCallback])
Thank you very much for your help!
You are using the wrong metric for this problem.
In regression, 'accuracy' as a metric makes no sense.
Change it to MSE, for example:
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_squared_error'])
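Beyond MSE or MAE, results of image-restoration networks are commonly scored with image-quality metrics such as PSNR and SSIM. A minimal sketch of how to compute them, assuming a TensorFlow backend with eager execution and outputs scaled to [0, 1]; X_test and Y_test stand in for a held-out evaluation split:
import tensorflow as tf
# Compare the network output against the artifact-free ground truth.
# Both tensors have shape [numSamples, 512, 512, 1] with values in [0, 1].
y_pred = model.predict(X_test)
psnr = tf.image.psnr(y_pred, Y_test, max_val=1.0)   # higher is better
ssim = tf.image.ssim(y_pred, Y_test, max_val=1.0)   # 1.0 means identical images
print(float(tf.reduce_mean(psnr)), float(tf.reduce_mean(ssim)))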
The model returns an accuracy of 36.5% during the fit phase and only 14.5% in the predict phase, despite the fact that I am considering the same data (val_ds).
What am I doing wrong?
model = tf.keras.Sequential([
tf.keras.layers.Rescaling(1./255, input_shape=(200, 200, 3)),
tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu',
kernel_regularizer=regularizers.l2(l=0.01)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu',
kernel_regularizer=regularizers.l2(l=0.01)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Dropout(0.4),
tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu',
kernel_regularizer=regularizers.l2(l=0.01)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(8, activation='softmax')
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['sparse_categorical_accuracy'])
early_stop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=1, mode='auto')
epochs=40
history = model.fit(
train_ds,
validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs
)
val_ds --> <class 'tensorflow.python.data.ops.dataset_ops.SkipDataset'>
train_ds --> <class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>
cnn1_pred = model.predict(val_ds)
cnn1_pred = cnn1_pred.argmax(axis=-1)  # predicted class index per sample
val_label = np.concatenate([y for x, y in val_ds], axis=0)  # collect labels batch by batch
count = 0
for n in range(3384):  # 3384 = number of validation samples
    if val_label[n] == cnn1_pred[n]:
        count += 1
perf = round(count/3384, 4)  # manually computed accuracy
EDIT: I noticed that if I run
val_label = np.concatenate([y for x, y in val_ds], axis=0)
print(val_label)
I always obtain different results. This shouldn't happen, I guess.
Did you check with model.evaluate(val_ds)? Try it; maybe something is going wrong in the manual accuracy calculation.
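If the labels come out different on every pass, the dataset upstream of val_ds is probably being reshuffled on each iteration (shuffle defaults to reshuffle_each_iteration=True), so the labels you collect no longer line up with predictions made in a separate pass. A minimal sketch of one way to rule this out; full_ds, buffer_size and train_size are illustrative placeholders for however the pipeline was actually built:
# Freeze the shuffle order so repeated passes over the dataset
# yield the elements in the same order every time.
full_ds = full_ds.shuffle(buffer_size=1000, reshuffle_each_iteration=False)
val_ds = full_ds.skip(train_size).batch(32)
# With a stable order, the built-in evaluation and the manual
# accuracy computation should now agree.
model.evaluate(val_ds)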
input_shape = (100, 100, 6)
input_tensor = keras.Input(input_shape)  # defined but unused below
model = Sequential()  # missing in the original snippet
model.add(Conv2D(32, 3, padding='same', activation='relu', input_shape=input_shape))
model.add((Conv1D(filters=32, kernel_size=2, activation='relu', padding='same')))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv2D(64, 3, padding='same', activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv2D(128, 3, padding='same', activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=['accuracy'])
training_set = train_datagen.flow_from_directory('/content/gdrive/My Drive/Data/training_set',
target_size = (128, 128),
batch_size = 32,
class_mode = 'categorical')
history=model.fit(training_set,
steps_per_epoch=nb_train_images//batch_size,
epochs=100,
validation_data=test_set,
validation_steps=nb_test_images//batch_size,
callbacks=callbacks)
history=model.fit(training_set,
steps_per_epoch=nb_train_images//batch_size,
epochs=40,
validation_data=test_set,
validation_steps=nb_test_images//batch_size,
callbacks=callbacks)
I have 6 different classes to classify. Where am I going wrong? I added the input shape above, where I mentioned (100, 100, 6). Can someone help me understand this issue?
This was happening to me too. The following is my code. The way I fixed it was that instead of hard-coding some different input shape, I simply made the input shape the training data's image shape.
train = image_gen.flow_from_directory(
    train_path,
    target_size=(500, 500),
    color_mode='grayscale',
    class_mode='binary',
    batch_size=16
)
# and then later, when I build the model
model.add(Conv2D(filters[0], (5, 5), padding='same', kernel_regularizer=l2(0.001), activation='relu', input_shape=train.image_shape))
# the important part is input_shape=train.image_shape
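For reference, with the generator above (color_mode='grayscale', target_size=(500, 500)), train.image_shape evaluates to (500, 500, 1), so the Conv2D input shape is guaranteed to match the data.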
I have built two models. The first works fine; the code is:
model = Sequential()
model.add(Dense(25, input_dim=8, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(25, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(25, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='tanh'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='mean_squared_error', optimizer=opt)
history = model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1000, verbose=1)
train_mse = model.evaluate(X_train, y_train, verbose=0)
test_mse = model.evaluate(X_test, y_test, verbose=0)
I believe the next model below, implemented using grid search, is identical. Except it doesn't seem to learn: the loss starts off astronomical and barely decreases. The code is:
def build_classifier():
    classifier = Sequential()
    classifier.add(Dense(25, input_dim=8, activation='relu', kernel_initializer='he_uniform'))
    classifier.add(Dense(25, activation='relu', kernel_initializer='he_uniform'))
    classifier.add(Dense(25, activation='relu', kernel_initializer='he_uniform'))
    classifier.add(Dense(1, activation='tanh'))
    opt = SGD(lr=0.01, momentum=0.9)
    classifier.compile(loss='mean_squared_error', optimizer=opt, metrics=["accuracy"])
    return classifier
classifier = KerasClassifier(build_fn=build_classifier)
parameters = {'epochs': [1000]}
grid_search = GridSearchCV(estimator=classifier,
                           param_grid=parameters,
                           cv=5)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
I am aware I am not actually tuning any hyperparameters here. I kept reducing and removing the hyperparameters I was tuning, in the hope I could identify the problem.
I have categorical cross-entropy models using the same data that do not have this problem (making sure the model is reasonable with a train/test split, then remaking it with grid search for tuning hyperparameters).
I cannot see what difference between the two models allows the first to learn reasonably but the second not to work at all.
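One concrete difference is visible in the two snippets: the first model is trained and evaluated as a regression (MSE loss, model.evaluate), while the second wraps the same network in KerasClassifier with an accuracy metric, so GridSearchCV treats it as a classifier. A minimal sketch of the regression-style wrapper instead, assuming the scikit-learn wrapper API shipped with older Keras versions:
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
# Score the network with a regression metric rather than classification accuracy.
regressor = KerasRegressor(build_fn=build_classifier)
grid_search = GridSearchCV(estimator=regressor,
                           param_grid={'epochs': [1000]},
                           scoring='neg_mean_squared_error',
                           cv=5)
grid_search = grid_search.fit(X_train, y_train)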
I am having trouble reshaping the layer before feeding it through the deconvolution. I don't know how to reverse the Flatten layer in the convolutional encoder.
Thanks for the help!
def build_deep_autoencoder(img_shape, code_size):
    H, W, C = img_shape
    # encoder
    encoder = keras.models.Sequential()
    encoder.add(L.InputLayer(img_shape))
    encoder.add(L.Conv2D(32, (3,3), padding='same', activation='elu', name='layer_1'))
    encoder.add(L.MaxPooling2D((3,3), padding='same', name='max_pooling_1'))
    encoder.add(L.Conv2D(64, (3,3), padding='same', activation='elu', name='layer_2'))
    encoder.add(L.MaxPooling2D((3,3), padding='same', name='max_pooling_2'))
    encoder.add(L.Conv2D(128, (3,3), padding='same', activation='elu', name='layer_3'))
    encoder.add(L.MaxPooling2D((3,3), padding='same', name='max_pooling_3'))
    encoder.add(L.Conv2D(256, (3,3), padding='same', activation='elu', name='layer_4'))
    encoder.add(L.MaxPooling2D((3,3), padding='same', name='max_pooling_4'))
    encoder.add(L.Flatten())
    encoder.add(L.Dense(256))
    # decoder
    decoder = keras.models.Sequential()
    decoder.add(L.InputLayer((code_size,)))
    decoder.add(L.Dense(256))
    decoder.add(L.Conv2DTranspose(filters=128, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
    decoder.add(L.Conv2DTranspose(filters=64, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
    decoder.add(L.Conv2DTranspose(filters=32, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
    decoder.add(L.Conv2DTranspose(filters=3, kernel_size=(3, 3), strides=2, activation=None, padding='same'))  # 'none' is not a valid activation string; None gives a linear output
    return encoder, decoder
In your decoder, instead of the plain dense layer of 256, project the code back up and un-flatten it:
decoder.add(L.Dense(2*2*256))        # project the code vector back to a flattened feature map
decoder.add(L.Reshape((2,2,256)))    # un-flatten: the reverse of Flatten()
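For context, a minimal sketch of the corrected decoder as a whole, under the assumption that the encoder compresses the image to a 2x2x256 feature map (so the four stride-2 transposed convolutions upsample 2 -> 4 -> 8 -> 16 -> 32):
decoder = keras.models.Sequential()
decoder.add(L.InputLayer((code_size,)))
decoder.add(L.Dense(2*2*256))        # project the code vector back up
decoder.add(L.Reshape((2, 2, 256)))  # un-flatten into a spatial feature map
decoder.add(L.Conv2DTranspose(filters=128, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=64, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=32, kernel_size=(3, 3), strides=2, activation='elu', padding='same'))
decoder.add(L.Conv2DTranspose(filters=3, kernel_size=(3, 3), strides=2, activation=None, padding='same'))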
I am new to DL and Keras. Currently I am trying to implement a U-Net-like CNN, and now I want to include batch normalization layers in my non-sequential model, but I do not really know how.
This is my current attempt to include it:
input_1 = Input((X_train.shape[1],X_train.shape[2], X_train.shape[3]))
conv1 = Conv2D(16, (3,3), strides=(2,2), activation='relu', padding='same')(input_1)
batch1 = BatchNormalization(axis=3)(conv1)
conv2 = Conv2D(32, (3,3), strides=(2,2), activation='relu', padding='same')(batch1)
batch2 = BatchNormalization(axis=3)(conv2)
conv3 = Conv2D(64, (3,3), strides=(2,2), activation='relu', padding='same')(batch2)
batch3 = BatchNormalization(axis=3)(conv3)
conv4 = Conv2D(128, (3,3), strides=(2,2), activation='relu', padding='same')(batch3)
batch4 = BatchNormalization(axis=3)(conv4)
conv5 = Conv2D(256, (3,3), strides=(2,2), activation='relu', padding='same')(batch4)
batch5 = BatchNormalization(axis=3)(conv5)
conv6 = Conv2D(512, (3,3), strides=(2,2), activation='relu', padding='same')(batch5)
drop1 = Dropout(0.25)(conv6)
upconv1 = Conv2DTranspose(256, (3,3), strides=(1,1), padding='same')(drop1)
upconv2 = Conv2DTranspose(128, (3,3), strides=(2,2), padding='same')(upconv1)
upconv3 = Conv2DTranspose(64, (3,3), strides=(2,2), padding='same')(upconv2)
upconv4 = Conv2DTranspose(32, (3,3), strides=(2,2), padding='same')(upconv3)
upconv5 = Conv2DTranspose(16, (3,3), strides=(2,2), padding='same')(upconv4)
upconv5_1 = concatenate([upconv5,conv2], axis=3)
upconv6 = Conv2DTranspose(8, (3,3), strides=(2,2), padding='same')(upconv5_1)
upconv6_1 = concatenate([upconv6,conv1], axis=3)
upconv7 = Conv2DTranspose(1, (3,3), strides=(2,2), activation='linear', padding='same')(upconv6_1)
model = Model(outputs=upconv7, inputs=input_1)
Is batch normalization used in the right way here? In the Keras documentation I read that you typically want to normalize the "features" axis.
This is a short snippet out of the model summary:
====================================================================================================
input_1 (InputLayer) (None, 512, 512, 9) 0
____________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 256, 256, 16) 1312 input_1[0][0]
____________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 128, 128, 32) 4640 conv2d_1[0][0]
____________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 64) 18496 conv2d_2[0][0]
____________________________________________________________________________________________________
In this case my features axis is axis 3 (counting from 0), right?
I have read discussions about whether batch normalization should be applied before or after the activation function. In this case it is used after the activation function, right? Is there a way to use it before the activation function?
Thank you very much for your help and feedback! Really appreciate it!
Part 1: Is the batch normalization used in the right way?
The way you've called the BatchNormalization layer is correct; axis=3 is what you want, as recommended by the documentation.
Keep in mind that in the case of your model, axis=3 is equivalent to the default setting, axis=-1, so you do not need to set it explicitly.
Part 2: In this case it is used after the activation function, right? Is there a possibility to use it before the activation function?
Yes, your code applies batch normalization after the activation on each convolutional layer, since the activation is fused into the Conv2D call. Strictly speaking, the 2015 paper by Ioffe and Szegedy inserts the normalization before the nonlinearity as a means of reducing internal covariate shift, but applying it after the activation is also widespread in practice and works well. Used this way, it can be thought of as a "pre-processing step" for the information before it reaches the next layer as input.
For that reason, batch normalization can also serve as a data pre-processing step, which you can use immediately after your input layer (as discussed in this response). However, as that answer mentions, batchnorm should not be abused; it is computationally expensive and can force your model into approximately linear behavior (this answer goes into more detail on this issue).
Using batchnorm at some other point in the model (not immediately before or after an activation, and not on the inputs) would have poorly understood effects on model performance; it is a process designed to normalize the tensors flowing into or out of the network's nonlinearities.
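If you want to try the pre-activation placement from the paper, a minimal sketch: drop the activation argument from Conv2D and add an explicit Activation layer after the normalization (names mirror the question's code; assumes the same Keras imports plus Activation):
from keras.layers import Activation
# Pre-activation batchnorm: convolution -> normalization -> nonlinearity
conv1 = Conv2D(16, (3,3), strides=(2,2), padding='same')(input_1)  # no fused activation here
batch1 = BatchNormalization(axis=3)(conv1)
act1 = Activation('relu')(batch1)
conv2 = Conv2D(32, (3,3), strides=(2,2), padding='same')(act1)
batch2 = BatchNormalization(axis=3)(conv2)
act2 = Activation('relu')(batch2)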
In my experience with U-Nets, I've had a lot of success applying batchnorm only after the convolutional layers before max pooling; this effectively doubles the computational "bang for my buck" on normalization, since those tensors are reused in the U-Net architecture. Aside from that, I don't use batchnorm (except maybe on the inputs, if the mean pixel intensities per image are highly heterogeneous).
Note that axis=3 is the same as axis=-1, which is the default parameter.