I have a convolutional neural network solving a binary image classification problem (2 classes), with a sigmoid output.
To evaluate the model I use:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

path_dir = '../../dataset/train'
path_dir_test = '../../dataset/test'

datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2)

test_set = datagen.flow_from_directory(path_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary')

score = classifier.evaluate(test_set, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
And it outputs:
When I try to print the classification report I use:
from sklearn.metrics import classification_report

yhat_classes = classifier.predict_classes(test_set, verbose=0)
yhat_classes = yhat_classes[:, 0]
print(classification_report(test_set.classes, yhat_classes))
But now I get this accuracy:
If I print test_set.classes, the first 344 entries of the array are 0 and the next 344 are 1. Is the test_set shuffled before being fed into the network?
I think your model is doing just fine in both training and evaluation. The evaluation accuracy is computed from predictions, so you may be making a logical mistake when using model.predict_classes(). Please check that you are using the trained model weights and not a randomly initialized model when evaluating.
What evaluate does: model.evaluate() computes the loss and any other compiled metrics of your trained model on the given data. Its output is accuracy or loss, not predictions for your input data!
What predict does: model.predict() generates output predictions for the input samples; its output is the predicted target values for your input data.
FYI: if your accuracy in a binary classification problem is below 50%, it is worse than randomly predicting one of the two classes (acc = 50%)!
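To make the difference concrete, here is a minimal sketch (assuming the model and test_set from the question):
score = classifier.evaluate(test_set, verbose=0)  # -> [loss, accuracy], no predictions
probs = classifier.predict(test_set, verbose=0)   # -> one sigmoid output per image, shape (n, 1)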
I needed to add shuffle=False. The code that works is:
test_set = datagen.flow_from_directory(path_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary',
                                       shuffle=False)
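With shuffle=False, the order of test_set.classes now matches the order of the predictions. As a side note, in newer Keras versions Sequential.predict_classes has been removed; a rough equivalent of the report (assuming the same trained classifier) is:
from sklearn.metrics import classification_report

probs = classifier.predict(test_set, verbose=0)  # sigmoid outputs in [0, 1]
yhat_classes = (probs[:, 0] > 0.5).astype(int)   # threshold at 0.5
print(classification_report(test_set.classes, yhat_classes))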
I can't find the problem in my code. I'm training a GAN; the GAN loss and the discriminator loss are both very low (around 0.04), and it seems to be converging well, but (a) the generated pictures don't look very good, and the actual problem is that (b) somehow gan.predict(noise) is very close to 1, while discriminator.predict(generator(noise)) is very close to 0, although the two are supposed to be identical. Here's my code:
Generator code:
def create_generator():
    generator = tf.keras.Sequential()
    #generator.add(layers.Dense(units=50176, input_dim=25))
    generator.add(layers.Dense(units=12544, input_dim=100))
    #generator.add(layers.Dropout(0.2))
    generator.add(layers.Reshape([112, 112, 1]))  # 112,112
    generator.add(layers.Conv2D(32, kernel_size=3, padding='same', activation='relu'))
    generator.add(layers.UpSampling2D())  # 224,224
    generator.add(layers.Conv2D(1, kernel_size=4, padding='same', activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=adam_optimizer())
    return generator

g = create_generator()
g.summary()
Discriminator code:
# IMAGE DISCRIMINATOR
def create_discriminator():
    discriminator = tf.keras.Sequential()
    discriminator.add(layers.Conv2D(64, kernel_size=2, padding='same', activation='relu', input_shape=[224, 224, 1]))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(32, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(16, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(8, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(1, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Flatten())
    discriminator.add(layers.Dense(units=1, activation='sigmoid'))
    discriminator.compile(loss='binary_crossentropy', optimizer=tf.optimizers.Adam(lr=0.0002))
    return discriminator

d = create_discriminator()
d.summary()
GAN code:
def create_gan(discriminator, generator):
    discriminator.trainable = False
    gan_input = tf.keras.Input(shape=(100,))
    x = generator(gan_input)
    gan_output = discriminator(x)
    gan = tf.keras.Model(inputs=gan_input, outputs=gan_output)
    #gan.compile(loss='binary_crossentropy', optimizer='adam')
    gan.compile(loss='binary_crossentropy', optimizer=adam_optimizer())
    return gan

gan = create_gan(d, g)
gan.summary()
Training code (I purposely don't call train_on_batch on the gan because I wanted to see whether the gradients zero out):
# @tf.function
def training(epochs=1, batch_size=128, rounds=50):
    batch_count = X_bad.shape[0] / batch_size
    # Creating GAN
    generator = create_generator()
    discriminator = create_discriminator()
    ########### if you want to continue training an already trained gan
    #discriminator.set_weights(weights)
    gan = create_gan(discriminator, generator)
    start = time.time()
    for e in range(1, epochs + 1):
        #print("Epoch %d" % e)
        #for _ in tqdm(range(batch_size)):
        # generate random noise as an input to initialize the generator
        noise = np.random.normal(0, 1, [batch_size, 100])
        # Generate fake MNIST images from noised input
        generated_images = generator.predict(noise)
        #print('gen im shape: ', np.shape(generated_images))
        # Get a random set of real images
        image_batch = X_bad[np.random.randint(low=0, high=X_bad.shape[0], size=batch_size)]
        #print('im batch shape: ', image_batch.shape)
        # Construct different batches of real and fake data
        X = np.concatenate([image_batch, generated_images])
        # Labels for generated and real data
        y_dis = np.zeros(2 * batch_size)
        y_dis[:batch_size] = 0.99
        # Pre-train discriminator on fake and real data before starting the gan.
        discriminator.trainable = True
        discriminator.train_on_batch(X, y_dis)
        # Tricking the noised input of the generator as real data
        noise = np.random.normal(0, 1, [batch_size, 100])
        y_gen = np.ones(batch_size)
        # During the training of the gan,
        # the weights of the discriminator should be fixed.
        # We can enforce that by setting the trainable flag.
        discriminator.trainable = False
        # Training the GAN by alternating the training of the discriminator
        # and training the chained GAN model with the discriminator's weights frozen.
        #gan.train_on_batch(noise, y_gen)
        with tf.GradientTape() as tape:
            pred = gan(noise)
            loss_val = tf.keras.losses.mean_squared_error(y_gen, pred)
            # loss_val = gan.test_on_batch(noise, y_gen)
            # loss_val = tf.cast(loss_val, dtype=tf.float32)
        grads = tape.gradient(loss_val, gan.trainable_variables)
        optimizer.apply_gradients(zip(grads, gan.trainable_variables))
        if e == 1 or e % rounds == 0:
            end = time.time()
            loss_value = discriminator.test_on_batch(X, y_dis)
            print("Epoch {:03d}: Loss: {:.3f}".format(e, loss_value))
            gen_loss = gan.test_on_batch(noise, y_gen)
            print('gan loss: ', gen_loss)
            #print('Epoch: ', e, ' Loss: ')
            print('Time for ', rounds, ' epochs: ', end - start, ' seconds')
            local_time = time.ctime(end)
            print('Printing time: ', local_time)
            plot_generated_images(e, generator, examples=5)
            start = time.time()
    return discriminator, generator, grads
Now, the final losses after around 2000 epochs are 0.039 for the generator and 0.034 for the discriminator. But look at what I get when I run the following:
print('disc for bad train: ', np.mean(discriminator(X_bad[:50])))
# output
disc for bad train: 0.9995248

noise = np.random.normal(0, 1, [500, 100])
generated_images = generator.predict(noise)
print('disc for gen: ', np.mean(discriminator(generated_images[:50])))
print('gan for gen: ', np.mean(gan(noise[:50])))
# output
disc for gen: 0.0018724388
gan for gen: 0.96554756
Can anyone find the problem?
Thanks!
If anyone stumbles upon this, I figured it out (although I haven't fixed it yet). What's happening is that my GAN object trains only the generator weights. The discriminator is a separate object that is trained on its own, but when I train the gan I never update the discriminator weights inside it; I only update the generator weights. So when I fed noise into the gan, the output was close to one, as I wanted. But when I fed the generated images into the standalone discriminator, the result was close to zero, because that discriminator was trained separately from the generator and the discriminator weights inside the gan were never changed at all. A better approach is to create the GAN as a class, so that you can access both the discriminator and the generator within the gan and update both sets of weights there.
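A minimal sketch of that class-based idea (the names here are illustrative, not taken from the original code): both sub-models live on one object, so the discriminator you inspect afterwards is the very same one the combined model uses.
import tensorflow as tf

class GAN(tf.keras.Model):
    def __init__(self, generator, discriminator):
        super().__init__()
        self.generator = generator
        self.discriminator = discriminator

    def call(self, noise):
        # noise -> generated image -> realness score, through the shared discriminator
        return self.discriminator(self.generator(noise))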
I want to do a binary classification, and I used DenseNet from PyTorch.
Here is my predict code:
densenet = torch.load(model_path)
densenet.eval()
output = densenet(input)
print(output)
And here is the output:
Variable containing:
54.4869 -54.3721
[torch.cuda.FloatTensor of size 1x2 (GPU 0)]
I want to get the probabilities of each class. What should I do?
I have noticed that torch.nn.Softmax() could be used when there are many categories, as discussed here.
import torch.nn as nn
Add a softmax layer to the classifier layer:
i.e. typical:
num_ftrs = model_ft.classifier.in_features
model_ft.classifier = nn.Linear(num_ftrs, num_classes)
updated:
model_ft.classifier = nn.Sequential(nn.Linear(num_ftrs, num_classes),
                                    nn.Softmax(dim=1))
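If you prefer to leave the trained model untouched, an alternative sketch is to apply the softmax only at inference time:
import torch.nn.functional as F

probs = F.softmax(output, dim=1)  # for logits like 54.49 and -54.37 this is essentially [1.0, 0.0]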
I am trying to train a very simple model for image recognition, nothing spectacular. My first attempt worked just fine, when I used image rescaling:
# this is the augmentation configuration to enhance the training dataset
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# validation generator, only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
Then I simply trained the model as such:
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
This works perfectly fine and leads to a reasonable accuracy. Then I thought it may be a good idea to try out mean subtraction, as the VGG16 model does. Instead of doing it manually, I chose to use ImageDataGenerator.fit(). For that, however, you need to supply it with the training images as NumPy arrays, so I first read the images, convert them, and then feed them into it:
train_datagen = ImageDataGenerator(
    featurewise_center=True,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(featurewise_center=True)

class_indices = {}  # filled in on the first directory walk

def process_images_from_directory(data_dir):
    x = []
    y = []
    for root, dirs, files in os.walk(data_dir, topdown=False):
        class_names = sorted(dirs)
        global class_indices
        if len(class_indices) == 0:
            class_indices = dict(zip(class_names, range(len(class_names))))
        for dir in class_names:
            filenames = os.listdir(os.path.join(root, dir))
            for file in filenames:
                img_array = img_to_array(load_img(os.path.join(root, dir, file), target_size=(224, 224)))[np.newaxis]
                if len(x) == 0:
                    x = img_array
                else:
                    x = np.concatenate((x, img_array))
                y.append(class_indices[dir])
    # this step converts an array of classes [0,1,2,3...] into one-hot vectors [1,0,0,0], [0,1,0,0], etc.
    y = np.eye(len(class_names))[y]
    return x, y

x_train, y_train = process_images_from_directory(train_data_dir)
x_valid, y_valid = process_images_from_directory(validation_data_dir)

nb_train_samples = x_train.shape[0]
nb_validation_samples = x_valid.shape[0]

train_datagen.fit(x_train)
test_datagen.mean = train_datagen.mean

train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=False)

validation_generator = test_datagen.flow(
    x_valid,
    y_valid,
    batch_size=batch_size,
    shuffle=False)
Then, I train the model the same way, simply giving it both iterators. After the training completes, the accuracy is basically stuck at ~25% even after 50 epochs:
80/80 [==============================] - 77s 966ms/step - loss: 12.0886 - acc: 0.2500 - val_loss: 12.0886 - val_acc: 0.2500
When I run predictions on the above model, it classifies only 1 out of the 4 total classes correctly; all images from the other 3 classes are classified as belonging to the first class. Clearly the 25% figure has something to do with this fact; I just can't figure out what I am doing wrong.
I realize that I could calculate the mean manually and then simply set it for both generators, or that I could use ImageDataGenerator.fit() and then still go with flow_from_directory, but that would waste the already processed images; I would be doing the same processing twice.
Any opinions on how to make it work with flow() all the way?
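For reference, the manual-mean alternative mentioned above might look roughly like this (a sketch, not tested against the rest of the pipeline):
# compute one mean per channel over the training images and share it
mean = x_train.mean(axis=(0, 1, 2))
train_datagen.mean = mean
test_datagen.mean = mean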
Did you try setting shuffle=True in your generators?
You did not specify shuffling in the first case (it should be True by default) and set it to False in the second case.
Your input data might be sorted by classes. Without shuffling, your model first only sees class #1 and simply learns to predict class #1 always. It then sees class #2 and learns to always predict class #2 and so on. At the end of one epoch your model learns to always predict class #4 and thus gives a 25% accuracy on validation.
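A minimal sketch of the suggested fix (shuffle=True is also the default for flow(), so dropping the argument works too):
train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=True)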
I am learning about designing Convolutional Neural Networks using Keras. I have developed a simple model using VGG16 as the base. I have about 6 classes of images in the dataset. Here are the code and description of my model.
model = models.Sequential()
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
conv_base.trainable = False
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.001)))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(6, activation='sigmoid'))
Here is the code for compiling and fitting the model:
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
model.summary()

callbacks = [
    EarlyStopping(monitor='acc', patience=1, mode='auto'),
    ModelCheckpoint(monitor='val_loss', save_best_only=True, filepath=model_file_path)
]

history = model.fit_generator(
    train_generator,
    steps_per_epoch=10,
    epochs=EPOCHS,
    validation_data=validation_generator,
    callbacks=callbacks,
    validation_steps=10)
Here is the code for predicting on a new image:
img = image.load_img(img_path, target_size=(IMAGE_SIZE, IMAGE_SIZE))
plt.figure(index)
imgplot = plt.imshow(img)
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
prediction = model.predict(x)[0]
# print(prediction)
The model.predict() method often predicts more than one class, for example:
[0 1 1 0 0 0]
I have a couple of questions:
Is it normal for a multiclass classification model to predict more than one output?
How is accuracy measured during training time if more than one class was predicted?
How can I modify the neural network so that only one class is predicted?
Any help is appreciated. Thank you so much!
You are not doing multi-class classification, but multi-label. This is caused by the use of a sigmoid activation at the output layer. To do multi-class classification properly, use a softmax activation at the output, which will produce a probability distribution over classes.
Taking the class with the biggest probability (argmax) will produce a single class prediction, as expected.
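A minimal sketch of that change, using the layer sizes from the question (the final Dense layer becomes softmax instead of sigmoid):
model.add(layers.Dense(6, activation='softmax'))  # replaces the sigmoid output layer

prediction = model.predict(x)[0]       # now a probability distribution over the 6 classes
predicted_class = prediction.argmax()  # single predicted class index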
In PyTorch, we can define architectures in multiple ways. Here, I'd like to create a simple LSTM network using the Sequential module.
In Lua's torch I would usually go with:
model = nn.Sequential()
model:add(nn.SplitTable(1,2))
model:add(nn.Sequencer(nn.LSTM(inputSize, hiddenSize)))
model:add(nn.SelectTable(-1)) -- last step of output sequence
model:add(nn.Linear(hiddenSize, classes_n))
However, in PyTorch, I don't find the equivalent of SelectTable to get the last output.
nn.Sequential(
    nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
    # what to put here to retrieve the last output of the LSTM?
    nn.Linear(hiddenSize, classe_n))
Define a class to extract the output of the last time step:
# LSTM() returns a tuple of (output tensor, (hidden state, cell state))
class extract_tensor(nn.Module):
    def forward(self, x):
        # output shape is (batch, seq_len, hidden), since batch_first=True
        tensor, _ = x
        # keep only the last step of the sequence: shape (batch, hidden)
        return tensor[:, -1, :]

nn.Sequential(
    nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
    extract_tensor(),
    nn.Linear(hiddenSize, classe_n)
)
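A quick sanity check of the sketch above, with illustrative sizes:
import torch
import torch.nn as nn

inputSize, hiddenSize, classe_n = 10, 20, 3
model = nn.Sequential(
    nn.LSTM(inputSize, hiddenSize, 1, batch_first=True),
    extract_tensor(),
    nn.Linear(hiddenSize, classe_n))
out = model(torch.randn(4, 7, inputSize))  # batch of 4 sequences of length 7
print(out.shape)  # torch.Size([4, 3])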
According to the LSTM documentation, the output tensor has shape (seq_len, batch, hidden_size * num_directions) when batch_first=False (the default), so you can easily take the last element of the sequence this way:
rnn = nn.LSTM(10, 20, 2)
input = Variable(torch.randn(5, 3, 10))
h0 = Variable(torch.randn(2, 3, 20))
c0 = Variable(torch.randn(2, 3, 20))
output, hn = rnn(input, (h0, c0))
print(output[-1]) # last element
Tensor manipulation and neural network design in PyTorch are considerably easier than in Torch, so you rarely have to use containers. In fact, as stated in the tutorial PyTorch for former Torch users, PyTorch is built around autograd, so you no longer need to worry about containers. However, if you want to reuse your old Lua Torch code, you can have a look at the Legacy package.
As far as I know, there is nothing like SplitTable or SelectTable in PyTorch. That said, you can compose an arbitrary number of modules or blocks within a single architecture, and you can use this property to retrieve the output of an intermediate layer. Let's make this clearer with a simple example.
Suppose I want to build a simple two-layer MLP and retrieve the output of each layer. I can build a custom class inheriting from nn.Module:
class MyMLP(nn.Module):
    def __init__(self, in_channels, out_channels_1, out_channels_2):
        # first of all, call the base class constructor
        super().__init__()
        # now I can build my modular network
        self.block1 = nn.Linear(in_channels, out_channels_1)
        self.block2 = nn.Linear(out_channels_1, out_channels_2)

    # you MUST implement a forward(input) method whenever inheriting from nn.Module
    def forward(self, x):
        # first_out is the output of the first block
        first_out = self.block1(x)
        x = self.block2(first_out)
        # by returning both x and first_out, you can access the first layer's output
        return x, first_out
In your main file you can now declare the custom architecture and use it:
from myFile import MyMLP
import torch

in_ch = out_ch_1 = out_ch_2 = 64
# some fake input instance (nn.Linear expects a float tensor, not a NumPy array)
x = torch.rand(in_ch)
my_mlp = MyMLP(in_ch, out_ch_1, out_ch_2)
# get your outputs
final_out, first_layer_out = my_mlp(x)
Moreover, you could concatenate two MyMLP modules in a more complex model definition and retrieve the output of each one in a similar way.
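For instance, a rough sketch of that concatenation (sizes are illustrative):
import torch.nn as nn

class StackedMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp1 = MyMLP(64, 64, 64)
        self.mlp2 = MyMLP(64, 64, 64)

    def forward(self, x):
        out1, first1 = self.mlp1(x)
        out2, first2 = self.mlp2(out1)
        # the intermediate outputs of both blocks stay accessible
        return out2, (first1, first2)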
I hope this is enough to clarify things, but if you have more questions, please feel free to ask; I may have omitted something.