Variable bug(s) O'Reilly Programming PyTorch - deep-learning

I'm reading O'Reilly's September 2019 publication 'Programming PyTorch ..', which describes a simple linear neural network for image classification.
There is a bug in a variable name in the opening model (no worries), target vs. targets; however, there is also what appears to be an odd omission of a variable declaration, train_iterator (and likewise dev_iterator, not shown).
What I want to know is: which variable did they (I presume) intend train_iterator to be?
p. 27
def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device='cpu'):
    for epoch in range(epochs):
        training_loss = 0.0
        valid_loss = 0.0
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            inputs, target = batch  # Bug here for 'target'
            inputs = inputs.to(device)
            target = targets.to(device)
            output = model(inputs)
            loss = loss_fn(output, target)
            loss.backward()
            optimizer.step()
            training_loss += loss.data.item()
        training_loss /= len(train_iterator)  # What is train_iterator?
So,
inputs, target = batch
must be
inputs, targets = batch
In the validation step below the training step (not shown), it is:
inputs, targets = batch
...
targets = targets.to(device)
It's no biggie; the code is simply moving the tensors to CUDA (GPU) or the CPU.
The variable train_iterator is used to normalize the training loss (an important diagnostic). I assume an iterator should be declared somewhere between the epoch loop and the batch loop, or is it meant to live inside the training loop?
Note: train_loader simply refers to a PyTorch DataLoader. The model consists of 3 linear layers with ReLU activation functions.
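For reference, here is a minimal sketch of what the loop presumably intends, assuming the missing iterator is simply the loader that was passed in (train_loader); whether the intended divisor is len(train_loader) (number of batches) or len(train_loader.dataset) (number of samples) is likewise an assumption on my part:

def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device='cpu'):
    for epoch in range(epochs):
        training_loss = 0.0
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            inputs, targets = batch            # 'targets', matching the .to(device) call
            inputs = inputs.to(device)
            targets = targets.to(device)
            output = model(inputs)
            loss = loss_fn(output, targets)
            loss.backward()
            optimizer.step()
            training_loss += loss.data.item()
        training_loss /= len(train_loader)     # average loss per batch for this epoch
        print(f'Epoch {epoch}, training loss: {training_loss:.4f}')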

Related

Why is accuracy so different when I use evaluate() and predict()?

I have a convolutional neural network trying to solve a classification problem on images (2 classes, so binary classification), using a sigmoid output.
To evaluate the model I use:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

path_dir = '../../dataset/train'
parth_dir_test = '../../dataset/test'

datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2)

test_set = datagen.flow_from_directory(parth_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary')

score = classifier.evaluate(test_set, verbose=0)
print('Test Loss', score[0])
print('Test accuracy', score[1])
And it outputs:
When I try to print the classification report I use:
yhat_classes = classifier.predict_classes(test_set, verbose=0)
yhat_classes = yhat_classes[:, 0]
print(classification_report(test_set.classes, yhat_classes))
But now I get this accuracy:
If I print the test_set.classes, it shows the first 344 numbers of the array as 0, and the next 344 as 1. Is this test_set shuffled before feeding into the network?
I think your model is doing just fine in both training and evaluation. Evaluation accuracy is computed on the basis of predictions, so maybe you are making some logical mistake while using model.predict_classes(). Please check that you are using the trained model weights and not a randomly initialized model while evaluating it.
What evaluate() does: the model sets apart this fraction of data while training, does not train on it, and evaluates the loss and any other model metrics on this data after each epoch. So model.evaluate() is for evaluating your trained model; its output is accuracy or loss, not predictions for your input data.
What predict() does: it generates output predictions for the input samples. model.predict() actually predicts, and its output is the target values predicted from your input data.
FYI: if your accuracy in a binary classification problem is less than 50%, it's worse than randomly predicting one of the classes (acc = 50%)!
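To make the distinction concrete, a minimal sketch (my own illustration, assuming the compiled binary classifier named classifier and the test_set generator from the question):

loss, acc = classifier.evaluate(test_set, verbose=0)    # Keras pairs each batch with its labels internally
probs = classifier.predict(test_set, verbose=0)         # raw sigmoid outputs, shape (n_samples, 1)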
I needed to add shuffle=False. The code that works is:
test_set = datagen.flow_from_directory(parth_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary',
                                       shuffle=False)
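A brief note on why this works (my wording, not the original answer's): with shuffling enabled, the generator yields batches in a shuffled order while test_set.classes stays in directory order, so the labels no longer line up with the predictions. A minimal sketch of the aligned report, assuming the trained classifier from the question:

probs = classifier.predict(test_set, verbose=0)                  # one row per file, in directory order
yhat_classes = (probs[:, 0] > 0.5).astype('int32')               # threshold the sigmoid output
print(classification_report(test_set.classes, yhat_classes))     # labels and predictions now aligned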

Gan loss tiny, discriminator loss huge?

I can't find the problem in my code. I'm training a GAN; the GAN loss and discriminator loss are very low (around 0.04) and it seems to be converging well, but (a) the pictures don't look very good, and the actual problem is that (b) somehow when I do gan.predict(noise) the result is very close to 1, yet when I do discriminator.predict(generator(noise)) it's very close to 0, although the two are supposed to be identical. Here's my code:
Generator code:
def create_generator():
    generator = tf.keras.Sequential()
    #generator.add(layers.Dense(units=50176, input_dim=25))
    generator.add(layers.Dense(units=12544, input_dim=100))
    #generator.add(layers.Dropout(0.2))
    generator.add(layers.Reshape([112, 112, 1]))  # 112,112
    generator.add(layers.Conv2D(32, kernel_size=3, padding='same', activation='relu'))
    generator.add(layers.UpSampling2D())  # 224,224
    generator.add(layers.Conv2D(1, kernel_size=4, padding='same', activation='tanh'))
    generator.compile(loss='binary_crossentropy', optimizer=adam_optimizer())
    return generator

g = create_generator()
g.summary()
Discriminator code:
# IMAGE DISCRIMINATOR
def create_discriminator():
    discriminator = tf.keras.Sequential()
    discriminator.add(layers.Conv2D(64, kernel_size=2, padding='same', activation='relu', input_shape=[224, 224, 1]))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(32, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(16, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(8, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Conv2D(1, kernel_size=2, padding='same', activation='relu'))
    discriminator.add(layers.Dropout(0.5))
    discriminator.add(layers.Flatten())
    discriminator.add(layers.Dense(units=1, activation='sigmoid'))
    discriminator.compile(loss='binary_crossentropy', optimizer=tf.optimizers.Adam(lr=0.0002))
    return discriminator

d = create_discriminator()
d.summary()
Gan code:
def create_gan(discriminator, generator):
    discriminator.trainable = False
    gan_input = tf.keras.Input(shape=(100,))
    x = generator(gan_input)
    gan_output = discriminator(x)
    gan = tf.keras.Model(inputs=gan_input, outputs=gan_output)
    #gan.compile(loss='binary_crossentropy', optimizer='adam')
    gan.compile(loss='binary_crossentropy', optimizer=adam_optimizer())
    return gan

gan = create_gan(d, g)
gan.summary()
Training code (I purposely don't do train_on_batch with the GAN because I wanted to see whether the gradients zero out):
##tf.function
def training(epochs=1, batch_size=128, rounds=50):
    batch_count = X_bad.shape[0] / batch_size

    # Creating GAN
    generator = create_generator()
    discriminator = create_discriminator()
    ########### if you want to continue training an already trained gan
    #discriminator.set_weights(weights)
    gan = create_gan(discriminator, generator)

    start = time.time()
    for e in range(1, epochs+1):
        #print("Epoch %d" % e)
        #for _ in tqdm(range(batch_size)):
        # generate random noise as an input to initialize the generator
        noise = np.random.normal(0, 1, [batch_size, 100])

        # Generate fake MNIST images from noised input
        generated_images = generator.predict(noise)
        #print('gen im shape: ', np.shape(generated_images))

        # Get a random set of real images
        image_batch = X_bad[np.random.randint(low=0, high=X_bad.shape[0], size=batch_size)]
        #print('im batch shape: ', image_batch.shape)

        # Construct different batches of real and fake data
        X = np.concatenate([image_batch, generated_images])

        # Labels for generated and real data
        y_dis = np.zeros(2*batch_size)
        y_dis[:batch_size] = 0.99

        # Pre-train discriminator on fake and real data before starting the gan.
        discriminator.trainable = True
        discriminator.train_on_batch(X, y_dis)

        # Tricking the noised input of the Generator as real data
        noise = np.random.normal(0, 1, [batch_size, 100])
        y_gen = np.ones(batch_size)

        # During the training of gan,
        # the weights of discriminator should be fixed.
        # We can enforce that by setting the trainable flag
        discriminator.trainable = False

        # training the GAN by alternating the training of the Discriminator
        # and training the chained GAN model with the Discriminator's weights frozen.
        #gan.train_on_batch(noise, y_gen)
        with tf.GradientTape() as tape:
            pred = gan(noise)
            loss_val = tf.keras.losses.mean_squared_error(y_gen, pred)
            # loss_val = gan.test_on_batch(noise, y_gen)
            # loss_val = tf.cast(loss_val, dtype=tf.float32)
        grads = tape.gradient(loss_val, gan.trainable_variables)
        optimizer.apply_gradients(zip(grads, gan.trainable_variables))

        if e == 1 or e % rounds == 0:
            end = time.time()
            loss_value = discriminator.test_on_batch(X, y_dis)
            print("Epoch {:03d}: Loss: {:.3f}".format(e, loss_value))
            gen_loss = gan.test_on_batch(noise, y_gen)
            print('gan loss: ', gen_loss)
            #print('Epoch: ', e, ' Loss: ', )
            print('Time for ', rounds, ' epochs: ', end-start, ' seconds')
            local_time = time.ctime(end)
            print('Printing time: ', local_time)
            plot_generated_images(e, generator, examples=5)
            start = time.time()
    return discriminator, generator, grads
Now, the final losses after around 2000 epochs are 0.039 for the generator and 0.034 for the discriminator. The thing is, when I do the following, look what I get:
print('disc for bad train: ',np.mean(discriminator(X_bad[:50])))
#output
disc for bad train: 0.9995248
noise= np.random.normal(0,1, [500, 100])
generated_images = generator.predict(noise)
print('disc for gen: ',np.mean(discriminator(generated_images[:50])))
print('gan for gen: ',np.mean(gan(noise[:50])))
#output
disc for gen: 0.0018724388
gan for gen: 0.96554756
Can anyone find the problem?
Thanks!
If anyone stumbles upon this, I figured it out (although I haven't fixed it yet). What's happening is that my GAN object is training only the generator weights. The discriminator is a separate object that trains itself, but when I train the GAN I never update the discriminator weights inside the GAN; I only update the generator weights. So when I feed noise into the GAN, the output is what I wanted, close to one; however, when I feed the generated images into the discriminator, which was being trained separately from the generator (its weights were not being changed at all inside the GAN), the result is close to zero. A better approach is to create the GAN as a class that exposes both the discriminator and the generator, so that both sets of weights can be updated inside the GAN.
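For anyone looking for the "train both from one place" approach this answer alludes to, here is a minimal sketch of a custom training step that updates the discriminator and the generator from the same forward pass, so the two never drift apart. It assumes generator and discriminator are built as in the question (latent dim 100, sigmoid discriminator output); names such as g_opt, d_opt and train_step are mine, not from the original code.

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

discriminator.trainable = True  # create_gan() set this to False; re-enable it so its variables are trainable

@tf.function
def train_step(real_images, batch_size=128, latent_dim=100):
    noise = tf.random.normal([batch_size, latent_dim])
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_images = generator(noise, training=True)
        real_preds = discriminator(real_images, training=True)
        fake_preds = discriminator(fake_images, training=True)
        # Discriminator: push real towards 1, fake towards 0
        d_loss = bce(tf.ones_like(real_preds), real_preds) + \
                 bce(tf.zeros_like(fake_preds), fake_preds)
        # Generator: make the discriminator call its fakes real
        g_loss = bce(tf.ones_like(fake_preds), fake_preds)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss

Because each gradient is applied only to its own model's variables, the generator update cannot silently leave the discriminator stale the way the frozen copy inside the gan model did.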

Why/how can model.forward() succeed both on input being mini-batch vs single item?

Why and how does this work?
When I run the forward pass on an input that is either a mini-batch tensor or, alternatively, a single input item, model.__call__() (which, as far as I know, calls forward()) swallows it and produces the appropriate output (i.e. a tensor of mini-batch estimates or a single-item estimate).
The test code below, adapted from the PyTorch NN example, shows what I mean, but I don't get it.
I would have expected it to create problems and force me to transform the single-item input into a mini-batch of size 1 (reshape(1, xxx)) or the like, as I did in the code below.
(I ran variations of the test to be sure the behaviour does not depend on, e.g., execution order.)
# -*- coding: utf-8 -*-
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
#N, D_in, H, D_out = 64, 1000, 100, 10
N, D_in, H, D_out = 64, 10, 4, 3

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(1):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    model.eval()
    print("###########")
    print("x[0]", x[0])
    print("x[0].size()", x[0].size())
    y_1pred = model(x[0])
    print("y_1pred.size()", y_1pred.size())
    print(y_1pred)

    model.eval()
    print("###########")
    print("x.size()", x.size())
    y_pred = model(x)
    print("y_pred.size()", y_pred.size())
    print("y_pred[0]", y_pred[0])

    print("###########")
    model.eval()
    input_item = x[0]
    batch_len1_shape = torch.Size([1, *(input_item.size())])
    batch_len1 = input_item.reshape(batch_len1_shape)
    y_pred_batch_len1 = model(batch_len1)
    print("input_item", input_item)
    print("input_item.size()", input_item.size())
    print("y_pred_batch_len1.size()", y_pred_batch_len1.size())
    print(y_1pred)

    raise Exception
This is the output it generates:
###########
x[0] tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
x[0].size() torch.Size([10])
y_1pred.size() torch.Size([3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
###########
x.size() torch.Size([64, 10])
y_pred.size() torch.Size([64, 3])
y_pred[0] tensor([-0.5366, -0.4826, 0.0538], grad_fn=<SelectBackward>)
###########
input_item tensor([-1.3901, -0.2659, 0.4352, -0.6890, 0.1098, -0.3124, 0.6419, 1.1004,
-0.7910, -0.5389])
input_item.size() torch.Size([10])
y_pred_batch_len1.size() torch.Size([1, 3])
tensor([-0.5366, -0.4826, 0.0538], grad_fn=<AddBackward0>)
The docs on nn.Linear state that
Input: (N,∗,in_features) where ∗ means any number of additional dimensions
so one would naturally expect that at least two dimensions are necessary. However, if we look under the hood we will see that Linear is implemented in terms of nn.functional.linear, which dispatches to torch.addmm or torch.matmul (depending on whether bias == True), both of which broadcast their arguments.
So this behavior is likely a bug (or an error in documentation) and I would not depend on it working in the future, if I were you.
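To make the broadcasting claim concrete, a small sketch (my own illustration, not part of the original answer) comparing a 1-D input against an explicit batch of size 1:

import torch
import torch.nn.functional as F

lin = torch.nn.Linear(10, 3)
x = torch.randn(10)                                # single item, no batch dimension

y_direct = lin(x)                                  # shape [3]
y_functional = F.linear(x, lin.weight, lin.bias)   # same dispatch path as nn.Linear
y_batched = lin(x.unsqueeze(0)).squeeze(0)         # explicit mini-batch of size 1

print(torch.allclose(y_direct, y_functional))      # True
print(torch.allclose(y_direct, y_batched))         # True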

Keras' ImageDataGenerator.flow() results in very low training/validation accuracy as opposed to flow_from_directory()

I am trying to train a very simple model for image recognition, nothing spectacular. My first attempt worked just fine when I used image rescaling:
# this is the augmentation configuration to enhance the training dataset
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# validation generator, only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
Then I simply trained the model as such:
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
This works perfectly fine and leads to a reasonable accuracy. Then I thought it might be a good idea to try out mean subtraction, as the VGG16 model uses. Instead of doing it manually, I chose to use ImageDataGenerator.fit(). For that, however, you need to supply it with the training images as numpy arrays, so I first read the images, convert them, and then feed them into it:
train_datagen = ImageDataGenerator(
    featurewise_center=True,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(featurewise_center=True)

def process_images_from_directory(data_dir):
    x = []
    y = []
    for root, dirs, files in os.walk(data_dir, topdown=False):
        class_names = sorted(dirs)
        global class_indices
        if len(class_indices) == 0:
            class_indices = dict(zip(class_names, range(len(class_names))))
        for dir in class_names:
            filenames = os.listdir(os.path.join(root, dir))
            for file in filenames:
                img_array = img_to_array(load_img(os.path.join(root, dir, file), target_size=(224, 224)))[np.newaxis]
                if len(x) == 0:
                    x = img_array
                else:
                    x = np.concatenate((x, img_array))
                y.append(class_indices[dir])
    # this step converts an array of classes [0,1,2,3...] into one-hot vectors [1,0,0,0], [0,1,0,0], etc.
    y = np.eye(len(class_names))[y]
    return x, y

x_train, y_train = process_images_from_directory(train_data_dir)
x_valid, y_valid = process_images_from_directory(validation_data_dir)

nb_train_samples = x_train.shape[0]
nb_validation_samples = x_valid.shape[0]

train_datagen.fit(x_train)
test_datagen.mean = train_datagen.mean

train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=False)

validation_generator = test_datagen.flow(
    x_valid,
    y_valid,
    batch_size=batch_size,
    shuffle=False)
Then, I train the model the same way, simply giving it both iterators. After the training completes, the accuracy is basically stuck at ~25% even after 50 epochs:
80/80 [==============================] - 77s 966ms/step - loss: 12.0886 - acc: 0.2500 - val_loss: 12.0886 - val_acc: 0.2500
When I run predictions on the above model, it classifies only 1 out of the 4 classes correctly; all images from the other 3 classes are classified as belonging to the first class. Clearly the 25% figure has something to do with this fact, I just can't figure out what I am doing wrong.
I realize that I could calculate the mean manually and then simply set it for both generators, or that I could use ImageDataGenerator.fit() and then still go with flow_from_directory, but that would be a waste of already processed images; I would be doing the same processing twice.
Any opinions on how to make it work with flow() all the way?
Did you try setting shuffle=True in your generators?
You did not specify shuffling in the first case (it should be True by default) and set it to False in the second case.
Your input data might be sorted by class. Without shuffling, your model first sees only class #1 and simply learns to always predict class #1. It then sees class #2 and learns to always predict class #2, and so on. At the end of one epoch your model has learned to always predict class #4 and thus gives 25% accuracy on validation.
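A minimal sketch of the suggested change, reusing the arrays from the question (only the shuffle arguments differ from the original code):

train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=True)    # shuffle training batches so every batch mixes all classes

validation_generator = test_datagen.flow(
    x_valid,
    y_valid,
    batch_size=batch_size,
    shuffle=False)   # order does not matter for validation metrics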

Unable to save weights while using pre-trained VGG16 model

While using the pre-trained VGG16 model I am unable to save the weights of the best model. I use this code:
checkpointer = [
    # Stop if val_loss has not improved for 3 epochs
    EarlyStopping(monitor='val_loss', patience=3, verbose=1),
    # Save the best model and re-use it for prediction
    ModelCheckpoint(filepath="C:/Users/skumarravindran/Documents/keras_save_model/vgg16_v1.hdf5", verbose=1, monitor='val_acc', save_best_only=True),
]
And I get the following error:
C:\Users\skumarravindran\AppData\Local\Continuum\Anaconda2\envs\py35gpu1\lib\site-packages\keras\callbacks.py:405: RuntimeWarning: Can save best model only with val_acc available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
I experienced two situations where this error arises:
introducing a custom metric
using multiple outputs
In both cases the acc and val_acc are not computed. Strangely, Keras does compute an overall loss and val_loss.
You can remedy the first situation by adding 'accuracy' to the metrics, but that may have side effects; I am not sure. In both cases, however, you can add acc and val_acc yourself in a callback. I have added an example for the multi-output case, where I created a custom callback that computes my own acc and val_acc by averaging over the acc and val_acc values of the output layers.
I have a model with 5 dense output layers at the end, labeled D0..D4. The output of one epoch is as follows:
3540/3540 [==============================] - 21s 6ms/step - loss: 14.1437 -
D0_loss: 3.0446 - D1_loss: 2.6544 - D2_loss: 3.0808 - D3_loss: 2.7751 -
D4_loss: 2.5889 - D0_acc: 0.2362 - D1_acc: 0.3681 - D2_acc: 0.1542 - D3_acc: 0.1161 -
D4_acc: 0.3994 - val_loss: 8.7598 - val_D0_loss: 2.0797 - val_D1_loss: 1.4088 -
val_D2_loss: 2.0711 - val_D3_loss: 1.9064 - val_D4_loss: 1.2938 -
val_D0_acc: 0.2661 - val_D1_acc: 0.3924 - val_D2_acc: 0.1763 -
val_D3_acc: 0.1695 - val_D4_acc: 0.4627
As you can see, it outputs an overall loss and val_loss and, for each output layer, Di_loss, Di_acc, val_Di_loss and val_Di_acc, for i in 0..4. All of this is the content of the logs dictionary, which is passed as a parameter to on_epoch_begin and on_epoch_end of a callback. Callbacks have more event handlers, but for our purpose these two are the most relevant. When you have 5 outputs (as in my case), the size of the dictionary is 5 times 4 (acc, loss, val_acc, val_loss) plus 2 (loss and val_loss).
What I did is compute the average of all accuracies and validation accuracies to add two items to logs:
logs['acc'] = som_acc / n_accs
logs['val_acc'] = som_val_acc / n_accs
Be sure to add this callback before the checkpoint callback, otherwise the extra information you provide will not be 'seen'. If everything is implemented correctly, the error message no longer appears and the model happily checkpoints.
The code of my callback for the multiple output case is provided below.
class ExtraLogInfo(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs):
        self.timed = time.time()
        return

    def on_epoch_end(self, epoch, logs):
        print(logs.keys())
        som_acc = 0.0
        som_val_acc = 0.0
        n_accs = (len(logs) - 2) // 4
        for i in range(n_accs):
            acc_ptn = 'D{:d}_acc'.format(i)
            val_acc_ptn = 'val_D{:d}_acc'.format(i)
            som_acc += logs[acc_ptn]
            som_val_acc += logs[val_acc_ptn]
        logs['acc'] = som_acc / n_accs
        logs['val_acc'] = som_val_acc / n_accs
        logs['time'] = time.time() - self.timed
        return
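A sketch of how the ordering requirement above plays out in practice (the callback list here is my own illustration; the filepath and generator names are placeholders, not from the original answer): ExtraLogInfo must appear before ModelCheckpoint so that logs['val_acc'] already exists when the checkpoint callback looks it up.

callbacks = [
    ExtraLogInfo(),   # fills in logs['acc'] and logs['val_acc'] first
    ModelCheckpoint('best_model.hdf5', monitor='val_acc', save_best_only=True, verbose=1),
]
model.fit_generator(train_generator,
                    epochs=epochs,
                    validation_data=validation_generator,
                    callbacks=callbacks)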
By using the following code you will be able to save the best model based on accuracy:
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
)