Unable to save weights while using pre-trained VGG16 model - deep-learning

While using the pre-trained VGG16 model I am unable to save the weights of the best model. I use this code:
checkpointer = [
    # Stop training if the validation loss has not improved for 3 consecutive epochs
    EarlyStopping(monitor='val_loss', patience=3, verbose=1),
    # Save the best model and re-use it for prediction
    ModelCheckpoint(filepath="C:/Users/skumarravindran/Documents/keras_save_model/vgg16_v1.hdf5",
                    verbose=1, monitor='val_acc', save_best_only=True),
]
And I get the following error:
C:\Users\skumarravindran\AppData\Local\Continuum\Anaconda2\envs\py35gpu1\lib\site-packages\keras\callbacks.py:405: RuntimeWarning: Can save best model only with val_acc available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)

I experienced two situations where this error arises:
introducing a custom metric
using multiple outputs
In both cases the acc and val_acc are not computed. Strangely, Keras does compute an overall loss and val_loss.
You can remedy the first situation by adding accuracy to the metrics, but that may have side effects; I am not sure. In both cases, however, you can add acc and val_acc yourself in a callback. Below is an example for the multi-output case, where I created a custom callback that computes its own acc and val_acc by averaging the acc's and val_acc's of all the output layers.
I have a model with 5 dense output layers at the end, labeled D0..D4. The output of one epoch is as follows:
3540/3540 [==============================] - 21s 6ms/step - loss: 14.1437 -
D0_loss: 3.0446 - D1_loss: 2.6544 - D2_loss: 3.0808 - D3_loss: 2.7751 -
D4_loss: 2.5889 - D0_acc: 0.2362 - D1_acc: 0.3681 - D2_acc: 0.1542 - D3_acc: 0.1161 -
D4_acc: 0.3994 - val_loss: 8.7598 - val_D0_loss: 2.0797 - val_D1_loss: 1.4088 -
val_D2_loss: 2.0711 - val_D3_loss: 1.9064 - val_D4_loss: 1.2938 -
val_D0_acc: 0.2661 - val_D1_acc: 0.3924 - val_D2_acc: 0.1763 -
val_D3_acc: 0.1695 - val_D4_acc: 0.4627
As you can see, it outputs an overall loss and val_loss and, for each output layer, Di_loss, Di_acc, val_Di_loss and val_Di_acc, for i in 0..4. All of this is the content of the logs dictionary, which is passed as a parameter to on_epoch_begin and on_epoch_end of a callback. Callbacks have more event handlers, but for our purpose these two are the most relevant. With 5 outputs (as in my case) the size of the dictionary is 5 × 4 (acc, loss, val_acc, val_loss) + 2 (overall loss + val_loss) = 22 entries.
What I did is compute the average of all accuracies and validation accuracies to add two items to logs:
logs['acc'] = som_acc / n_accs
logs['val_acc'] = som_val_acc / n_accs
Be sure you add this callback before the checkpoint callback, otherwise the extra information you provide will not be 'seen'. If everything is implemented correctly, the error message no longer appears and the model is happily checkpointing.
The code of my callback for the multiple output case is provided below.
import time
import keras


class ExtraLogInfo(keras.callbacks.Callback):
    def on_epoch_begin(self, epoch, logs=None):
        self.timed = time.time()

    def on_epoch_end(self, epoch, logs=None):
        print(logs.keys())
        som_acc = 0.0
        som_val_acc = 0.0
        # every output layer contributes acc, loss, val_acc and val_loss;
        # the remaining 2 entries are the overall loss and val_loss
        n_accs = (len(logs) - 2) // 4
        for i in range(n_accs):
            acc_ptn = 'D{:d}_acc'.format(i)
            val_acc_ptn = 'val_D{:d}_acc'.format(i)
            som_acc += logs[acc_ptn]
            som_val_acc += logs[val_acc_ptn]
        logs['acc'] = som_acc / n_accs
        logs['val_acc'] = som_val_acc / n_accs
        logs['time'] = time.time() - self.timed
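For clarity, here is a minimal sketch of how the callbacks would be wired together (model, x_train/y_train, x_val/y_val and epochs are assumed to exist; the checkpoint path is shortened here):

# ExtraLogInfo must come first: callbacks run in list order, and ModelCheckpoint
# reads the 'val_acc' entry that ExtraLogInfo adds to the logs dictionary.
callbacks = [
    ExtraLogInfo(),
    EarlyStopping(monitor='val_loss', patience=3, verbose=1),
    ModelCheckpoint(filepath='vgg16_v1.hdf5', verbose=1,
                    monitor='val_acc', save_best_only=True),
]
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=epochs,
          callbacks=callbacks)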

By using the following code you will be able to save the best model based on validation accuracy:
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

history = model.fit_generator(
    train_datagen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=x_train.shape[0] // batch_size,
    epochs=epochs,
    callbacks=[ModelCheckpoint('VGG16-transferlearning.model', monitor='val_acc', save_best_only=True)]
)

Why is accuracy so different when I use evaluate() and predict()?

I have a convolutional neural network that solves a binary image-classification problem (2 classes) using a sigmoid output.
To evaluate the model I use:
from tensorflow.keras.preprocessing.image import ImageDataGenerator

path_dir = '../../dataset/train'
parth_dir_test = '../../dataset/test'

datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2)

test_set = datagen.flow_from_directory(parth_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary')

score = classifier.evaluate(test_set, verbose=0)
print('Test Loss', score[0])
print('Test accuracy', score[1])
And it outputs:
When I try to print the classification report I use:
from sklearn.metrics import classification_report

yhat_classes = classifier.predict_classes(test_set, verbose=0)
yhat_classes = yhat_classes[:, 0]
print(classification_report(test_set.classes, yhat_classes))
But now I get this accuracy:
If I print the test_set.classes, it shows the first 344 numbers of the array as 0, and the next 344 as 1. Is this test_set shuffled before feeding into the network?
I think your model is doing just fine in both training and evaluation. Evaluation accuracy comes on the basis of prediction, so maybe you are making some logical mistake while using model.predict_classes(). Please check that you are using the trained model weights and not a randomly initialized model when evaluating it.
What "evaluate" does: the model sets apart this fraction of data while training, will not train on it, and will evaluate the loss and any other model metrics on this data after each epoch. So model.evaluate() is for evaluating your trained model. Its output is accuracy or loss, not predictions for your input data!
What "predict" does: it generates output predictions for the input samples. model.predict() actually predicts, and its output is the target value predicted from your input data.
FYI: if your accuracy in a binary classification problem is less than 50%, it's worse than randomly predicting one of the classes (acc = 50%)!
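A small sketch (assuming the compiled classifier and test_set from the question) of what the two calls actually return:

# evaluate() returns the metrics defined at compile time, e.g. [loss, accuracy]
score = classifier.evaluate(test_set, verbose=0)   # -> something like [0.35, 0.87]

# predict() returns the raw sigmoid output, one probability per sample
probs = classifier.predict(test_set, verbose=0)    # shape (n_samples, 1)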
I needed to add shuffle=False. The code that works is:
test_set = datagen.flow_from_directory(parth_dir_test,
                                       target_size=(150, 150),
                                       batch_size=64,
                                       class_mode='binary',
                                       shuffle=False)
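With shuffle=False the generator yields the images in the same (directory) order as test_set.classes, so predictions line up with the true labels. A hedged sketch of building the report from predict(), thresholding the sigmoid output at 0.5:

probs = classifier.predict(test_set, verbose=0)
yhat_classes = (probs[:, 0] > 0.5).astype('int32')   # sigmoid probabilities -> 0/1 labels
print(classification_report(test_set.classes, yhat_classes))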

What should I do if my regression model is stuck at a high loss value?

I'm using neural nets for a regression problem where I have 3 features and I'm trying to predict one continuous value. I noticed that my neural net starts learning well, but after 10 epochs it gets stuck on a high loss value and cannot improve any more.
I tried using Adam and other adaptive optimizers instead of SGD, but that didn't work. I tried more complex architectures, adding layers, neurons, batch normalization and other activations, etc., and that also didn't work.
I tried to debug and find out whether something is wrong with the implementation, but when I use only 10 examples of the data my model learns fast, so there seem to be no errors. I started to increase the number of data examples and monitor my model's results as I increased them. When I reach 3000 data examples my model starts to get stuck on a high loss value.
I tried increasing layers and neurons and also trying other activations and batch normalization. My input data are normalized to [-1, 1]; my target value is not normalized, since this is regression and I'm predicting a continuous value. I also tried using Keras, but I got the same result.
My real dataset has 40,000 examples. I don't know what else to try; I have tried almost everything I know for optimization, but none of it worked. I would appreciate it if someone could guide me on this. I'll post my code, but maybe it is too messy to understand. I'm sure there is no problem with my implementation; I'm using skorch/PyTorch and some sklearn functions:
# Imports used by the snippets below (added here for completeness)
import numpy as np
import torch
from torch import nn, optim
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score, make_scorer
from skorch import NeuralNetRegressor
from skorch.dataset import Dataset
from skorch.helper import predefined_split
from skorch.callbacks import EpochScoring, Checkpoint, TrainEndCheckpoint

# `dataset` is a pandas DataFrame loaded earlier in the original script.
# Take all features as independent variables, except the bearing and distance.
# When I start small the model learns well, but from 3000 data points on it gets
# stuck at a high value: the loss starts at 15, decreases nicely, and then gets
# stuck around 9. If I use the whole dataset, the loss starts at 47, decreases
# to 36, and then gets stuck there too.
X = dataset.iloc[:3000, 0:-2].reset_index(drop=True).to_numpy().astype(np.float32)

# take distance and bearing as the output values:
y = dataset.iloc[:3000, -2:].reset_index(drop=True).to_numpy().astype(np.float32)
y_bearing = y[:, 0].reshape(-1, 1)
y_distance = y[:, 1].reshape(-1, 1)

# normalize the input values
scaler = StandardScaler()
X_norm = scaler.fit_transform(X, y)

X_br_train, X_br_test, y_br_train, y_br_test = train_test_split(X_norm,
                                                                y_bearing,
                                                                test_size=0.1,
                                                                random_state=42,
                                                                shuffle=True)

X_dis_train, X_dis_test, y_dis_train, y_dis_test = train_test_split(X_norm,
                                                                    y_distance,
                                                                    test_size=0.1,
                                                                    random_state=42,
                                                                    shuffle=True)

bearing_trainset = Dataset(X_br_train, y_br_train)
bearing_testset = Dataset(X_br_test, y_br_test)
distance_trainset = Dataset(X_dis_train, y_dis_train)
distance_testset = Dataset(X_dis_test, y_dis_test)


def root_mse(y_true, y_pred):
    return np.sqrt(mean_squared_error(y_true, y_pred))
class RMSELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, yhat, y):
        return torch.sqrt(self.mse(yhat, y))


class AED(nn.Module):
    """custom average euclidean distance loss"""
    def __init__(self):
        super().__init__()

    def forward(self, yhat, y):
        return torch.dist(yhat, y)
def train(on_target,
          hidden_units,
          batch_size,
          epochs,
          optimizer,
          lr,
          regularisation_factor,
          train_shuffle):
    network = None
    trainset = distance_trainset if on_target.lower() == 'distance' else bearing_trainset
    testset = distance_testset if on_target.lower() == 'distance' else bearing_testset
    print(f"shape of trainset.X = {trainset.X.shape}, shape of trainset.y = {trainset.y.shape}")
    print(f"shape of testset.X = {testset.X.shape}, shape of testset.y = {testset.y.shape}")

    mse = EpochScoring(scoring=mean_squared_error, lower_is_better=True, name='MSE')
    r2 = EpochScoring(scoring=r2_score, lower_is_better=False, name='R2')
    rmse = EpochScoring(scoring=make_scorer(root_mse), lower_is_better=True, name='RMSE')
    checkpoint = Checkpoint(dirname=f'results/{on_target}/checkpoints')
    train_end_checkpoint = TrainEndCheckpoint(dirname=f'results/{on_target}/checkpoints')

    if on_target.lower() == 'bearing':
        network = BearingNetwork(n_features=X_norm.shape[1],
                                 n_hidden=hidden_units,
                                 n_out=y_distance.shape[1])
    elif on_target.lower() == 'distance':
        network = DistanceNetwork(n_features=X_norm.shape[1],
                                  n_hidden=hidden_units,
                                  n_out=1)

    model = NeuralNetRegressor(
        module=network,
        criterion=RMSELoss,
        device='cpu',
        batch_size=batch_size,
        lr=lr,
        optimizer=optim.Adam if optimizer.lower() == 'adam' else optim.SGD,
        optimizer__weight_decay=regularisation_factor,
        max_epochs=epochs,
        iterator_train__shuffle=train_shuffle,
        train_split=predefined_split(testset),
        callbacks=[mse, r2, rmse, checkpoint, train_end_checkpoint]
    )

    print(f"{'*' * 10} start training the {on_target} model {'*' * 10}")
    history = model.fit(trainset, y=None)
    print(f"{'*' * 10} End Training the {on_target} Model {'*' * 10}")


if __name__ == '__main__':
    # `parser` is an argparse.ArgumentParser defined elsewhere in the original script
    args = parser.parse_args()

    train(on_target=args.on_target,
          hidden_units=args.hidden_units,
          batch_size=args.batch_size,
          epochs=args.epochs,
          optimizer=args.optimizer,
          lr=args.learning_rate,
          regularisation_factor=args.regularisation_lambda,
          train_shuffle=args.shuffle)
and this is my network declaration:
class DistanceNetwork(nn.Module):
    """separate NN for predicting distance"""
    def __init__(self, n_features=5, n_hidden=16, n_out=1):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.LeakyReLU(),
            nn.Linear(n_hidden, 5),
            nn.LeakyReLU(),
            nn.Linear(5, n_out)
        )

    def forward(self, x):
        # forward pass (not shown in the original snippet)
        return self.model(x)
here is the log while training:

Variable bug(s) O'Reilly Programming PyTorch

I'm reading O'Reilly's September 2019 publication 'Programming PyTorch ..', which describes a simple linear neural network for image classification.
There is a bug in a variable name in the opening model (no worries), target vs. targets; however, there also appears to be a strange omission of a variable declaration, train_iterator (and also dev_iterator, not shown).
What I wish to know is: what is the train_iterator variable they (I presume) intended?
p27
def train(model, optimiser, loss_fn, train_loader, val_loader, epochs=20, device='cpu'):
    for epoch in range(epochs):
        training_loss = 0.0
        valid_loss = 0.0
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            inputs, target = batch  # Bug here for 'target'
            inputs = inputs.to(device)
            target = targets.to(device)
            output = model(inputs)
            loss = loss_fin(output, target)
            loss.backward()
            optimizer.step()
            training_loss += loss.data.item()
        training_loss /= len(train_iterator)  # What is train_iterator?
So,
inputs, target = batch
must be
inputs, targets = batch
In the validation step below the training step (not shown) it is
inputs, targets = batch
...
targets = targets.to(device)
It's no biggie, and that code is simply assigning tensors to CUDA (GPU) or the CPU.
The variable train_iterator is used to compute the average training loss (an important diagnostic). I assume there should be an iterator declared between the epoch loop and the batch loop, or is it something within the training loop?
Note: train_loader simply refers to a PyTorch DataLoader. The model is 3 linear layers with a ReLU activation function.
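For what it's worth, here is a minimal corrected sketch of the loop, under the assumption that train_iterator was simply meant to be train_loader (so the accumulated loss is averaged over the number of batches); the target/targets and loss_fn/loss_fin naming bugs are fixed as well:

def train(model, optimizer, loss_fn, train_loader, val_loader, epochs=20, device='cpu'):
    for epoch in range(epochs):
        training_loss = 0.0
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            inputs, targets = batch
            inputs = inputs.to(device)
            targets = targets.to(device)
            output = model(inputs)
            loss = loss_fn(output, targets)
            loss.backward()
            optimizer.step()
            training_loss += loss.data.item()
        # average the accumulated loss over the number of batches in the loader
        training_loss /= len(train_loader)
        print(f'Epoch {epoch}, training loss: {training_loss:.4f}')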

Keras' ImageDataGenerator.flow() results in very low training/validation accuracy as opposed to flow_from_directory()

I am trying to train a very simple model for image recognition, nothing spectacular. My first attempt worked just fine, when I used image rescaling:
# this is the augmentation configuration to enhance the training dataset
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# validation generator, only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='categorical')
Then I simply trained the model as such:
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
This works perfectly fine and leads to a reasonable accuracy. Then I thought it might be a good idea to try out mean subtraction, as the VGG16 model does. Instead of doing it manually, I chose to use ImageDataGenerator.fit(). For that, however, you need to supply it with the training images as numpy arrays, so I first read the images, convert them, and then feed them into it:
train_datagen = ImageDataGenerator(
    featurewise_center=True,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(featurewise_center=True)

def process_images_from_directory(data_dir):
    x = []
    y = []
    for root, dirs, files in os.walk(data_dir, topdown=False):
        class_names = sorted(dirs)
        global class_indices
        if len(class_indices) == 0:
            class_indices = dict(zip(class_names, range(len(class_names))))
        for dir in class_names:
            filenames = os.listdir(os.path.join(root, dir))
            for file in filenames:
                img_array = img_to_array(load_img(os.path.join(root, dir, file), target_size=(224, 224)))[np.newaxis]
                if len(x) == 0:
                    x = img_array
                else:
                    x = np.concatenate((x, img_array))
                y.append(class_indices[dir])
    # this step converts an array of classes [0,1,2,3...] into sparse vectors [1,0,0,0], [0,1,0,0], etc.
    y = np.eye(len(class_names))[y]
    return x, y

x_train, y_train = process_images_from_directory(train_data_dir)
x_valid, y_valid = process_images_from_directory(validation_data_dir)

nb_train_samples = x_train.shape[0]
nb_validation_samples = x_valid.shape[0]

train_datagen.fit(x_train)
test_datagen.mean = train_datagen.mean

train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=False)

validation_generator = test_datagen.flow(
    x_valid,
    y_valid,
    batch_size=batch_size,
    shuffle=False)
Then, I train the model the same way, simply giving it both iterators. After the training completes, the accuracy is basically stuck at ~25% even after 50 epochs:
80/80 [==============================] - 77s 966ms/step - loss: 12.0886 - acc: 0.2500 - val_loss: 12.0886 - val_acc: 0.2500
When I run predictions on the above model, it classifies only 1 out of 4 total classes correctly; all images from the other 3 classes are classified as belonging to the first class. Clearly the 25% figure has something to do with this fact, I just can't figure out what I am doing wrong.
I realize that I could calculate the mean manually and then simply set it for both generators, or that I could use ImageDataGenerator.fit() and then still go with flow_from_directory, but that would be a waste of the already processed images; I would be doing the same processing twice.
Any opinions on how to make it work with flow() all the way?
Did you try setting shuffle=True in your generators?
You did not specify shuffling in the first case (it should be True by default) and set it to False in the second case.
Your input data might be sorted by classes. Without shuffling, your model first only sees class #1 and simply learns to predict class #1 always. It then sees class #2 and learns to always predict class #2 and so on. At the end of one epoch your model learns to always predict class #4 and thus gives a 25% accuracy on validation.
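A minimal sketch of the suggested fix (assuming the x_train/y_train, x_valid/y_valid arrays and batch_size from the question); only the training generator needs shuffling, the validation generator can stay ordered:

train_generator = train_datagen.flow(
    x_train,
    y_train,
    batch_size=batch_size,
    shuffle=True)   # shuffle so each batch mixes all classes

validation_generator = test_datagen.flow(
    x_valid,
    y_valid,
    batch_size=batch_size,
    shuffle=False)  # order does not matter for computing validation metrics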

Prioritized experience replay in deep Q-learning

I was implementing DQN on the mountain car problem from OpenAI Gym. This problem is special because the positive reward is very sparse, so I thought of implementing prioritized experience replay as proposed in this paper by Google DeepMind.
There are certain things that confuse me:
How do we store the replay memory? I get that p_i is the priority of a transition and that there are two ways to compute it, but what is this P(i)?
If we follow the rules given, won't P(i) change every time a sample is added?
What does it mean when it says "we sample according to this probability distribution"? What is the distribution?
Finally, how do we sample from it? I get that if we stored it in a priority queue we could sample directly, but we actually store it in a sum tree.
Thanks in advance.
According to the paper, there are two ways of calculating p_i, and depending on your choice your implementation differs. I assume you selected proportional prioritization; then you should use a "sum tree" data structure to store the pairs of transition and P(i). P(i) is just the normalized version of p_i, and it shows how important, or in other words how effective, that transition is for improving your network. When P(i) is high, the transition is very surprising for the network, so it can really help the network tune itself.
You should add each new transition with maximal priority to make sure it is replayed at least once, and there is no need to update the whole experience replay memory for each newly arriving transition. During the experience replay process you select a mini-batch and update only the priorities of the experiences in that mini-batch.
Each experience has a probability, so all of the experiences together form a distribution, and we select our next mini-batch according to this distribution.
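For reference, the paper converts the raw priorities p_i into sampling probabilities as

P(i) = p_i^alpha / sum_k p_k^alpha

where alpha controls how much prioritization is used (alpha = 0 recovers uniform sampling).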
You can sample via this policy from your sum-tree:
def retrieve(n, s):
    if n is leaf_node:
        return n
    if n.left.val >= s:
        return retrieve(n.left, s)
    else:
        return retrieve(n.right, s - n.left.val)
I have taken the code from here.
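A hedged usage sketch of the retrieval above (assuming a sum tree whose root node's val field holds the total priority mass):

import random

# Hypothetical helper: pick a point s uniformly in the total priority mass and
# walk down the tree; the returned leaf holds the sampled transition.
def sample_leaf(root):
    s = random.uniform(0, root.val)
    return retrieve(root, s)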
You can reuse the code from OpenAI Baselines, or use a SumTree implementation:
import numpy as np
import random

from baselines.common.segment_tree import SumSegmentTree, MinSegmentTree


class ReplayBuffer(object):
    def __init__(self, size):
        """Create Replay buffer.

        Parameters
        ----------
        size: int
            Max number of transitions to store in the buffer. When the buffer
            overflows the old memories are dropped.
        """
        self._storage = []
        self._maxsize = size
        self._next_idx = 0

    def __len__(self):
        return len(self._storage)

    def add(self, obs_t, action, reward, obs_tp1, done):
        data = (obs_t, action, reward, obs_tp1, done)

        if self._next_idx >= len(self._storage):
            self._storage.append(data)
        else:
            self._storage[self._next_idx] = data
        self._next_idx = (self._next_idx + 1) % self._maxsize
    def _encode_sample(self, idxes):
        obses_t, actions, rewards, obses_tp1, dones = [], [], [], [], []
        for i in idxes:
            data = self._storage[i]
            obs_t, action, reward, obs_tp1, done = data
            obses_t.append(np.array(obs_t, copy=False))
            actions.append(np.array(action, copy=False))
            rewards.append(reward)
            obses_tp1.append(np.array(obs_tp1, copy=False))
            dones.append(done)
        return np.array(obses_t), np.array(actions), np.array(rewards), np.array(obses_tp1), np.array(dones)

    def sample(self, batch_size):
        """Sample a batch of experiences.

        Parameters
        ----------
        batch_size: int
            How many transitions to sample.

        Returns
        -------
        obs_batch: np.array
            batch of observations
        act_batch: np.array
            batch of actions executed given obs_batch
        rew_batch: np.array
            rewards received as results of executing act_batch
        next_obs_batch: np.array
            next set of observations seen after executing act_batch
        done_mask: np.array
            done_mask[i] = 1 if executing act_batch[i] resulted in
            the end of an episode and 0 otherwise.
        """
        idxes = [random.randint(0, len(self._storage) - 1) for _ in range(batch_size)]
        return self._encode_sample(idxes)
class PrioritizedReplayBuffer(ReplayBuffer):
    def __init__(self, size, alpha):
        """Create Prioritized Replay buffer.

        Parameters
        ----------
        size: int
            Max number of transitions to store in the buffer. When the buffer
            overflows the old memories are dropped.
        alpha: float
            how much prioritization is used
            (0 - no prioritization, 1 - full prioritization)

        See Also
        --------
        ReplayBuffer.__init__
        """
        super(PrioritizedReplayBuffer, self).__init__(size)
        assert alpha >= 0
        self._alpha = alpha

        it_capacity = 1
        while it_capacity < size:
            it_capacity *= 2

        self._it_sum = SumSegmentTree(it_capacity)
        self._it_min = MinSegmentTree(it_capacity)
        self._max_priority = 1.0

    def add(self, *args, **kwargs):
        """See ReplayBuffer.store_effect"""
        idx = self._next_idx
        super().add(*args, **kwargs)
        self._it_sum[idx] = self._max_priority ** self._alpha
        self._it_min[idx] = self._max_priority ** self._alpha

    def _sample_proportional(self, batch_size):
        res = []
        p_total = self._it_sum.sum(0, len(self._storage) - 1)
        every_range_len = p_total / batch_size
        for i in range(batch_size):
            mass = random.random() * every_range_len + i * every_range_len
            idx = self._it_sum.find_prefixsum_idx(mass)
            res.append(idx)
        return res
    def sample(self, batch_size, beta):
        """Sample a batch of experiences.

        compared to ReplayBuffer.sample
        it also returns importance weights and idxes
        of sampled experiences.

        Parameters
        ----------
        batch_size: int
            How many transitions to sample.
        beta: float
            To what degree to use importance weights
            (0 - no corrections, 1 - full correction)

        Returns
        -------
        obs_batch: np.array
            batch of observations
        act_batch: np.array
            batch of actions executed given obs_batch
        rew_batch: np.array
            rewards received as results of executing act_batch
        next_obs_batch: np.array
            next set of observations seen after executing act_batch
        done_mask: np.array
            done_mask[i] = 1 if executing act_batch[i] resulted in
            the end of an episode and 0 otherwise.
        weights: np.array
            Array of shape (batch_size,) and dtype np.float32
            denoting importance weight of each sampled transition
        idxes: np.array
            Array of shape (batch_size,) and dtype np.int32
            idexes in buffer of sampled experiences
        """
        assert beta > 0

        idxes = self._sample_proportional(batch_size)

        weights = []
        p_min = self._it_min.min() / self._it_sum.sum()
        max_weight = (p_min * len(self._storage)) ** (-beta)

        for idx in idxes:
            p_sample = self._it_sum[idx] / self._it_sum.sum()
            weight = (p_sample * len(self._storage)) ** (-beta)
            weights.append(weight / max_weight)
        weights = np.array(weights)
        encoded_sample = self._encode_sample(idxes)
        return tuple(list(encoded_sample) + [weights, idxes])
    def update_priorities(self, idxes, priorities):
        """Update priorities of sampled transitions.

        sets priority of transition at index idxes[i] in buffer
        to priorities[i].

        Parameters
        ----------
        idxes: [int]
            List of idxes of sampled transitions
        priorities: [float]
            List of updated priorities corresponding to
            transitions at the sampled idxes denoted by
            variable `idxes`.
        """
        assert len(idxes) == len(priorities)
        for idx, priority in zip(idxes, priorities):
            assert priority > 0
            assert 0 <= idx < len(self._storage)
            self._it_sum[idx] = priority ** self._alpha
            self._it_min[idx] = priority ** self._alpha

            self._max_priority = max(self._max_priority, priority)
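For completeness, a short hedged usage sketch of the buffer above (obs_t, action, reward, obs_tp1, done and td_errors are placeholders; in a real DQN loop they come from the environment and from the network's TD errors):

buffer = PrioritizedReplayBuffer(size=100000, alpha=0.6)

# store a transition (placeholder variables obs_t, action, reward, obs_tp1, done)
buffer.add(obs_t, action, reward, obs_tp1, done)

# sample a prioritized mini-batch together with importance-sampling weights
obs, acts, rews, next_obs, dones, weights, idxes = buffer.sample(batch_size=32, beta=0.4)

# after computing new TD errors for the sampled transitions, refresh their priorities
new_priorities = np.abs(td_errors) + 1e-6   # small epsilon keeps priorities strictly positive
buffer.update_priorities(idxes, new_priorities)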