UPDATE: It was a mistake in the logic generating new characters. See answer below.
ORIGINAL QUESTION: I built an LSTM for character-level text generation with Pytorch. The model trains well (loss decreases reasonably etc.) but the trained model ends up outputting the last handful of words of the input repeated over and over again (e.g. Input: "She told her to come back later, but she never did"; Output: ", but she never did, but she never did, but she never did" and so on).
I have played around with the hyperparameters a bit, and the problem persists. I'm currently using:
Loss function: BCE
Optimizer: Adam
Learning rate: 0.001
Sequence length: 64
Batch size: 32
Embedding dim: 128
Hidden dim: 512
LSTM layers: 2
I also tried not always choosing the top choice, but this only introduces incorrect words and doesn't break the loop. I've been looking at countless tutorials, and I can't quite figure out what I'm doing differently/wrong.
The following is the code for training the model. training_data is one long string and I'm looping over it predicting the next character for each substring of length SEQ_LEN. I'm not sure if my mistake is here or elsewhere but any comment or direction is highly appreciated!
loss_dict = dict()
for e in range(EPOCHS):
    print("------ EPOCH {} OF {} ------".format(e + 1, EPOCHS))
    lstm.reset_cell()
    for i in range(0, DATA_LEN, BATCH_SIZE):
        if i % 50000 == 0:
            print(i / float(DATA_LEN))
        optimizer.zero_grad()
        # Encode BATCH_SIZE overlapping windows of SEQ_LEN characters each
        input_vector = torch.tensor([[
            vocab.get(char, len(vocab))
            for char in training_data[i + b:i + b + SEQ_LEN]
        ] for b in range(BATCH_SIZE)])
        if USE_CUDA and torch.cuda.is_available():
            input_vector = input_vector.cuda()
        output_vector = lstm(input_vector)
        # One-hot target: the character that follows each window
        target_vector = torch.zeros(output_vector.shape)
        if USE_CUDA and torch.cuda.is_available():
            target_vector = target_vector.cuda()
        for b in range(BATCH_SIZE):
            target_vector[b][vocab.get(training_data[i + b + SEQ_LEN])] = 1
        error = loss(output_vector, target_vector)
        error.backward()
        optimizer.step()
        loss_dict[(e, int(i / BATCH_SIZE))] = error.detach().item()
ANSWER: I had made a stupid mistake when producing the characters with the trained model: I got confused with the batch size and assumed that at each step the network would predict an entire batch of new characters when in fact it only predicts a single one… That's why it simply repeated the end of the input. Yikes!
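For reference, here is a minimal sketch of a generation loop that appends one predicted character at a time and feeds it back into the model (lstm, vocab, reset_cell, and SEQ_LEN are the names from the question above; inv_vocab and the greedy argmax are assumptions for illustration):

import torch

def generate(lstm, vocab, seed_text, n_chars, SEQ_LEN=64):
    inv_vocab = {idx: ch for ch, idx in vocab.items()}   # assumed reverse lookup
    text = seed_text
    lstm.reset_cell()
    for _ in range(n_chars):
        # Encode only the last SEQ_LEN characters as a batch of size 1
        window = text[-SEQ_LEN:]
        x = torch.tensor([[vocab.get(c, len(vocab)) for c in window]])
        with torch.no_grad():
            out = lstm(x)                      # one distribution over the vocabulary
        next_idx = int(out.argmax(dim=-1)[0])  # greedy choice; sampling also works
        text += inv_vocab.get(next_idx, "?")   # append a SINGLE character, not a batch
    return text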
Anyways, if you run into this problem DOUBLE CHECK that you have the right logic for producing new output with the trained model (especially if you're using batches). If it's not that and the problem persists, you can try fine-tuning the following:
sequence length
greediness (e.g. probabilistic choice vs. top choice for next character; see the sampling sketch after this list)
batch size
epochs
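If greedy argmax output feels too repetitive once the generation logic is fixed, here is a small sketch of temperature-based sampling (the temperature value and the assumption that the model returns one row of unnormalized scores per sequence are mine):

import torch

def sample_next_index(logits, temperature=0.8):
    # temperature < 1.0 is more conservative, > 1.0 is more random
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))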
I am working on a project to predict soccer player values from a set of inputs. The data consists of about 19,000 rows and 8 columns (7 input columns and 1 target column), all of them numerical.
I am using a fully connected neural network for the prediction, but the problem is that the loss is not decreasing as it should.
The loss is very large (around 1e+13) and doesn't really decrease; it just fluctuates.
This is the function I am using to run the model:
def gradient_descent(model, learning_rate, num_epochs, data_loader, criterion):
    losses = []
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  # pass the requested learning rate to Adam
    for epoch in range(num_epochs):  # one epoch
        for inputs, outputs in data_loader:  # one iteration
            inputs, outputs = inputs.to(torch.float32), outputs.to(torch.float32)
            logits = model(inputs)
            loss = criterion(torch.squeeze(logits), outputs)  # forward-pass
            optimizer.zero_grad()  # zero out the gradients
            loss.backward()  # compute the gradients (backward-pass)
            optimizer.step()  # take one step
            losses.append(loss.item())
        loss = sum(losses[-len(data_loader):]) / len(data_loader)
        print(f'Epoch #{epoch}: Loss={loss:.3e}')
    return losses
The model is a fully connected neural network with 4 hidden layers of 7 neurons each. The input layer has 7 neurons and the output layer has 1. I am using MSE as the loss function. I tried changing the learning rate, but it is still bad.
What could be the reason behind this?
Thank you!
It is difficult to diagnose your problem from the information you provided, but I'll try to point you in some useful directions.
Data Normalization:
The way we initialize the weights in deep NN has a significant effect on the training process. See, e.g.:
He, K., Zhang, X., Ren, S. and Sun, J., Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (ICCV 2015).
Most initialization methods assume the inputs have zero mean and unit variance (or similar statistics). If your inputs violate these assumptions, you will find it difficult to train. See, e.g., this post.
Normalize the Targets:
You are trying to solve a regression problem (MSE loss), so it might be the case that your targets are poorly scaled and are causing very large loss values. Try normalizing the targets to span a more compact range.
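As a rough illustration of standardizing both inputs and targets before training (a sketch with toy placeholder data; substitute your real feature matrix and player values):

import numpy as np

# Toy stand-ins for the 7 input columns and the (very large) target values
rng = np.random.default_rng(0)
X = rng.normal(size=(19000, 7)).astype(np.float32) * 50.0
y = rng.normal(size=19000).astype(np.float32) * 1e6      # player-value-like scale

x_mean, x_std = X.mean(axis=0), X.std(axis=0) + 1e-8
y_mean, y_std = y.mean(), y.std() + 1e-8

X_norm = (X - x_mean) / x_std    # zero-mean, unit-variance inputs
y_norm = (y - y_mean) / y_std    # compact target range keeps MSE values sane

# After training, map predictions back to the original scale:
# y_pred = y_pred_norm * y_std + y_mean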
Learning Rate:
Try and adjust your learning rate: both increasing it and decreasing it by orders of magnitude.
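A rough sweep such as the following (a sketch with a tiny placeholder model and random placeholder data; plug in your real DataLoader and architecture) makes it easy to compare learning rates across orders of magnitude:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data shaped like the problem: 7 inputs, 1 target
X = torch.randn(1024, 7)
y = torch.randn(1024)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

for lr in (1e-1, 1e-2, 1e-3, 1e-4):
    model = nn.Sequential(nn.Linear(7, 7), nn.ReLU(), nn.Linear(7, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(3):
        for xb, yb in loader:
            loss = nn.functional.mse_loss(model(xb).squeeze(), yb)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    print(f'lr={lr:.0e}  final batch loss={loss.item():.3e}')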
First, I want to thank anyone who takes the time to read this question, and I want to apologize if my question is a silly one, and for my poor English.
Currently I'm working on a recommendation system problem, and my approach is matrix factorization with implicit feedback using BPR (arXiv:1205.2618). I discovered that training my model (BPRMF) with a large batch size (in this case 4096) resulted in a poorer BPR loss than with a smaller batch size (1024); see my training log over a few epochs.
I noted that a higher batch size results in faster training, as it utilizes GPU memory more efficiently, but the higher loss is not something I'm willing to trade for that. As far as I know, a large batch size brings more information to each gradient descent step, so it should help with convergence; the usual problem with a large batch size is memory and resources, not loss.
I have done some research on this and saw that Large Batch Training Result in Poor Generalization (and here is another), but in my case the loss was poor during training itself.
My best guess is that using a large batch size and then taking the mean of the loss scales the gradient flowing to the user and item embeddings down by a 1 / batch_size coefficient, making it hard to escape local minima during training. Is that the explanation in this case? (However, I have seen recent work suggesting that local minima are not necessarily bad, so ...)
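A quick toy check of that 1 / batch_size intuition (illustrative only, not the BPRMF code): with mean reduction, the per-sample gradient reaching an embedding row shrinks as the batch grows, while with sum reduction it does not.

import torch
import torch.nn.functional as F

def avg_grad(batch_size, reduction):
    # Each row of emb stands in for one sample's embedding; each sample only
    # touches its own row here (in BPRMF one user/item row can be hit by
    # several samples in a batch, which accumulates gradients).
    torch.manual_seed(0)
    emb = torch.randn(batch_size, 8, requires_grad=True)
    scores = emb.sum(dim=1)                      # stand-in for pos - neg scores
    loss = -F.logsigmoid(scores)
    loss = loss.mean() if reduction == "mean" else loss.sum()
    loss.backward()
    return emb.grad.abs().mean().item()          # average per-element gradient

for bs in (1024, 4096):
    print(bs, "mean:", avg_grad(bs, "mean"), "sum:", avg_grad(bs, "sum"))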
I would really appreciate anybody helping me understand why a large batch size ends up with these anomalous results.
Side note: this might be another silly question, but as you can see in the code below, the L2 loss is not normalized by batch size, so I expected it to at least double or quadruple when I multiply the batch size by 4, but that does not seem to be the case in the log above.
Here is my code
from typing import Tuple

import torch
from torch.nn.parameter import Parameter
import torch.nn.functional as F

from .PretrainedModel import PretrainedModel


class BPRMFModel(PretrainedModel):
    def __init__(self, n_users: int, n_items: int, u_embed: int, l2: float,
                 dataset: str, u_i_pretrained_dir, use_pretrained=0, **kwargs) -> None:
        super().__init__(n_users=n_users, n_items=n_items, u_embed=u_embed, dataset=dataset,
                         u_i_pretrained_dir=u_i_pretrained_dir, use_pretrained=use_pretrained,
                         **kwargs)
        self.l2 = l2
        self.reset_parameters()
        self.items_e = Parameter(self._items_e)
        self.users_e = Parameter(self._users_e)

    def forward(self, u: torch.Tensor, i: torch.Tensor) -> torch.Tensor:
        u = F.embedding(u, self.users_e)
        i = F.embedding(i, self.items_e)
        return torch.matmul(u, i.T)

    def CF_loss(self, u: torch.Tensor, i_pos: torch.Tensor, i_neg: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        # u, i_pos, i_neg shape is [batch_size,]
        u = F.embedding(u, self.users_e)
        i_pos = F.embedding(i_pos, self.items_e)
        i_neg = F.embedding(i_neg, self.items_e)
        pos_scores = torch.einsum("ij,ij->i", u, i_pos)
        neg_scores = torch.einsum("ij,ij->i", u, i_neg)
        # loss = torch.mean(
        #     F.softplus(-(pos_scores - neg_scores))
        # )
        loss = torch.neg(
            torch.mean(
                F.logsigmoid(pos_scores - neg_scores)
            )
        )
        l2_loss = (
            u.pow(2).sum() +
            i_pos.pow(2).sum() +
            i_neg.pow(2).sum()
        )
        return loss, self.l2 * l2_loss

    def get_users_rating_for_each_items(self, u: torch.Tensor, i: torch.Tensor) -> torch.Tensor:
        return self(u, i)

    def save_pretrained(self):
        self._items_e = self.items_e.data
        self._users_e = self.users_e.data
        return super().save_pretrained()
PretrainedModel is just a base class that helps me save and load the model weights.
I really appreciate anyone who bore with me to the end.
I'm interested in fine-tuning a Mask-RCNN model that I'm using for instance segmentation. Currently I have trained the model for 6 epochs and the various Mask-RCNN losses are as follows:
The reason I'm stopping is that the COCO evaluation metrics seem to have dipped in the last epoch:
I know this is a far-reaching question, but I'm looking to gain some intuition about which parameters will be most impactful in improving the evaluation metrics. I understand there are three places to consider:
Should I be looking at batch size, learning rate, and momentum? This uses an SGD optimizer with a learning rate of 1e-4 and a batch size of 2.
Should I be looking at using more training data or adding augmentation? I don't currently use any, and my dataset is already pretty large (40K images).
Should I be looking at the specific Mask-RCNN parameters?
I think I'll likely be asked to be more specific about what I want to improve, so let me say that I would like to improve the recall of the individual masks. The model is performing well but doesn't quite capture the full extent of what I would like it to. I'm also leaving out details of the specific learning problem, as I'd like to gain intuition about how to approach this in general.
A couple of notes:
6 epochs is too few for the network to converge, even if you use a pre-trained network, and especially one as big as ResNet50. I think you need at least 50 epochs. On a pre-trained ResNet18 I started to get good results after 30 epochs, ResNet34 needed another 10-20 epochs, and your ResNet50 plus a 40K-image training set definitely needs more than 6 epochs;
definitely use a pre-trained network;
in my experience, I failed to get the results I wanted with SGD. I started using AdamW with a ReduceLROnPlateau scheduler. The network converges quite fast, reaching roughly 50-60% AP at epoch 7 or 8, but then it climbs to 80-85% after 50-60 epochs through very small improvements from epoch to epoch, and only if the LR is small enough. You must be familiar with the notion of gradient descent: I think of it as if the more augmentation you have, the more your "hill" is covered with "boulders" that you have to be able to bypass, and this is only possible if you control the LR. Additionally, AdamW helps with overfitting.
This is how I do it. For networks with higher input resolution (your input images are scaled on input by the net itself), I use higher LR.
init_lr = 0.00005
weight_decay = init_lr * 100
optimizer = torch.optim.AdamW(params, lr=init_lr, weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, verbose=True, patience=3, factor=0.75)

for epoch in range(epochs):
    # train for one epoch, printing every 10 iterations
    metric_logger = train_one_epoch(model, optimizer, train_loader, scaler, device,
                                    epoch, print_freq=10)
    scheduler.step(metric_logger.loss.global_avg)
    optimizer.param_groups[0]["weight_decay"] = optimizer.param_groups[0]["lr"] * 100
    # scheduler.step()

    # evaluate on the test dataset
    evaluate(model, test_loader, device=device)
    print("[INFO] serializing model to '{}' ...".format(args["model"]))
    save_and_print_size_of_model(model, args["model"], script=False)
Find an LR and weight decay such that the training exhausts the LR down to a very small value, like 1/10 of your initial LR, by the end of the training. If you hit a plateau too often, the scheduler quickly brings the LR down to very small values and the network will learn nothing for the rest of the epochs.
Your plots indicate that your LR is too high at some point in the training: the network stops improving and then AP goes down. You need constant improvements, even small ones. The more the network trains, the more subtle details it learns about your domain, and the smaller the learning rate should be. IMHO, a constant LR will not allow that to happen correctly.
anchor generator settings. Here is how I initialize the network.
def get_maskrcnn_resnet_model(name, num_classes, pretrained, res='normal'):
    print('Using maskrcnn with {} backbone...'.format(name))
    backbone = resnet_fpn_backbone(name, pretrained=pretrained, trainable_layers=5)

    sizes = ((4,), (8,), (16,), (32,), (64,))
    aspect_ratios = ((0.25, 0.5, 1.0, 2.0, 4.0),) * len(sizes)
    anchor_generator = AnchorGenerator(
        sizes=sizes, aspect_ratios=aspect_ratios
    )

    roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0', '1', '2', '3'],
                                                    output_size=7, sampling_ratio=2)

    default_min_size = 800
    default_max_size = 1333
    if res == 'low':
        min_size = int(default_min_size / 1.25)
        max_size = int(default_max_size / 1.25)
    elif res == 'normal':
        min_size = default_min_size
        max_size = default_max_size
    elif res == 'high':
        min_size = int(default_min_size * 1.25)
        max_size = int(default_max_size * 1.25)
    else:
        raise ValueError('Invalid res={} param'.format(res))

    model = MaskRCNN(backbone, min_size=min_size, max_size=max_size, num_classes=num_classes,
                     rpn_anchor_generator=anchor_generator, box_roi_pool=roi_pooler)
    model.roi_heads.detections_per_img = 512
    return model
I need to find small objects here, which is why I use such anchor params.
class imbalance issue. If you have only your object and background, no problem. If you have more classes, then make sure that your training split (e.g. 80% for training and 20% for testing) is applied more or less evenly to all the classes used in your particular training.
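For the split itself, a stratified split keeps class proportions consistent between train and test (a sketch; image_paths and labels are hypothetical lists of your samples and their class ids, and it assumes scikit-learn is available):

from sklearn.model_selection import train_test_split

image_paths = [f"img_{k}.jpg" for k in range(100)]   # placeholder sample list
labels = [k % 3 for k in range(100)]                 # placeholder class ids

train_paths, test_paths, train_labels, test_labels = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, random_state=42
)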
Good luck!
From what I've learned, when the validation loss is greater than the training loss there is overfitting. However, I'm getting this from the first epoch.
See below:
I'm using a tabular learner (from FastAI v2) that has about 72 inputs.
I have 360K unevenly distributed cases, of which the majority are cat1, then cat2, etc.; we have about 20K of cat6. I upsample all the training data so that all categories are equally represented. The validation set is 2% of the training set.
I tried lowering the number of layers. This brings the training loss a bit closer to the validation loss, but the validation loss is always higher than the training loss, from the first epoch on.
What could be the explanation for this?
Code used:
coord_labels, semantic_labels = [], []
for i in range(18):
    coord_labels += [f'x{i+1}', f'y{i+1}', f'conf{i+1}']
    semantic_labels += [f'sem{i+1}']

dls = TabularDataLoaders.from_csv(
    '/content/total_training.csv',
    y_names='corrected_person_position_type_id',
    cont_names=coord_labels,
    cat_names=semantic_labels,
    procs=[Categorify, Normalize],
    valid_idx=valid_idx,
    bs=2048
)

learn = tabular_learner(dls, metrics=accuracy)
learn.fit_one_cycle(10)
I'm currently working on a project in pytorch on Wasserstein GAN (https://arxiv.org/pdf/1701.07875.pdf).
In a Wasserstein GAN, a new objective function is defined using the Wasserstein distance as:
Which leads to the following algorithms for training the GAN:
My question is :
When implementing lines 5 and 6 of the algorithm in PyTorch, should I be multiplying my loss by -1? As in my code (I use RMSprop as the optimizer for both the generator and the critic):
############################
# (1) Update D network: maximize (D(x)) + (D(G(x)))
###########################
for n in range(n_critic):
    D.zero_grad()
    real_cpu = data[0].to(device)
    b_size = real_cpu.size(0)
    output = D(real_cpu)
    #errD_real = -criterion(output, label) #DCGAN
    errD_real = torch.mean(output)
    # Calculate gradients for D in backward pass
    errD_real.backward()
    D_x = output.mean().item()

    ## Train with all-fake batch
    # Generate batch of latent vectors
    noise = torch.randn(b_size, 100, device=device)  #Careful here we changed shape of input (original : torch.randn(4, 100, 1, 1, device=device))
    # Generate fake image batch with G
    fake = G(noise)
    # Classify all fake batch with D
    output = D(fake.detach())
    # Calculate D's loss on the all-fake batch
    errD_fake = torch.mean(output)
    # Calculate the gradients for this batch
    errD_fake.backward()
    D_G_z1 = output.mean().item()
    # Add the gradients from the all-real and all-fake batches
    errD = -(errD_real - errD_fake)
    # Update D
    optimizerD.step()

    # Clipping weights
    for p in D.parameters():
        p.data.clamp_(-0.01, 0.01)
As you can see, I do the operation errD = -(errD_real - errD_fake), with errD_real and errD_fake being respectively the mean of the predictions of the critic on real and fake samples.
To my understanding RMSprop should optimize the weights of the critic the following way :
w <- w - alpha*gradient(w)
(alpha being the learning rate divided by the square root of the weighted moving average of the squared gradient)
Since the optimization problem requires "going" in the same direction as the gradient, it should be necessary to multiply gradient(w) by -1 before updating the weights.
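As a quick toy check of that sign logic (illustrative only, not the WGAN code): minimizing the negated objective with a descent-style optimizer does maximize the original objective.

import torch

# w - 0.5*w**2 has its maximum at w = 1; we negate it because optimizers minimize
w = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.RMSprop([w], lr=0.1)
for _ in range(200):
    loss = -(w - 0.5 * w ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(w.item())   # ends up close to 1.0, the maximizer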
Do you think that my reasoning is right ?
The program runs, but my results are quite poor.
I follow the same logic for the generator's weights, but this time in order to go in the opposite direction of the gradient:
############################
# (2) Update G network: minimize -D(G(x))
###########################
G.zero_grad()
noise = torch.randn(b_size, 100, device=device)
fake = G(noise)
#label.fill_(fake_label) # fake labels are real for generator cost
# Since we just updated D, perform another forward pass of all-fake batch through D
output = D(fake).view(-1)
# Calculate G's loss based on this output
#errG = criterion(output, label) #DCGAN
errG = -torch.mean(output)
# Calculate gradients for G
errG.backward()
D_G_z2 = output.mean().item()
# Update G
optimizerG.step()
Sorry for the long question; I tried to explain my doubt as clearly as possible. Thank you, everyone.
I noticed some errors in the implementation of your discriminator training protocol. You call backward() twice, with the real-sample and fake-sample losses being backpropagated at different time steps.
Technically an implementation using this scheme is possible but highly unreadable. There is also a mistake with your errD_real, in that its output is going to be positive instead of negative, since an optimal D(G(z)) > 0, and so you penalize it for being correct. Overall, your model converges simply by predicting D(x) < 0 for all inputs.
To fix this, do not call errD_real.backward() or errD_fake.backward(). Simply calling errD.backward() after you define errD would work perfectly fine. Otherwise, your generator seems to be correct.
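A minimal sketch of what that fix looks like for the critic update, reusing the names from the question (D, G, optimizerD, data, device, n_critic); everything else is illustrative rather than the asker's full loop:

for n in range(n_critic):
    D.zero_grad()

    real_cpu = data[0].to(device)
    b_size = real_cpu.size(0)
    errD_real = torch.mean(D(real_cpu))        # E[D(x)] on real samples

    noise = torch.randn(b_size, 100, device=device)
    fake = G(noise)
    errD_fake = torch.mean(D(fake.detach()))   # E[D(G(z))] on fake samples

    # Minimizing -(E[D(x)] - E[D(G(z))]) maximizes the WGAN critic objective
    errD = -(errD_real - errD_fake)
    errD.backward()                            # a single backward pass
    optimizerD.step()

    # Weight clipping to enforce the Lipschitz constraint
    for p in D.parameters():
        p.data.clamp_(-0.01, 0.01)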