PyTorch Experience Replay with multiple inputs

I am running a modified version of the PyTorch deep Q-learning tutorial, which I have changed to pass in my own data rather than Gym, plus one additional input (two inputs in total).
Currently I am generating an individual state for each 'column' of inputs (not sure if this is the correct way, though). When I try to pass the second input into my experience replay function, it returns:
__new__() takes 5 positional arguments but 7 were given
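For context, the tutorial this is based on defines Transition as a four-field namedtuple, which matches the error message: namedtuple's __new__ receives cls plus one argument per field (5 in total), but push passed six values (7 in total).

from collections import namedtuple

# The Transition tuple as defined in the PyTorch DQN tutorial: four fields,
# so Transition(*args) with six args raises the __new__() error above.
Transition = namedtuple('Transition',
                        ('state', 'action', 'next_state', 'reward'))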
Code for the experience replay (ReplayMemory):
class ReplayMemory(object):

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.position = 0

    def push(self, *args):
        """Saves a transition."""
        if len(self.memory) < self.capacity:
            self.memory.append(None)
        self.memory[self.position] = Transition(*args)
        self.position = (self.position + 1) % self.capacity

    def sample(self, batch_size):
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
And my call that triggers the error:
memory.push(state, rsistate, action, next_state, next_rsi_state, reward)
If anyone has any examples of experience replay using multiple inputs please fire away! <3

My mistake: this was solved by modifying the Transition namedtuple to include the additional inputs. Any information on multi-input experience replay is still welcome.
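For anyone else hitting this, a minimal sketch of the fix (the field names are assumptions matching the push call above):

from collections import namedtuple

# Extend the namedtuple so its field count matches the values being pushed.
Transition = namedtuple('Transition',
                        ('state', 'rsistate', 'action',
                         'next_state', 'next_rsi_state', 'reward'))

# memory.push(state, rsistate, action, next_state, next_rsi_state, reward)
# now maps one value to each field, and sampled batches can be unpacked
# per input with: batch = Transition(*zip(*memory.sample(batch_size)))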

Related

Creating a custom environment for reinforcement learning problems with delayed rewards and continuous time

I want to create a custom environment for further RL tasks. There is one feature of the environment that I am not sure how to deal with: the reward is not recorded right after the action, but after a period of "time".
(Since there is "time", there might need to be a "clock" so that the order of discrete events can be arranged?)
I looked at some tutorials and summarized them as the pseudocode below. However, most open-source code is for traditional RL environments, where the reward is recorded right after the action. I am confused about what to do in the def step(self, action): function; one possible approach is sketched after the code.
class env():
    def __init__(self):
        self.current_hour = 0

        # states
        self.observation = 0

        # actions
        self.actions = ['action1', 'action2']  # action space
        self.n_actions = len(self.actions)     # number of possible actions

        # rewards
        self.reward = 0
        self.done = False

    def step(self, action):
        # 1. Update the environment state based on the action chosen
        # 2. Calculate the reward for the new state
        # 3. Store the new observation for the state
        # 4. Check if the episode is over and store as done
        return self.observation, self.reward, self.done

    def reset(self):
        # Reset the environment's state. Returns observation.
        self.done = False
        self.observation = 0
        return self.observation
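One common way to handle the delay: keep the standard step() interface, but hold each reward in a pending queue until its delay has elapsed on the environment's clock. A minimal sketch under stated assumptions; the one-hour tick, the fixed reward_delay, the 24-hour episode, and the _apply() placeholder are all illustrative choices, not part of the original pseudocode:

class DelayedRewardEnv:
    def __init__(self, reward_delay=3):
        self.current_hour = 0
        self.observation = 0
        self.done = False
        self.reward_delay = reward_delay
        self.pending = []  # (due_hour, reward) pairs not yet paid out

    def step(self, action):
        # 1. Update state and schedule the reward for a later tick.
        raw_reward = self._apply(action)
        self.pending.append((self.current_hour + self.reward_delay,
                             raw_reward))

        # 2. Advance the clock, then pay out every reward that is now due.
        self.current_hour += 1
        reward = sum(r for due, r in self.pending if due <= self.current_hour)
        self.pending = [(due, r) for due, r in self.pending
                        if due > self.current_hour]

        self.done = self.current_hour >= 24  # assumed episode length
        return self.observation, reward, self.done

    def _apply(self, action):
        # Placeholder for the real state transition; returns the raw
        # reward that becomes observable after reward_delay hours.
        return 1.0 if action == 'action1' else 0.0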

How to save the model weights after running train_detector in mmdetection?

cfg.optimizer.lr = 0.02 / 8
cfg.lr_config.warmup = None
cfg.log_config.interval = 600
# Change the evaluation metric since we use customized dataset.
cfg.evaluation.metric = 'bbox'
# We can set the evaluation interval to reduce the evaluation times
cfg.evaluation.interval = 3
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 3
# Set seed thus the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
cfg.load_from = 'gdrive/My Drive/mmdetection/checkpoints/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco_20201027pth-6879c318.pth'
cfg.work_dir = "../vinbig"
cfg.runner.max_epochs = 6
cfg.total_epochs = 6

model = build_detector(cfg.model)
datasets = [build_dataset(cfg.data.train)]
train_detector(model, datasets[0], cfg, distributed=False, validate=True)
Now, my question is: once I have fine-tuned the model on my custom dataset, how do I use it for testing? Where is the fine-tuned model stored?
In most examples the model is immediately used for testing, but how do I save the fine-tuned model to be tested later on?
img = mmcv.imread('kitti_tiny/training/image_2/000068.jpeg')
model.cfg = cfg
result = inference_detector(model, img)
show_result_pyplot(model, img, result)
The above is what usually happens after the training phase, but that only works because the model is already in memory. How can I create my own mmdetection model checkpoint?
I have been working on Google colab.
Well, I don't know how to do it manually, but your checkpoints are automatically saved in cfg.work_dir = "../vinbig". There you can find the 'latest.pth' file as your final checkpoint.
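To reuse it later, a minimal sketch with mmdet's high-level API (the config path below is an assumption; point it at the same config you trained with):

from mmdet.apis import init_detector, inference_detector, show_result_pyplot

# Rebuild the model from the training config and the saved weights.
config_file = 'configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'  # assumed path
checkpoint_file = '../vinbig/latest.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')

result = inference_detector(model, 'kitti_tiny/training/image_2/000068.jpeg')
show_result_pyplot(model, 'kitti_tiny/training/image_2/000068.jpeg', result)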
It took me a while to find, because the documentation in mmdet.core.evaluation.eval_hooks is not very clear, but the old version on their readthedocs describes a save_best attribute of the EvalHook:
save_best (str, optional): If a metric is specified, it would measure
the best checkpoint during evaluation. The information about the best
checkpoint would be saved in best.json.
Options are the evaluation metrics on the test dataset, e.g.
``bbox_mAP``, ``segm_mAP`` for bbox detection and instance
segmentation, ``AR@100`` for proposal recall. If ``save_best`` is
``auto``, the first key will be used. The interval of
``CheckpointHook`` should be divisible by that of ``EvalHook``. Default: None.
To enable it, you can just add the argument to the evaluation attribute in the config:
cfg.evaluation = dict(interval= 2, metric='mAP', save_best='mAP')
This will test the model on the validation set every 2 epochs and save the checkpoint that obtained the best mAP metric (in your case it might need to be bbox instead), in addition to every checkpoint indicated by the checkpoint_config.interval attribute.
:)
Edit: mmdet's EvalHook inherits from mmcv's EvalHook, where the documentation is complete.

Sequence to Sequence Loss

I'm trying to figure out how sequence-to-sequence loss is calculated. I am using the HuggingFace transformers library in this case, but this might actually be relevant to other DL libraries.
So to get the required data we can do:
from transformers import EncoderDecoderModel, BertTokenizer
import torch
import torch.nn.functional as F
torch.manual_seed(42)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
MAX_LEN = 128
tokenize = lambda x: tokenizer(x, max_length=MAX_LEN, truncation=True, padding=True, return_tensors="pt")
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
input_seq = ["Hello, my dog is cute", "my cat cute"]
output_seq = ["Yes it is", "ok"]
input_tokens = tokenize(input_seq)
output_tokens = tokenize(output_seq)
outputs = model(
    input_ids=input_tokens["input_ids"],
    attention_mask=input_tokens["attention_mask"],
    decoder_input_ids=output_tokens["input_ids"],
    decoder_attention_mask=output_tokens["attention_mask"],
    labels=output_tokens["input_ids"],
    return_dict=True)
idx = output_tokens["input_ids"]
logits = F.log_softmax(outputs["logits"], dim=-1)
mask = output_tokens["attention_mask"]
Edit 1
Thanks to @cronoik I was able to replicate the loss calculated by HuggingFace as:
output_logits = logits[:,:-1,:]
output_mask = mask[:,:-1]
label_tokens = output_tokens["input_ids"][:, 1:].unsqueeze(-1)
select_logits = torch.gather(output_logits, -1, label_tokens).squeeze()
huggingface_loss = -select_logits.mean()
However, since the last two tokens of the second output sequence are just padding, shouldn't we calculate the loss as:
seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True)
seq_loss = -seq_loss.mean()
This takes into account the sequence length of each row of outputs and masks out the padding. I think this is especially useful when we have batches of varying-length outputs.
OK, I found out where I was making the mistakes, all thanks to this thread in the HuggingFace forum.
The output labels need to be -100 at the masked positions; the transformers library does not do this for you.
One silly mistake I made was with the mask: it should have been output_mask = mask[:, 1:] instead of :-1.
1. Using Model
We need to set the masked positions of the output labels to -100. It is important to use clone, as shown below, so the original input_ids tensor is not modified:
labels = output_tokens["input_ids"].clone()
labels[output_tokens["attention_mask"]==0] = -100
outputs = model(
    input_ids=input_tokens["input_ids"],
    attention_mask=input_tokens["attention_mask"],
    decoder_input_ids=output_tokens["input_ids"],
    decoder_attention_mask=output_tokens["attention_mask"],
    labels=labels,
    return_dict=True)
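As an aside, this works because HuggingFace computes the loss internally with torch.nn.CrossEntropyLoss, whose ignore_index defaults to -100, so the masked positions simply drop out of the mean. A standalone sketch of that behaviour (shapes are arbitrary):

import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(2, 4, 10)        # (batch, seq_len, vocab_size)
labels = torch.randint(0, 10, (2, 4))
labels[1, 2:] = -100                  # mark the padded positions

loss_fct = nn.CrossEntropyLoss()      # ignore_index defaults to -100
loss = loss_fct(logits.view(-1, 10), labels.view(-1))
# positions labelled -100 contribute nothing to the averaged loss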
2. Calculating Loss
So the final way to replicate it is as follows:
idx = output_tokens["input_ids"]
logits = F.log_softmax(outputs["logits"], dim=-1)
mask = output_tokens["attention_mask"]
# shift things
output_logits = logits[:,:-1,:]
label_tokens = idx[:, 1:].unsqueeze(-1)
output_mask = mask[:,1:]
# gather the logits and mask
select_logits = torch.gather(output_logits, -1, label_tokens).squeeze()
-select_logits[output_mask==1].mean(), outputs["loss"]
The above, however, ignores the fact that the tokens come from two different sequences. So an alternative way of calculating the loss could be:
seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True)
seq_loss.mean()
Thanks for sharing. However, the new version of transformers, as of today, actually does not "shift" anymore, so the following is not needed:
# shift things
output_logits = logits[:, :-1, :]
label_tokens = idx[:, 1:].unsqueeze(-1)
output_mask = mask[:, 1:]

Trying to define a function that creates lists from files and uses random.choices to choose an element from the weighted lists

I'm trying to define a function that will create lists from multiple text files and print a random element from one of the weighted lists. I've managed to get the function to work with random.choice for a single list.
import random

def test_rollitems():
    my_commons = open('common.txt')
    all_common_lines = my_commons.readlines()
    common = []
    for i in all_common_lines:
        common.append(i)
    y = random.choice(common)
    print(y)
When I tried adding a second list to the function, it wouldn't work and my program just closed when the function was called.
def Improved_rollitem():
    # create the lists from the files
    my_commons = open('common.txt')
    all_common_lines = my_commons.readlines()
    common = []
    for i in all_common_lines:
        common.append(i)
    my_uncommons = open('uncommon.txt')
    all_uncommon_lines = my_uncommons.readlines()
    uncommon = []
    for i in all_uncommon_lines:
        uncommon.apend(i)
    y = random.choices([common, uncommon], [80, 20])
    print(y)
Can anyone offer any insight into what I'm doing wrong or missing?
Never mind, I figured this out on my own! I was having issues with Geany, so I installed PyCharm and was able to work through the issue (note the typo in the code above: uncommon.apend instead of uncommon.append). Correct code is:
def Improved_rollitem():
    # create the lists from the files
    my_commons = open('common.txt')
    all_common_lines = my_commons.readlines()
    common = []
    for i in all_common_lines:
        common.append(i)
    my_uncommons = open('uncommon.txt')
    all_uncommon_lines = my_uncommons.readlines()
    uncommon = []
    for i in all_uncommon_lines:
        uncommon.append(i)
    y = random.choices([common, uncommon], [.8, .20])
    if y == [common]:
        for i in [common]:
            print(random.choice(i))
    if y == [uncommon]:
        for i in [uncommon]:
            print(random.choice(i))
If there's a better way to do something like this, it would certainly be cool to know though.
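Since you asked: here is a tidier sketch of the same idea. random.choices returns a one-element list, so you can unpack it and draw directly from whichever list was picked (the with statements also close the files for you):

import random

def improved_rollitem():
    with open('common.txt') as f:
        common = f.readlines()
    with open('uncommon.txt') as f:
        uncommon = f.readlines()
    # Pick one of the two lists with an 80/20 weighting, then one line from it.
    chosen = random.choices([common, uncommon], weights=[80, 20])[0]
    print(random.choice(chosen))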

How to get dataset into array

I have worked through all the tutorials and searched for "load csv tensorflow" but just can't get the logic of it. I'm not a total beginner, but I don't have much time to complete this, and I've been suddenly thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
A very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices:
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article: https://www.tensorflow.org/guide/datasets
It covers loading pretty well. It even has a section on converting to NumPy arrays, which is what I need to train and test the net. However, as the author says in the article leading to this page, it is pretty complex. It seems like everything is geared toward data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but not into NumPy, and with no example of using the data as-is.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question is not entirely clear to me. If I understand correctly (tell me if I'm wrong), you are asking how to feed data into your model? There are several ways to do so:
1. Use placeholders with feed_dict during the session. This is the basic and easiest one, but it often suffers from training performance issues. For further explanation, check this post.
2. Use queues. Hard to implement and badly documented; I don't suggest it, because it has been superseded by the third method.
3. The tf.data API.
...
So to answer your question by the first method:
# get your array outside the session
import csv
import numpy as np

with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)

close_col = dataset[:, 0]
signal_cols = dataset[:, 1:4]    # the three signal columns
previous_cols = dataset[:, 4:]   # the 180 previous prices

# let's say you load 100 rows each time for training
batch_size = 100

# define placeholders like you did
...

with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
By the third method:
# retrieve your columns like before
...
# let's say you load 100 row each time for training
batch_size = 100
# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)
batch = tf.data.Dataset.from_tensor_slices((close_col, signal_col, previous_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size) #mix data --> assemble batches --> prefetch to RAM and ready inject to model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next() #get next batch operation automatically called at each iteration within the session
# replace your close, signal, previous placeholder in your model by c_it, s_it, p_it when you define your model
...
with tf.Session() as sess:
# you need to initialize the iterators
sess.run([tf.global_variable_initializer, iter_init_operation])
...
for i in range(number_iter):
start = i * batch_size
end = (i + 1) * batch_size
sess.run(train_operation)
Good luck!