Sequence to Sequence Loss - deep-learning

I'm trying to figure out how sequence-to-sequence loss is calculated. I am using the Hugging Face transformers library in this case, but this might actually be relevant to other DL libraries.
So to get the required data we can do:
from transformers import EncoderDecoderModel, BertTokenizer
import torch
import torch.nn.functional as F
torch.manual_seed(42)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
MAX_LEN = 128
tokenize = lambda x: tokenizer(x, max_length=MAX_LEN, truncation=True, padding=True, return_tensors="pt")
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased') # initialize Bert2Bert from pre-trained checkpoints
input_seq = ["Hello, my dog is cute", "my cat cute"]
output_seq = ["Yes it is", "ok"]
input_tokens = tokenize(input_seq)
output_tokens = tokenize(output_seq)
outputs = model(
    input_ids=input_tokens["input_ids"],
    attention_mask=input_tokens["attention_mask"],
    decoder_input_ids=output_tokens["input_ids"],
    decoder_attention_mask=output_tokens["attention_mask"],
    labels=output_tokens["input_ids"],
    return_dict=True)
idx = output_tokens["input_ids"]
logits = F.log_softmax(outputs["logits"], dim=-1)
mask = output_tokens["attention_mask"]
Edit 1
Thanks to cronoik I was able to replicate the loss calculated by Hugging Face as:
output_logits = logits[:,:-1,:]
output_mask = mask[:,:-1]
label_tokens = output_tokens["input_ids"][:, 1:].unsqueeze(-1)
select_logits = torch.gather(output_logits, -1, label_tokens).squeeze()
huggingface_loss = -select_logits.mean()
However, since the last two tokens of the second output sequence are just padding, shouldn't we calculate the loss as:
seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True)
seq_loss = -seq_loss.mean()
^This takes into account the length of each output sequence in the batch and masks out the padding. I think this is especially useful when we have batches of outputs with varying lengths.
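To make this concrete, here is a toy illustration with made-up numbers (not the actual model outputs above), comparing a single average over all unmasked tokens with a per-sequence average:

import torch

# Toy gathered log-probabilities: sequence 1 has 3 real tokens,
# sequence 2 has 1 real token followed by 2 padding positions.
select_logits = torch.tensor([[-1.0, -2.0, -3.0],
                              [-4.0,  0.0,  0.0]])
output_mask = torch.tensor([[1., 1., 1.],
                            [1., 0., 0.]])

# per-token average over the whole batch: -(-10) / 4 = 2.5
per_token = -(select_logits * output_mask).sum() / output_mask.sum()

# per-sequence average first, then mean over sequences: (2 + 4) / 2 = 3.0
per_seq = -((select_logits * output_mask).sum(-1) / output_mask.sum(-1)).mean()

print(per_token, per_seq)  # tensor(2.5000) tensor(3.)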

OK, I found out where I was making the mistakes. This is all thanks to this thread in the Hugging Face forum.
The output labels need to be set to -100 at the masked (padding) positions. The transformers library does not do this for you.
One silly mistake I made was with the mask. It should have been output_mask = mask[:, 1:] instead of :-1.
1. Using the Model
We need to set the masked positions of the labels to -100. It is important to use clone as shown below, since the original output_tokens["input_ids"] is still passed as decoder_input_ids:
labels = output_tokens["input_ids"].clone()
labels[output_tokens["attention_mask"]==0] = -100
outputs = model(
    input_ids=input_tokens["input_ids"],
    attention_mask=input_tokens["attention_mask"],
    decoder_input_ids=output_tokens["input_ids"],
    decoder_attention_mask=output_tokens["attention_mask"],
    labels=labels,
    return_dict=True)
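The reason -100 works is that it is the default ignore_index of PyTorch's CrossEntropyLoss, which transformers uses internally to compute the loss, so positions labelled -100 simply don't contribute. A tiny stand-alone illustration (not the library's internal code):

import torch
import torch.nn as nn

loss_fct = nn.CrossEntropyLoss()                # ignore_index defaults to -100
toy_logits = torch.randn(4, 10)                 # 4 positions, vocabulary of 10
toy_targets = torch.tensor([1, 3, -100, -100])  # last two positions are "padding"
print(loss_fct(toy_logits, toy_targets))        # averaged over the first two positions only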
2. Calculating Loss
So the final way to replicate it is as follows:
idx = output_tokens["input_ids"]
logits = F.log_softmax(outputs["logits"], dim=-1)
mask = output_tokens["attention_mask"]
# shift things
output_logits = logits[:,:-1,:]
label_tokens = idx[:, 1:].unsqueeze(-1)
output_mask = mask[:,1:]
# gather the logits and mask
select_logits = torch.gather(output_logits, -1, label_tokens).squeeze()
-select_logits[output_mask==1].mean(), outputs["loss"]
The above, however, ignores the fact that the tokens come from two sequences of different lengths. So an alternative way of calculating the loss, averaging per sequence first, could be:
seq_loss = (select_logits * output_mask).sum(dim=-1, keepdims=True) / output_mask.sum(dim=-1, keepdims=True)
seq_loss = -seq_loss.mean()
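For completeness, the per-token loss can also be cross-checked against F.cross_entropy, which skips -100 targets by default. This is a sketch assuming the same shift convention as above; on a transformers version that shifts internally it should also reproduce outputs["loss"]:

vocab_size = outputs["logits"].shape[-1]
shifted_logits = outputs["logits"][:, :-1, :].reshape(-1, vocab_size)  # raw logits, not log-softmaxed
shifted_labels = labels[:, 1:].reshape(-1)                             # padding positions are already -100
manual_loss = F.cross_entropy(shifted_logits, shifted_labels)          # ignore_index=-100 by default
print(manual_loss, outputs["loss"])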

Thanks for sharing. However, the current version of transformers as of today actually does not "shift" anymore, so the following is not needed:
# shift things
output_logits = logits[:,:-1,:]
label_tokens = idx[:, 1:].unsqueeze(-1)
output_mask = mask[:,1:]
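If that is the case for your installed version (worth verifying against the model's source), the replication presumably simplifies to something like this sketch, with no shifting at all:

select_logits = torch.gather(logits, -1, idx.unsqueeze(-1)).squeeze(-1)
no_shift_loss = -select_logits[mask == 1].mean()
print(no_shift_loss, outputs["loss"])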

Related

Why does my ML model always show the same result?

I've already trained several models for a binary classification problem, basing my selection on F-score and AUC. The code used has been the following:
svm = StandardScaler()
svm.fit(feat_train)
feat_train_std = svm.transform(feat_train)
feat_test_std = svm.transform(feat_test)
model_10 = BalancedBaggingClassifier(base_estimator=SVC(C=1.0, random_state=1, kernel='linear'),
                                     sampling_strategy='auto',
                                     replacement=False,
                                     random_state=0)
model_10.fit(feat_train_std, target_train)
pred_target_10 = model_10.predict(feat_test)
mostrar_resultados(target_test, pred_target_10)
pred_target_10 = model_10.predict_proba(feat_test)[:, 1]
average_precision_10 = average_precision_score(target_test, pred_target_10)
precision_10, recall_10, thresholds = precision_recall_curve(target_test, pred_target_10)
auc_precision_recall_10 = auc(recall_10, precision_10)
disp_10 = plot_precision_recall_curve(model_10, feat_test, target_test)
disp_10.ax_.set_title('Binary class Precision-Recall curve: '
                      'AUC={0:0.2f}'.format(auc_precision_recall_10))
Afterwards, I load the model as follows:
modelo_pickle = 'modelo_pickle.pkl'
joblib.dump(model_10,modelo_pickle)
loaded_model = joblib.load(modelo_pickle)
Then, the aim is to load a new dataset, whose columns are the same as the model's variables, and make a prediction for each line:
lista_x = x.to_numpy().tolist()
resultados = []
for i in lista_x:
    pred = loaded_model.predict([i])
    resultados.append(pred)
print(resultados)
However, every single result is equal to 1, which does not make any sense. Would anyone tell me what I am missing, please?
Thank you in advance.
Regards,

Meta-model of the field function OpenTurns 1.16rc1

After updating OpenTURNS from 1.15 to 1.16rc1 I have the following issue with building the meta-model of the field function. To reduce the computational burden I use:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-KolmogorovSamplingSize", 1)
algo = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos)
algo.run()
metaModel = ot.PointToFieldConnection(postProcessing, algo.getResult().getMetaModel())
The "FittingTest-KolmogorovSamplingSize" was removed from OpenTurns 1.16rc1 and when I try to replace the fitting test with:
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMaximumSamplingSize", 10)
Or with
ot.ResourceMap.SetAsUnsignedInteger("FittingTest-LillieforsMinimumSamplingSize", 1)
The code is freezing. Is there any solution for this?
The proposed solution is simply to use another distribution to model your data; you could have used any other multivariate continuous distribution of the proper dimension. IMO it is not a valid answer, as that distribution has no link to your data.
After inspection, it appears that the problem has nothing to do with Lilliefors's test. In OT 1.15 we were using this test (under the wrong name of Kolmogorov) to automatically select a distribution suited to the input sample, but we switched to a more sophisticated selection algorithm (see MetaModelAlgorithm::BuildDistribution). It is based on a first pass using the raw Kolmogorov test (thus ignoring the fact that parameters have been estimated), then an information-based criterion is used to select the most relevant model (AIC, AICC or BIC, depending on the value of the "MetaModelAlgorithm-ModelSelectionCriterion" key in ResourceMap). The problem is caused by the TrapezoidalFactory class during the Kolmogorov phase. I will provide a fix ASAP in OpenTURNS master. In the meantime, I have adapted the proposed solution to something more suited to your data:
degree = 6
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.HyperbolicAnisotropicEnumerateFunction(dimension_xi_X, 0.8)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(ot.HistogramFactory().build(sample_X[:, i])) for i in range(dimension_xi_X)],
    enumerateFunction)
basisSize = enumerateFunction.getStrataCumulatedCardinal(degree)
#basis = ot.OrthogonalProductPolynomialFactory(
#    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
#basisSize = 450  #enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos, basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel = ot.PointToFieldConnection(postProcessing, algo_chaos.getResult().getMetaModel())
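As an aside, the model-selection criterion mentioned above can be changed through the same ResourceMap mechanism. A minimal sketch, assuming "BIC" is one of the accepted string values for this key:

import openturns as ot
# Switch the information criterion used by MetaModelAlgorithm::BuildDistribution
ot.ResourceMap.SetAsString("MetaModelAlgorithm-ModelSelectionCriterion", "BIC")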
I also implemented a quick and dirty estimator of the L2-error:
# Meta-model validation
iMax = 5
# Input values
sample_X_validation = ot.Sample(np.array(month_1_parameters_MSE.iloc[:iMax, 0:3]))
print("sample size=", sample_X_validation.getSize())
# sample_X = ot.Sample(month_1_parameters_MSE[['Rseries','Rsh','Isc']])
# Output values
#month_1_simulated.iloc[0:1].transpose()
Field = ot.Field(mesh, np.array(month_1_simulated.iloc[0:1]).transpose())
sample_Y_validation = ot.ProcessSample(1, Field)
for k in range(1, iMax):
    sample_Y_validation.add(np.array(month_1_simulated.iloc[k:k+1]).transpose())
graph = sample_Y_validation.drawMarginal(0)
graph.setColors(['red'])
drawables = graph.getDrawables()
graph2 = metaModel(sample_X_validation).drawMarginal(0)
graph2.setColors(['blue'])
drawables = graph2.getDrawables()
graph.add(graph2)
graph.setTitle('Model/Metamodel Validation')
graph.setXTitle(r'$t$')
graph.setYTitle(r'$z$')
drawables = graph.getDrawables()
L2_error = 0.0
for i in range(iMax):
    L2_error = (drawables[i].getData()[:, 1] - drawables[iMax + i].getData()[:, 1]).computeRawMoment(2)[0]
    print("L2_error=", L2_error)
You get an error of 79.488 with the previous answer and 1.3994 with the new proposal. Here is a graphical comparison.
[Figure: Comparison between test data & previous answer]
[Figure: Comparison between test data & new proposal]
The solution is to use:
degree = 1
dimension_xi_X = 3
dimension_xi_Y = 450
enumerateFunction = ot.LinearEnumerateFunction(dimension_xi_X)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.HermiteFactory()] * dimension_xi_X, enumerateFunction)
basisSize = 450  #enumerateFunction.getStrataCumulatedCardinal(degree)
adaptive = ot.FixedStrategy(basis, basisSize)
projection = ot.LeastSquaresStrategy(
    ot.LeastSquaresMetaModelSelectionFactory(ot.LARS(), ot.CorrectedLeaveOneOut()))
ot.ResourceMap.SetAsScalar("LeastSquaresMetaModelSelection-ErrorThreshold", 1.0e-7)
algo_chaos = ot.FunctionalChaosAlgorithm(sample_X, outputSampleChaos, basis.getMeasure(), adaptive, projection)
algo_chaos.run()
result_chaos = algo_chaos.getResult()
meta_model = result_chaos.getMetaModel()
metaModel1 = ot.PointToFieldConnection(postProcessing, algo_chaos.getResult().getMetaModel())

How to get dataset into array

I have worked through all the tutorials and searched for "load csv tensorflow" but just can't get the logic of it all. I'm not a total beginner, but I don't have much time to complete this, and I've been suddenly thrown into TensorFlow, which is unexpectedly difficult.
Let me lay it out:
A very simple CSV file of 184 columns that are all float numbers. A row is simply today's price, three buy signals, and the previous 180 days' prices:
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
This article: https://www.tensorflow.org/guide/datasets
It covers loading pretty well. It even has a section on converting to NumPy arrays, which is what I need to train and test the net. However, as the author says in the article leading to this web page, it is pretty complex. It seems like everything is geared toward data manipulation, whereas we have already normalized our data (nothing has really changed in AI since 1983 in terms of inputs, outputs, and layers).
Here is a way to load it, but not into NumPy, and with no example of simply using the data as-is:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1
I need to know how to get the CSV file into the
close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name = 'previous')
so that I can follow the tutorials to train and test the net.
Your question is not entirely clear to me. If I understand correctly (tell me if I'm wrong), you are asking how to feed data into your model? There are several ways to do so.
1. Use placeholders with feed_dict during the session. This is the basic and easiest way, but it often suffers from training performance issues. For further explanation, check this post.
2. Use queues. Hard to implement and badly documented; I don't suggest it, because it has been superseded by the third method.
3. The tf.data API.
...
So, to answer your question with the first method:
# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)  # csv.reader yields strings, so cast to float
close_col = dataset[:, 0]
signal_cols = dataset[:, 1:4]    # the three buy signals
previous_cols = dataset[:, 4:]   # the 180 previous prices
# let's say you load 100 rows each time for training
batch_size = 100
# define placeholders like you did
...
with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]})
By the third method:
# retrieve your columns like before
...
# let's say you load 100 rows each time for training
batch_size = 100
# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size)  # shuffle the data --> assemble batches ready to feed the model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # get-next-batch operation, automatically evaluated at each iteration within the session
# replace your close, signals, previous placeholders by c_it, s_it, p_it when you define your model
...
with tf.Session() as sess:
    # you need to initialize the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)
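The wrapper(filename) helper above is not defined in the answer; a minimal sketch of what it could look like, assuming the 184-column layout from the question (one close price, three signals, 180 previous prices per row):

import numpy as np

def wrapper(filename):
    # Hypothetical helper: read the whole CSV as floats and split it
    # into the three column groups used by the pipeline above.
    data = np.loadtxt(filename, delimiter=',', dtype=np.float32)
    close = data[:, 0]
    signals = data[:, 1:4]
    previous = data[:, 4:]
    return close, signals, previous

c_col, s_col, p_col = wrapper('/BTC1.csv')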
Good luck!

Tensorflow: tf.reduce_logsumexp returning negative values

I am new to the TensorFlow framework. I am using tf.reduce_logsumexp in my code, but inspecting the output I see that some of the values are negative. How is that possible? I suspected that it might be due to some NaN or inf values, so I put in a check to remove those values from my input like this (X is my input):
res = tf.where(tf.is_inf(X), tf.zeros_like(X), X)
res = tf.where(tf.is_nan(res), tf.zeros_like(res), res)
output = tf.reduce_logsumexp(res, axis=0)
But even this does not help and I still get some negative values. Any help appreciated! Thanks
Note that the logarithm is negative if its argument is smaller than 1. Therefore, you will always get a negative logsumexp output if the sum of the exponentials is smaller than 1. This occurs, for example, if all of the exponents are well below zero: if you have res = [-2.5, -1.4, -3.3, -1.65, -2.15], then the corresponding exponentials are exp(res) = [0.082, 0.247, 0.037, 0.192, 0.116], and their sum sum(exp(res)) = 0.674 is smaller than 1. If you then take the logarithm, you get log(sum(exp(res))) = log(0.674) = -0.394.
It simply emerges from the definition of the function that it does not necessarily have a positive output. You can test it with the following program:
import numpy as np

def logsumexp(arr):
    summ = 0.0
    for i in range(arr.shape[0]):
        print(np.exp(arr[i]))
        summ += np.exp(arr[i])
    print('Sum: {}'.format(summ))
    return np.log(np.sum(np.exp(arr)))

arr = np.asarray([-2.5, -1.4, -3.3, -1.65, -2.15])
print('LogSumExp: {}'.format(logsumexp(arr)))
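If you want to check that TensorFlow itself agrees with the NumPy computation, here is a quick sketch (written in the TF 1.x session style used elsewhere in this thread):

import tensorflow as tf

x = tf.constant([-2.5, -1.4, -3.3, -1.65, -2.15])
lse = tf.reduce_logsumexp(x)
with tf.Session() as sess:
    print(sess.run(lse))  # roughly -0.394, matching log(sum(exp(res))) above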

Keras sequence-to-sequence encoder-decoder part-of-speech tagging example with attention mechanism

I have a sequence of indexed words w_1, ..., w_n. Since I'm new to deep learning, I am looking for a simple implementation of a seq2seq POS tagging model in Keras which uses an attention mechanism and produces a sequence of POS tags t_1, ..., t_n from my word sequence.
To be specific, I don't know how to gather the outputs of the encoder's LSTM hidden layers (since they are TimeDistributed) and how to feed the decoder LSTM layer at each timestep with the outputs from time t-1 in order to generate the output at time t.
The model I'm thinking about looks like the one in this paper: http://arxiv.org/abs/1409.0473.
I think this issue will help you, though there is no attention mechanism.
https://github.com/fchollet/keras/issues/2654
The code included in the issue is as follows.
B = self.igor.batch_size
R = self.igor.rnn_size
S = self.igor.max_sequence_len
V = self.igor.vocab_size
E = self.igor.embedding_size
emb_W = self.igor.embeddings.astype(theano.config.floatX)
## dropout parameters
p_emb = self.igor.p_emb_dropout
p_W = self.igor.p_W_dropout
p_U = self.igor.p_U_dropout
p_dense = self.igor.p_dense_dropout
w_decay = self.igor.weight_decay
M = Sequential()
M.add(Embedding(V, E, batch_input_shape=(B, S),
                W_regularizer=l2(w_decay),
                weights=[emb_W], mask_zero=True, dropout=p_emb))
#for i in range(self.igor.num_lstms):
M.add(LSTM(R, return_sequences=True, dropout_W=p_W, dropout_U=p_U,
           U_regularizer=l2(w_decay), W_regularizer=l2(w_decay)))
M.add(Dropout(p_dense))
M.add(LSTM(R * int(1 / p_dense), return_sequences=True, dropout_W=p_W, dropout_U=p_U))
M.add(Dropout(p_dense))
M.add(TimeDistributed(Dense(V, activation='softmax',
                            W_regularizer=l2(w_decay), b_regularizer=l2(w_decay))))
print("compiling")
optimizer = Adam(self.igor.LR, clipnorm=self.igor.max_grad_norm,
                 clipvalue=5.0)
#optimizer = SGD(lr=0.01, momentum=0.5, decay=0.0, nesterov=True)
M.compile(loss='categorical_crossentropy', optimizer=optimizer,
          metrics=['accuracy', 'perplexity'])
print("compiled")
self.model = M
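As a follow-up note: the snippet above targets a fairly old Keras API (dropout_W, W_regularizer). A minimal self-contained tagger without attention, closer to current Keras, could look like the sketch below; the vocabulary size, tag-set size and sequence length are made-up placeholders rather than values from the question.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, TimeDistributed, Dense

vocab_size, tag_size, max_len = 10000, 45, 50   # hypothetical sizes

model = Sequential()
model.add(Embedding(vocab_size, 128, input_length=max_len, mask_zero=True))
model.add(Bidirectional(LSTM(256, return_sequences=True)))   # one output per timestep
model.add(TimeDistributed(Dense(tag_size, activation='softmax')))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()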