I am trying to implement a pairwise Learning to Rank (L2R) model with Keras, where the features are computed by a deep neural network.
In a pairwise L2R model, during training I feed in the query, one positive result, and one negative result, and the model is trained on a classification loss computed from the difference of the feature vectors.
I can compile and fit the model successfully, but the problem is actually using this model on test data.
In a pairwise L2R model, at test time I only have query–sample pairs (no separate positives and negatives), and I can use the value computed before the softmax to rank the samples.
Is there any way in Keras to pass data manually through particular trained layers at test time? (In short, I have 3 inputs at training time and 2 at test time.)
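One common pattern for this in Keras is to share the feature layers between a three-input training model and a two-input scoring model: because the layers are the same Python objects, fitting the pairwise model also trains the scorer, and no weight copying is needed at test time. A minimal sketch with the Functional API; all layer sizes, input dimensions, and the dot-product score are hypothetical placeholders, not the question's actual architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

feat_dim = 32  # assumed feature dimension

# Shared feature tower, reused for the query and for every sample.
def build_tower(input_dim, name):
    inp = keras.Input(shape=(input_dim,))
    x = layers.Dense(64, activation="relu")(inp)
    out = layers.Dense(feat_dim)(x)
    return keras.Model(inp, out, name=name)

query_tower = build_tower(16, "query_tower")
doc_tower = build_tower(24, "doc_tower")

# Scoring model: (query, sample) -> relevance score. Used at test time.
q_in = keras.Input(shape=(16,))
d_in = keras.Input(shape=(24,))
score = layers.Dot(axes=1)([query_tower(q_in), doc_tower(d_in)])
score_model = keras.Model([q_in, d_in], score, name="scorer")

# Training model: (query, positive, negative) -> sigmoid of score difference.
q = keras.Input(shape=(16,))
pos = keras.Input(shape=(24,))
neg = keras.Input(shape=(24,))
diff = layers.Subtract()([score_model([q, pos]), score_model([q, neg])])
prob = layers.Activation("sigmoid")(diff)
train_model = keras.Model([q, pos, neg], prob)
train_model.compile(optimizer="adam", loss="binary_crossentropy")
```

After fitting `train_model` on (query, positive, negative) triples with all-ones labels, `score_model.predict([queries, samples])` returns the raw pre-sigmoid scores, which can be used directly for ranking.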
Related
In Kaggle competitions, we have a train and a test dataset, so we usually develop a model on the training dataset and evaluate it on a test dataset that is unseen by the algorithm. I was wondering what the best method is for validating a regression problem when we are given just one dataset, without any test dataset. I think there might be two approaches:
In the first approach, right after importing the dataset, we split it into train and test sets; with this approach the test set will not be seen by the algorithm until the last step. After performing preprocessing and feature engineering, we can use cross-validation techniques on the training dataset (or a further train-test split) to improve the error of our model. Finally, the quality of the model is checked on the unseen data.
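The first approach can be sketched in a few lines of scikit-learn; the dataset here is synthetic and the model and metric are illustrative choices, not prescriptions:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic regression data standing in for "the one dataset we were given".
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# Hold out the test set first: it stays untouched until the very end.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0)

# Model development via cross-validation on the training portion only.
cv_scores = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
print("CV R^2:", cv_scores.mean())

# One final check on the unseen test data.
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```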
Also, I have seen that for regression problems, some data scientists use the whole dataset for testing and validation; I mean, they use all the data at the same time.
Could you please help me decide which strategy is better? Especially when a recruiter gives us just one dataset and asks us to develop a model to predict the target variable.
Thanks,
Med
You must divide the dataset into two parts: a training dataset and a validation dataset.
Then train your model on the training dataset and validate it on the validation dataset. The more data you have, the better your model can be fitted. The quality of the model can then be checked on the validation dataset that was split off earlier, for example via accuracy and other scoring metrics.
When checking the quality of the model, you can also create your own custom dataset with values similar to the original dataset.
On Kaggle, when the competition is about to close, the actual test dataset on which the models are ranked is released.
The reason is that when you have more data, the algorithm has more feature–label pairs to train and validate on, which improves the model.
Approach 2 described in the question is better:

"Also, I saw that for regression problems, some data scientists use the whole dataset for testing and validation, I mean they use all the data at the same time."

Approach 1 is not preferred because on a competitive platform your model has to perform as well as possible, and holding out training and validation data leaves less for the model to learn from, which can hurt accuracy.
Divide your one dataset into a training dataset and a testing dataset.
While training your model, further divide the training dataset into training, validation, and testing subsets, run the model, check the accuracy, and save the model.
Then load the saved model and predict on the testing dataset.
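The save-then-predict step above can be sketched with scikit-learn and joblib; the model choice and file name are placeholders:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the one dataset.
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
joblib.dump(clf, "model.joblib")          # save the trained model

loaded = joblib.load("model.joblib")      # import the saved model
preds = loaded.predict(X_test)            # predict on the testing dataset
print("Test accuracy:", loaded.score(X_test, y_test))
```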
In supervised learning, the original data is divided into three parts: a training dataset, a validation dataset, and a test dataset.
The training dataset is used to train a model.
The test dataset is used to evaluate the model at the end, so it is not used in the training process.
The validation dataset is used for tuning the parameters of the model during training, I think.
What I want to know is whether the validation dataset is used for training or not. Is it used for calculating the weights and biases?
Yes, as you said, the validation data is used for hyperparameter tuning. Another use of the validation data is to check whether you are overfitting on the training data.
In supervised learning, the validation dataset is used during the training phase, but not in the same way the training dataset is used.
Since the goal is a model that can predict/classify new instances with high precision and/or accuracy, it's very important to minimize the generalization error, not just the training error.
Thus, the training dataset is used to calculate the weights and biases of the neural network via gradient updates. The validation dataset is only used to measure the error: its instances are run through the model, the predicted labels are compared with the actual labels, and the resulting metric guides decisions such as hyperparameter choices and when to stop training. It does not directly update the weights or biases.
Hope this helps clarify the topic. You can also refer to some textbooks.
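As a concrete illustration of that distinction (a toy sketch with arbitrary shapes and random data), Keras takes the validation set through the `validation_data` argument of `fit`: gradients are computed only on the training batches, while the validation loss is merely monitored, for example by an `EarlyStopping` callback:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10)).astype("float32")
y_train = rng.integers(0, 2, size=200).astype("float32")
X_val = rng.normal(size=(50, 10)).astype("float32")
y_val = rng.integers(0, 2, size=50).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(10,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# val_loss only decides when to stop; it never produces a gradient update.
early = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3,
                                      restore_best_weights=True)
history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                    epochs=20, callbacks=[early], verbose=0)
```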
I’m currently creating an LSTM to predict rare events. I’ve seen this paper, which suggests first training an autoencoder LSTM to extract features, and then feeding the embeddings to a second LSTM that makes the actual prediction. According to the authors, the autoencoder extracts features (this is usually true) which are then useful for the prediction layers.
In my case, I need to predict whether there will be an extreme event (this is the most important thing) and then how strong it is going to be. Following their advice, I’ve created the model, but instead of adding one LSTM from the embeddings to the predictions I add two: one for the binary prediction (it is, or it is not), ending with a sigmoid layer, and a second one for predicting how strong it will be. So I have three losses: the reconstruction loss (MSE), the prediction loss (MSE), and the binary loss (binary cross-entropy).
The thing is that I’m not sure it is learning anything… the binary loss stays at 0.5, and even the reconstruction loss is not very good. And of course, the bad part is that the time series is mostly 0s, with some numbers from 1 to 10, so MSE is definitely not a good metric.
What do you think about this approach?
Is this the best architecture for predicting rare events? If not, which one would be better?
Should I add a CNN or FC layer on top of the embeddings before the two LSTMs, to extract 1D patterns from the embedding, or to make the prediction directly?
Should there be just one predicting LSTM, using only the MSE loss?
Would it be a good idea to multiply the two predictions, to force the predicted days without the event to coincide in both heads?
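For reference, the architecture described in the question can be sketched in Keras roughly as follows. This is my reading of the description, not the asker's actual code: a shared LSTM encoder whose embedding feeds a reconstruction decoder plus two prediction heads, with all sizes and loss weights as placeholders:

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_feat = 30, 4
inp = keras.Input(shape=(timesteps, n_feat))

# Encoder -> embedding
emb = layers.LSTM(32)(inp)

# Head 1: reconstruction (autoencoder branch)
dec = layers.RepeatVector(timesteps)(emb)
dec = layers.LSTM(32, return_sequences=True)(dec)
recon = layers.TimeDistributed(layers.Dense(n_feat), name="recon")(dec)

# Head 2: binary "event or not"
event = layers.Dense(1, activation="sigmoid", name="event")(emb)

# Head 3: event magnitude
magnitude = layers.Dense(1, name="magnitude")(emb)

model = keras.Model(inp, [recon, event, magnitude])
model.compile(
    optimizer="adam",
    loss={"recon": "mse", "event": "binary_crossentropy",
          "magnitude": "mse"},
    # With heavy class imbalance, tuning these weights (and/or using
    # class_weight or a focal loss for the binary head) usually matters.
    loss_weights={"recon": 1.0, "event": 1.0, "magnitude": 1.0},
)
```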
Thanks,
I have a question. I have used transfer learning to retrain GoogLeNet on my image classification problem. I have 80,000 images belonging to 14 categories, and I set the number of training steps to 200,000. I think the code provided by TensorFlow has dropout implemented, and it trains with random shuffling of the dataset and a cross-validation approach. I do not see any overfitting in the training and classification curves, and I get high cross-validation accuracy and high test accuracy, but when I apply my model to a new dataset I get poor classification results. Does anybody know what is going on? Thanks!
While going through different model evaluation methods, I tried some text classification using an SVM model with lots of features; after training, the model is able to classify the text.
But my question is: how do I calculate the confidence score of the SVM model? I have looked at many examples using predict_proba and the decision function.
Often, predict_proba doesn't align with the model's prediction and gives misleading probabilities, and if I use the decision function, how would I interpret the distance as a confidence score? How should the threshold be defined?
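One way to explore the difference concretely (a sketch on synthetic data; the confidence proxy shown is just one possible choice): for a scikit-learn `SVC`, `predict` follows the sign of `decision_function`, while `predict_proba` comes from Platt scaling fitted by an internal cross-validation, which is exactly why the two can occasionally disagree:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)

margins = clf.decision_function(X_test)   # signed distance to the hyperplane
probs = clf.predict_proba(X_test)         # Platt-scaled pseudo-probabilities
preds = clf.predict(X_test)               # follows the sign of the margin

# One simple confidence proxy: the absolute margin. Rather than assuming a
# fixed cut-off, inspect its distribution on held-out data to pick a threshold.
confidence = np.abs(margins)
order = np.argsort(-confidence)           # most confident predictions first
print("most confident predictions:", preds[order[:5]])
```

If calibrated probabilities matter more than raw margins, wrapping the SVM in `sklearn.calibration.CalibratedClassifierCV` is the usual alternative to `probability=True`.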