How does CV determine which model to use? - catboost

I am confused by the CV class. I thought it accepts a regressor or classifier model, as in scikit-learn. However, I haven't found any such model among the inputs of CV. Could you please tell me how the CV class determines which model to fit?

CatBoost doesn't have a CV class, but there is a cv function in the CatBoost Python library.
The function accepts a params dictionary, which should contain all the parameters that describe the model you want to cross-validate; in particular, the loss_function entry determines whether a classifier or a regressor is fit (e.g. Logloss vs. RMSE).
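
For illustration, a minimal sketch of calling cv on toy data (the data here is random and purely illustrative):

import numpy as np
from catboost import Pool, cv

# Toy binary-classification data, purely for illustration
X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)
train_pool = Pool(data=X, label=y)

# No model object is passed: the loss_function entry tells cv what to fit.
# "Logloss" implies a classifier; "RMSE" would imply a regressor.
params = {
    "loss_function": "Logloss",
    "iterations": 100,
    "learning_rate": 0.1,
}

cv_results = cv(pool=train_pool, params=params, fold_count=5)
print(cv_results.head())  # per-iteration mean/std of the CV metric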

How to retrain a pretrained model from PyTorchVideo?

I am using the pretrained PyTorchVideo model slowfast_r50_detection as shown here. I want to retrain this model with a different private dataset that I have and use it in a similar way as shown in the example. I am new to PyTorch and am not sure how to start retraining such a model. Any pointers would be very helpful.
You can simply load your model first, and then use the load_state_dict() function to load the pre-trained weights:
import torch
from pytorchvideo.models.hub import slow_r50_detection

# Path to your saved checkpoint
path_to_saved_model = "Directory/directory/your_saved_model.tar"

# pretrained=True first downloads the published weights
video_model = slow_r50_detection(True)

# Then overwrite them with the weights from your own checkpoint
video_model.load_state_dict(torch.load(path_to_saved_model)['model_state_dict'])

device = "cuda:0" if torch.cuda.is_available() else "cpu"
video_model = video_model.to(device)
load_state_dict() replaces the model's weights with those from your checkpoint, so anything you run after that line uses your previously trained weights.
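
From there, retraining is an ordinary PyTorch fine-tuning loop over the loaded model. A rough sketch with placeholder names (train_loader and num_epochs are yours to define; note that the detection variants' forward pass also expects region-of-interest boxes alongside the clip tensor):

import torch.nn as nn
import torch.optim as optim

optimizer = optim.SGD(video_model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

video_model.train()
for epoch in range(num_epochs):                 # num_epochs: your choice
    for clips, bboxes, labels in train_loader:  # train_loader: your own DataLoader
        clips = clips.to(device)
        bboxes = bboxes.to(device)
        labels = labels.to(device)
        optimizer.zero_grad()
        outputs = video_model(clips, bboxes)    # detection models take clips plus boxes
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()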

Where exactly does the trainable_variables method belong in TensorFlow?

I'm a newbie in both deep learning and TensorFlow, and I'm trying to learn how to implement deep learning code based on the function API (not Keras) by following example code.
In the code I'm looking at, I found the line gradients = tape.gradient(loss, model.trainable_variables).
I intuitively get what trainable variables are, but to understand them clearly I tried to search the TensorFlow documentation (which module or class the method belongs to, what its key arguments are, etc.), and I wasn't able to find the information I wanted. ('trainable_variables' was not in their documentation index, and I'm wondering why.)
So can anyone please tell me the module/class that trainable_variables belongs to, which arguments it takes, and also how it is able to collect all the trainable variables from the model?
The reason you did not find this method is that trainable_variables is not a method but an attribute/property. The Model class has a trainable_variables attribute, which is not documented officially. It is inherited from the base class Layer, and, to put it shortly, the list of trainable variables gets populated as new layers are added, since every layer has an __init__ parameter trainable (this also comes from the base class Layer). You can check the source code if you want to: "the source of the property", "adding new weights to a layer appends to the list".
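
To see that it is a property rather than a method, a small self-contained sketch:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# A property, so no parentheses and no arguments:
for v in model.trainable_variables:
    print(v.name, v.shape)  # kernels and biases of both Dense layers

# Setting a layer's trainable flag to False removes its variables from the list
model.layers[0].trainable = False
print(len(model.trainable_variables))  # now only the second layer's kernel and bias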

Creating a serving graph separately from training in tensorflow for Google CloudML deployment?

I am trying to deploy a tf.keras image classification model to Google CloudML Engine. Do I have to include code to create a serving graph separately from training to get it to serve my model in a web app? I already have my model in SavedModel format (saved_model.pb and variables files), so I'm not sure if I need this extra step to get it to work.
e.g., this code is directly from GCP's TensorFlow "Deploying models" documentation:
def json_serving_input_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
You are probably training your model with actual image files, while it is best to send images to a model hosted on CloudML as encoded byte strings. Therefore you'll need to specify a ServingInputReceiver function when exporting the model, as you mention. Some boilerplate code to do this for a Keras model:
import tensorflow as tf

# Convert the trained Keras model (model) to a TF Estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(
    keras_model=model, model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        image = tf.cast(image, tf.float32)  # map_fn below expects float32 outputs
        return image  # apply additional preprocessing here if necessary

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
Edit: if this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround that I documented in this answer.
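
Once deployed, requests to the model carry the PNG bytes base64-encoded under the receiver key defined above. A small sketch of building such a payload (example.png is a placeholder file name); CloudML's JSON convention wraps binary data in a {"b64": ...} object for keys ending in _bytes:

import base64
import json

with open('example.png', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')

# Keys ending in "_bytes" signal base64-encoded binary data to CloudML
payload = {'instances': [{'image_bytes': {'b64': encoded}}]}
print(json.dumps(payload)[:80])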

How do I look inside the NLTK classifier train method

I've been looking at Laurent Luce's blog on sentiment analysis. Unfortunately I have not been able to follow his blog, so I cannot ask him a question directly. Here is the link to the blog: http://www.laurentluce.com/posts/twitter-sentiment-analysis-using-python-and-nltk/
Pretty much everything works nicely. However, I cannot figure out how to get this part to work; the text below is pasted from the link. My question is: how do I look inside the classifier train method? It would help my understanding if I could do that.
"Let’s take a look inside the classifier train method in the source code of the NLTK library. ‘label_probdist’ is the prior probability of each label and ‘feature_probdist’ is the feature/value probability dictionary. Those two probability objects are used to create the classifier."
def train(labeled_featuresets, estimator=ELEProbDist):
    ...
    # Create the P(label) distribution
    label_probdist = estimator(label_freqdist)
    ...
    # Create the P(fval|label, fname) distribution
    feature_probdist = {}
    ...
    return NaiveBayesClassifier(label_probdist, feature_probdist)
NLTK is open source; you can find the source code here. GitHub has solid search functionality, so you should be able to find whatever you need with it. Specifically, the Naive Bayes classifier is here, and you can see its train method in that file.
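
If you'd rather look inside your installed copy than browse GitHub, Python's standard inspect module can print the method's source directly; a quick sketch:

import inspect
from nltk.classify import NaiveBayesClassifier

# Print the source of the train method from your installed NLTK
print(inspect.getsource(NaiveBayesClassifier.train))

# Show which file on disk the class is defined in
print(inspect.getsourcefile(NaiveBayesClassifier))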

Finding draw_if_interactive() in pyplot.py

There are multiple draw_if_interactive() expressions in the pyplot module, but I can't find this function's definition anywhere in the module.
From intuition and reading, it's an easy guess that the function enables on-demand plotting, but where can I read its definition? Thanks.
The function actually lives in the backend code, and its implementation depends on your backend. For example, with the TkAgg backend it is in backend_tkagg.py:
def draw_if_interactive():
    if matplotlib.is_interactive():
        figManager = Gcf.get_active()
        if figManager is not None:
            figManager.show()
Similar functions exist for the other backends: they use matplotlib.is_interactive() to determine whether this is an interactive session and then use backend-specific drawing commands to draw the image.
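
You can also ask Python where the active definition lives. A short sketch (depending on your matplotlib version, this points either at the loaded backend module or at a thin wrapper inside pyplot that forwards to it):

import inspect
import matplotlib
import matplotlib.pyplot as plt

print(matplotlib.get_backend())                        # e.g. TkAgg
print(inspect.getsourcefile(plt.draw_if_interactive))  # file defining the active version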