I am using the pretrained PyTorchVideo model slowfast_r50_detection as shown here. I want to retrain this model with a different private dataset that I have and use it in a similar way as shown in the example. I am new to PyTorch and am not sure how to start retraining such a model. Any pointers would be very helpful.
You can simply load your model first, and then use the load_state_dict() function to load the pretrained weights:
import torch
from pytorchvideo.models.hub import slow_r50_detection

path_to_saved_model = "Directory/directory/your_saved_model.tar"
video_model = slow_r50_detection(pretrained=True)
video_model.load_state_dict(torch.load(path_to_saved_model)['model_state_dict'])
device = "cuda:0" if torch.cuda.is_available() else "cpu"
video_model = video_model.to(device)
The model now holds the pretrained weights from the saved checkpoint, and anything you run after the load_state_dict() line uses those previously trained weights.
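If you actually want to retrain (fine-tune) on your private dataset rather than just reload weights, the usual pattern is to wrap your data in a DataLoader and run a standard training loop over the loaded model, then save a new checkpoint in the same format used above. A minimal sketch building on the snippet above; the train_loader, the loss choice, and the batch format are assumptions you will need to adapt to your data:
# Assumption: train_loader yields (clips, boxes, labels) batches for the detection model,
# where clips are [B, C, T, H, W] tensors and boxes are the RoIs the detection head expects.
optimizer = torch.optim.SGD(video_model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.BCEWithLogitsLoss()  # AVA-style multi-label action targets
num_epochs = 10

video_model.train()
for epoch in range(num_epochs):
    for clips, boxes, labels in train_loader:
        clips, boxes, labels = clips.to(device), boxes.to(device), labels.to(device)
        preds = video_model(clips, boxes)  # detection variants take clips plus RoI boxes
        loss = criterion(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Save a checkpoint in the same format that load_state_dict() above expects
torch.save({'model_state_dict': video_model.state_dict()}, path_to_saved_model)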
Related: https://github.com/playerkk/face-py-faster-rcnn
The link above indicates that a pretrained model is available.
After you download the pretrained weights (a .caffemodel file), you can instantiate a caffe.Net object with the network definition (a .prototxt file - from the repository you referred to, test.prototxt), e.g.
net = caffe.Net(prototxt, caffemodel, caffe.TEST)
(I assume you want to use the pretrained model for inference; if you want to do transfer learning on your data, you should use caffe.TRAIN.)
Then you should load the image, feed it into the input blobs, run net.forward on the image and extract the results from the output blobs - e.g. net.blobs['cls_score'].data, net.blobs['cls_prob'].data and net.blobs['bbox_pred'].data.
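A rough sketch of that flow, assuming the prototxt's input blobs are named 'data' and 'im_info' and using the standard py-faster-rcnn pixel means (the file names are placeholders; in practice the repo's demo / im_detect() helper also handles resizing and bbox decoding for you):
import caffe
import numpy as np

net = caffe.Net('test.prototxt', 'pretrained_face_model.caffemodel', caffe.TEST)

# Load the image and convert it to the network's expected format (BGR, mean-subtracted, NCHW)
image = caffe.io.load_image('face.jpg')  # HWC, RGB, float in [0, 1]
blob = image[:, :, ::-1] * 255.0 - np.array([102.9801, 115.9465, 122.7717])
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]

# Feed the input blobs and run a forward pass
net.blobs['data'].reshape(*blob.shape)
net.blobs['data'].data[...] = blob
net.blobs['im_info'].data[...] = [blob.shape[2], blob.shape[3], 1.0]  # height, width, scale
net.forward()

# Extract the results from the output blobs
scores = net.blobs['cls_prob'].data
boxes = net.blobs['bbox_pred'].data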
You can use the original py-faster-rcnn's demo with minor adjustments.
Good luck!
I want to deploy my GAN model on a web-based UI. For this I need to convert my model's checkpoints into JS files that can be called by web code. There are functions for SavedModel and Keras to convert to .pb files, but none for JS.
My main concern is how to dump a session or variable weights into JS files.
You can save a Keras model from Python. There is a full tutorial here, but basically it amounts to calling this after training:
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(model, tfjs_target_dir)
Then host the result somewhere publicly accessible (or on the same server as your web UI), and you can load the model into TensorFlow.js as follows:
import * as tf from '@tensorflow/tfjs';
const model = await tf.loadLayersModel('https://foo.bar/tfjs_artifacts/model.json');
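If what you have is a GAN checkpoint rather than a ready Keras model object, one option is to rebuild just the generator as a tf.keras model, load its saved weights, and convert only that. A rough sketch, where build_generator() and the checkpoint path are placeholders for your own code:
import tensorflowjs as tfjs

# Only the generator is needed for inference in the browser
generator = build_generator()                        # placeholder: your model-building code
generator.load_weights('checkpoints/generator.h5')   # placeholder: your saved weights
tfjs.converters.save_keras_model(generator, './tfjs_artifacts')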
I am trying to deploy a tf.keras image classification model to Google Cloud ML Engine. Do I have to include code to create a serving graph, separately from training, to get it to serve my models in a web app? I already have my model in SavedModel format (saved_model.pb and variables files), so I'm not sure if I need this extra step to get it to work.
E.g. this is code directly from the GCP TensorFlow deploying-models documentation:
def json_serving_input_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
You are probably training your model with actual image files, while it is best to send images as encoded byte strings to a model hosted on Cloud ML. Therefore you'll need to specify a ServingInputReceiver function when exporting the model, as you mention. Some boilerplate code to do this for a Keras model:
# Convert the Keras model to a TF estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        return image  # apply additional processing if necessary

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
Edit: If this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround I documented in this answer.
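Once deployed, the serving function above expects an encoded image string under the image_bytes alias, so an online prediction request would look roughly like this (the project and model names are placeholders):
import base64
from googleapiclient import discovery

with open('test.png', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')

# CloudML treats input aliases ending in "_bytes" as base64-encoded binary data
body = {'instances': [{'image_bytes': {'b64': encoded}}]}

service = discovery.build('ml', 'v1')
name = 'projects/YOUR_PROJECT/models/YOUR_MODEL'
response = service.projects().predict(name=name, body=body).execute()
print(response['predictions'])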
I would like to use a single viewer to load/unload models instead of tearing down the viewer and creating a new viewer instance.
Reasoning: I've loaded multiple models, and one of them is so large and problematic that it slows down the rendering. I'm wondering whether it's possible to just unload the problematic model instead of reloading all the models EXCEPT the problematic one.
If you just want to unload a specific model, the following code snippet might help.
const models = viewer.impl.modelQueue().getModels();
const model = models[2]; //!<< The model you want to unload
viewer.impl.unloadModel( model );
My network contains some specific layers that are not supported by the current TensorRT, so I want to run the conv and pooling layers on TensorRT and then use the output from TensorRT as the input to my Caffe model, which contains those specific layers. Is there an API or example code that I can refer to? Thanks
See the source code in the samples directory of your TensorRT installation.
For those stumbling on this issue now: I got this to work by using the mutable_gpu_data from the Caffe blobs as the input and output buffers for TensorRT inference:
// Copy the host-side input into the Caffe input blob's GPU buffer
auto* gpuImagePtr = inputBlob->mutable_gpu_data();
cudaMemcpy(gpuImagePtr, inputData, mNetInputMemory, cudaMemcpyHostToDevice);

// Bind the Caffe blobs' GPU pointers as the TensorRT input/output bindings
std::vector<void*> buffers(2);
buffers[0] = gpuImagePtr;
buffers[1] = outputBlob->mutable_gpu_data();

// enqueue() is asynchronous, so synchronize the stream before consuming the output
cudaContext->enqueue(batchSize, &buffers[0], stream, nullptr);