ValueError: Unknown initializer: GlorotUniform - deep-learning

Recently I trained a model on the MNIST dataset in Google Colab and saved it with Model.save('model.h5').
I downloaded the file and tried to load it offline in Anaconda with model = keras.models.load_model('model.h5').
But it throws
ValueError: Unknown initializer: GlorotUniform

The issue is likely that you're mixing tf.keras and standalone keras, saving with one and loading with the other. There could also be a version mismatch between the local and remote Keras installations. Do check this discussion on Stack Overflow.
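For example, a minimal sketch of that fix, assuming the Colab model was saved through tf.keras: load it with tf.keras as well (the custom_objects fallback is a version-dependent workaround):

import tensorflow as tf

# Load with the same Keras flavor that saved the file
model = tf.keras.models.load_model('model.h5')

# If you must stay on standalone Keras, mapping the missing initializer
# explicitly sometimes helps (version-dependent):
# import keras
# model = keras.models.load_model(
#     'model.h5',
#     custom_objects={'GlorotUniform': keras.initializers.glorot_uniform})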

Related

Regarding Caffe: caffe.ParamSpec has no field named compression

I was trying to run a customized model on Caffe. Unfortunately, I was only provided with a trainval.prototxt and a trainval.caffemodel.
The exact error is as follows
Error parsing text-format caffe.NetParameter: 54:17: Message type "caffe.ParamSpec" has no field named "compression"
This is followed by
[upgrade_proto.cpp:79] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file
A similar question was asked here.
So should I assume that the version of Caffe on my system is different from the client's, and that the client's build uses a slightly different proto definition, one that defines a compression field on ParamSpec?
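A minimal sketch to check that hypothesis, assuming pycaffe is built from your local tree:

from caffe.proto import caffe_pb2

# List the fields that your local build's ParamSpec message defines
param_spec_fields = [f.name for f in caffe_pb2.ParamSpec.DESCRIPTOR.fields]
print('compression' in param_spec_fields)  # False means your local proto lacks the field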

mlmodel doesn’t work properly after conversion from Caffe using coremltools

I want to convert this NSFW model to a Core ML model. What I did:
Download Anaconda with Python 2.7
Install coremltools
Convert the Yahoo NSFW model from here - https://github.com/yahoo/open_nsfw/tree/master/nsfw_model - though I am not sure it is Caffe v1, which is the only version Apple's documentation says is supported. Anyway...
I used these commands for the conversion, and it converted without any warnings:
coreml_model = coremltools.converters.caffe.convert(
    ('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'),
    image_input_names='data')
coreml_model.save('nsfw2.mlmodel')
I imported this model to my project and again all looks fine.
I prepared 224x224 images and used the Vision framework (a VNImageRequestHandler with a cgImage, etc.).
But!
All images return the same result
[<VNCoreMLFeatureValueObservation: 0x281b1daa0> 2E00F417-95C0-4AA1-A621-A0945BB5E095 requestRevision=1 confidence=1.000000 "prob" - "MultiArray : Double 1 x 1 x 2 x 1 x 1 array" (1.000000)]
How can I debug this issue and find out what's wrong?
Maybe you're looking only at naughty images? ;-)
It's probably the image preprocessing. You didn't specify any preprocessing options, while Caffe models usually normalize using the ImageNet mean/std. Refer to my blog post for more info: https://machinethink.net/blog/help-core-ml-gives-wrong-output/
However, I don't see any normalization options in your deploy.prototxt, so perhaps it's not that.
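If preprocessing does turn out to be the culprit, you can bake the normalization into the conversion; a sketch (the bias values below are the usual ImageNet means, an assumption rather than values taken from this model):

import coremltools

coreml_model = coremltools.converters.caffe.convert(
    ('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'),
    image_input_names='data',
    is_bgr=True,          # Caffe models typically expect BGR channel order
    red_bias=-123.68,     # illustrative ImageNet mean subtraction
    green_bias=-116.78,
    blue_bias=-103.94)
coreml_model.save('nsfw2.mlmodel')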
How I would debug this: remove everything but the first layer from the Caffe model and convert to Core ML. Run this one-layer model in both Caffe and Core ML and compare the outputs. If they are different, something is up with how you're loading or preprocessing the input data.
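A rough sketch of that comparison (the truncated prototxt and blob names are hypothetical, and MLModel.predict only runs on macOS):

import numpy as np
import caffe
import coremltools

# first_layer_only.prototxt: deploy.prototxt with everything after the first layer removed
net = caffe.Net('first_layer_only.prototxt', 'resnet_50_1by2_nsfw.caffemodel', caffe.TEST)
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
net.blobs['data'].data[...] = x
caffe_out = net.forward()['conv_1']  # output blob of the first layer

# Convert without image_input_names so the Core ML input stays a plain
# multi-array and both frameworks see exactly the same numbers
mlmodel = coremltools.converters.caffe.convert(
    ('resnet_50_1by2_nsfw.caffemodel', 'first_layer_only.prototxt'))
coreml_out = mlmodel.predict({'data': x[0]})['conv_1']

print(np.abs(caffe_out - coreml_out).max())  # should be close to zero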

ClientError: An error occurred when calling the CreateModel operation

I want to deploy an sklearn model in SageMaker. I created a training script.
script_path = 'sklearn.py'
sklearn = SKLearn(entry_point=script_path,
                  train_instance_type='ml.m5.xlarge',
                  role=role,
                  output_path='s3://{}/{}/output'.format(bucket, prefix),
                  sagemaker_session=session)
sklearn.fit({'train-dir': train_input})
When I deploy it with
predictor = sklearn.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
it throws:
ClientError: An error occurred when calling the CreateModel operation: Could not find model data at s3://tree/sklearn/output/model.tar.gz
Can anyone say how to solve this issue?
When deploying models, SageMaker looks up S3 to find your trained model artifact. It seems that there is no trained model artifact at s3://tree/sklearn/output/model.tar.gz. Make sure your training script persists the model artifact to the appropriate local location in the Docker container, which is /opt/ml/model.
For example, in your training script this could look like:
import joblib
joblib.dump(model, '/opt/ml/model/mymodel.joblib')
After training, SageMaker will copy the content of /opt/ml/model to s3 at the output_path location.
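The tail of a training script might therefore look like this minimal sketch (the model type and file name are illustrative):

import os
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Dummy data stands in for whatever you load from the 'train-dir' channel
X_train = np.random.rand(100, 4)
y_train = np.random.randint(0, 2, size=100)

model = RandomForestClassifier().fit(X_train, y_train)

# /opt/ml/model is where SageMaker expects the artifact; it is also
# exposed as the SM_MODEL_DIR environment variable in the container
model_dir = os.environ.get('SM_MODEL_DIR', '/opt/ml/model')
joblib.dump(model, os.path.join(model_dir, 'mymodel.joblib'))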
If you deploy in the same session, model.deploy() will map automatically to the artifact path. If you want to deploy a model that you trained elsewhere, possibly during a different session or on different hardware, you need to instantiate a model explicitly before deploying:
from sagemaker import get_execution_role
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data='s3://...model.tar.gz',  # your artifact
    role=get_execution_role(),
    entry_point='script.py')  # script containing inference functions

model.deploy(
    instance_type='ml.m5.xlarge',
    initial_instance_count=1,
    endpoint_name='your_endpoint_name')
See more about sklearn in SageMaker here: https://sagemaker.readthedocs.io/en/stable/using_sklearn.html

Creating a serving graph separately from training in tensorflow for Google CloudML deployment?

I am trying to deploy a tf.keras image classification model to Google Cloud ML Engine. Do I have to include code to create a serving graph, separately from training, to get it to serve my model in a web app? I already have my model in SavedModel format (saved_model.pb & the variables files), so I'm not sure if I need this extra step to get it to work.
e.g., this is code directly from the GCP TensorFlow deploying models documentation:
def json_serving_input_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
You are probably training your model with actual image files, while it is best to send images as encoded byte strings to a model hosted on CloudML. Therefore, you'll need to specify a ServingInputReceiver function when exporting the model, as you mention. Some boilerplate code to do this for a Keras model:
# Convert the Keras model to a TF Estimator
tf_files_path = './tf'
estimator = tf.keras.estimator.model_to_estimator(keras_model=model,
                                                  model_dir=tf_files_path)

# Your serving input function will accept a string
# and decode it into an image
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_png(image_str_tensor, channels=3)
        image = tf.cast(image, tf.float32)  # decode_png yields uint8; map_fn below expects float32
        return image  # apply additional processing if necessary

    # Ensure the model is batchable
    # https://stackoverflow.com/questions/52303403/
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {model.input_names[0]: images_tensor},
        {'image_bytes': input_ph})

# Export the estimator - deploy it to CloudML afterwards
export_path = './export'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
You can refer to this very helpful answer for a more complete reference and other options for exporting your model.
Edit: If this approach throws a ValueError: Couldn't find trained model at ./tf. error, you can try the workaround that I documented in this answer.
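Once deployed, a prediction request carries each image as a base64-encoded byte string; a minimal sketch of building the JSON body (the file name is illustrative):

import base64
import json

with open('example.png', 'rb') as f:
    image_bytes = f.read()

# 'image_bytes' matches the receiver tensor name in the serving input
# function above; the {'b64': ...} wrapper marks binary data for CloudML
body = {'instances': [{'image_bytes': {'b64': base64.b64encode(image_bytes).decode()}}]}
print(json.dumps(body)[:100])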

Message type "caffe.LayerParameter" has no field named "input_param"

I downloaded the latest Caffe library today and built it.
I tested the library by running a classification program in Python.
While parsing the prototxt file, I get the error Message type "caffe.LayerParameter" has no field named "input_param".
The error happens at net = caffe.Net(model_def, model_weights, caffe.TEST):
model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'

net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)
According to the discussion here, the error is caused by not using the latest build. But I did use the latest build. How can I fix this issue?