Caffe to CoreML Model Conversion - caffe

I have downloaded a model from this link:
http://posefs1.perception.cs.cmu.edu/OpenPose/models/hand/pose_iter_102000.caffemodel
Then I used this Python code to convert the model into a .mlmodel:
import coremltools

coreml_model = coremltools.converters.caffe.convert('pose_iter_102000.caffemodel', 'pose_deploy.prototxt')
coremltools.utils.save_spec(coreml_model, 'my_model.mlmodel')
Running this code produces an error like the following:
================= Starting Conversion from Caffe to CoreML ======================
Layer 0: Type: 'CPMData', Name: 'data'. Output(s): 'data', 'label'.
Traceback (most recent call last):
File "ModelConversionFile.py", line 2, in
coreml_model = coremltools.converters.caffe.convert('pose_iter_102000.caffemodel','pose_deploy.prototxt')
File "/Users/tahirhameed/Desktop/NewPythonTest/MyEnv/lib/python2.7/site-packages/coremltools/converters/caffe/_caffe_converter.py", line 191, in convert
predicted_feature_name)
File "/Users/tahirhameed/Desktop/NewPythonTest/MyEnv/lib/python2.7/site-packages/coremltools/converters/caffe/_caffe_converter.py", line 255, in _export
predicted_feature_name)
RuntimeError: Cannot convert caffe layer of type 'CPMData'.

The model you are trying to convert contains a layer type (CPMData) that is not supported by Core ML. You would need to provide an implementation for that layer yourself.
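One API detail worth double-checking, though (an observation about the coremltools caffe converter, not a guaranteed fix): convert() expects the .caffemodel and the deploy .prototxt bundled together as a tuple in its first argument. Passed as a second positional argument, the prototxt is treated as image_input_names, and the converter falls back to the training-time network stored inside the .caffemodel, which is where data layers such as CPMData live. A minimal sketch:

import coremltools

# Sketch: pass weights and deploy prototxt together as a tuple so the
# converter reads the inference-time network definition instead of the
# training-time one embedded in the .caffemodel.
coreml_model = coremltools.converters.caffe.convert(
    ('pose_iter_102000.caffemodel', 'pose_deploy.prototxt')
)
coreml_model.save('my_model.mlmodel')

If the deploy prototxt itself still contains unsupported layers, editing it by hand (or implementing the layer as a custom layer) remains the fallback.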

Related

Reading JSON from a file and extracting the keys returns AttributeError: 'str' object has no attribute 'keys'

I am new to Python (and JSON), so apologies if this is obvious to you.
I pull some data from an API using the following code:
import requests
import json

headers = {'Content-Type': 'application/json', 'accept-encoding': 'identity'}
api_url = api_url_base + api_token + api_request  # variables removed for security
response = requests.get(api_url, headers=headers)
data = response.json()
keys = data.keys
if response.status_code == 200:
    print(data["message"], "saving to file...")
    print("Found the following keys:")
    print(keys)
    with open('vulns.json', 'w') as outfile:
        json.dump(response.content.decode('utf-8'), outfile)
    print("File Saved.")
else:
    print('The site returned a', response.status_code, 'error')
This works: I get some data returned and I am able to write the file.
I am trying to change what's returned from a short format to a long format, and to check it's working I need to see the keys. I was trying to do this offline using the written file (as practice for reading JSON from files).
I wrote these few lines (taken from this site https://www.kite.com/python/answers/how-to-print-the-keys-of-a-dictionary-in-python)
import json

with open('vulns.json') as json_file:
    data = json.load(json_file)
    print(data)
    keys = list(data.keys())
    print(keys)
Unfortunately, whenever I run this it returns the following error:
Python 3.9.1 (tags/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print(keys)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'keys' is not defined
>>> & C:/Users/xxxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
File "<stdin>", line 1
& C:/Users/xxxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
^
SyntaxError: invalid syntax
>>> exit()
PS C:\Users\xxxx\Documents\scripts\Python> & C:/Users/xxx/AppData/Local/Microsoft/WindowsApps/python.exe c:/Temp/read-vulnfile.py
Traceback (most recent call last):
File "c:\Temp\read-vulnfile.py", line 6, in <module>
keys=list(data.keys)
AttributeError: 'str' object has no attribute 'keys'
The print(data) call returns what looks like JSON; this is the opening line:
{"count": 1000, "message": "Vulnerabilities found: 1000", "data":
[{"...
I can't show the content; it's sensitive.
Why is this looking at a str object rather than a dictionary?
How do I read JSON back into a dictionary, please?
You just have that content stored in the file as a string. Open vulns.json in an editor and you will most likely see something like "{\"count\": 1000, ... instead of {"count": 1000, ....
json.load does open and parse the file, but a JSON string maps to a Python str, not a dict (see the conversion table in the json module documentation).
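A quick stand-in demonstration of what happened (not your actual data):

import json

# Dumping a str (instead of a dict) produces a JSON string: the file gets
# outer quotes plus escaped inner quotes, and loading it back yields a str.
with open('demo.json', 'w') as f:
    json.dump('{"count": 1000}', f)

with open('demo.json') as f:
    print(f.read())            # "{\"count\": 1000}"

with open('demo.json') as f:
    print(type(json.load(f)))  # <class 'str'>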
So take one step back and look at what happens when you save the file. You take content from your response, but dump the decoded string value into the file. Instead, try
json.dump(response.json(), outfile)
(or just use the data variable you already have).
This should let you successfully dump and load the data as a dict.
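A minimal round trip illustrating the fix, using a stand-in payload shaped like the one in the question:

import json

data = {"count": 1000, "message": "Vulnerabilities found: 1000", "data": []}

# Dump the dict itself rather than the decoded response body
with open('vulns.json', 'w') as outfile:
    json.dump(data, outfile)

with open('vulns.json') as json_file:
    loaded = json.load(json_file)

print(type(loaded))         # <class 'dict'>
print(list(loaded.keys()))  # ['count', 'message', 'data']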

Attempting to capture an EagerTensor without building a function in tf 2.0

I want to build an asynchronous advantage actor-critic (A3C) model for an agent with multiple actions in TensorFlow 2.0. Some of the actions are continuous, while others are discrete.
For these actions I use tfp.distributions.MultivariateNormalDiag from the tensorflow-probability package. I have spent two days struggling with this and still don't know how to build a network that returns values for multiple actions.
I built a function that makes a distribution for each action; I pass the logits tensor (the output tensor of the actor network) to the function below, which returns the distribution for each action.
def make_dist(space, logits):
    if space.is_continuous():
        mu, logstd = tf.split(logits, 2, axis=-1)
        return tfp.distributions.MultivariateNormalDiag(mu, tf.exp(logstd))
    else:
        return tfp.distributions.Categorical(logits)
As a first test, I used an environment with one continuous action. When I call the distribution's sample() method, I get the following error:
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\distribution.py", line 848, in sample
return self._call_sample_n(sample_shape, seed, name, **kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\transformed_distribution.py", line 373, in _call_sample_n
x = self._sample_n(n, seed, **distribution_kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\transformed_distribution.py", line 353, in _sample_n
**distribution_kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\distribution.py", line 848, in sample
return self._call_sample_n(sample_shape, seed, name, **kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\distribution.py", line 826, in _call_sample_n
samples = self._sample_n(n, seed, **kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_probability\python\distributions\normal.py", line 185, in _sample_n
axis=0)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 1431, in concat
return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 1257, in concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 481, in _apply_op_helper
value, as_ref=input_arg.is_ref)
File "C:\Users\SDS-1\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1264, in internal_convert_to_tensor
raise RuntimeError("Attempting to capture an EagerTensor without "
RuntimeError: Attempting to capture an EagerTensor without building a function.
I used to use Keras and PyTorch, and I am a newbie to TensorFlow 2.0. As far as I know, placeholders from TF 1.x were deprecated because of eager execution, and the problem seems to be related to that.
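For reference, a minimal sketch (stripped of any A3C logic) that triggers the same RuntimeError, consistent with the eager-execution suspicion above: the error is raised when a tensor created in eager mode is captured inside a graph context.

import tensorflow as tf

t = tf.constant([1.0, 2.0])  # an EagerTensor, created in eager mode

g = tf.Graph()
with g.as_default():
    # Using the eager tensor inside a graph context raises:
    # RuntimeError: Attempting to capture an EagerTensor without
    # building a function.
    tf.concat([t, t], axis=0)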

Convert a pipeline_pb2.TrainEvalPipelineConfig to JSON or YAML file for tensorflow object detection API

I want to convert a pipeline_pb2.TrainEvalPipelineConfig to JSON or YAML file format for the TensorFlow object detection API. I tried converting the protobuf file using:
import tensorflow as tf
from google.protobuf import text_format
import yaml
from object_detection.protos import pipeline_pb2


def get_configs_from_pipeline_file(pipeline_config_path, config_override=None):
    '''
    Read the .config file and convert it to a protobuf object.
    '''
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with tf.gfile.GFile(pipeline_config_path, "r") as f:
        proto_str = f.read()
        text_format.Merge(proto_str, pipeline_config)
    if config_override:
        text_format.Merge(config_override, pipeline_config)
    # print(pipeline_config)
    return pipeline_config


def create_configs_from_pipeline_proto(pipeline_config):
    '''
    Return the configurations as a dictionary.
    '''
    configs = {}
    configs["model"] = pipeline_config.model
    configs["train_config"] = pipeline_config.train_config
    configs["train_input_config"] = pipeline_config.train_input_reader
    configs["eval_config"] = pipeline_config.eval_config
    configs["eval_input_configs"] = pipeline_config.eval_input_reader
    # Keep eval_input_config only for backwards compatibility. All clients
    # should read eval_input_configs instead.
    if configs["eval_input_configs"]:
        configs["eval_input_config"] = configs["eval_input_configs"][0]
    if pipeline_config.HasField("graph_rewriter"):
        configs["graph_rewriter_config"] = pipeline_config.graph_rewriter
    return configs


configs = get_configs_from_pipeline_file('pipeline.config')
config_as_dict = create_configs_from_pipeline_proto(configs)
But when I try converting the returned dictionary to YAML with yaml.dump(config_as_dict), it says
TypeError: can't pickle google.protobuf.pyext._message.RepeatedCompositeContainer objects
For json.dumps(config_as_dict) it says:
Traceback (most recent call last):
File "config_file_parsing.py", line 48, in <module>
config_as_json = json.dumps(config_as_dict)
File "/usr/lib/python3.5/json/__init__.py", line 230, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.5/json/encoder.py", line 179, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: label_map_path: "label_map.pbtxt"
shuffle: true
tf_record_input_reader {
input_path: "dataset.record"
}
is not JSON serializable
Would appreciate some help here.
JSON can only dump a subset of the Python primitives plus the dict and list collections (with limitations on self-referencing).
YAML is more powerful and can be used to dump arbitrary Python objects, but only if those objects can be "investigated" during the representation phase of the dump, which essentially limits that to instances of pure Python classes. For objects created at the C level, one can write explicit dumpers; if none is available, Python will try to use the pickle protocol to dump the data to YAML.
Inspecting protobuf on PyPI shows that there are non-generic wheels available, which is usually an indication of C code optimization. Inspecting one of these files indeed shows a pre-compiled shared object.
Although you make a dict out of the config, that dict can of course only be dumped when all its keys and all its values can be dumped. Since your keys are strings (necessary for JSON), you need to look at each of the values to find the one that doesn't dump, and convert it to a dumpable object structure (dict/list for JSON, pure Python class for YAML).
You might want to take a look at the json_format module (google.protobuf.json_format).
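A minimal sketch of that route, assuming configs is the TrainEvalPipelineConfig proto returned by get_configs_from_pipeline_file above:

import json
import yaml
from google.protobuf import json_format

# Let protobuf itself turn the message into plain dicts/lists/scalars,
# which both json and yaml can then serialize without custom dumpers.
config_as_dict = json_format.MessageToDict(configs)

with open('pipeline.json', 'w') as f:
    json.dump(config_as_dict, f, indent=2)

with open('pipeline.yaml', 'w') as f:
    yaml.safe_dump(config_as_dict, f)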

Convert .caffemodel to .pb files

I have a .caffemodel file and I want to use it in my iOS application through Caffe2Kit, but the instance init function takes as parameters two .pb files called "initNet" and "predictNet". I tried to use caffe_translator:
python -m caffe2.python.caffe_translator deploy_nodist.prototxt global_model.caffemodel
but I got an error message:
KeyError: 'No translator registered for layer: name: "Slice"\ntype: "Slice"\nbottom: "data_l_ab_mask"\ntop: "data_l"\ntop: "data_ab_mask"\nslice_param {\n slice_point: 1\n axis: 1\n}\n yet.'
I also tried to convert this .caffemodel file to a .mlmodel file with coremltools:
coreml_model = coremltools.converters.caffe.convert('global_model.caffemodel')
But I got this:
Layer 0: Type: 'TransformingFastHDF5Input', Name: 'img'. Output(s): 'img'.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/anaconda2/lib/python2.7/site-packages/coremltools/converters/caffe/_caffe_converter.py", line 191, in convert
predicted_feature_name)
File "/anaconda2/lib/python2.7/site-packages/coremltools/converters/caffe/_caffe_converter.py", line 255, in _export
predicted_feature_name)
RuntimeError: Cannot convert caffe layer of type 'TransformingFastHDF5Input'.
How can I integrate this .caffemodel into my iOS application?
Or maybe I need custom layers for the .mlmodel? But I don't know Python.
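As with the CPMData question above, one thing worth trying for the coremltools attempt (an observation about the converter API, not a verified fix): passing only the .caffemodel makes the converter read the training-time network stored in the weights file, which is where layers like TransformingFastHDF5Input come from. Supplying the deploy prototxt as part of a tuple may get past that layer, though the network can still contain other layers the converter does not handle:

import coremltools

# Sketch: bundle weights and deploy prototxt so the converter uses the
# inference-time network definition instead of the training-time one.
coreml_model = coremltools.converters.caffe.convert(
    ('global_model.caffemodel', 'deploy_nodist.prototxt')
)
coreml_model.save('global_model.mlmodel')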

JSON Parsing with Nao robot - AttributeError

I'm using a NAO robot with naoqi version 2.1 and Choregraphe on Windows. I want to parse JSON from a file attached to the behavior. I attached the file as described in that link.
Code:
def onLoad(self):
    self.filepath = os.path.join(os.path.dirname(ALFrameManager.getBehaviorPath(self.behaviorId)), "fileName.json")

def onInput_onStart(self):
    with open(self.filepath, "r") as f:
        self.data = self.json.load(f.get_Response())
        self.dataFromFile = self.data['value']
        self.log("Data from file: " + str(self.dataFromFile))
But when I run this code on the robot (connected via a router), I get this error:
[ERROR] behavior.box :_safeCallOfUserMethod:281 _Behavior__lastUploadedChoregrapheBehaviorbehavior_1136151280__root__AbfrageKontostand_3__AuslesenJSONDatei_1: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/naoqi.py", line 271, in _safeCallOfUserMethod
func()
File "<string>", line 20, in onInput_onStart
File "/usr/lib/python2.7/site-packages/inaoqi.py", line 265, in <lambda>
__getattr__ = lambda self, name: _swig_getattr(self, behavior, name)
File "/usr/lib/python2.7/site-packages/inaoqi.py", line 55, in _swig_getattr
raise AttributeError(name)
AttributeError: json
I have already tried to understand the code at the corresponding lines, but I couldn't fix the error. I do know that the type of my object f is 'file'. How can I open the JSON file as JSON?
Your problem comes from this:
self.json.load(f.get_Response())
... there is no such thing as "self.json" on a Choregraphe box; import json and then call json.load. And what is get_Response? That method doesn't exist on anything in Python that I know of.
You might want to first make a standalone Python script (one that doesn't use the robot) that can read your JSON file, before you try it in Choregraphe. It will be easier.
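A minimal sketch of how the box code could look (assuming the import sits at the top of the box script, and keeping the ALFrameManager path logic from the question):

import json
import os

def onLoad(self):
    self.filepath = os.path.join(
        os.path.dirname(ALFrameManager.getBehaviorPath(self.behaviorId)),
        "fileName.json")

def onInput_onStart(self):
    # A plain file handle can be passed straight to json.load;
    # no get_Response() call is needed.
    with open(self.filepath, "r") as f:
        self.data = json.load(f)
    self.dataFromFile = self.data['value']
    self.log("Data from file: " + str(self.dataFromFile))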