How to convert a TensorFlow 2.x model to TensorFlow 1.14 - deep-learning

I am trying to import a trained TensorFlow 2.11 model into LabVIEW, but the LabVIEW IMAQ DL Create module only supports TensorFlow 1.14, so I want to convert my trained model to the old version. How can I do it?
The TF model is simple; it only contains a few Conv2D layers. I have tried to import the frozen graph into LabVIEW, but I am getting this error:
NodeDef mentions attr 'explicit_paddings' not in Op<name=MaxPool; signature=input:T -> output:T; attr=T:type,default=DT_FLOAT,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_UINT8, DT_INT16, DT_INT8, DT_UINT16, DT_QINT8]; attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW", "NCHW_VECT_C"]>; NodeDef: {{node sequential/max_pooling2d/MaxPool}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
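One possible workaround (a hedged sketch, not an official conversion path): the error means the TF 1.14 binary does not recognize the explicit_paddings attribute that TF 2.x attaches to MaxPool nodes. Since your padding is SAME or VALID anyway, stripping that attribute from the frozen GraphDef may be enough for the 1.14 parser to accept it; the file names below are placeholders.
from tensorflow.core.framework import graph_pb2

graph_def = graph_pb2.GraphDef()
with open('frozen_graph.pb', 'rb') as f:  # placeholder path
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    # TF 1.14's MaxPool op has no 'explicit_paddings' attr, so drop it
    if node.op == 'MaxPool' and 'explicit_paddings' in node.attr:
        del node.attr['explicit_paddings']

with open('frozen_graph_tf114.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())
If other TF-2.x-only ops or attributes show up after this one, the more robust route is to rebuild the same architecture under tf.compat.v1 in a TF 1.14 environment and re-freeze it there.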

Related

Huggingface TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=1) give ValueError

Reproducible on Google Colab (transformers 4.24.0):
from transformers import TFRobertaForSequenceClassification
model = TFRobertaForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=1)
I want to set num_labels=1 because I would like to use the model for regression, but the above code gives a ValueError:
ValueError: cannot reshape array of size 3072 into shape (768,1)
I remember this worked for the DistilBERT model. Is this the right way to call it, or is this not supported by HF for anything other than the BERT families?
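A likely fix (hedged, since I have not verified it against this exact checkpoint): the checkpoint's classification head was trained with 4 emotion labels, and 4 × 768 = 3072 matches the reshape error, so its head weights cannot be reshaped to a single label. Passing ignore_mismatched_sizes=True tells from_pretrained to discard the mismatched head and initialize a fresh one:
from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained(
    "cardiffnlp/twitter-roberta-base-emotion",
    num_labels=1,
    ignore_mismatched_sizes=True,  # drop the 4-label head, init a new 1-unit head
)
The new head is randomly initialized, so the model needs fine-tuning on your regression data before its outputs mean anything.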

How to solve EfficientDet with PyTorch RuntimeError?

I want to train EfficientDet with PyTorch Lightning, but I have a problem.
Most of my issues so far were version errors, so I matched the versions the model expects:
from effdet.config.model_config import efficientdet_model_param_dict
from effdet import get_efficientdet_config, EfficientDet, DetBenchTrain
from effdet.efficientdet import HeadNet
When I import effdet, I get a RuntimeError like this:
RuntimeError:
object has no attribute nms:
File "/home/ubuntu/anaconda3/envs/pytorch1.7.1_p37/lib/python3.7/site-packages/torchvision/ops/boxes.py", line 35
"""
_assert_has_ops()
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'nms' is being compiled since it was called from '_batched_nms_vanilla'
# Ideally for GPU we'd use a higher threshold
# keep only top max_det_per_image scoring predictions
batch_detections.append(detections)
return torch.stack(batch_detections, dim=0)
How can I solve this problem? I want to get the model running quickly.
Also, this is my version status:
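Whatever the exact versions are, a hedged first check: 'object has no attribute nms' from torchvision.ops almost always means the installed torchvision binary was built against a different torch release, so its compiled C++ ops (including nms) never register, and effdet trips over that at import time. Verify the pair actually matches:
import torch
import torchvision

# these two must come from a matching released pair, otherwise
# torch.ops.torchvision.nms is never registered
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
The environment path in the traceback suggests torch 1.7.1, whose matching torchvision release is 0.8.2, so reinstalling the pair together (pip install torch==1.7.1 torchvision==0.8.2) should clear the error.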

How to print the trained parameters of a classifier in TensorFlow

I trained a model in TensorFlow, and saved it on disk.
Now I want to load it from checkpoint and print the trained parameters.
Something like:
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=hidden_units,
warm_start_from=checkpoint_path)
print(parameters(classifier))
How do I do that?
I'm using tf version 1.14.
I think you can use these two methods get_variable_names() and get_variable_value() to retrieve the parameters in your classifier.
params = classifier.get_variable_names()
for p in params:
    print(p, classifier.get_variable_value(p))
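Alternatively (a hedged sketch for TF 1.14), you can read the variables straight from the checkpoint file without building an estimator at all, using the checkpoint utilities:
import tensorflow as tf  # TF 1.14

# checkpoint_path is the same path passed to warm_start_from in the question
for name, shape in tf.train.list_variables(checkpoint_path):
    print(name, shape)
    print(tf.train.load_variable(checkpoint_path, name))  # the actual numpy array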

Coefficient in support vector regression (SVR) using grid search (GridSearchCV) and Pipeline in Scikit Learn

I am having trouble accessing the coefficients of a support vector regression model (SVR) in scikit-learn when the model is embedded in a pipeline and a grid search.
Consider the following example:
from sklearn.datasets import load_iris
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
iris = load_iris()
X_train = iris.data
y_train = iris.target
clf = SVR(kernel='linear')
select = SelectKBest(k=2)
steps = [('feature_selection', select), ('svr', clf)]
pipeline = Pipeline(steps)
grid = GridSearchCV(pipeline, param_grid={"svr__C": [1, 10, 100], "svr__gamma": np.logspace(-2, 2)})
grid.fit(X_train, y_train)
This seems to work fine, but when I try to access the coefficients of the best-fitting model
grid.best_estimator_.coef_
I get an error message: AttributeError: 'Pipeline' object has no attribute 'coef_'.
I also tried to access the individual steps of the pipeline:
pipeline.named_steps['svr']
but could not find the coefficients there.
I just happened to come across the same problem, and this post had the answer:
grid.best_estimator_ contains an instance of the pipeline, which consists of steps. The last step should always be the estimator, so you should always find the coefficients at:
grid.best_estimator_.steps[-1][1].coef_
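Equivalently, since the steps are named, you can index the fitted pipeline by step name (a small sketch using the 'svr' name from the question; note it has to be the fitted grid.best_estimator_, not the original unfitted pipeline object):
best_svr = grid.best_estimator_.named_steps['svr']
print(best_svr.coef_)       # weight vector of the linear SVR
print(best_svr.intercept_)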

How do you export .caffemodels to other applications?

Is it possible to translate the info in a .caffemodel file such that it could be read by (for example) MATLAB? That is, is there a way to write your model using something other than prototxt and import the weights trained using Caffe?
If the answer is "Nope, it's a binary file and will always remain that way", is there some documentation regarding the structure of the file so that one could extract the important information somehow?
As you know, a .caffemodel file consists of weights and biases.
A simple way to read the weights and biases of a caffemodel, given the prototxt, is to just load the network in Python and read the parameters.
You can use:
import caffe
net = caffe.Net(<prototxt-file>, <model-file>, <phase>)
and access the params from net.params
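Since the question mentions MATLAB, here is a hedged sketch of dumping net.params into a .mat file that MATLAB can load directly (the file names are placeholders, and layer names are assumed to be valid MATLAB variable names):
import caffe
import scipy.io

net = caffe.Net('deploy.prototxt', 'net.caffemodel', caffe.TEST)  # placeholder files

arrays = {}
for layer_name, blobs in net.params.items():
    # blobs[0] holds the weights; blobs[1], when present, the biases
    for i, blob in enumerate(blobs):
        arrays['%s_%d' % (layer_name, i)] = blob.data
scipy.io.savemat('weights.mat', arrays)  # in MATLAB: load('weights.mat')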
I'll take VGG as an example:
import sys
from caffe.proto import caffe_pb2

net = caffe_pb2.NetParameter()
caffemodel = sys.argv[1]
with open(caffemodel, 'rb') as f:
    net.ParseFromString(f.read())
for i in net.layer:
    print(i.ListFields()[0][-1])  # the first field of each layer is its name
#conv1
#relu1
#norm1
#pool1
#conv2
#relu2
#norm2
#pool2
#conv3
#relu3
#conv4
#relu4
#conv5
#relu5
#pool5
#fc6
#relu6
#drop6
#fc7
#relu7
#drop7
#fc8
#prob