Detectron2 models not generating any results

I am just trying out Detectron2 with some basic code, as follows:
from PIL import Image
from torchvision import transforms
import torch
from detectron2 import model_zoo

model = model_zoo.get('COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml', trained=True)
im = Image.open('input.jpg')
t = transforms.ToTensor()
model.eval()
with torch.no_grad():
    im = t(im)
    output = model([{'image': im}])
print(output)
However, the model does not produce any meaningful predictions:
[{'instances': Instances(num_instances=0, image_height=480, image_width=640, fields=[pred_boxes: Boxes(tensor([], device='cuda:0', size=(0, 4))), scores: tensor([], device='cuda:0'), pred_classes: tensor([], device='cuda:0', dtype=torch.int64)])}]
I don't quite get what went wrong; the Detectron2 documentation states:
You can also run inference directly like this:
model.eval()
with torch.no_grad():
    outputs = model(inputs)
and
For inference of builtin models, only “image” key is required, and “width/height” are optional.
Given that, I can't seem to find the missing link here.

I had the same issue; for me there were two things to fix. The first was resizing the shortest edge: I used Detectron2's built-in ResizeShortestEdge transform, imported from detectron2.data.transforms. The expected sizes can be found under cfg.INPUT, which lists the max/min sizes for test and train. The other issue was matching the color channels to what the cfg expects (cfg.INPUT.FORMAT). A sketch of both fixes is below.
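For reference, here is a minimal sketch that roughly mirrors what Detectron2's DefaultPredictor does internally. Loading with cv2 (which gives BGR, matching cfg.INPUT.FORMAT for these configs) is my assumption; the config path and 'input.jpg' come from the question. Note that pixel values stay in the 0-255 range rather than being scaled to [0, 1] the way transforms.ToTensor() scales them.
import torch
import cv2
import detectron2.data.transforms as T
from detectron2 import model_zoo

cfg = model_zoo.get_config('COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml', trained=True)
model = model_zoo.get('COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml', trained=True)
model.eval()

# Resize the shortest edge to the range the model was configured for
aug = T.ResizeShortestEdge(
    [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
)

original = cv2.imread('input.jpg')                  # HWC, BGR, uint8
height, width = original.shape[:2]
resized = aug.get_transform(original).apply_image(original)
image = torch.as_tensor(resized.astype('float32').transpose(2, 0, 1))  # CHW, still 0-255

with torch.no_grad():
    outputs = model([{'image': image, 'height': height, 'width': width}])
print(outputs)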

Related

BentoML - Serving a CatBoostClassifier with cat_features

I am trying to create a BentoML service for a CatBoostClassifier model that was trained with a column as a categorical feature. If I save the model and try to make some predictions with the saved model (not as a BentoML service), everything works as expected, but when I create the service using BentoML I get an error:
_catboost.CatBoostError: Bad value for num_feature[non_default_doc_idx=0,feature_idx=2]="Tertiary": Cannot convert 'b'Tertiary'' to float
The value is found in a column named 'road_type' and the model was trained using 'object' as the data type for the column.
If I try to give a float or an integer for the 'road_type' column I get the following error
_catboost.CatBoostError: catboost/libs/data/model_dataset_compatibility.cpp:53: Feature road_type is Categorical in model but marked different in the dataset
If someone has encountered the same issue and found a solution I would appreciate it. Thanks!
I have tried different approaches for saving and loading the model, but unfortunately none of them worked.
You can try to explicitly pass the cat_features to the bentoml runner.
It would be something like this:
import bentoml
from catboost import Pool

runner = bentoml.catboost.get("bentoml_catboost_model:latest").to_runner()
cat_features = [2]  # specify your cat_features indexes
prediction = runner.predict.run(Pool(input_data, cat_features=cat_features))

Keras predict_generator output differs every time

For the last 2 months I was stuck on this issue, and it drove me crazy until I realized that my "probabilities" vector from predict_generator is simply wrong.
I'm using Keras 2, and I have a test folder with sub-directories that contain images (not necessarily the same number of images in each).
Then I import my model, load the weights, and do this:
from keras.applications import ResNet50
model = ResNet50(include_top=True, weights=None, input_shape=(3,224,224),classes=N)
model.load_weights(model_path)
probs1 = model.predict_generator(batches, steps=batches.n/64, verbose=1)
probs2 = model.predict_generator(batches, steps=batches.n/64, verbose=1)
I don't know why, but probs1 != probs2, and probs2 seems like the "correct" predictions.
P.S.
batches.n/64 is not an integer
What should I do?
Have a look at this thread. But it should be fixed already.
Try to put datagen.reset() before model.predict_generator().
Change
steps=batches.n/64
to
steps=batches.n//64
This converts steps to an integer (a short sketch combining both suggestions is below).
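Here is a hedged sketch of how the two suggestions fit together. The directory name 'test_dir', class_mode=None, and the channels-first data_format are my assumptions; N, model_path, and the batch size of 64 come from the question.
from keras.applications import ResNet50
from keras.preprocessing.image import ImageDataGenerator

model = ResNet50(include_top=True, weights=None, input_shape=(3, 224, 224), classes=N)
model.load_weights(model_path)

datagen = ImageDataGenerator(data_format='channels_first')  # model expects channels-first input
batches = datagen.flow_from_directory('test_dir', target_size=(224, 224),
                                      batch_size=64, shuffle=False,  # keep file order fixed
                                      class_mode=None)

batches.reset()          # reset the generator before each predict_generator call
steps = batches.n // 64  # integer steps; note the final partial batch is skipped if 64 does not divide n
probs = model.predict_generator(batches, steps=steps, verbose=1)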

Caffe: Print the softmax score

In the MNIST example included with the Caffe installation: for any given test image, how do I get the softmax scores for each category and do some processing on them? Say, compute their mean and variance.
I am a newbie, so a detailed answer would help me a lot. I am able to train the model and use the testing feature to get predictions, but I am not sure which files need to be edited to get the above results.
You can use the Python interface:
import caffe
net = caffe.Net('/path/to/deploy.prototxt', '/path/to/weights.caffemodel', caffe.TEST)
in_ = read_data(...) # this is up to you to read a sample and convert it to numpy array
out_ = net.forward(data=in_) # assuming your net expects "data" in blob
Now you have the output of your net in the dictionary out_ (the keys are the names of the output blobs). You can run this in a loop over several examples, etc.
I can try to answer your question. Assuming that in your deploy net the softmax layer looks like this:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc6"
  top: "prob"
}
In your Python code that processes the data, building on the code @Shai provided above, you can get the probability of each category by adding:
predicted_prob = net.blobs['prob'].data
predicted_prob will be an array containing the probabilities for all categories.
For example, if you only have two categories, predicted_prob[0][0] will be the probability that this testing data belongs to one category and predicted_prob[0][1] will be the probability of the other one.
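Since the question also asks about the mean and variance of the scores, here is a small hedged addition on top of the snippet above. It assumes predicted_prob was obtained from net.blobs['prob'].data as shown, and just uses plain NumPy:
import numpy as np

scores = np.asarray(predicted_prob[0])  # softmax scores for the first image in the batch
print('mean:    ', scores.mean())       # mean of the per-class probabilities
print('variance:', scores.var())        # variance of the per-class probabilities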
PS:
If you don't want to write any additional Python script: according to https://github.com/BVLC/caffe/tree/master/examples/mnist, this example automatically runs testing every 500 iterations. The "500" is defined in the solver, e.g. https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt.
So you would need to trace the Caffe source code that processes the solver file. I guess it should be https://github.com/BVLC/caffe/blob/master/src/caffe/solver.cpp.
I am not sure solver.cpp is the right file to look at, but in this file you can see that it has functions for testing and for calculating some values. I hope it gives you some ideas if no one else can answer your question.

define theano function with other theano function output

I am new to Theano; can anyone help me define a Theano function like this?
Basically, I have a network model that looks like this:
y_hat, cost, mu, output_hiddens, cells = nn_f(x, y, in_size, out_size, hidden_size, layer_models, 'MDN', training=False)
here the input x is a tensor:
x = tensor.tensor3('features', dtype=theano.config.floatX)
I want to define two theano functions for later use:
f_x_hidden = theano.function([x], [output_hiddens])
f_hidden_mu = theano.function([output_hiddens], [mu], on_unused_input = 'warn')
The first one is fine. For the second one, the problem is that both the input and the output are outputs of the original function. It gives me this error:
theano.gof.fg.MissingInputError: An input of the graph, used to compute Elemwise{identity}(features), was not provided and not given a value.
My understanding is that both output_hiddens and mu are related to the input x, so there should be a relation between them. I tried defining another Theano function from x to mu like:
f_x_mu = theano.function([x], [mu])
and then
f_hidden_mu = theano.function(f_x_hidden, f_x_mu)
but it still does not work. Can anyone help me? Thanks.
The simple answer is NO WAY. As stated here:
Because in Theano you first express everything symbolically and afterwards compile this expression to get functions, ...
You can't use the output of theano.function as input/output for another theano.function since they are already a compiled graph/function.
You should pass the symbolic variables, such as x in your example code for f_x_hidden, to build the function (see the sketch below).
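As a rough sketch under the question's own setup (nn_f and its arguments are taken from the question; the name and type of y here are my assumptions, as is the placeholder some_batch), you can compile a single function from the symbolic input x that returns both outputs:
import theano
import theano.tensor as tensor

x = tensor.tensor3('features', dtype=theano.config.floatX)
y = tensor.tensor3('targets', dtype=theano.config.floatX)  # assumed; use whatever nn_f expects

y_hat, cost, mu, output_hiddens, cells = nn_f(
    x, y, in_size, out_size, hidden_size, layer_models, 'MDN', training=False)

# One compiled function, built from the symbolic input x, that returns both
# the hidden activations and mu in a single call:
f_x_hidden_mu = theano.function([x], [output_hiddens, mu])

hiddens, mu_vals = f_x_hidden_mu(some_batch)  # some_batch: a 3-D numpy array of floatX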

lme4 glmm model convergence issue

I am trying to use the lme4 package for a glmm and am getting a convergence code of 0 and a statement: Model failed to converge with max|grad| = 0.00791467 (tol = 0.001, component 1). I am interested in using the lme4 package because I would like to have AIC values to determine the appropriate model as I add in additional covariates.
Two weeks ago when I tried the same approach I got a warning message that the model failed to converge because of the max|grad| issue, but am not getting the warning message this time, just the statement at the end of the summary output.
Does this mean that the model is not converging? I also used the glmmPQL method. The coefficient parameter estimates are similar between the two model types.
Here is the glmer (lme4) model code. I increased maxfun to deal with other issues I had when I ran the model last time.
l1 <- glmer(Meat_Weight ~ logsh + SAMS_region_2015 + (1|StationID),
            family = Gamma(link = "log"), data = datad,
            control = glmerControl(optCtrl = list(maxfun = 100000)))
Here is the glmmPQL code.
m1 <- glmmPQL(fixed = Meat_Weight ~ logsh + SAMS_region_2015, random = ~1|StationID,
              family = Gamma(link = "log"), data = datad)
I am sure this is not enough information to diagnose the problem, but if anyone has suggestions I can provide more data.
Thanks
Try to change the optimizer:
l1 <- glmer(Meat_Weight ~ logsh + SAMS_region_2015 + (1|StationID),
            family = Gamma(link = "log"), data = datad,
            control = glmerControl(optimizer = "bobyqa"))