isinstance() arg 2 must be a type or tuple of types in YOLOv5

I am getting this issue with YOLOv5 when trying to predict on an image. Any suggestions?

Related

The result of predicting with the CatBoostClassifier model's exported Python file differs from predicting with the model directly

I want to verify that the predicted results from the exported file are consistent with those predicted directly.
I use the exported Python file containing the CatBoostClassifier model description to predict a result:
But the result predicted directly is 2.175615211102761. I have verified that this holds for multiple data points. I want to know why, and how to fix it.
float_sample and cat_sample look like:
Supplementary question: the results predicted using the Python-language model file, produced as described in the CatBoost tutorial, also differ from those predicted directly by the model.
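For context, a minimal sketch of the comparison being made, with hypothetical sample data; model.py is assumed to be the file produced by save_model(..., format="python"), whose apply_catboost_model function returns the raw formula value, so the like-for-like check is against prediction_type="RawFormulaVal":

from catboost import CatBoostClassifier
# model.py is assumed to be the exported Python description of the model.
from model import apply_catboost_model

float_sample = [0.1, 2.5, 3.7]   # hypothetical numeric features
cat_sample = ["red", "large"]    # hypothetical categorical features

model = CatBoostClassifier()
model.load_model("model.cbm")    # the same model in native format

exported = apply_catboost_model(float_sample, cat_sample)
# Feature order must match training: numeric and categorical columns arranged
# exactly as in the training pool; concatenating here is a simplification.
direct = model.predict([float_sample + cat_sample], prediction_type="RawFormulaVal")[0]
print(exported, direct)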

Word2Vec - How can I store and retrieve extra information regarding each instance of the corpus?

I need to combine Word2Vec with my CNN model. To this end, I need to persist a flag (a binary one is enough) for each sentence, as my corpus has two types (a.k.a. target classes) of sentences. So, I need to retrieve this flag for each vector after creation. How can I store and retrieve this information alongside the input sentences of Word2Vec, as I need both of them to train my deep neural network?
p.s. I'm using Gensim implementation of Word2Vec.
p.s. My corpus has 6,925 sentences, and Word2Vec produces 5,260 vectors.
Edit: More detail regarding my corpus (as requested):
The structure of the corpus is as follows:
sentences (label: positive) -- A Python list
    Feature-A: String
    Feature-B: String
    Feature-C: String
sentences (label: negative) -- A Python list
    Feature-A: String
    Feature-B: String
    Feature-C: String
Then all the sentences were given as the input to Word2Vec.
from gensim.models import Word2Vec
word2vec = Word2Vec(all_sentences, min_count=1)
I'll feed my CNN with the extracted features (which is the vocabulary in this case) and the targets of the sentences. So I need the labels of the sentences as well.
Because the Word2Vec model doesn't retain any representation of the individual training texts, this is entirely a matter for you in your own Python code.
That doesn't seem like very much data. (It's rather tiny for typical Word2Vec purposes to have just a 5,260-word final vocabulary.)
Unless each text (aka 'sentence') is very long, you could even just use a Python dict where each key is the full string of a sentence, and the value is your flag.
But if, as is likely, your source data has some other unique identifier per text – like a unique database key, or even a line/row number in the canonical representation – you should use that identifier as a key instead.
In fact, if there's a canonical source ordering of your 6,925 texts, you could just have a list flags with 6,925 elements, in order, where each element is your flag. When you need to know the status of a text from position n, you just look at flags[n].
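For example, a minimal sketch of that positional-list approach (the toy sentences and variable names here are illustrative):

from gensim.models import Word2Vec

# Each text is a list of tokens, kept in one fixed, canonical order.
positive_sentences = [["good", "service"], ["fast", "delivery"]]
negative_sentences = [["broken", "item"], ["late", "refund"]]

all_sentences = positive_sentences + negative_sentences
# Parallel list of flags, aligned by position with all_sentences.
flags = [1] * len(positive_sentences) + [0] * len(negative_sentences)

word2vec = Word2Vec(all_sentences, min_count=1)

# Later, to recover the flag for the text at position n:
n = 2
print(all_sentences[n], flags[n])  # ['broken', 'item'] 0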
(To make more specific suggestions, you'd need to add more details about the original source of the data, and exactly when/why you'd need to be checking this extra property later.)

Language translation using TorchText (PyTorch)

I have recently started with ML/DL using PyTorch. The following PyTorch tutorial explains how we can train a simple model for translating from German to English.
https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html
However I am confused about how to use the model for running inference on custom input. From my understanding so far:
1) We will need to save the "vocab" for both German (input) and English (output) [using torch.save()] so that they can be used later for running predictions; see the sketch after this list.
2) At the time of running inference on a German paragraph, we will first need to convert the German text to a tensor using the German vocab file.
3) The above tensor will be passed to the model's forward method for translation.
4) The model will return a tensor for the destination language, i.e., English in the current example.
5) We will use the English vocab saved in the first step to convert this tensor back to English text.
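For step 1, a minimal sketch of persisting and reloading the vocabs (the file names and the stand-in dict vocabs are illustrative; in the tutorial these are torchtext Vocab objects, which torch.save can serialize the same way):

import torch

# Stand-in vocabs for illustration; in the tutorial these are torchtext Vocab objects.
vocab_de = {"<unk>": 0, "hallo": 1, "welt": 2}
vocab_en = {"<unk>": 0, "hello": 1, "world": 2}

torch.save(vocab_de, "vocab_de.pth")
torch.save(vocab_en, "vocab_en.pth")

# Later, at inference time:
vocab_de = torch.load("vocab_de.pth")
vocab_en = torch.load("vocab_en.pth")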
Questions:
1) If the above understanding is correct, can these steps be treated as a generic approach for running inference on any language-translation model, provided we know the source and destination languages and have the vocab files for both? Or can we use the vocab provided by third-party libraries like spaCy?
2) How do we convert the output tensor returned by the model back to the target language? I couldn't find any example of how to do that. The tutorial above explains how to convert the input text to a tensor using the source-language vocab.
I could easily find various examples and detailed explanations for image/vision models, but not much for text.
Yes, broadly what you are saying is correct, and of course you can use any vocab, e.g. one provided by spaCy. To convert a tensor back into natural text, one of the most common techniques is to keep both a dict that maps indexes to words and another dict that maps words to indexes. The code below builds both:
from collections import defaultdict

tok2idx = defaultdict(lambda: 0)  # unknown tokens map to index 0
idx2tok = {}
index = 1  # start at 1 so real tokens never collide with the unknown index 0
for seq in sequences:
    for tok in seq:
        if tok not in tok2idx:
            tok2idx[tok] = index
            idx2tok[index] = tok
            index += 1
Here sequences is a list of all the sequences (i.e. sentences) in your dataset. You can adapt the code easily if you have only a flat list of words or tokens, by keeping only the inner loop.
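Continuing from that, decoding the model's output tensor is just a lookup per position through idx2tok. A minimal sketch, assuming greedy argmax decoding over the vocabulary dimension (beam search is a common alternative):

import torch

# Suppose `output` is what the model returns, with shape
# (target_sequence_length, vocab_size): one score per vocabulary entry per step.
output = torch.randn(7, len(idx2tok) + 1)   # dummy tensor for illustration

predicted = output.argmax(dim=-1).tolist()  # greedy decoding: best index per step
# Map each index back to its word; index 0 (unknown) has no entry in idx2tok.
words = [idx2tok[i] for i in predicted if i in idx2tok]
print(" ".join(words))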

What format of image annotation is this?

I have a file that has annotations for an image for object detection. I want to change this into COCO format so that I can retrain a YOLO model on it, but I don't know how to change this format, or whether it belongs to some other model. It is saved as JSON, which makes me think of COCO, but I am not sure. Any help with this will be appreciated.
This is the file:
{"review_status":"pass","annotated_data":[{"data":[],"label":"Truck","bounding_box_data":[{"x":546,"y":245,"width":63,"height":93},{"x":606,"y":213,"width":48,"height":71}]},{"data":[],"label":"Pedestrian","bounding_box_data":[{"x":486,"y":305,"width":19,"height":48}]},{"data":[],"label":"Bus","bounding_box_data":[{"x":889,"y":226,"width":39,"height":53}]}],"annotation_status":"done"}
OK, thanks to gameon67 I worked through the issue. What I had to do:
1. Parse the JSON file.
2. Get the x, y, w, h data from the file.
3. Translate that into what YOLO needs: take the centroid point from the x and y using geometry.
4. Make a text file like this, with one line per object:
object-class x y width height
I based a lot of this on this guide:
https://medium.com/@manivannan_data/how-to-train-yolov2-to-detect-custom-objects-9010df784f36
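For reference, a minimal sketch of that conversion in Python (the image size, the class-index mapping, and the file names are assumptions; the JSON's x and y look like the top-left corner, and YOLO wants the box centre and size normalised by the image dimensions):

import json

# Assumed values: adjust to your image and your class list.
IMG_W, IMG_H = 1920, 1080
CLASS_IDS = {"Truck": 0, "Pedestrian": 1, "Bus": 2}

with open("annotation.json") as f:
    ann = json.load(f)

lines = []
for obj in ann["annotated_data"]:
    cls = CLASS_IDS[obj["label"]]
    for box in obj["bounding_box_data"]:
        # Convert top-left x, y plus width, height to a normalised centroid box.
        cx = (box["x"] + box["width"] / 2) / IMG_W
        cy = (box["y"] + box["height"] / 2) / IMG_H
        w = box["width"] / IMG_W
        h = box["height"] / IMG_H
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")

with open("annotation.txt", "w") as f:
    f.write("\n".join(lines))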

Correctly exporting shape keys in Blender

I want to export shape keys in Blender, but the morph target array in the exported JSON file has arrays inside an array. I want all the shape-key points to be inside a single array. Any tips on how to export the shape keys correctly?
Screenshot of my exported JSON file: https://drive.google.com/open?id=0B6wLPPFE11zoWWNvQy1YbFNXblU
I want the morphTarget array to be like the one in this knight example (http://threejs.org/examples/models/skinned/knight.js), with all the points in a single array.
I am using Blender 2.76 and the io_three exporter. Thank you in advance.
What you are seeing is correct even though it is misleading. The confusion comes from two different types of data stored using the same data block name.
Each morphTarget contains an array referenced as "vertices".
One type of data is the location of each vertex. This is what you see in the knight example. Each coordinate of each vertex is included in the array, giving the actual vertex locations for each frame. The morphTargets for this will have names like "animation_000000", "animation_000001"...
"morphTargets":[{
"name":"animation_000000",
"vertices":[-1,-1,1,-1,1,1,-1,-1,-1,-1,1,-1,1,-1,1,1,
1,1,1,-1,-1,1,1,-1]
},{
"name":"animation_000001",
"vertices":[-1.0149,-1.0149,1.0149,-1.0149,1.0149,1.0149,
-1.0149,-1.0149,-1.0149,-1.0149,1.0149,-1.0149,1.0149,
-1.0149,1.0149,1.0149,1.0149,1.0149,1.0149,-1.0149,-1.0149,
1.0149,1.0149,-1.0149]
},{ ....
To get this animation data you need to export with "Apply Modifiers" enabled and "Blend Shape animation" disabled.
The other type of data is an array of vectors that constitute the shapekey data, so you get an array containing arrays of 3 numbers. A shapekey in Blender is a collection of vectors defining each vertex's movement relative to its original position, and this appears to be what is exported here. The morphTarget names for this data match the shapekey names used; by default that is "Key 1", "Key 2"...
"morphTargets":[{
"name":"Key 1",
"vertices":[[-1.01209,-1.17045,-1.09497],[-1.10082,-1.09175,1.09065],
[-1.085,1.01564,-1.17664],[-1.17373,1.09434,1.00897],
[1.17373,-1.09434,-1.00897],[1.085,-1.01564,1.17664],
[1.10082,1.09175,-1.09065],[1.01209,1.17045,1.09497]]
},{
"name":"Key 2",
"vertices":[[-1.49369,-1.20168,-1.4188],[-1.47777,-1.31454,1.33278],
[-1.26675,1.54064,-1.30763],[-1.25083,1.42778,1.44395],
[1.25083,-1.42778,-1.44395],[1.26675,-1.54064,1.30763],
[1.47777,1.31454,-1.33278],[1.49369,1.20168,1.4188]]
}],
To get the shapekey data in the export, you need to enable "Blend Shape animation" and disable "Apply Modifiers". This export doesn't seem to include any keyframed animation relating to the shapekeys.
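If what you need is the nested shapekey data in the flat, knight-style layout, you can also flatten it after export; here is a minimal Python sketch (file names are illustrative). Keep in mind that even flattened, these values are still relative offsets from the base mesh, not the absolute per-frame positions the knight file stores:

import json

with open("exported_model.json") as f:
    data = json.load(f)

for target in data["morphTargets"]:
    verts = target["vertices"]
    # If "vertices" is a list of [x, y, z] triplets, flatten it into one array.
    if verts and isinstance(verts[0], list):
        target["vertices"] = [coord for triplet in verts for coord in triplet]

with open("exported_model_flat.json", "w") as f:
    json.dump(data, f)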