How can we add an extra class to an existing YOLOv5 model? - yolov5

I need to add one extra class to the existing 80 classes of YOLOv5. I am aware of custom training, but after that the model loses the information about the pretrained 80 classes. My requirement is the existing 80 classes + 1 custom class.

Check the answer here: https://github.com/ultralytics/yolov5/issues/1071
You can label your new classes starting from 80 and then simply append your new data to COCO in your data.yaml. See GlobalWheat2020.yaml for an example of training on multiple datasets.
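As a minimal sketch of what such a combined data.yaml could look like, here is a small Python helper that appends one class to the 80 COCO names and lists both datasets as training sources. The paths, output file name and the class name "my_new_class" are placeholders, and it assumes the coco.yaml shipped with the YOLOv5 repo is available to read the original names from:

import yaml  # pip install pyyaml

# Read the 80 COCO class names from the repo's own config
# (a list in older versions of the repo, a dict in newer ones).
coco = yaml.safe_load(open("data/coco.yaml"))
names = list(coco["names"].values()) if isinstance(coco["names"], dict) else list(coco["names"])

combined = {
    # Both datasets are listed, so training sees COCO plus the new images.
    "train": ["../datasets/coco/images/train2017",
              "../datasets/my_new_class/images/train"],
    "val": ["../datasets/coco/images/val2017",
            "../datasets/my_new_class/images/val"],
    "nc": 81,
    "names": names + ["my_new_class"],  # the new class gets index 80
}

with open("data/coco_plus_one.yaml", "w") as f:
    yaml.safe_dump(combined, f, sort_keys=False)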

Related

I am training YOLOv5. My label .txt files contain 60 classes, but I want to train the model on only 3 classes. How can I do that?

I am training YOLOv5 on the xView dataset, which contains 60 classes, and the label .txt files contain all 60 of them. I want to train the model on only 3 classes so that training is faster. Does anyone know how I can do that? Should I change the class names in data.yaml?
Delete all the classes you don't want to use from the .txt label files that belong to the images in your dataset (you can write a short script to do it, as sketched below), then update the label files and data.yaml to reflect your new 3-class setup. It should work.
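A minimal sketch of such a filtering script, assuming YOLO-format label files (class x_center y_center width height per line) and three made-up class IDs 3, 17 and 42 to keep, remapped to 0, 1 and 2:

from pathlib import Path

KEEP = {3: 0, 17: 1, 42: 2}       # old class id -> new class id (placeholder ids)
label_dir = Path("labels/train")  # placeholder path to the YOLO .txt label files

for txt in label_dir.glob("*.txt"):
    kept = []
    for line in txt.read_text().splitlines():
        parts = line.split()
        if parts and int(parts[0]) in KEEP:
            kept.append(" ".join([str(KEEP[int(parts[0])])] + parts[1:]))
    # Rewrite the file with only the remapped classes (empty if none remain).
    txt.write_text("\n".join(kept) + ("\n" if kept else ""))

After filtering, set nc: 3 and the three matching names in data.yaml.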

How can I create a module with learnable parameters when one set of parameters comes from the Dataset class?

So, I have a model with some parameters that I need to train. There are also some parameters in the class Dataset(torch.utils.data.Dataset) which do some preprocessing, and I need to train them together with the model's parameters. Can you please let me know if what I am doing is correct:
params = list(model.parameters())
params.extend(list(Dataset.parameters()))
opt = torch.optim.Adam(params,lr=1e-4)
One more question. Since the Dataset class generates both train_ds and val_ds, I only need to train the parameters of the Dataset class when producing train_ds; to get val_ds, I need to use the already trained parameters of the Dataset. So how do I create train_ds and val_ds? Should I initially create both of them, train the model with train_ds, and then create them again and use val_ds for testing?
The Dataset class does not have .parameters() - it is used only to handle data.
Your model class is derived from nn.Module, which has .parameters() and can be trained.
It seems like you need to add the trainable preprocessing to your model rather than have it as part of your dataset class, as sketched below.
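A minimal sketch of that idea, with a made-up learnable preprocessing step (a per-channel scale and shift) moved into the model as an nn.Module so that model.parameters() picks it up automatically; the backbone here is only a placeholder:

import torch
import torch.nn as nn

class LearnablePreprocess(nn.Module):
    # Placeholder preprocessing: a learnable per-channel affine transform.
    def __init__(self, channels=3):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        return x * self.scale + self.shift

class Model(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.preprocess = LearnablePreprocess()  # trained with the rest of the model
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_classes))

    def forward(self, x):
        return self.backbone(self.preprocess(x))

model = Model()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # includes the preprocessing parameters

At validation time the same, already trained preprocessing is applied inside the model, so there is no need to rebuild val_ds with special parameters.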

CNTK load pictures with class affiliation in percent

I am trying to build a neural network with CNTK to estimate the age of a person.
Currently I want to try an approach using only one class, so every picture gets label 0 but also an affiliation to that class in percent.
The net should learn that the probability of a 30-year-old person matching class 0 is 30%, of a 60-year-old 60%, of a 93-year-old 93%.
Currently I am working on a reduced data set of 50k images (.jpg) and use the MinibatchSourceFromData function.
Since I have a lot more training data available (400k + augmentations), I wanted to load the pictures in chunks for training, due to limited server RAM.
Following THIS CNTK tutorial, I have to use the MinibatchSource function and feed a deserializer with a map_file which includes the paths and labels of my training data.
My problem is that the map_file doesn't support class affiliations; I can only define which picture belongs to which class.
Since I am new to CNTK and deep learning in general, I'd like to know if there is another option to read chunked data and also tell the network how likely it is that a picture corresponds to a specific class.
Best regards.
You can create a composite reader: one deserializer handles your images, another can deserialize your numeric data.
Read this, the last section shows you how to use a composite reader; a rough sketch is also below.
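A minimal sketch of such a composite reader, assuming a map file images_map.txt (image path, tab, dummy label per line) for the pictures and a CTF file targets.ctf holding the per-image class affiliation as a single float; the file names, the stream name "affinity" and the 224x224 image size are placeholders:

import cntk.io.transforms as xforms
from cntk.io import (MinibatchSource, ImageDeserializer, CTFDeserializer,
                     StreamDef, StreamDefs)

# Deserializer for the images listed in the map file.
image_source = ImageDeserializer("images_map.txt", StreamDefs(
    features=StreamDef(field="image", transforms=[
        xforms.scale(width=224, height=224, channels=3, interpolations="linear")]),
    labels=StreamDef(field="label", shape=1)))  # dummy label column, everything is class 0

# Deserializer for the numeric affiliation values, with lines such as:
# |affinity 0.30
target_source = CTFDeserializer("targets.ctf", StreamDefs(
    affinity=StreamDef(field="affinity", shape=1, is_sparse=False)))

# The composite reader streams both sources together chunk by chunk,
# so the full 400k-image set never has to fit into RAM at once.
reader = MinibatchSource([image_source, target_source], randomize=True)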

Creating a btBvhTriangleMeshShape in libgdx from node of model

I want to add the nodes of a model to btDynamicsWorld with a btBvhTriangleMeshShape. I can do this with the whole model by passing model.meshParts, but nodes do not have mesh parts.
I need this because only the required nodes should be added to btDynamicsWorld.
See the wiki: https://github.com/libgdx/libgdx/wiki/Bullet-Wrapper---Using-models
It's one line:
btCollisionShape shape = Bullet.obtainStaticNodeShape(model.nodes);
You can pass in as many nodes as you like (or just one, if you prefer).

M2M relationship or 2 FKs?

Which of the following structures would be preferable:
# M2M
class UserProfile(models.Model):
    ...
    groups = models.ManyToManyField('Group')

class Group(models.Model):
    ...

or -

# 2 FKs
class UserProfile(models.Model):
    ...

class Group(models.Model):
    ...

class GroupMember(models.Model):
    user = models.ForeignKey(UserProfile)
    group = models.ForeignKey(Group)
Which would be better?
You can also combine these two variants using the through option:
groups = models.ManyToManyField(Group, through='GroupMember')
What do you mean by better? Usually you don't need to create an intermediate model (except in the case when you have to store extra data on the relationship).
ManyToManyField does its job perfectly, so don't rewrite its functionality yourself.
The two are essentially the same. When you use a M2M field, Django automatically creates an intermediary model which is pretty much exactly like your GroupMember model. However, it also sets up some API hooks allowing you to access the Group model directly from the UserProfile model, without having to mess with the intermediary model.
You can get the same hooks added back by using through as @San4ez explains, but you've only made things more complicated. Creating a custom through model is only beneficial if you need to add additional fields to the relationship. Otherwise, stick with the default.
Long and short, #1 is better, only because it's exactly the same as #2 but simpler and with no extraneous code.
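As a rough illustration of those hooks (assuming the UserProfile and Group models from option #1 above; the lookups are placeholders):

# With the plain ManyToManyField, the join table is implicit and the related
# objects are reached directly through the field's manager.
profile = UserProfile.objects.get(pk=1)  # placeholder lookup
group = Group.objects.get(pk=2)          # placeholder lookup

profile.groups.add(group)        # creates the row in the hidden join table
profile.groups.all()             # all groups of this user
group.userprofile_set.all()      # reverse access: all profiles in this group

# With an explicit through model (option #2 plus through='GroupMember'),
# memberships are typically created via the intermediary model instead:
# GroupMember.objects.create(user=profile, group=group)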