Detectron2 and Imagenet models - deep-learning

I'm a bit confused about the Detectron models in the model zoo.
According to the user guide, all COCO models are pretrained on COCO 2017, so they have about 80 output classes. What about the ImageNet models? The ImageNet dataset has 1,000 output classes. So, if I use Detectron's ImageNet-pretrained models, do I only get 80 classes? How can I get 1,000 or more output classes, like the ImageNet dataset?
And how can I see the number of output classes of a pretrained net from the model zoo?

Related

Use transfer-learning weights trained on custom data after training on other custom data

I hope you are well. I have a question: can I take a pre-trained model like VGG16, trained on ImageNet, fine-tune it on my custom data, save the weights, and then use those weights to train on another custom dataset?
How can I do this, please?
Thanks for your time.
Here is an example with the PyTorch framework.
When you train any model, you have to define a training strategy.
There are two ways to save and load trained weights for transfer learning in PyTorch.
The first is to save and load only the state dict, i.e. the weights (parameters) (recommended).
While training, once a condition you defined is satisfied, save the trained weights with this command (refer to the link):
torch.save(model.state_dict(), PATH)
Then, when you train a VGG16 model on another custom dataset and want to transfer-learn from the previously trained weights, use this:
model = VGG16(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
The second way is to save and load the whole model (model structure included).
To save the model, use (refer to the link):
torch.save(model, PATH)
And to load it:
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
You can refer to these links: [1], [2]

My VGG16 model is pretrained on RGB images but my test data are gray-scale. Would my results improve if my training dataset were gray-scale too?

I have trained a VGG16 model to classify chest X-ray images, which are gray-scale images.
As you know, the VGG16 model is pre-trained on the ImageNet dataset, in which all the images are RGB.
Although my results were reliable, I was wondering: if VGG16 had been pre-trained on gray-scale images, would I achieve better results?

What's the most elegant method to convert models between PyTorch and Caffe?

Just like the title says: converting models from PyTorch to Caffe and from Caffe to PyTorch.
Not only for inference, but models that remain trainable after conversion.
Any good pointers?

Can a fine-tuned VGG19 model outperform a fine-tuned Inception-V3 model?

I am using pre-trained and fine-tuned models for cell classification. The images were not seen before by any of the transfer-learning models. In the process, I found that a fine-tuned VGG19 model outperforms the fine-tuned Inception-V3 model, even though both models were trained on the ImageNet data. What contributes to this difference? Is it because of the model architecture?

Pre-trained LDA models on tweets?

I have been looking for a pre-trained LDA model trained on tweets, but to no avail. I want to use such a model, trained on a large corpus, to infer the topics of small documents. Please share if you have any.