How to add the last classification layer in an EfficientNet pre-trained model in PyTorch?

I'm using an EfficientNet pre-trained model for my image classification project in PyTorch, and my purpose is to change the number of classes, which is initially 1000, to 4.
However, when I try replacing the model._fc layer, I keep seeing the error 'EfficientNet' object has no attribute 'classifier'. Here is my code (Config.NUM_CLASSES = 4):
elif Config.MODEL_NAME == 'efficientnet-b3':
    from efficientnet_pytorch import EfficientNet
    model = EfficientNet.from_pretrained('efficientnet-b3')
    model._fc = torch.nn.Linear(in_features=model.classifier.in_features, out_features=Config.NUM_CLASSES, bias=True)
The situation is different when I set model.fc at the end of the ResNet part; it clearly changes the number of output classes to 4 in ResNet-18. Here is the code for that:
if Config.MODEL_NAME == 'resnet18':
    model = models.resnet50(pretrained=True)
    model.fc = torch.nn.Linear(in_features=model.fc.in_features, out_features=Config.NUM_CLASSES, bias=True)
The solution is available for TensorFlow and Keras, and I would really appreciate it if anyone could help me with that in PyTorch.
Regards,
Far

Torchvision >= 0.11 includes EfficientNet, and it does have a classifier attribute. To get the in_features:
import torchvision
model = torchvision.models.efficientnet_b5()
num_ftrs = model.classifier[1].in_features
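Replacing the classifier head for the 4-class task from the question is then one line; a minimal sketch (NUM_CLASSES is taken from the question, and the model here is constructed without pretrained weights, as in the snippet above):

import torch
import torchvision

NUM_CLASSES = 4
model = torchvision.models.efficientnet_b5()
# classifier is Sequential(Dropout, Linear); swap only the final Linear
num_ftrs = model.classifier[1].in_features
model.classifier[1] = torch.nn.Linear(num_ftrs, NUM_CLASSES, bias=True)
print(model.classifier)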

The EfficientNet class from efficientnet_pytorch doesn't have a classifier attribute; you need to change in_features=model.classifier.in_features to in_features=model._fc.in_features.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 4

# EfficientNet
from efficientnet_pytorch import EfficientNet
efficientnet = EfficientNet.from_pretrained('efficientnet-b3')
efficientnet._fc = torch.nn.Linear(in_features=efficientnet._fc.in_features, out_features=NUM_CLASSES, bias=True)

# mobilenet_v2
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier = nn.Sequential(nn.Dropout(p=0.2, inplace=False),
                                     nn.Linear(in_features=mobilenet.classifier[1].in_features, out_features=NUM_CLASSES, bias=True))

# inception_v3
inception = models.inception_v3(pretrained=True)
inception.fc = nn.Linear(in_features=inception.fc.in_features, out_features=NUM_CLASSES, bias=True)
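A quick sanity check that the replaced heads now produce 4 logits (dummy inputs at typical resolutions for these models):

efficientnet.eval()
mobilenet.eval()
with torch.no_grad():
    print(efficientnet(torch.randn(1, 3, 300, 300)).shape)  # torch.Size([1, 4])
    print(mobilenet(torch.randn(1, 3, 224, 224)).shape)     # torch.Size([1, 4])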

How can I apply the model that I built against different data using statsmodels?

For example, here I use OLS to model the data in file1. I want to apply modelX to the data in file2. Is that possible?
modelX = smf.ols('y ~ a + b + c', data=file1).fit()
modelY = modelX.apply('y ~ a + b + c', data=file2)  # roughly what I would like to do
I tried:
import statsmodels.api as sm
import seaborn as sns
mpg = sns.load_dataset("mpg")
model = sm.OLS(mpg.weight, mpg.mpg)
results = model.fit()
results.apply(mpg.weight, mpg.mpg)
which raises this error:
AttributeError: 'OLSResults' object has no attribute 'apply'
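For reference, fitted statsmodels results expose predict rather than apply, which is the usual way to score new data; a minimal sketch along those lines, reusing the mpg dataset from the attempt above (the train/test split is illustrative):

import seaborn as sns
import statsmodels.formula.api as smf

mpg = sns.load_dataset("mpg")
train, test = mpg.iloc[:300], mpg.iloc[300:]
# fit the formula model on one chunk of data ...
modelX = smf.ols('weight ~ mpg', data=train).fit()
# ... then apply the fitted model to unseen rows
predictions = modelX.predict(test)
print(predictions.head())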

CNN: I'm trying to generate a confusion matrix and classification report for multiclass classification with a custom model, but the values don't seem correct

# Confusion matrix
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
plt.figure(figsize=(16, 9))
y_pred_labels = [np.argmax(label) for label in predict]
cm = confusion_matrix(test_set.classes, y_pred_labels)
# show cm
sns.heatmap(cm, annot=True, fmt='d', xticklabels=class_labels, yticklabels=class_labels)
from sklearn.metrics import classification_report
cr = classification_report(test_set.classes, y_pred_labels, target_names=class_labels)
print(cr)
[Load Data from directory](https://i.stack.imgur.com/p87gv.png)
[accuracy](https://i.stack.imgur.com/1dSab.png)
[evaluate](https://i.stack.imgur.com/LEV0X.png)
[predict](https://i.stack.imgur.com/Kiim2.png)
[cm and cr](https://i.stack.imgur.com/sQN9P.png)
[cr](https://i.stack.imgur.com/dMAaB.png)
[cm](https://i.stack.imgur.com/LzqcY.png)
The complete flow is shown in the screenshots above. Can anyone find the actual problem? How can I get correct values in the classification report? The predictions themselves look correct when I use the model.predict method and pass it the data set.
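A common cause of this symptom (an assumption here, since the generator setup is only visible in the screenshots) is a test generator that shuffles batches, so that test_set.classes no longer lines up with the order of the rows returned by model.predict. A sketch of the usual guard; the directory path and sizes are hypothetical:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator(rescale=1./255)
# shuffle=False keeps test_set.classes aligned with model.predict's output order
test_set = test_datagen.flow_from_directory(
    'data/test',             # hypothetical path
    target_size=(224, 224),  # hypothetical input size
    batch_size=32,
    class_mode='categorical',
    shuffle=False)
predict = model.predict(test_set)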

Two-stage transfer learning

I have used MobileNetV2 as the architecture with "imagenet" weights to classify between 4 classes of X-rays. I got very good accuracy, so I saved these weights (Bioiatriki_project.h5). I now want to reuse the weights for another classification task, but this time with two classes instead of four. So my code in this second part for creating the model is:
def create_model(pretrained=True):
    mobile_model = MobileNetV2(
        weights='/content/drive/MyDrive/Bioiatriki_project.h5',
        input_shape=input_img_size,
        alpha=1,
        include_top=False)
    print("mobileNetV2 has {} layers".format(len(mobile_model.layers)))
    if pretrained:
        for layer in mobile_model.layers[:-50]:
            layer.trainable = False
        for layer in mobile_model.layers[-50:]:
            layer.trainable = True
    else:
        for layer in mobile_model.layers:
            layer.trainable = True
    model = mobile_model.layers[-3].output
    model = layers.GlobalAveragePooling2D()(model)
    model = layers.Dense(num_classes, activation="softmax", kernel_initializer='uniform')(model)
    model = Model(inputs=mobile_model.input, outputs=model)
    return model
This throws the following error:
ValueError: Weight count mismatch for layer #103 (named Conv_1_bn in the current model, dense in the save file). Layer expects 4 weight(s). Received 2 saved weight(s)
So how can I fix this?
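One way to work around this kind of mismatch (a sketch, assuming Bioiatriki_project.h5 holds the weights of the full first model, including its old 4-class head) is to build the headless base uninitialized and load the saved weights by layer name, skipping layers whose shapes no longer match:

from tensorflow.keras.applications import MobileNetV2

mobile_model = MobileNetV2(
    weights=None,  # don't let Keras attempt a positional full-file load
    input_shape=input_img_size,
    alpha=1,
    include_top=False)
# by_name matches layers between the file and this model;
# skip_mismatch skips the old 4-class head, whose shapes no longer fit
mobile_model.load_weights('/content/drive/MyDrive/Bioiatriki_project.h5',
                          by_name=True, skip_mismatch=True)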

Trying to run an NLP model with an ELECTRA model instead of a BERT model

I want to run the wl-coref model with an ELECTRA model instead of a BERT model. However, I get an error message with the ELECTRA model and can't find a hint in the Hugging Face documentation on how to fix it.
I tried different BERT-like models such as roberta-base, bert-base-german-cased, and SpanBERT/spanbert-base-cased. All work.
But if I try an ELECTRA model, like google/electra-base-discriminator or german-nlp-group/electra-base-german-uncased, it doesn't work.
The error that is displayed:
out, _ = self.bert(subwords_batches_tensor, attention_mask=torch.tensor(attention_mask, device=self.config.device))
ValueError: not enough values to unpack (expected 2, got 1)
And this is the method where the error comes from: _bertify, in line 349.
Just remove the underscore _, so the line no longer unpacks two values. ELECTRA does not return a pooler output like BERT or RoBERTa do:
from transformers import AutoTokenizer, AutoModel

def bla(model_id: str):
    t = AutoTokenizer.from_pretrained(model_id)
    m = AutoModel.from_pretrained(model_id)
    print(m(**t("this is a test", return_tensors="pt")).keys())

bla("google/electra-base-discriminator")
bla("roberta-base")
Output:
odict_keys(['last_hidden_state'])
odict_keys(['last_hidden_state', 'pooler_output'])
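Applied to the failing line in _bertify, that gives the following (self.bert, subwords_batches_tensor, attention_mask, and self.config come from the wl-coref code; indexing with [0] is an alternative that works for both model families):

out, = self.bert(subwords_batches_tensor,
                 attention_mask=torch.tensor(attention_mask, device=self.config.device))
# or, robust to models that also return a pooler output:
out = self.bert(subwords_batches_tensor,
                attention_mask=torch.tensor(attention_mask, device=self.config.device))[0]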

Arbitrary shaped feedforward neural network in PyTorch

I am making a script that has some generative aspect to it, and I need to generate arbitrarily shaped feedforward NNs. The idea is to pass a list with the number of neurons in each layer, so the number of layers is determined by the length of the list:
shape = [784,64,64,64,10]
I tried something like this:
shapenn = [784, 64, 64, 64, 10]

class Net(nn.Module):
    def __init__(self, shapenn):
        super().__init__()
        self.shapenn = shapenn
        self.fcl = []  # list with fully connected layers
        for i in range(len(self.shapenn) - 1):
            self.fcl.append(nn.Linear(self.shapenn[i], self.shapenn[i + 1]))

net = Net(shapenn)
While the fully connected layers are created correctly in the list fcl, net is not initialized properly; for example, it has no net.parameters().
I am sure there is a correct way to do this; thank you very much in advance.
You need to use nn.ModuleList in place of the built-in Python list (similarly, use nn.ModuleDict in place of Python dictionaries). These behave like a normal list, except that they may only contain instances of nn.Module subclasses, and using them signals that the contained modules should be registered as submodules of your module. For example:
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fcl = nn.ModuleList()
        for i in range(5):
            self.fcl.append(nn.Linear(10, 10))

net = Net()
print([name for name, val in net.named_parameters()])
prints
['fcl.0.weight', 'fcl.0.bias', 'fcl.1.weight', 'fcl.1.bias', 'fcl.2.weight', 'fcl.2.bias', 'fcl.3.weight', 'fcl.3.bias', 'fcl.4.weight', 'fcl.4.bias']
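Putting the pieces together, a sketch of the arbitrarily shaped network from the question might look like this (the ReLU between hidden layers is an assumption, since the question doesn't specify activations):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, shapenn):
        super().__init__()
        # one Linear per consecutive pair in the shape list
        self.fcl = nn.ModuleList(
            nn.Linear(shapenn[i], shapenn[i + 1]) for i in range(len(shapenn) - 1))

    def forward(self, x):
        for i, layer in enumerate(self.fcl):
            x = layer(x)
            if i < len(self.fcl) - 1:  # no activation after the output layer
                x = torch.relu(x)
        return x

net = Net([784, 64, 64, 64, 10])
print(net(torch.randn(2, 784)).shape)  # torch.Size([2, 10])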