How can I apply the model that I built to different data using statsmodels? - regression

For example, here I use OLS to model the data in file1. I want to apply modelX to the file2 data. Is that possible?
modelX = sm.OLS(y~ a+b+c, data=file1).fit()
modelY = sm.modelX.apply(y~ a+b+c, data=file2)
I tried:
import statsmodels.api as sm
import seaborn as sns
mpg = sns.load_dataset("mpg")
model = sm.OLS(mpg.weight, mpg.mpg)
results = model.fit()
results.apply(mpg.weight, mpg.mpg)
and got this error:
AttributeError: 'OLSResults' object has no attribute 'apply'
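For what it's worth, a minimal sketch of one way to do this with the formula API (statsmodels.formula.api.ols), reusing the seaborn mpg dataset as a stand-in for the asker's file1/file2; the key point is that the fitted results object exposes predict(), not apply():
import seaborn as sns
import statsmodels.formula.api as smf

# Stand-ins for file1 / file2: two halves of the mpg dataset.
mpg = sns.load_dataset("mpg").dropna()
file1 = mpg.iloc[:200]
file2 = mpg.iloc[200:]

# Fit the model on the first dataset using an R-style formula.
modelX = smf.ols("mpg ~ weight + horsepower", data=file1).fit()

# Apply the fitted model to the second dataset: predict() only needs
# the columns named in the formula.
predictions = modelX.predict(file2)
print(predictions.head())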


CNN: I'm trying to generate a confusion matrix and classification report for multiclass classification with a custom model, but the values don't seem correct

# Confusion matrix
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix, classification_report

# predict, test_set and class_labels come from the earlier cells (see screenshots)
plt.figure(figsize=(16, 9))
y_pred_labels = [np.argmax(label) for label in predict]
cm = confusion_matrix(test_set.classes, y_pred_labels)

# Show the confusion matrix
sns.heatmap(cm, annot=True, fmt='d', xticklabels=class_labels, yticklabels=class_labels)

# Classification report
cr = classification_report(test_set.classes, y_pred_labels, target_names=class_labels)
print(cr)
[Load Data from directory](https://i.stack.imgur.com/p87gv.png)
[accuracy](https://i.stack.imgur.com/1dSab.png)
[evaluate](https://i.stack.imgur.com/LEV0X.png)
[predict](https://i.stack.imgur.com/Kiim2.png)
[cm and cr](https://i.stack.imgur.com/sQN9P.png)
[cr](https://i.stack.imgur.com/dMAaB.png)
[cm](https://i.stack.imgur.com/LzqcY.png)
The complete flow is as shown in the screenshots. Can anyone spot where the actual problem is, and how I can get correct values in the classification report? The predictions themselves look correct when I use model.predict and pass it a dataset.
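One common cause (an assumption here, since the generator setup is only visible in the screenshots): if test_set is a Keras flow_from_directory generator created with shuffle=True, then test_set.classes is in directory order while the predictions come out in shuffled batch order, so the confusion matrix compares misaligned labels. A sketch of the usual fix, with test_datagen and test_dir as hypothetical names:
# shuffle=False keeps test_set.classes aligned with the order of model.predict's output.
test_set = test_datagen.flow_from_directory(
    test_dir,                      # hypothetical path variable
    target_size=(224, 224),        # hypothetical size, match the model's input
    batch_size=32,
    class_mode='categorical',
    shuffle=False)                 # the important part

predict = model.predict(test_set)
y_pred_labels = np.argmax(predict, axis=1)
cm = confusion_matrix(test_set.classes, y_pred_labels)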

Two stages transfer learning

I have used MobileNetV2 as the architecture with "imagenet" weights to classify between 4 classes of X-rays. I got very good accuracy, so I saved these weights (Bioiatriki_project.h5). I now want to reuse the weights for another classification task, but with two classes this time instead of four. So my code for creating the model in this second part is:
def create_model(pretrained=True):
    mobile_model = MobileNetV2(
        weights='/content/drive/MyDrive/Bioiatriki_project.h5',
        input_shape=input_img_size,
        alpha=1,
        include_top=False)
    print("mobileNetV2 has {} layers".format(len(mobile_model.layers)))
    if pretrained:
        for layer in mobile_model.layers[:-50]:
            layer.trainable = False
        for layer in mobile_model.layers[-50:]:
            layer.trainable = True
    else:
        for layer in mobile_model.layers:
            layer.trainable = True
    model = mobile_model.layers[-3].output
    model = layers.GlobalAveragePooling2D()(model)
    model = layers.Dense(num_classes, activation="softmax", kernel_initializer='uniform')(model)
    model = Model(inputs=mobile_model.input, outputs=model)
    return model
This throws the following error:
ValueError: Weight count mismatch for layer #103 (named Conv_1_bn in the current model, dense in the save file). Layer expects 4 weight(s). Received 2 saved weight(s)
So how can I fix this?
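One way around this, sketched under the assumption that Bioiatriki_project.h5 was saved from the complete first-stage model (backbone plus the 4-class head), is not to pass the file as the weights argument of a headless MobileNetV2, where the layer layout no longer matches, but to build the headless backbone first and then overlay the matching saved weights by layer name, skipping the old head:
# Sketch assuming tf.keras and an HDF5 weights file loadable by layer name.
from tensorflow.keras.applications import MobileNetV2

mobile_model = MobileNetV2(
    weights='imagenet',
    input_shape=input_img_size,
    alpha=1,
    include_top=False)

mobile_model.load_weights(
    '/content/drive/MyDrive/Bioiatriki_project.h5',
    by_name=True,            # match layers by name instead of by position
    skip_mismatch=True)      # ignore the saved 4-class head, which has no counterpart here
Alternatively, rebuilding the exact first-stage architecture, loading the full weights into it, and branching the new 2-class head off its backbone avoids the layer-count mismatch altogether.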

Try to run an NLP model with an Electra instead of a BERT model

I want to run the wl-coref model with an Electra model instead of a BERT model. However, I get an error with the Electra model and can't find a hint in the Hugging Face documentation on how to fix it.
I tried different BERT-style models such as roberta-base, bert-base-german-cased, and SpanBERT/spanbert-base-cased. They all work.
But if I try an Electra model, like google/electra-base-discriminator or german-nlp-group/electra-base-german-uncased, it doesn't work.
The error that is displayed:
out, _ = self.bert(subwords_batches_tensor, attention_mask=torch.tensor(attention_mask, device=self.config.device))
ValueError: not enough values to unpack (expected 2, got 1)
And this is the method where the error comes from: _bertify, at line 349.
Just remove the ", _" from the unpacking. ELECTRA does not return a pooler output the way BERT or RoBERTa do:
from transformers import AutoTokenizer, AutoModel

def bla(model_id: str):
    t = AutoTokenizer.from_pretrained(model_id)
    m = AutoModel.from_pretrained(model_id)
    print(m(**t("this is a test", return_tensors="pt")).keys())

bla("google/electra-base-discriminator")
bla("roberta-base")
Output:
odict_keys(['last_hidden_state'])
odict_keys(['last_hidden_state', 'pooler_output'])
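So the failing line in _bertify would, under this reading, become something along these lines (a sketch; the variable names are taken from the traceback in the question):
# Take the first (and, for ELECTRA, only) output instead of unpacking two values.
out = self.bert(
    subwords_batches_tensor,
    attention_mask=torch.tensor(attention_mask, device=self.config.device)
)[0]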

How to add the last classification layer to an EfficientNet pre-trained model in PyTorch?

I'm using an EfficientNet pre-trained model for my image classification project in PyTorch, and my goal is to change the number of output classes, which is initially 1000, to 4.
However, when I try adding a model._fc layer, I keep seeing the error "EfficientNet' object has no attribute 'classifier'". Here is my code (Config.NUM_CLASSES = 4):
elif Config.MODEL_NAME == 'efficientnet-b3':
    from efficientnet_pytorch import EfficientNet
    model = EfficientNet.from_pretrained('efficientnet-b3')
    model._fc = torch.nn.Linear(in_features=model.classifier.in_features, out_features=Config.NUM_CLASSES, bias=True)
The situation is different when I set model.fc at the end of the ResNet part; there it clearly changes the number of output classes to 4. Here is the code for that:
if Config.MODEL_NAME == 'resnet18':
    model = models.resnet50(pretrained=True)
    model.fc = torch.nn.Linear(in_features=model.fc.in_features, out_features=Config.NUM_CLASSES, bias=True)
The solution is available for TensorFlow and Keras, and I would really appreciate it if anyone could help me with that in PyTorch.
Torchvision >= 0.11 includes EfficientNet, and it does have a classifier attribute. To get the in_features:
import torchvision
model = torchvision.models.efficientnet_b5()
num_ftrs = model.classifier[1].in_features
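To then actually swap the head for 4 classes, something along these lines should work (a sketch; in torchvision's EfficientNet the classifier is a Sequential of a Dropout followed by a Linear layer):
import torch
import torchvision

NUM_CLASSES = 4  # as in the question's Config.NUM_CLASSES

model = torchvision.models.efficientnet_b5()
num_ftrs = model.classifier[1].in_features

# Replace only the final Linear layer, keeping the Dropout in front of it.
model.classifier[1] = torch.nn.Linear(num_ftrs, NUM_CLASSES)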
The EfficientNet class from efficientnet_pytorch doesn't have a classifier attribute; you need to change in_features=model.classifier.in_features to in_features=model._fc.in_features.
import torch
import torch.nn as nn
import torchvision.models as models
from efficientnet_pytorch import EfficientNet

NUM_CLASSES = 4

# EfficientNet
efficientnet = EfficientNet.from_pretrained('efficientnet-b3')
efficientnet._fc = torch.nn.Linear(in_features=efficientnet._fc.in_features, out_features=NUM_CLASSES, bias=True)

# mobilenet_v2
mobilenet = models.mobilenet_v2(pretrained=True)
mobilenet.classifier = nn.Sequential(
    nn.Dropout(p=0.2, inplace=False),
    nn.Linear(in_features=mobilenet.classifier[1].in_features, out_features=NUM_CLASSES, bias=True))

# inception_v3
inception = models.inception_v3(pretrained=True)
inception.fc = nn.Linear(in_features=inception.fc.in_features, out_features=NUM_CLASSES, bias=True)

How to extract images and labels from a CSV file and create a training set using torch?

I downloaded a dataset for facial keypoint detection. The images and labels are in a CSV file, which I read with pandas, but I don't know how to convert them into tensors and load them into a data loader for training.
import numpy as np
import pandas as pd

dataframe = pd.read_csv("training_facial_keypoints.csv")
dataframe['Image'] = dataframe['Image'].apply(lambda i: np.fromstring(i, sep=' '))
dataframe = dataframe.dropna()
images_array = np.vstack(dataframe['Image'].values) / 255.0
images_array = images_array.astype(np.float32)
images_array = images_array.reshape(-1, 96, 96, 1)
print(images_array.shape)
labels_array = dataframe[dataframe.columns[:-1]].values
labels_array = (labels_array - 48) / 48
labels_array = labels_array.astype(np.float32)
I have the images and labels in two arrays. How do I create a training set from them, apply transforms, and then load it with a DataLoader?
Create a subclass of torch.utils.data.Dataset and fill it with your data.
You can pass the desired torchvision.transforms to it and apply them to your data in __getitem__(self, index).
Then you can pass it to torch.utils.data.DataLoader, which allows multi-threaded loading of data.
PyTorch also has thorough documentation that you should refer to first.
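A minimal sketch of such a Dataset for the arrays built above (the class name and the optional transform handling are my own choices, not from the question):
import torch
from torch.utils.data import Dataset, DataLoader

class FacialKeypointsDataset(Dataset):
    """Wraps the (N, 96, 96, 1) image array and the keypoint label array."""
    def __init__(self, images, labels, transform=None):
        self.images = images
        self.labels = labels
        self.transform = transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        image = self.images[index]
        label = self.labels[index]
        if self.transform is not None:
            image = self.transform(image)                      # e.g. torchvision.transforms.ToTensor()
        else:
            image = torch.from_numpy(image).permute(2, 0, 1)   # HWC -> CHW
        return image, torch.from_numpy(label)

# Usage: wrap the arrays and hand the dataset to a DataLoader.
train_set = FacialKeypointsDataset(images_array, labels_array)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=2)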