How to use HuggingFace nlp library's GLUE for CoLA - deep-learning

I've been trying to use the HuggingFace nlp library's GLUE metric to check whether a given sentence is a grammatical English sentence, but I'm getting an error and am stuck without being able to proceed.
What I've tried so far (reference and prediction are two text sentences):
!pip install transformers
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
reference="Security has been beefed across the country as a 2 day nation wide curfew came into effect."
prediction="Security has been tightened across the country as a 2-day nationwide curfew came into effect."
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
#Using BertTokenizer
encoded_reference=tokenizer.encode(reference, add_special_tokens=False)
encoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)
glue_score = glue_metric.compute(encoded_prediction, encoded_reference)
Error I'm getting:
ValueError Traceback (most recent call last)
<ipython-input-9-4c3a3ce7b583> in <module>()
----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
198 predictions = self.data["predictions"]
199 references = self.data["references"]
--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
201 return output
202
/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)
101 return pearson_and_spearman(predictions, references)
102 elif self.config_name in ["mrpc", "qqp"]:
--> 103 return acc_and_f1(predictions, references)
104 elif self.config_name in ["sst2", "mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"]:
105 return {"accuracy": simple_accuracy(predictions, references)}
/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)
60 def acc_and_f1(preds, labels):
61 acc = simple_accuracy(preds, labels)
---> 62 f1 = f1_score(y_true=labels, y_pred=preds)
63 return {
64 "accuracy": acc,
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)
1097 pos_label=pos_label, average=average,
1098 sample_weight=sample_weight,
-> 1099 zero_division=zero_division)
1100
1101
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)
1224 warn_for=('f-score',),
1225 sample_weight=sample_weight,
-> 1226 zero_division=zero_division)
1227 return f
1228
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)
1482 raise ValueError("beta should be >=0 in the F-beta score")
1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,
-> 1484 pos_label)
1485
1486 # Calculate tp_sum, pred_sum, true_sum ###
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
1314 raise ValueError("Target is %s but average='binary'. Please "
1315 "choose another average setting, one of %r."
-> 1316 % (y_type, average_options))
1317 elif pos_label not in (None, 1):
1318 warnings.warn("Note that pos_label (set to %r) is ignored when "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
However, I'm able to get results (pearson and spearmanr) for 'stsb' with the same approach as above.
Some help and a workaround for 'cola' would be really appreciated. Thank you.

In general, if you are seeing this error with HuggingFace, you are trying to use the f-score as a metric on a text classification problem with more than 2 classes. Pick a different metric, like "accuracy".
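For context, the ValueError is raised by scikit-learn's f1_score, which defaults to average='binary' and refuses targets with more than two classes. Outside the GLUE wrapper you would pass an explicit average; a generic scikit-learn sketch (not part of the nlp metric API):
from sklearn.metrics import f1_score

# Multiclass targets need an explicit averaging strategy
print(f1_score(y_true=[0, 1, 2, 2], y_pred=[0, 2, 1, 2], average='macro'))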
For this specific question:
Despite what you entered, it is trying to compute the f-score. From the example notebook, you should set the metric name as:
metric_name = "pearson" if task == "stsb" else "matthews_correlation" if task == "cola" else "accuracy"
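If you do want to call the CoLA metric directly, note that it compares predicted class labels (0 = ungrammatical, 1 = grammatical) against reference labels, not encoded sentences. A minimal sketch with hypothetical labels, passing the task config positionally:
import nlp

# CoLA reports Matthews correlation over 0/1 grammaticality labels
glue_metric = nlp.load_metric('glue', 'cola')

predictions = [0, 1, 1, 0]  # hypothetical classifier outputs
references = [0, 1, 0, 0]   # hypothetical gold labels
glue_score = glue_metric.compute(predictions=predictions, references=references)
print(glue_score)  # {'matthews_correlation': ...}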

Related

Error while using cenpy library in python

I am working on a project where I need to use census data for a couple of towns in MA. For that, I am using the cenpy library's ACS product, but I got a KeyError. The same error happens even when I try the example code described for Chicago. Here is the example code I use and the error I see:
from cenpy import products

chicago = products.ACS(2017).from_place('Chicago, IL', level='tract',
                                        variables=['B00002*', 'B01002H_001E'])
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\tiger.py:192, in ESRILayer.query(self, raw, strict, **kwargs)
191 try:
--> 192 features = datadict["features"]
193 except KeyError:
KeyError: 'features'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 chicago = products.ACS(2017).from_place('Chicago, IL', level='tract',
2 variables=['B00002*', 'B01002H_001E'])
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\products.py:791, in ACS.from_place(self, place, variables, level, return_geometry, place_type, strict_within, return_bounds, replace_missing)
788 variables = self._preprocess_variables(variables)
789 variables.append("GEO_ID")
--> 791 geoms, variables, *rest = super(ACS, self).from_place(
792 place,
793 variables=variables,
794 level=level,
795 return_geometry=return_geometry,
796 place_type=place_type,
797 strict_within=strict_within,
798 return_bounds=return_bounds,
799 replace_missing=replace_missing,
800 )
801 variables["GEOID"] = variables.GEO_ID.str.split("US").apply(lambda x: x[1])
802 return_table = geoms[["GEOID", "geometry"]].merge(
803 variables.drop("GEO_ID", axis=1), how="left", on="GEOID"
804 )
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\products.py:200, in _Product.from_place(self, place, variables, place_type, level, return_geometry, geometry_precision, strict_within, return_bounds, replace_missing)
197 else:
199 placer = "STATE={} AND PLACE={}".format(placerow.STATEFP, placerow.TARGETFP)
--> 200 env = env_layer.query(where=placer)
202 print(
203 "Matched: {} to {} "
204 "within layer {}".format(
(...)
208 )
209 )
211 geoms, data = self._from_bbox(
212 env.to_crs(epsg=4326).total_bounds,
213 variables=variables,
(...)
219 replace_missing=replace_missing,
220 )
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\tiger.py:198, in ESRILayer.query(self, raw, strict, **kwargs)
196 if details is []:
197 details = "Mapserver provided no detailed error"
--> 198 raise KeyError(
199 (
200 r"Response from API is malformed. You may have "
201 r"submitted too many queries, formatted the request incorrectly, "
202 r"or experienced significant network connectivity issues."
203 r" Check to make sure that your inputs, like placenames, are spelled"
204 r" correctly, and that your geographies match the level at which you"
205 r" intend to query. The original error from the Census is:\n"
206 r"(API ERROR {}:{}({}))".format(code, msg, details)
207 )
208 )
209 todf = []
210 for i, feature in enumerate(features):
KeyError: 'Response from API is malformed. You may have submitted too many queries, formatted the request incorrectly, or experienced significant network connectivity issues. Check to make sure that your inputs, like placenames, are spelled correctly, and that your geographies match the level at which you intend to query. The original error from the Census is:\\n(API ERROR 400:Unable to complete operation.([]))'

How to minimise loss with a constraint in PyTorch?

How do I apply a constraint while minimising a loss?
I am trying to minimise the MSE loss together with a constraint loss, but the constraint loss was increasing instead of decreasing. Then I tried to minimise only the constraint loss, and it throws the following error.
import torch
import torch.nn as nn

class Edge_Detector(nn.Module):
    def __init__(self, kernel_size, padding):
        torch.manual_seed(1)
        super(Edge_Detector, self).__init__()
        self.sobelx = nn.Conv2d(1, 1, kernel_size=kernel_size, padding=padding, bias=False)
        self.relu = nn.ReLU()
        self.sobely = nn.Conv2d(1, 1, kernel_size=kernel_size, padding=padding, bias=False)

    def forward(self, x):
        x1 = self.sobelx(x)
        x2 = self.sobely(x)
        x = self.relu(x1 + x2)
        return x

    def loss(self, x, y):
        x = x.view(x.size(0), -1)
        y = y.view(y.size(0), -1).float()
        sobelx = self.sobelx.weight.data.squeeze().squeeze()
        sobely = self.sobely.weight.data.squeeze().squeeze()
        loss_mse = nn.MSELoss()(x, y)
        loss_constrain = torch.matmul(sobelx, sobely.transpose(0, 1)).trace()
        #print('mse_loss : ', loss_mse)
        #print('constrain_loss : ', loss_constrain)
        #total_loss = loss_mse + loss_constrain
        return loss_constrain
Error message:
RuntimeError Traceback (most recent call last)
<ipython-input-67-28b5b5719682> in <module>()
----> 1 learn.fit_one_cycle(15, 5e-2) #training for 4 epochs with lr=1e-3
13 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Since you did not post the whole stack trace I can't say for sure, but I am pretty certain what happens is that you call backward on loss_constrain, which throws an error because it has requires_grad=False. This is because in your calculation of loss_constrain you call the data attribute of the Parameter class (self.sobelx.weight); the parameter itself has requires_grad set to True, but after calling data it is set to False.
Just remove the .data part and see whether it works.
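A minimal sketch of the loss method with .data removed (everything else as above; here the commented-out total loss is returned, on the assumption that you ultimately want both terms):
def loss(self, x, y):
    x = x.view(x.size(0), -1)
    y = y.view(y.size(0), -1).float()
    # Use the weights directly so the constraint stays in the autograd graph
    sobelx = self.sobelx.weight.squeeze()
    sobely = self.sobely.weight.squeeze()
    loss_mse = nn.MSELoss()(x, y)
    loss_constrain = torch.matmul(sobelx, sobely.transpose(0, 1)).trace()
    return loss_mse + loss_constrain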

What type of model can I use to train this data?

I have downloaded and labeled data from
http://archive.ics.uci.edu/ml/datasets/pamap2+physical+activity+monitoring
My task is to gain insight into the data from what is given. I have around 34 attributes in a data frame (all clean, no NaN values)
and want to train a model for one target attribute, 'heart_rate', given the rest of the attributes (all are numeric readings from a participant performing various activities).
I wanted to use a linear regression model but cannot use my dataframe for some reason; however, I do not mind starting from 0 if you think I am doing it wrong.
my DataFrame columns:
> Index(['timestamp', 'activity_ID', 'heart_rate', 'IMU_hand_temp',
> 'hand_acceleration_16_1', 'hand_acceleration_16_2',
> 'hand_acceleration_16_3', 'hand_gyroscope_rad_7',
> 'hand_gyroscope_rad_8', 'hand_gyroscope_rad_9',
> 'hand_magnetometer_μT_10', 'hand_magnetometer_μT_11',
> 'hand_magnetometer_μT_12', 'IMU_chest_temp', 'chest_acceleration_16_1',
> 'chest_acceleration_16_2', 'chest_acceleration_16_3',
> 'chest_gyroscope_rad_7', 'chest_gyroscope_rad_8',
> 'chest_gyroscope_rad_9', 'chest_magnetometer_μT_10',
> 'chest_magnetometer_μT_11', 'chest_magnetometer_μT_12',
> 'IMU_ankle_temp', 'ankle_acceleration_16_1', 'ankle_acceleration_16_2',
> 'ankle_acceleration_16_3', 'ankle_gyroscope_rad_7',
> 'ankle_gyroscope_rad_8', 'ankle_gyroscope_rad_9',
> 'ankle_magnetometer_μT_10', 'ankle_magnetometer_μT_11',
> 'ankle_magnetometer_μT_12', 'Intensity'],
> dtype='object')
first 5 rows:
timestamp activity_ID heart_rate IMU_hand_temp hand_acceleration_16_1 hand_acceleration_16_2 hand_acceleration_16_3 hand_gyroscope_rad_7 hand_gyroscope_rad_8 hand_gyroscope_rad_9 ... ankle_acceleration_16_1 ankle_acceleration_16_2 ankle_acceleration_16_3 ankle_gyroscope_rad_7 ankle_gyroscope_rad_8 ankle_gyroscope_rad_9 ankle_magnetometer_μT_10 ankle_magnetometer_μT_11 ankle_magnetometer_μT_12 Intensity
2928 37.66 lying 100.0 30.375 2.21530 8.27915 5.58753 -0.004750 0.037579 -0.011145 ... 9.73855 -1.84761 0.095156 0.002908 -0.027714 0.001752 -61.1081 -36.8636 -58.3696 low
2929 37.67 lying 100.0 30.375 2.29196 7.67288 5.74467 -0.171710 0.025479 -0.009538 ... 9.69762 -1.88438 -0.020804 0.020882 0.000945 0.006007 -60.8916 -36.3197 -58.3656 low
2930 37.68 lying 100.0 30.375 2.29090 7.14240 5.82342 -0.238241 0.011214 0.000831 ... 9.69633 -1.92203 -0.059173 -0.035392 -0.052422 -0.004882 -60.3407 -35.7842 -58.6119 low
2931 37.69 lying 100.0 30.375 2.21800 7.14365 5.89930 -0.192912 0.019053 0.013374 ... 9.66370 -1.84714 0.094385 -0.032514 -0.018844 0.026950 -60.7646 -37.1028 -57.8799 low
2932 37.70 lying 100.0 30.375 2.30106 7.25857 6.09259 -0.069961 -0.018328 0.004582 ... 9.77578 -1.88582 0.095775 0.001351 -0.048878 -0.006328 -60.2040 -37.1225 -57.8847 low
If you check the timestamp attribute you will see that the data was acquired on a millisecond scale, so it might be a good idea to resample the dataframe to one row every 2-5 seconds and train the model on that.
Also, as an option, I want to use one of these models for this task: linear, polynomial, or multiple linear regression, agglomerative clustering, or k-means clustering.
my code:
target = subject1.DataFrame(data.target, columns=["heart_rate"])
X = df
y = target["heart_rate"]
lm = linear_model.LinearRegression()
model = lm.fit(X,y)
predictions = lm.predict(X)
print(predictions)[0:5]
Error:
AttributeError Traceback (most recent call last)
<ipython-input-93-b0c3faad3a98> in <module>()
3 #heart_rate
4 # Put the target (housing value -- MEDV) in another DataFrame
----> 5 target = subject1.DataFrame(data.target, columns=["heart_rate"])
c:\python36\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5177 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5178 return self[name]
-> 5179 return object.__getattribute__(self, name)
5180
5181 def __setattr__(self, name, value):
AttributeError: 'DataFrame' object has no attribute 'DataFrame'
To fix the error I have used:
subject1.columns = subject1.columns.str.strip()
but it still did not work.
Thank you, and sorry if I was not precise enough.
Try this:
from scipy.stats import zscore
from sklearn import linear_model
from sklearn.model_selection import train_test_split

X = df.drop("heart_rate", axis=1)  # note: zscore below needs all-numeric columns, so drop or encode categorical ones first
y = df[["heart_rate"]]
X = X.apply(zscore)
test_size = 0.30
seed = 7
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=seed)
lm = linear_model.LinearRegression()
model = lm.fit(X_train, y_train)
predictions = lm.predict(X_test)
print(predictions[0:5])
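As a follow-up sketch (using the split above), score the held-out test set to see how well the model generalises:
from sklearn.metrics import mean_squared_error, r2_score

test_predictions = lm.predict(X_test)
print('R^2:', r2_score(y_test, test_predictions))
print('MSE:', mean_squared_error(y_test, test_predictions))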

Hyperparameter tuning using tensorboard.plugins.hparams api with custom loss function

I am building a neural network with my own custom loss function (pretty long and complicated). My network is unsupervised, so my input and expected output are identical; also, at the moment I am using one single input (just trying to optimize the loss for a single input).
I am trying to use the tensorboard.plugins.hparams API for hyperparameter tuning and don't know how to incorporate my custom loss function there. I'm trying to follow the code suggested on the TensorFlow 2.0 website.
This is what the website suggests:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_ACCURACY = 'accuracy'
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_DROPOUT, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_ACCURACY, display_name='Accuracy')],
    )
I need to change that, as I don't want to use the dropout layer, so I can just delete it. In terms of METRIC_ACCURACY, I don't want to use accuracy, as it has no use in my model; I want to use my custom loss function instead. If I were to do a regular model fit, it would look like this:
model.compile(optimizer=adam,loss=dl_tf_loss, metrics=[dl_tf_loss])
So I tried to change the suggested code into the following code but I get an error and am wondering how I should change it so that it suits my needs. Here is what I tried:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
#METRIC_LOSS = dl_tf_loss
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[hp.Metric(dl_tf_loss, display_name='Loss')],
    )
It gives me the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-26-27d079c6be49> in <module>()
5
6 with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
----> 7 hp.hparams_config(hparams=[HP_NUM_UNITS, HP_OPTIMIZER],metrics=[hp.Metric(dl_tf_loss, display_name='Loss')])
8
3 frames
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in hparams_config(hparams, metrics, time_created_secs)
127 hparams=hparams,
128 metrics=metrics,
--> 129 time_created_secs=time_created_secs,
130 )
131 return _write_summary("hparams_config", pb)
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in hparams_config_pb(hparams, metrics, time_created_secs)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in <listcomp>(.0)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in as_proto(self)
532 name=api_pb2.MetricName(
533 group=self._group,
--> 534 tag=self._tag,
535 ),
536 display_name=self._display_name,
TypeError: <tensorflow.python.eager.def_function.Function object at 0x7f9f3a78e5c0> has type Function, but expected one of: bytes, unicode
I also tried running the following code:
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[dl_tf_loss],
    )
but got the following error:
AttributeError Traceback (most recent call last)
<ipython-input-28-6778bdf7f1b1> in <module>()
8
9 with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
---> 10 hp.hparams_config(hparams=[HP_NUM_UNITS, HP_OPTIMIZER],metrics=[dl_tf_loss])
2 frames
/usr/local/lib/python3.6/dist-packages/tensorboard/plugins/hparams/summary_v2.py in <listcomp>(.0)
161 domain.update_hparam_info(info)
162 hparam_infos.append(info)
--> 163 metric_infos = [metric.as_proto() for metric in metrics]
164 experiment = api_pb2.Experiment(
165 hparam_infos=hparam_infos,
AttributeError: 'Function' object has no attribute 'as_proto'
Would greatly appreciate any help.
Thanks in advance!
I figured it out.
The original METRIC_ACCURACY that I changed to METRIC_LOSS is apparently just a name: I needed to pass 'dl_tf_loss' as a string, not as the function itself.
In the subsequent parts of the tuning I still needed to write my fit command, and there I inserted the actual loss function, as I showed in the regular fit example above.
I highly recommend this as a way of tuning the hyperparameters.
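Concretely, the working config would look something like this (a sketch; the tag must match the string under which the loss is actually logged during fitting, assumed here to be 'dl_tf_loss'):
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
METRIC_LOSS = 'dl_tf_loss'  # a string tag, not the loss function object

with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams_config(
        hparams=[HP_NUM_UNITS, HP_OPTIMIZER],
        metrics=[hp.Metric(METRIC_LOSS, display_name='Loss')],
    )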
You might be interested in this demo. Compiling the model with dl_tf_loss as a metric will waste time; it is possible to let hp.Metric know about other recorded summaries in different directories using the group argument.
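For instance, following that demo's convention, where Keras writes validation summaries under a 'validation' subdirectory (a sketch):
hp.Metric('epoch_loss', group='validation', display_name='Loss (val.)')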

Error in eval(expr, envir, enclos) while using Predict function

When I try to run predict() on the dataset, it keeps giving me this error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Here is part of the dataset:
LoanRange Loan.Type N WAFICO WALTV WAOrigRev WAPTValue
1 0-99999 Conventional 109 722.5216 63.55385 6068.239 0.6031879
2 0-99999 FHA 30 696.6348 80.00100 7129.650 0.5623650
3 0-99999 VA 13 698.6986 74.40525 7838.894 0.4892977
4 100000-149999 Conventional 860 731.2333 68.25817 6438.330 0.5962638
5 100000-149999 FHA 285 673.2256 82.42225 8145.068 0.5211495
6 100000-149999 VA 125 704.1686 87.71306 8911.461 0.5020074
7 150000-199999 Conventional 1291 738.7164 70.08944 8125.979 0.6045117
8 150000-199999 FHA 403 672.0891 84.65318 10112.192 0.5199632
9 150000-199999 VA 195 694.1885 90.77495 10909.393 0.5250807
10 200000-249999 Conventional 1162 740.8614 70.65027 8832.563 0.6111419
11 200000-249999 FHA 348 667.6291 85.13457 11013.856 0.5374226
12 200000-249999 VA 221 702.9796 91.76759 11753.642 0.5078298
13 250000-299999 Conventional 948 742.0405 72.22742 9903.160 0.6106858
Following is the code used to model the count data N after determining the overdispersion:
model2=glm(N~Loan.Type+WAFICO+WALTV+WAOrigRev+WAPTValue, family=quasipoisson(link = "log"), data = DF)
summary(model2)
This is what I have done to create a sequence of counts and use the predict function:
countaxis <- seq(0, 1500, 150)
Y <- predict(model2, list(N=countaxis, type = "response"))
At this step, I get the error:
Error in eval(expr, envir, enclos) : object 'LoanRange' not found
Can someone please point out where the problem is?
Think about what exactly you are trying to predict. You are providing the predict function values of N (via countaxis), but in fact the way you set up your model, N is your response variable and the remaining variables are the predictors. That's why R is asking for LoanRange. It actually needs values for LoanRange, Loan.Type, ..., WAPTValue in order to predict N. So you need to feed predict inputs that let the model try to predict N.
For example, you could do something like this:
# create some fake data to predict N
newdata1 = data.frame(rbind(c("0-99999", "Conventional", 722.5216, 63.55385, 6068.239, 0.6031879),
c("150000-199999", "VA", 12.5216, 3.55385, 60.239, 0.0031879)))
colnames(newdata1) = c("LoanRange" ,"Loan.Type", "WAFICO" ,"WALTV" , "WAOrigRev" ,"WAPTValue")
# ensure that numeric variables are indeed numeric and not factors
newdata1$WAFICO = as.numeric(as.character(newdata1$WAFICO))
newdata1$WALTV = as.numeric(as.character(newdata1$WALTV))
newdata1$WAPTValue = as.numeric(as.character(newdata1$WAPTValue))
newdata1$WAOrigRev = as.numeric(as.character(newdata1$WAOrigRev))
# make predictions - this will output values of N
predict(model2, newdata = newdata1, type = "response")