I am trying to read a JSON file into Pandas. It's a relatively large file (41k records), mostly text. Here's a sample of the data:
{"sanders": [{"date": "February 8, 2016 Monday", "source": "Federal News
Service", "subsource": "MSNBC \"MSNBC Live\" Interview with Sen. Bernie
Sanders (I-VT), Democratic", "quotes": ["Well, it's not very progressive to
take millions of dollars from Wall Street as well.", "That's a very good
question, and I wish I could give her a definitive answer. QUOTE SHORTENED FOR
SPACE"]}, {"date": "February 7, 2016 Sunday", "source": "CBS News Transcripts", "subsource": "SHOW: CBS FACE THE NATION 10:30 AM EST", "quotes":
["Well, John -- John, I think that`s a media narrative that goes around and
around and around. I don`t accept that media narrative.", "Well, that`s what
she said about Barack Obama in 2008. "]},
I tried:
quotes = pd.read_json("/quotes.json")
I expected it to read in cleanly because it was a file created in Python. However, I got this error:
ValueError                                Traceback (most recent call last)
<ipython-input-19-c1acfdf0dbc6> in <module>()
----> 1 quotes = pd.read_json("/Users/kate/Documents/99Antennas/Client\Files/Fusion/data/quotes.json")

/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in read_json(path_or_buf, orient, typ, dtype, convert_axes, convert_dates, keep_default_dates, numpy, precise_float, date_unit)
    208         obj = FrameParser(json, orient, dtype, convert_axes, convert_dates,
    209                           keep_default_dates, numpy, precise_float,
--> 210                           date_unit).parse()
    211
    212     if typ == 'series' or obj is None:

/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in parse(self)
    276
    277         else:
--> 278             self._parse_no_numpy()
    279
    280         if self.obj is None:

/Users/kate/venv/lib/python2.7/site-packages/pandas/io/json.pyc in _parse_no_numpy(self)
    493         if orient == "columns":
    494             self.obj = DataFrame(
--> 495                 loads(json, precise_float=self.precise_float), dtype=None)
    496         elif orient == "split":
    497             decoded = dict((str(k), v)

ValueError: Expected object or value
After reading the documentation and searching Stack Overflow, I also tried adding convert_dates=False to the parameters, but that did not correct the problem. I would welcome suggestions on how to handle this error.
Try removing the leading forward slash in the filename: "/quotes.json" points at the root of the filesystem, not at your working directory. If you run this Python code from the same directory where the file is sitting, it should work.
quotes = pd.read_json("quotes.json")
SPKoder mentioned the forward slash. I was looking for an answer when I realized I hadn't added a / when combining filename and path (i.e. c:/path/herefile.json instead of c:/path/here/file.json). Anyway, the error I received was ...
ValueError: Expected object or value
Not a very intuitive error message, but that is what causes it.
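One way to avoid that mistake entirely is to let os.path.join assemble the path. A minimal sketch (the directory and filename below are placeholders, not actual paths):
import os
import pandas as pd
# os.path.join inserts the separator, so "here" and "file.json"
# cannot silently fuse into "herefile.json"
path = os.path.join("c:/path/here", "file.json")
quotes = pd.read_json(path)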
Related
I am working on a project where I need to use census data for a couple of towns in MA. For that, I am using the cenpy library's ACS data, but I got a KeyError. The same error happens even when I try the example code described for Chicago. Here is the example code I use and the error I see:
chicago = products.ACS(2017).from_place('Chicago, IL', level='tract',
variables=['B00002*', 'B01002H_001E'])
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\tiger.py:192, in ESRILayer.query(self, raw, strict, **kwargs)
191 try:
--> 192 features = datadict["features"]
193 except KeyError:
KeyError: 'features'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
Input In [4], in <cell line: 1>()
----> 1 chicago = products.ACS(2017).from_place('Chicago, IL', level='tract',
2 variables=['B00002*', 'B01002H_001E'])
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\products.py:791, in ACS.from_place(self, place, variables, level, return_geometry, place_type, strict_within, return_bounds, replace_missing)
788 variables = self._preprocess_variables(variables)
789 variables.append("GEO_ID")
--> 791 geoms, variables, *rest = super(ACS, self).from_place(
792 place,
793 variables=variables,
794 level=level,
795 return_geometry=return_geometry,
796 place_type=place_type,
797 strict_within=strict_within,
798 return_bounds=return_bounds,
799 replace_missing=replace_missing,
800 )
801 variables["GEOID"] = variables.GEO_ID.str.split("US").apply(lambda x: x[1])
802 return_table = geoms[["GEOID", "geometry"]].merge(
803 variables.drop("GEO_ID", axis=1), how="left", on="GEOID"
804 )
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\products.py:200, in _Product.from_place(self, place, variables, place_type, level, return_geometry, geometry_precision, strict_within, return_bounds, replace_missing)
197 else:
199 placer = "STATE={} AND PLACE={}".format(placerow.STATEFP, placerow.TARGETFP)
--> 200 env = env_layer.query(where=placer)
202 print(
203 "Matched: {} to {} "
204 "within layer {}".format(
(...)
208 )
209 )
211 geoms, data = self._from_bbox(
212 env.to_crs(epsg=4326).total_bounds,
213 variables=variables,
(...)
219 replace_missing=replace_missing,
220 )
File ~\anaconda3\envs\oxe\lib\site-packages\cenpy\tiger.py:198, in ESRILayer.query(self, raw, strict, **kwargs)
196 if details is []:
197 details = "Mapserver provided no detailed error"
--> 198 raise KeyError(
199 (
200 r"Response from API is malformed. You may have "
201 r"submitted too many queries, formatted the request incorrectly, "
202 r"or experienced significant network connectivity issues."
203 r" Check to make sure that your inputs, like placenames, are spelled"
204 r" correctly, and that your geographies match the level at which you"
205 r" intend to query. The original error from the Census is:\n"
206 r"(API ERROR {}:{}({}))".format(code, msg, details)
207 )
208 )
209 todf = []
210 for i, feature in enumerate(features):
KeyError: 'Response from API is malformed. You may have submitted too many queries, formatted the request incorrectly, or experienced significant network connectivity issues. Check to make sure that your inputs, like placenames, are spelled correctly, and that your geographies match the level at which you intend to query. The original error from the Census is:\\n(API ERROR 400:Unable to complete operation.([]))'
I'm trying to get the vocabulary from some publicly-available pre-trained models (that aren't mine) using the Python interface of AllenNLP, via self.vocab. However, I'm running into problems trying to load the model. I'm looking to get the vocabulary from the dygiepp models, using the following code:
from allennlp.models.model import Model
scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
However, I get the following error:
---------------------------------------------------------------------------
ConfigurationError Traceback (most recent call last)
/tmp/local/63381207/ipykernel_7616/3549263982.py in <module>
----> 1 scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/model.py in from_archive(cls, archive_file, vocab)
480 from allennlp.models.archival import load_archive # here to avoid circular imports
481
--> 482 model = load_archive(archive_file).model
483 if vocab:
484 model.vocab.extend_from_vocab(vocab)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
231 # Instantiate model and dataset readers. Use a duplicate of the config, as it will get consumed.
232 dataset_reader, validation_dataset_reader = _load_dataset_readers(
--> 233 config.duplicate(), serialization_dir
234 )
235 model = _load_model(config.duplicate(), weights_path, serialization_dir, cuda_device)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/models/archival.py in _load_dataset_readers(config, serialization_dir)
267
268 dataset_reader = DatasetReader.from_params(
--> 269 dataset_reader_params, serialization_dir=serialization_dir
270 )
271 validation_dataset_reader = DatasetReader.from_params(
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/from_params.py in from_params(cls, params, constructor_to_call, constructor_to_inspect, **extras)
586 "type",
587 choices=as_registrable.list_available(),
--> 588 default_to_first_choice=default_to_first_choice,
589 )
590 subclass, constructor_name = as_registrable.resolve_class_name(choice)
~/anaconda3/envs/dygiepp/lib/python3.7/site-packages/allennlp/common/params.py in pop_choice(self, key, choices, default_to_first_choice, allow_class_names)
322 """{"model": "my_module.models.MyModel"} to have it imported automatically."""
323 )
--> 324 raise ConfigurationError(message)
325 return value
326
ConfigurationError: dygie not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
The error message describes how to fix the problem from the command line, but not from the Python interface. I additionally tried adding the line import dygie to my code to import the missing package, but that didn't solve the problem.
Wondering if anyone knows how to get around this?
To run this model, you'll need to have the code from this repo: https://github.com/dwadden/dygiepp.
In particular, you need to import the DyGIE dataset reader from here: https://github.com/dwadden/dygiepp/blob/master/dygie/data/dataset_readers/dygie.py#L29
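For example, a minimal sketch (assuming you have cloned the dygiepp repo; the clone location below is a placeholder, and if AllenNLP also fails to resolve the model type, importing the repo's model module may be needed as well):
import sys
# Hypothetical location of a local clone of https://github.com/dwadden/dygiepp
sys.path.append("/path/to/dygiepp")
# Importing this module registers the "dygie" dataset reader with AllenNLP,
# so the archive's config can resolve dataset_reader.type = "dygie"
import dygie.data.dataset_readers.dygie  # noqa: F401
from allennlp.models.model import Model
scierc_model = Model.from_archive('https://s3-us-west-2.amazonaws.com/ai2-s2-research/dygiepp/master/scierc.tar.gz')
vocab = scierc_model.vocab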
I'm trying to merge various .json files so I can later run sentiment analysis on them.
I already tried other approaches, but they always end up with an error. I checked whether the .json files are correctly formatted and can't find any issues there. I've also attached an example of a .json file.
The error message is attached below my code.
import glob
import json

# list all files containing News from Guardian API
files = list(glob.iglob('/Users/xxx/tempdata/articles_data/*.json'))

news_data = []
for file in files:
    news_file = open(file, "r", encoding='utf-8')
    # Read in news and store in list: news_data
    for line in news_file:
        news = json.loads(line)
        news_data.append(news)
    news_file.close()
Updated Error Output
AttributeError Traceback (most recent call last)
<ipython-input-86-3019ee85b15b> in <module>
12 # Read in news and store in list: news_data
13 for line in news_file:
---> 14 news = json.load(line)
15 news_data.append(news)
16
~/opt/anaconda3/lib/python3.8/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
291 kwarg; otherwise ``JSONDecoder`` is used.
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
295 parse_float=parse_float, parse_int=parse_int,
AttributeError: 'str' object has no attribute 'read'
2019-11-01.json
{
  "id":"business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
  "type":"article",
  "sectionId":"business",
  "sectionName":"Business",
  "webPublicationDate":"2019-11-01T14:26:19Z",
  "webTitle":"Google snaps up Fitbit for $2.1bn",
  "webUrl":"https://www.theguardian.com/business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
  "apiUrl":"https://content.guardianapis.com/business/2019/nov/01/google-snaps-up-fitbit-for-21bn",
  "fields":{
    "headline":"Google snaps up Fitbit for $2.1bn",
    "standfirst":"<p>Takeover allows web giant to take on Apple in fast-growing smartwatch and wearables business</p>",
    "trailText":"Takeover allows web giant to take on Apple in fast-growing smartwatch and wearables business",
    "byline":"Kalyeena Makortoff",
    "main":"<figure class=\"element element-image\" data-media-id=\"fc8abb0f70105fcab3aee86dea6c89e211337660\"> <img src=\"https://media.guim.co.uk/fc8abb0f70105fcab3aee86dea6c89e211337660/0_158_3571_2143/1000.jpg\" alt=\"The wireless activity tracker Zip by Fitbit Inc\" width=\"1000\" height=\"600\" class=\"gu-image\" /> <figcaption> <span class=\"element-image__caption\">The wireless activity tracker Zip by Fitbit Inc. Google has confirmed it will buy Fitbit for $2.1bn.</span> <span class=\"element-image__credit\">Photograph: Franck Robichon/EPA</span> </figcaption> </figure>",
    "body":"<p>Google has snapped up the Fitbit... ",
    "newspaperPageNumber":"38",
    "wordcount":"679",
    "firstPublicationDate":"2019-11-01T14:25:58Z",
    "isInappropriateForSponsorship":"false",
    "isPremoderated":"false",
    "lastModified":"2019-11-01T18:56:38Z",
    "newspaperEditionDate":"2019-11-02T00:00:00Z",
    "productionOffice":"UK",
    "publication":"The Guardian",
    "shortUrl":"https://gu.com/p/cjeze",
    "shouldHideAdverts":"false",
    "showInRelatedContent":"true",
    "thumbnail":"https://media.guim.co.uk/fc8abb0f70105fcab3aee86dea6c89e211337660/0_158_3571_2143/500.jpg",
    "legallySensitive":"false",
    "lang":"en",
    "isLive":"true",
    "bodyText":"Google has snapped up the Fitbit activity tracker business in a $2.1bn (\u00a31.6bn) deal that will enable the search giant to go toe-to-toe with Apple in the fast-growing smartwatch and wearables business...",
    "charCount":"4149",
    "shouldHideReaderRevenue":"false",
    "showAffiliateLinks":"false",
    "bylineHtml":"Kalyeena Makortoff"
  },
  "isHosted":false,
  "pillarId":"pillar/news",
  "pillarName":"News"
},
If you are reading JSON from a file, you should use json.load instead of json.loads; json.loads parses JSON from a string. See the json.load documentation.
For example:
import json

with open('ts.json', 'r') as f:
    content = json.load(f)
    print(content)
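Applied to the loop in the question, a minimal sketch (assuming each .json file holds a single JSON document, as in the attached example):
import glob
import json

files = glob.iglob('/Users/xxx/tempdata/articles_data/*.json')

news_data = []
for file in files:
    with open(file, "r", encoding="utf-8") as news_file:
        # json.load takes the file object itself, not its individual lines
        news_data.append(json.load(news_file))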
I've been trying to use the HuggingFace nlp library's GLUE metric to check whether a given sentence is a grammatical English sentence, but I'm getting an error and am stuck without being able to proceed.
What I've tried so far (reference and prediction are two text sentences):
!pip install transformers
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
reference="Security has been beefed across the country as a 2 day nation wide curfew came into effect."
prediction="Security has been tightened across the country as a 2-day nationwide curfew came into effect."
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
#Using BertTokenizer
encoded_reference=tokenizer.encode(reference, add_special_tokens=False)
encoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)
glue_score = glue_metric.compute(encoded_prediction, encoded_reference)
The error I'm getting:
ValueError Traceback (most recent call last)
<ipython-input-9-4c3a3ce7b583> in <module>()
----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
198 predictions = self.data["predictions"]
199 references = self.data["references"]
--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
201 return output
202
/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)
101 return pearson_and_spearman(predictions, references)
102 elif self.config_name in ["mrpc", "qqp"]:
--> 103 return acc_and_f1(predictions, references)
104 elif self.config_name in ["sst2", "mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"]:
105 return {"accuracy": simple_accuracy(predictions, references)}
/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)
60 def acc_and_f1(preds, labels):
61 acc = simple_accuracy(preds, labels)
---> 62 f1 = f1_score(y_true=labels, y_pred=preds)
63 return {
64 "accuracy": acc,
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)
1097 pos_label=pos_label, average=average,
1098 sample_weight=sample_weight,
-> 1099 zero_division=zero_division)
1100
1101
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)
1224 warn_for=('f-score',),
1225 sample_weight=sample_weight,
-> 1226 zero_division=zero_division)
1227 return f
1228
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)
1482 raise ValueError("beta should be >=0 in the F-beta score")
1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,
-> 1484 pos_label)
1485
1486 # Calculate tp_sum, pred_sum, true_sum ###
/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
1314 raise ValueError("Target is %s but average='binary'. Please "
1315 "choose another average setting, one of %r."
-> 1316 % (y_type, average_options))
1317 elif pos_label not in (None, 1):
1318 warnings.warn("Note that pos_label (set to %r) is ignored when "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
However, I'm able to get results (pearson and spearmanr) for 'stsb' using the same approach as above.
Some help and a workaround for this ('cola') would be really appreciated. Thank you.
In general, if you are seeing this error with HuggingFace, you are trying to use the f-score as a metric on a text classification problem with more than 2 classes. Pick a different metric, like "accuracy".
For this specific question:
Despite what you entered, it is trying to compute the f-score. From the example notebook, you should set the metric name as:
metric_name = "pearson" if task == "stsb" else "matthews_correlation" if task == "cola" else "accuracy"
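For illustration, a minimal sketch of what the "cola" metric expects: integer class labels rather than token IDs (the labels below are made up):
import nlp

glue_metric = nlp.load_metric('glue', name="cola")
# "cola" scores 0/1 class labels, not encoded sentences
predictions = [0, 1, 1, 0]
references = [0, 1, 0, 0]
glue_score = glue_metric.compute(predictions=predictions, references=references)
print(glue_score)  # e.g. {'matthews_correlation': ...}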
I am trying to read in a CSV file in TensorFlow.
record_defaults = [[0.0], [0.0]]
data = tf.decode_csv(r"C:\Users\USER.NAME\Desktop\tmp.txt", record_defaults=record_defaults)
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
sess.run(tf.global_variables_initializer())
print(sess.run(data))
sess.close()
Where tmp.txt is a simple CSV:
1.0,4.0
-.3,1.2
Note that I am running Windows, and Notepad++ shows that my lines end with '\r\n' (CRLF).
I get the following error when running the above code, which suggests to me that TensorFlow isn't recognizing the end-of-line character:
InvalidArgumentError Traceback (most recent call last)
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
1021 try:
-> 1022 return fn(*args)
1023 except errors.OpError as e:
C:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1003 feed_dict, fetch_list, target_list,
-> 1004 status, run_metadata)
1005
C:\Anaconda3\Lib\contextlib.py in __exit__(self, type, value, traceback)
65 try:
---> 66 next(self.gen)
67 except StopIteration:
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
465 compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466 pywrap_tensorflow.TF_GetCode(status))
467 finally:
InvalidArgumentError: Expect 2 fields but have 1 in record 0
[[Node: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_FLOAT, DT_FLOAT], field_delim=",", _device="/job:localhost/replica:0/task:0/cpu:0"](DecodeCSV/records, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_1)]]
The error persists even when I change the delimiter to a space or tab.
I've searched across Google and Stack Overflow but haven't been able to find a similar error. Any help is appreciated. Thank you!
Convert your file to Unix format. I am assuming you are working on Windows. Either way, in Notepad++, change the file's line endings as below:
From the "Edit" menu, select "EOL Conversion" -> "UNIX/OSX Format".
(Screenshot: Notepad++ "EOL Conversion" menu, "Convert to Unix")
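If you would rather do the conversion in code, a minimal sketch in Python (assuming tmp.txt fits in memory):
# Rewrite tmp.txt with Unix (LF) line endings
path = r"C:\Users\USER.NAME\Desktop\tmp.txt"
with open(path, "rb") as f:
    content = f.read()
with open(path, "wb") as f:
    f.write(content.replace(b"\r\n", b"\n"))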