JSONDecodeError: Expecting value: line 1 column 2 (char 1) - json

I am getting this error while importing a JSON dataset from a website.
JSONDecodeError: Expecting value: line 1 column 2 (char 1)
I am working in Colaboratory and wanted to import the sarcasm dataset, but since I don't know JSON, I am stuck. I have tried different placements of the backslash (\) character and also changing the -o parameter, but nothing works correctly. My code:
!wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json -o /tmp/sarcasm.json
import json
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
#importing the Sarcasm dataset from !wget --no-check-certificate \ https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json \
#-o /tmp/sarcasm.json
with open("/tmp/sarcasm.json", 'r') as f:
    datastore = json.load(f)
datastore = json.detect_encoding()
print(datastore)
sentences = []
labels = []
urls = []
I think the problem might be that the data is being downloaded in HTML format, which would have to be converted to JSON (or something compatible with it). Any help would be appreciated! :)
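One quick way to test that hypothesis (a sketch, assuming the download produced a file at /tmp/sarcasm.json at all) is to print the first bytes of the file and see whether they look like JSON, HTML, or a wget transfer log:
with open("/tmp/sarcasm.json", "r", errors="replace") as f:
    print(f.read(200))  # real JSON starts with '[' or '{'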

In my case, I was able to resolve this by replacing single quotes with double quotes:
a = "['1','2']"
json.loads(a.replace("'",'"'))
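Note that this quote swap breaks if a value itself contains an apostrophe. If the string is really a Python literal rather than JSON, ast.literal_eval may be a safer parse (an alternative sketch, not part of the original answer):
import ast

a = "['1','2']"
print(ast.literal_eval(a))  # ['1', '2'] -- parses Python literals without touching quotes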

I suspect you are saving the log of the transaction (instead of the document itself) to /tmp/sarcasm.json.
Try --output-document=sarcasm.json instead:
wget --no-check-certificate "https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json" --output-document=sarcasm.json

There is no need to detect the encoding; the json library will take care of it.
Remove the line below and try again:
datastore = json.detect_encoding()
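With that line gone, the loading block reduces to:
with open("/tmp/sarcasm.json", "r") as f:
    datastore = json.load(f)
print(datastore)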

Try using -O (capital O) instead of -o; lowercase -o is wget's log-file option, so it writes the transfer log rather than the downloaded document:
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/sarcasm.json -O /tmp/sarcasm.json

Related

dataframe results are not returned while reading csv file

I'm trying to read a CSV file; below is the code I used, and it's not returning any results, even though the CSV file at the specified path has data in it. I had an issue when I used ValidFile = spark.read.csv(ValidationFileDest, header = True): that did return a result, but the column data was interchanged and nulls were assigned, which is why I applied mode DROPMALFORMED in my code. But now it returns no result at all.
parquetextension=".parquet"
BronzeStage_Path = "dbfs:/mnt/bronze/stage/" +parentname+"/" +filename
#validated_path="dbfs:/mnt/bronze/landing/ClaimDenialsSouce/"+parentname+"/"+"current/"+"Valid/"+todayDate+"_"+"CDAValidFile"+extension
# df_sourcefilevalid.repartition(1).write.format(write_format).option("header", "true").save(BronzeStagePath)
# ValidFileSrc_BS= get_csv_files(exception_path)
from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .master("local") \
    .appName("parquet_example") \
    .getOrCreate()
spark.conf.set("spark.sql.csv.parser.columnPruning.enabled",False)
ValidFile = spark.read.format('csv').option("mode","DROPMALFORMED").options(header='true', inferSchema='true').load(ValidationFileDest)
display(ValidFile)
Make sure you are providing the correct file path (or path variable) for your CSV file. I reproduced this in our environment and was able to read the CSV file without any issue.
Reading the CSV file:
filepath="dbfs:/FileStore/test11-1.csv"
df11 = spark.read.format("csv").option("mode", "DROPMALFORMED").option("header", "true").load(filepath)
display(df11)
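If the path is correct and DROPMALFORMED still drops every row, it may help to capture the rejected rows instead of silently discarding them. A sketch using the Databricks-specific badRecordsPath option (the scratch path below is made up):
df = (spark.read.format("csv")
      .option("badRecordsPath", "dbfs:/tmp/bad_records")  # hypothetical scratch location
      .option("header", "true")
      .option("inferSchema", "true")
      .load(filepath))
display(df)
# Rows that fail parsing are written as files under dbfs:/tmp/bad_records for inspection.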

How to read a gzip jsonl from a byte offset to debug a BigQuery error?

I am loading newline-delimited JSON files into BigQuery, and the BigQuery errors give me a byte offset into the original gzipped JSONL file, such as:
JSON parsing error in row starting at position 727720: Repeated field must be imported as a JSON array. Field: named_entities.alt_form.
I have tried using the Python package indexed_gzip to read from the offset, but indexed_gzip mangles the lines, sadly. I have also tried, unsuccessfully, to get the relevant line with the built-in Python gzip package:
import gzip
import ujson as json

f = open('myfile.json.gz', 'rb')
g = gzip.GzipFile(fileobj=f)

# Map the compressed-file offset reached after each line to that line
byte_offset_to_line = {}
for line in g:
    byte_offset = f.tell()
    byte_offset_to_line[byte_offset] = line

# Find the recorded offset closest to (but below) the target
target = 727720
ls = sorted([(abs(target - k), k) for k in byte_offset_to_line.keys() if k < target])
line_of_interest = byte_offset_to_line[ls[0][1]]
text = str(line_of_interest)
malformed_json = json.loads(text[2:-3])  # strip the b'...' repr wrapper
With the above snippet I can get the nearest line's byte offset. But when I then tried uploading just that line to a test table in BQ, it worked, sadly, so I think I am not getting the correct line.
I was wondering if there is a better approach to solve this problem? I am not sure why my snippet above doesn't work, to be honest.
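One plausible explanation (an assumption, not confirmed in this thread): the position in the BigQuery error refers to the *uncompressed* data, while f.tell() reports positions in the *compressed* file, so the two never line up. A sketch that tracks offsets in the decompressed stream instead:
import gzip

target = 727720  # byte offset from the BigQuery error (assumed to be in the uncompressed data)

offset = 0
with gzip.open('myfile.json.gz', 'rb') as g:
    for line in g:
        if offset <= target < offset + len(line):
            print('line starts at uncompressed offset', offset)
            print(line.decode('utf-8'))
            break
        offset += len(line)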

Unable to print output of JSON code into a .csv file

I'm getting the following errors when trying to decode this data; the second error came after trying to compensate for the Unicode error:
Error 1:
write.writerows(subjects)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u201c' in position 160: ordinal not in range(128)
Error 2:
with open("data.csv", encode="utf-8", "w",) as writeFile:
SyntaxError: non-keyword arg after keyword arg
Code
import requests
import json
import csv
from bs4 import BeautifulSoup
import urllib
r = urllib.urlopen('https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=10000&page=1')
data = json.loads(r.read().decode('utf-8'))
subjects = []
for post in data['posts']:
    subjects.append([post['title'], post['episodeNumber'],
                     post['audioSource'], post['image']['large'], post['excerpt']['long']])
with open("data.csv", encode="utf-8", "w",) as writeFile:
    write = csv.writer(writeFile)
    write.writerows(subjects)
Using requests, and with the correction to the second part (as below), I have no problem running this. I think your first problem is a consequence of the second error (the incorrect open call).
I am on Python 3 and can run yours with my fix to the open line and with
r = urllib.request.urlopen('https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=10000&page=1')
I personally would use requests.
import requests
import csv
data = requests.get('https://thisiscriminal.com/wp-json/criminal/v1/episodes?posts=10000&page=1').json()
subjects = []
for post in data['posts']:
    subjects.append([post['title'], post['episodeNumber'],
                     post['audioSource'], post['image']['large'], post['excerpt']['long']])
with open("data.csv", encoding="utf-8", mode="w") as writeFile:
    write = csv.writer(writeFile)
    write.writerows(subjects)
For your second error, looking at the documentation for the open function, you need to use the right argument names, and name the mode argument if it is not matched positionally:
with open("data.csv", encoding="utf-8", mode="w") as writeFile:
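One more detail worth noting from the csv documentation: files passed to csv.writer should be opened with newline='' so the writer's line endings are not translated (otherwise blank rows can appear on Windows):
with open("data.csv", encoding="utf-8", mode="w", newline="") as writeFile:
    csv.writer(writeFile).writerows(subjects)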

python 3 read csv UnicodeDecodeError

I have a very simple bit of code that takes in a CSV and puts it into a 2D array. It runs fine on Python 2, but in Python 3 I get the error below. Looking through the documentation, I think I need to use .decode(). Could someone please explain how to use it in the context of my code, and why I don't need to do anything in Python 2?
Error:
  line 21, in
    for row in datareader:
  File "/usr/lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa9 in position 5002: invalid start byte
import csv
import sys
fullTable = sys.argv[1]
datareader = csv.reader(open(fullTable, 'r'), delimiter=',')
full_table = []
for row in datareader:
    full_table.append(row)
print(full_table)
open(sys.argv[1], encoding='ISO-8859-1')
The CSV contained characters which were not UTF-8, which appears to be the default. I am, however, surprised that Python 2 dealt with this without any problems; presumably that is because Python 2's csv module reads the file as raw bytes and never tries to decode it.
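Applied to the original script, the fix might look like this (a sketch, assuming the file really is ISO-8859-1/Latin-1 encoded):
import csv
import sys

fullTable = sys.argv[1]
with open(fullTable, 'r', encoding='ISO-8859-1', newline='') as f:
    datareader = csv.reader(f, delimiter=',')
    full_table = [row for row in datareader]
print(full_table)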

How do you read a file inside a zip file as text, not bytes?

A simple program for reading a CSV file inside a ZIP archive:
import csv, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
for row in csv.DictReader(items_file):
    pass
works in Python 2.7:
$ python2.7 test_zip_file_py3k.py ~/data.zip
$
but not in Python 3.2:
$ python3.2 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
  File "test_zip_file_py3k.py", line 8, in <module>
    for row in csv.DictReader(items_file):
  File "/somedir/python3.2/csv.py", line 109, in __next__
    self.fieldnames
  File "/somedir/python3.2/csv.py", line 96, in fieldnames
    self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file
in text mode?)
The csv module in Python 3 wants to see a text file, but zipfile.ZipFile.open returns a zipfile.ZipExtFile that is always treated as binary data.
How does one make this work in Python 3?
I just noticed that Lennart's answer didn't work with Python 3.1, but it does work with Python 3.2. They've enhanced zipfile.ZipExtFile in Python 3.2 (see the release notes). These changes appear to make zipfile.ZipExtFile work nicely with io.TextIOWrapper.
Incidentally, it works in Python 3.1, if you uncomment the hacky lines below to monkey-patch zipfile.ZipExtFile, not that I would recommend this sort of hackery. I include it only to illustrate the essence of what was done in Python 3.2 to make things work nicely.
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
# items_file.readable = lambda: True
# items_file.writable = lambda: False
# items_file.seekable = lambda: False
# items_file.read1 = items_file.read
items_file = io.TextIOWrapper(items_file)
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0} -- row = {1}'.format(idx, row))
If I had to support py3k < 3.2, then I would go with the solution in my other answer.
Update for 3.6+
Starting with 3.6, support for mode='U' was removed:
Changed in version 3.6: Removed support of mode='U'. Use io.TextIOWrapper for reading compressed text files in universal newlines mode.
Starting with 3.8, a Path object was added, which gives us an open() method we can call like the built-in open() function (passing newline='' in the case of our CSV) and get back an io.TextIOWrapper object that the csv readers accept. See Yuri's answer below.
You can wrap it in an io.TextIOWrapper.
items_file = io.TextIOWrapper(items_file, encoding='your-encoding', newline='')
Should work.
And if you'd just like to read a file into a string:
with ZipFile('spam.zip') as myzip:
    with myzip.open('eggs.txt') as myfile:
        eggs = myfile.read().decode('UTF-8')
Lennart's answer is on the right track (Thanks, Lennart, I voted up your answer) and it almost works:
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(items_file, encoding='iso-8859-1', newline='')
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Traceback (most recent call last):
  File "test_zip_file_py3k.py", line 7, in <module>
    items_file = io.TextIOWrapper(items_file,
                                  encoding='iso-8859-1',
                                  newline='')
AttributeError: readable
The problem appears to be that io.TextIOWrapper's first required parameter is a buffer, not a file object.
This appears to work:
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
This seems a little complex and also it seems annoying to have to read in a whole (perhaps huge) zip file into memory. Any better way?
Here it is in action:
$ cat test_zip_file_py3k.py
import csv, io, sys, zipfile
zip_file = zipfile.ZipFile(sys.argv[1])
items_file = zip_file.open('items.csv', 'rU')
items_file = io.TextIOWrapper(io.BytesIO(items_file.read()))
for idx, row in enumerate(csv.DictReader(items_file)):
    print('Processing row {0}'.format(idx))
$ python3.1 test_zip_file_py3k.py ~/data.zip
Processing row 0
Processing row 1
Processing row 2
...
Processing row 250
Starting with Python 3.8, the zipfile module has the Path object, which we can use with its open() method to get an io.TextIOWrapper object, which can be passed to the csv readers:
import csv, sys, zipfile
# Give a string path to the ZIP archive, and
# the archived file to read from
items_zipf = zipfile.Path(sys.argv[1], at='items.csv')
# Then use the open method, like you'd usually
# use the built-in open()
items_f = items_zipf.open(newline='')
# Pass the TextIO-like file to your reader as normal
for row in csv.DictReader(items_f):
    print(row)
Here's a minimal recipe to open a zip file and read a text file inside that zip. I found the trick to be the TextIOWrapper read() method, not mentioned in any answers above (BytesIO.read() was mentioned above, but Python docs recommend TextIOWrapper).
import zipfile
import io
# Create the ZipFile object
zf = zipfile.ZipFile('my_zip_file.zip')
# Read a file that is inside the zip...reads it as a binary file-like object
my_file_binary = zf.open('my_text_file_inside_zip.txt')
# Convert the binary file-like object directly to text using TextIOWrapper and its read() method
my_file_text = io.TextIOWrapper(my_file_binary, encoding='utf-8', newline='').read()
I wish they had kept the mode='U' parameter in the ZipFile open() method to do this same thing, since that was so succinct, but, alas, it is obsolete.
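For what it's worth, on 3.8+ the zipfile.Path approach shown above gets close to that succinctness (a sketch, assuming the archive member is UTF-8 text):
import zipfile

text = zipfile.Path('my_zip_file.zip', at='my_text_file_inside_zip.txt').read_text(encoding='utf-8')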