Error parsing JSON file in python 3.4 - json

I am trying to load a JSON file from a URL and parse it on Python 3.4, but I get a few errors and have no idea what they are pointing to. I verified the JSON at the URL with jsonlint.com and it seems fine. data.read() returns bytes, so I cast it to str. The code is:
import urllib.request
import json
inp = input("enter url :")
if len(inp)<1: inp ='http://python-data.dr-chuck.net/comments_42.json'
data=urllib.request.urlopen(inp)
data_str = str(data.read())
print(type(data_str))
parse_data = json.loads(data_str)
print(type(parse_data))
The error that I'm getting is:

The expression str(data.read()) doesn't "cast" your bytes into a string; it just produces a string representation of them. You can see this if you print data_str: it's a str beginning with b'.
To actually decode the JSON, you need to do data_str = data.read().decode('utf-8')
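A minimal sketch of the whole snippet with that fix applied (the fallback URL is the one from the question; the top-level JSON there happens to be an object, so the result is typically a dict):

import urllib.request
import json

inp = input("enter url :")
if len(inp) < 1:
    inp = 'http://python-data.dr-chuck.net/comments_42.json'

data = urllib.request.urlopen(inp)
# Decode the raw bytes to text before handing them to json.loads
data_str = data.read().decode('utf-8')
parse_data = json.loads(data_str)
print(type(parse_data))  # typically <class 'dict'>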


TypeError: Object of type bytes is not JSON serializable - python 3 - try to post base64 image data

I received this error after trying to convert the data to JSON for a POST request:
TypeError: Object of type 'bytes' is not JSON serializable
My code:
dict_data: dict = {
    'img': base64.b64encode(urlopen(obj['recognition_image_path']).read())
}
json_data: str = json.dumps(dict_data)
I read the image from a URL and convert it to base64, then get the error when I try to convert the data to JSON.
Please help
You need to convert it to a string first by calling .decode, since you can't JSON-serialize bytes without knowing their encoding.
(base64.b64encode returns a bytes, not a string.)
import base64
from urllib.request import urlopen
import json
dict_data: dict = {
    'img': base64.b64encode(urlopen(obj['recognition_image_path']).read()).decode('utf8')
}
json_data: str = json.dumps(dict_data)
Edit: rewrote the answer to address the actual question (encode/decode).
I would do it in a two-step process:
First, encode the image file into Base64.
Then decode the encoded result into a string.
And then transmit the JSON data using that decoded string.
Here is an example:
Let's say the image file is is_image_file
Encode the image file by:
enc_image_file = base64.b64encode(is_image_file.read())
Next decode it by:
send_image_file = enc_image_file.decode()
Finally, transmit the data using send_image_file in a JsonResponse (or however it will be used).
Of course, add import base64 before calling the function.
Note: using json.dumps(dict_data) you get a string, which will not load the image(s).
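A minimal sketch of those steps put together (the local file name and the final json.dumps call are assumptions for illustration, not part of the original question):

import base64
import json

# Step 1: encode the raw image bytes as Base64 (the result is still bytes)
with open('picture.jpg', 'rb') as is_image_file:  # hypothetical local file
    enc_image_file = base64.b64encode(is_image_file.read())

# Step 2: decode the Base64 bytes into a plain string
send_image_file = enc_image_file.decode()

# Step 3: the string is now JSON-serializable and can be transmitted
json_data = json.dumps({'img': send_image_file})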

Parse a JSON file with ISODate in Python

I have a JSON file with some lines like:
"updatedAt" : ISODate("2018-11-20T09:32:16.732+0000"),
I tried json.loads but it raises json.decoder.JSONDecodeError: Expecting value: line 2 column 13 (char 15).
I believe the problem is the ISODate(), but how can I handle that with Python?
Many thanks
This is not valid JSON to begin with. I guess the ISODate("...") is generated by MongoDB, maybe from dumping the ISODate() helper directly into the JSON instead of its string representation?
In any case, you could use a regex on the whole JSON string to strip the ISODate("...") wrapper, keep the date as a plain string, and then use python-dateutil to parse the value into a datetime.datetime.
Something to the tune of
import json
import dateutil.parser
import re
json_str = ....
clean_json = re.compile(r'ISODate\(("[^"]+")\)').sub(r'\1', json_str)
json_obj = json.loads(clean_json)
# use dateutil.parser.parse(s) to parse each date into a datetime.datetime
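A self-contained sketch of that approach, with a made-up one-line JSON fragment standing in for the real file:

import json
import re
import dateutil.parser

# Hypothetical input mimicking the Mongo-style dump from the question
json_str = '{"updatedAt": ISODate("2018-11-20T09:32:16.732+0000")}'

# Strip the ISODate(...) wrapper, keeping only the quoted date string
clean_json = re.sub(r'ISODate\(("[^"]+")\)', r'\1', json_str)
json_obj = json.loads(clean_json)

# Parse the remaining ISO-8601 string into a datetime.datetime
updated_at = dateutil.parser.parse(json_obj['updatedAt'])
print(updated_at)  # 2018-11-20 09:32:16.732000+00:00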

Python3 PUT string, not bytes

I am trying to get Python 3 to PUT JSON info in string format to an API, and I want to do it without
import requests
Thus far I am stuck with this code:
import urllib.request
import urllib.parse
import json
url = "http://www.example.com"
DATA = json.dumps({'grades': {"math": "92", "chem": "39"}})
req = urllib.request.Request(url, data=DATA, method='PUT')
response = urllib.request.urlopen(req)
Naturally this raises the error:
raise TypeError(msg)
TypeError: POST data should be bytes or an iterable of bytes. It cannot be of type str.
To get rid of the error I can do:
DATA = str.encode(DATA)
But this turns my data into bytes instead of the string I want to send. Is there a way to PUT strings without importing "requests"? (Importing anything that comes with the Python install is OK.) Or can I PUT a *.json file?
Essentially I am trying to do the opposite of this.
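For what it's worth, a minimal standard-library sketch of the PUT, assuming the API simply expects the JSON text in the request body (example.com stands in for the real endpoint). HTTP bodies are always transmitted as bytes on the wire, so encoding the string before sending does not change the JSON text the server sees:

import json
import urllib.request

url = "http://www.example.com"
DATA = json.dumps({'grades': {"math": "92", "chem": "39"}})

# urllib requires bytes for the body; the server still receives the same JSON text
req = urllib.request.Request(
    url,
    data=DATA.encode('utf-8'),
    headers={'Content-Type': 'application/json'},
    method='PUT',
)
response = urllib.request.urlopen(req)
print(response.status)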

'HTTPResponse' object has no attribute 'decode'

I was getting the following error initially when I tried to run the code below:
Error: the JSON object must be str, not 'bytes'
import urllib.request
import json
search = '230 boulder lane cottonwood az'
search = search.replace(' ','%20')
places_api_key = 'AIzaSyDou2Q9Doq2q2RWJWncglCIt0kwZ0jcR5c'
url = 'https://maps.googleapis.com/maps/api/place/textsearch/json?query='+search+'&key='+places_api_key
json_obj = urllib.request.urlopen(url)
data = json.load(json_obj)
for item in data['results']:
    print(item['formatted_address'])
    print(item['types'])
After making some troubleshooting changes like:
json_obj = urllib.request.urlopen(url)
obj = json.load(json_obj)
data = json_obj.readall().decode('utf-8')
Error - 'HTTPResponse' object has no attribute 'decode'
I am getting the error above. I have tried multiple posts on Stack Overflow and nothing seems to work. I have uploaded the entire code; if anyone can get it to work I'll be very grateful. What I don't understand is why the same thing worked for others and not for me.
Thanks!
urllib.request.urlopen returns an HTTPResponse object, which cannot be JSON-decoded directly (because it is a byte stream).
So you'll instead want:
# Convert from bytes to text
resp_text = urllib.request.urlopen(url).read().decode('UTF-8')
# Use loads to decode from text
json_obj = json.loads(resp_text)
However, if you print resp_text from your example, you'll notice it is actually XML, so you'll want an XML reader:
resp_text = urllib.request.urlopen(url).read().decode('UTF-8')
(Pdb) print(resp_text)
<?xml version="1.0" encoding="UTF-8"?>
<PlaceSearchResponse>
<status>OK</status>
...
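If the response really is XML, a minimal standard-library sketch for reading it might look like this (assuming the PlaceSearchResponse layout shown above):

import urllib.request
import xml.etree.ElementTree as ET

resp_text = urllib.request.urlopen(url).read().decode('UTF-8')
root = ET.fromstring(resp_text)
print(root.find('status').text)  # e.g. OK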
Update (Python 3.6+)
In Python 3.6+, json.load can take a byte stream (and json.loads can take a byte string).
This is now valid:
json_obj = json.load(urllib.request.urlopen(url))

Parsing HTTP Response in Python

I want to manipulate the information at THIS url. I can successfully open it and read its contents. But what I really want to do is throw out all the stuff I don't want, and manipulate the stuff I want to keep.
Is there a way to convert the string into a dict so I can iterate over it? Or do I just have to parse it as is (str type)?
from urllib.request import urlopen
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
print(response.read()) # returns string with info
When I printed response.read() I noticed that b was prepended to the string (e.g. b'{"a":1,..). The "b" stands for bytes and declares the type of the object you're handling. Since I knew that a string could be converted to a dict using json.loads('string'), I just had to convert the bytes to a string. I did this by decoding the response with decode('utf-8'). Once it was a string my problem was solved and I was easily able to iterate over the dict.
I don't know if this is the fastest or most 'Pythonic' way of writing this, but it works, and there's always time later for optimization and improvement! Full code for my solution:
from urllib.request import urlopen
import json
# Get the dataset
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
# Convert bytes to string type and string type to dict
string = response.read().decode('utf-8')
json_obj = json.loads(string)
print(json_obj['source_name']) # prints the string with 'source_name' key
You can also use Python's requests library instead.
import requests
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = requests.get(url)
dict = response.json()
Now you can manipulate the "dict" like a python dictionary.
json works with Unicode text in Python 3 (the JSON format itself is defined only in terms of Unicode text), and therefore you need to decode the bytes received in the HTTP response. r.headers.get_content_charset('utf-8') gets you the character encoding:
#!/usr/bin/env python3
import io
import json
from urllib.request import urlopen
with urlopen('https://httpbin.org/get') as r, \
     io.TextIOWrapper(r, encoding=r.headers.get_content_charset('utf-8')) as file:
    result = json.load(file)
    print(result['headers']['User-Agent'])
It is not necessary to use io.TextIOWrapper here:
#!/usr/bin/env python3
import json
from urllib.request import urlopen
with urlopen('https://httpbin.org/get') as r:
    result = json.loads(r.read().decode(r.headers.get_content_charset('utf-8')))
    print(result['headers']['User-Agent'])
TL;DR: When you get data from a server, it is typically sent as bytes. The rationale is that these bytes will need to be 'decoded' by the recipient, who should know how to use the data. You should decode the binary data upon arrival so that you get a string instead of the b'...' bytes form.
Use case:
import requests
def get_data_from_url(url):
    # requests.get fetches the response; .content is bytes, so decode it first
    response = requests.get(url)
    response_data_split_by_line = response.content.decode('utf-8').splitlines()
    return response_data_split_by_line
In this example, I decode the content I received as UTF-8. For my purposes, I then split it by line so I can loop through each line with a for loop.
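A quick usage sketch of that helper, reusing the Quandl URL from earlier in this question as a stand-in:

# Hypothetical call: fetch the JSON document and walk it line by line
lines = get_data_from_url('http://www.quandl.com/api/v1/datasets/FRED/GDP.json')
for line in lines:
    print(line)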
I guess things have changed in Python 3.4. This worked for me:
print("resp:" + json.dumps(resp.json()))