Attempting to parse a JSON file with Python - json

So I've been beating my head against a wall for days now and have been diving down the Google/SO rabbit hole in search of answers. I've been debating how to phrase this question, since the API I am pulling from may or may not contain sensitive information that gets uncomfortably close to HIPAA territory for my liking. For that reason I will not be providing the direct link/auth from my code. That said, I will provide a made-up JSON snippet to help with the explanation.
import requests
import json
import urllib3
r = requests.get('https://madeup.url.com/api/vi/information here', auth=('123456789', '1111111111222222222223333333333444444455555555'))
payload = {'query': 'firstName'}
response = requests.get(r, params=payload)
json_response = response.json()
print(json.dumps(json_response))
The JSON file that I'm trying to parse looks in part like this:
"{\"id\": 123456789, \"firstName\": \"NAME\", \"lastName\": \"NAME\", \"phone\": \"NUMBER\", \"email\": \"EMAIL#gmail.com\", \"date\": \"December 16, 2021\", \"time\": \"9:50am\", \"endTime\": \"10:00am\",.....
When I run the code I get a "urllib3.exceptions.LocationParseError: Failed to parse: <Response [200]>" traceback, and I cannot for the life of me figure out what is going on. urllib3 is installed and up to date according to the console.
Any help would be much appreciated. TIA

That is not a JSON file. It is a string containing escaped characters. It needs to be unescaped before parsing can work.
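For illustration, here is a minimal sketch of what "unescaped" means in practice. The posted snippet is a JSON-encoded string whose contents are themselves JSON, so it has to be decoded twice; the dict contents below are made up.
import json

# Made-up stand-in for the escaped string in the question.
inner = json.dumps({"id": 123456789, "firstName": "NAME"})   # a JSON string
outer = json.dumps(inner)                                     # that string encoded a second time

print(outer)                           # "{\"id\": 123456789, \"firstName\": \"NAME\"}"
data = json.loads(json.loads(outer))   # decode twice to get the dict back
print(data["firstName"])               # NAME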

You're passing r to the second requests.get(), but r is already the Response object from the first requests.get()... shouldn't you be passing params=payload in that first call and getting the response from there, in one single request?
import requests
import json
import urllib3
payload = {'query': 'firstName'}
response = requests.get('{YOUR_URL}', auth=('{USER}', '{PASS}'), params=payload)
json_response = response.json()
print(json.dumps(json_response))
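With the call written this way, params=payload is appended to the URL as a query string and auth=(...) is sent as HTTP Basic authentication, so everything happens in one request and response.json() already gives you a parsed Python object (a dict or list); no extra json.loads is needed.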

That is not a JSON file. It is a string containing escaped characters. It needs to be unescaped before parsing can work.
Well, now I'm even more confused. I'm trying to teach myself Python and clearly struggling. To get the "JSON" I posted, I used the following code:
r = requests.get('URL', auth=('user', 'pass'))
Data = r.json()
packages_str = json.dumps(Data[0])
with open('Data.json', 'w') as f:
    json.dump(packages_str, f)
So basically I'm even more lost now...
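What is likely going on here (assuming the API itself returns ordinary JSON) is that json.dumps(Data[0]) already turns the dict into a string, and json.dump(packages_str, f) then JSON-encodes that string a second time, which is where all the backslashes come from. A minimal sketch of writing it only once, with made-up data standing in for Data[0]:
import json

record = {"id": 123456789, "firstName": "NAME"}  # stand-in for Data[0]

with open('Data.json', 'w') as f:
    json.dump(record, f, indent=2)  # dump the dict itself; no second encoding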

Okay, update: good news! Kinda... my code now reads as follows:
import requests
import json
import urllib3
payload = {
    'query1': 'firstName',
    'query2': 'lastName'
}
response = requests.get("url", auth=("user", "pass"), params=payload)
Data = response.json()
packages_str = json.dumps(Data, ensure_ascii=False, indent=2)
with open('Data.json', 'w') as f:
    json.dump(packages_str, f)
    f.write(packages_str)
And when I then open the JSON file, the first line is the entire API response as one string, but below that is a properly formatted JSON file. Unfortunately it's the entire API response, not a parsed-down JSON file containing just the information that I need...
Continuing down the Google/YouTube/SO rabbit hole and will update at a later date if I find a workaround.
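If the goal is to keep only a few fields rather than the whole response, one option is to filter the parsed data before dumping it. This is just a sketch: the made-up list below stands in for Data = response.json(), and the field names are taken from the example JSON near the top, so they may not match the real API.
import json

# Stand-in for Data = response.json(); field names are assumptions.
Data = [{"id": 123456789, "firstName": "NAME", "lastName": "NAME",
         "email": "EMAIL@gmail.com", "phone": "NUMBER"}]

wanted = ('firstName', 'lastName', 'email')
trimmed = [{k: rec[k] for k in wanted if k in rec} for rec in Data]

with open('Data.json', 'w') as f:
    json.dump(trimmed, f, ensure_ascii=False, indent=2)  # write the dicts once, no double encoding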

Related

how to write r.headers from different urls into one json?

I would like to crawl several URLs using the requests library in Python. I am scrutinizing the GET requests as well as the response headers. However, when crawling and getting the data from different URLs, I face the problem that I don't know all the key/value pairs that are coming in, so writing that data to a valid CSV file is not really possible, from my point of view. Therefore I want to write the data into a JSON file.
The problem is similar to the following thread from 2014, but not the same:
Get a header with Python and convert in JSON (requests - urllib2 - json)
import requests, json

urls = ['http://www.example.com/', 'http://github.com']

with open('test.json', 'w') as f:
    for url in urls:
        r = requests.get(url)
        rh = r.headers
        f.write(json.dumps(dict(rh), sort_keys=True, separators=(',', ':'), indent=4))
I expect a JSON file with the headers for each URL. I do get a JSON file with that data, but my IDE (PyCharm) shows an error stating that the JSON standard allows only one top-level value. I have read the documentation (https://docs.python.org/3/library/json.html#repeated-names-within-an-object) but did not get it. Any hint would be appreciated.
EDIT: The only thing missing in the output is another comma. But where do I enter it, and what command do I need for this?
You need to append each header dict to a list and then do a single json dump to the file at the end. This will work.
import requests
import json

urls = ['http://www.example.com/', 'http://github.com']
headers = []

for url in urls:
    r = requests.get(url)
    header_dict = dict(r.headers)
    header_dict['source_url'] = url
    headers.append(header_dict)

with open('test.json', 'w', encoding='utf-8') as f:
    json.dump(headers, f, sort_keys=True, separators=(',', ':'), indent=4)
You can still write it to a CSV:
import pandas as pd
df = pd.DataFrame(headers)
df.to_csv('test.csv')
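Once the file holds a single top-level list, it reads back with a plain json.load (a quick check, assuming test.json was written by the code above):
import json

with open('test.json', encoding='utf-8') as f:
    headers = json.load(f)  # one top-level value: a list of header dicts

print(len(headers), headers[0].get('source_url'))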

Unable to understand and parse the JSON URL response

I have a JSON URL and I am trying to extract data from the response. Below is my code:
import json
import urllib2
from bs4 import BeautifulSoup

url = urllib2.urlopen("https://i1.adis.ws/s/foo/M0011126_001_SET.js?func=app.mjiProduct.handleJSON&protocol=https")
content = url.read()
soup = BeautifulSoup(content, "html.parser")
print(soup.prettify())
print(soup.items)
newDictionary = json.loads(str(soup))
Below is the response.content
app.mjiProduct.handleJSON({"name":"M0011126_001_SET","items":[{"type":"img","src":"https://i1.adis.ws/i/foo/M0011126_001_MAIN","width":3200,"height":4800,"format":"TIFF","opaque":"true"},{"type":"img","src":"https://i1.adis.ws/i/foo/M0011126_001_ALT1","width":3200,"height":4800,"format":"TIFF","opaque":"true"},{"type":"img","src":"https://i1.adis.ws/i/foo/M0011126_001_ALT2","width":3200,"height":4800,"format":"TIFF","opaque":"true"}]});
I am new to JSON and unable to understand the response. In addition, I need to parse the response as JSON, or in some other form, to extract the image sources. But the above code gives me the error below.
No JSON object could be decoded
Can Anyone please guide me ? Thanks
First of all, your URL isn't working; it returns app.mjiProduct.handleJSON({"status":"error","errorMsg":"Failed to get set"});
The second thing is that you don't have to pass the content to BeautifulSoup; you can pass it directly to json, like I did in my code below, without the BeautifulSoup object.
I used httpbin to test, but this should work with your URL. Note that I used Python 3.
from urllib.request import urlopen
import json
url = urlopen("http://httpbin.org/get")
content = url.read()
newDictionary=json.loads(content)
print(newDictionary)
output: {'args': {}, 'headers': {'Accept-Encoding': 'identity', 'Connection': 'close', 'Host': 'httpbin.org', 'User-Agent': 'Python-urllib/3.6'}, 'origin': '', 'url': 'http://httpbin.org/get'}
Below is the code that worked for me.
json_data = url.read()
purify_data = json_data.split('handleJSON(')[1].split(');')[0]
loaded_json = json.loads(purify_data)
print(loaded_json['items'][0]['src'])
Actually, I figured out that json_data was of type string and I was unable to decode it because of the format of that string, which was
app.mjiProduct.handleJSON(REQUIRED JSON)
So first I filtered the string and then loaded it with json, and the problem was solved.
The response doesn't contain valid JSON. It looks like executable code (probably JavaScript). But the part {"name":"M0011126_001_SET","items":[...]} is valid JSON. So if you know for sure that the response always has this format, you can strip the function call like this:
content = url.read()[26:-2]  # cut the first 26 characters and the last two
newDictionary = json.loads(str(content))
I don't know Beautiful Soup well, but from what I can tell it's a library for processing HTML files, and your response is not HTML, so I don't think you should use it here.
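If the 26-character prefix is not guaranteed to stay the same, another option is to cut on the parentheses instead of hard-coded offsets. A sketch, assuming the payload always has the handleJSON(...) shape shown above; the content string here is a shortened, made-up copy of that response:
import json

content = 'app.mjiProduct.handleJSON({"name":"M0011126_001_SET","items":[{"type":"img","src":"https://i1.adis.ws/i/foo/M0011126_001_MAIN"}]});'

start = content.index('(') + 1     # first opening parenthesis
end = content.rindex(')')          # last closing parenthesis
data = json.loads(content[start:end])

print(data['items'][0]['src'])     # https://i1.adis.ws/i/foo/M0011126_001_MAIN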

How to get data from rest api and save JSON to txt file?

I am trying to get some data from a rest api and save a JSON to a txt-file. Here is what I do:
#random rest api
a = 'https://thiswouldbemyurl.com'
#urllib3 + poolmanager for requests
import urllib3
http = urllib3.PoolManager()
import json
r = http.request('GET', a)
json.loads(r.data.decode('utf-8'))
with open('data.txt', 'w') as f:
    json.dump(data, f, ensure_ascii=False)
I get an error already with json.loads. What am I doing wrong?
EDIT: This is what the JSON looks like:
{
  "success": true,
  "data": [
    {
      "id": 26,
      "name": "A",
      "comment": "",
      "start_time_plan": null,
      "start_time_actual": "2016-09-13 00:00:00",
      "start_time_delta": null,
      "start_time_score": null,
      "start_time_score_achievement": null,
      "start_time_traffic_light": null,
      "end_time_plan": null,
      "end_time_actual": "2016-09-13 00:00:00",
      "end_time_delta": null,
      "end_time_score": null,
      "end_time_score_achievement": null,
      "end_time_traffic_light": null,
      "status": 0,
      "measure_schedule_revision_id": 63,
      "responsible_user_id": 3,
      "created_time": "2016-09-13 11:29:14",
      "created_user_id": 3,
      "modified_time": "2016-09-21 16:33:41",
      "modified_user_id": 3,
      "model": "Activity"
    }
It sounds like you're trying to json.load(...) something that is not actually JSON.
Looking at the URL you're using, https://jsonplaceholder.typicode.com/ returns HTML rather than JSON.
If you use something like https://jsonplaceholder.typicode.com/posts which does return JSON, then that particular error should go away.
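One way to see whether a URL will parse at all is to look at the Content-Type header before decoding. A rough sketch with urllib3, matching the style of the question (the endpoint is the JSON-returning one mentioned above):
import json
import urllib3

http = urllib3.PoolManager()
r = http.request('GET', 'https://jsonplaceholder.typicode.com/posts')

content_type = r.headers.get('Content-Type', '')
if 'application/json' in content_type:
    data = json.loads(r.data.decode('utf-8'))   # note: assign the result
    print(len(data), 'records')
else:
    print('Not JSON, got:', content_type)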

How do I get Python requests.get "application/json" from a HTML page?

Is there any way to get JSON from an HTML website? If I use code like this:
r = requests.get(url)
if r.status_code == 200:
    r.json()
    result = json.loads(r)
I always get an error with HTML pages. What modules should I use to get HTML pages into a Python dictionary?
You only have one error in your code.
Once you did
r.json()
You didn't assign it to anything. To correct this, just replace your previous line with the line below and you should be good :).
r = r.json()
Not all webpages respond with JSON data, but you can use json.loads to parse data that comes in as a JSON string. You can also use r.content or r.text to see what kind of data is coming from the webpage. Most of the time it will just be HTML content.
import requests
import json

r = requests.get('http://www.google.com')

# you can use r.content to see the raw webpage data
print r.content

# json.loads converts a JSON string into a Python object
print json.loads(r.content)
json.loads will raise a ValueError if the data cannot be decoded into a JSON object.
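Since google.com returns HTML, the last line above will raise. A small guard (written in Python 3 syntax here) might look like this:
import json
import requests

r = requests.get('http://www.google.com')
try:
    data = json.loads(r.content)
    print(data)
except ValueError:
    # The body was not JSON (plain HTML in this case), so fall back to the raw text.
    print('Response is not JSON; first 100 chars:', r.text[:100])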

POSTed JSON encoding problems

I receive POSTed JSON with mod_wsgi on Apache. I have to forward the JSON to some API (using POST), take the API's response, and respond back to where the initial POST came from.
Here is the Python code:
import requests
import urllib.parse

def application(environ, start_response):
    url = "http://texchange.nowtaxi.ru/api/secret_api_key/"
    query = environ['QUERY_STRING']
    if query == "get":
        url += "tariff/list"
        r = requests.get(url)
        response_headers = [('Content-type', 'application/json')]
    else:
        url += "order/put"
        input_len = int(environ.get('CONTENT_LENGTH', '0'))
        data = environ['wsgi.input'].read(input_len)
        decoded = data.decode('utf-8')
        unquoted = urllib.parse.unquote(decoded)
        print(decoded)   # 'from%5Baddress%5D=%D0%'
        print(unquoted)  # 'from[address]=\xd0\xa0'
        r = requests.post(url, data)
        output_len = sum(len(line) for line in r.text)
        response_headers = [('Content-type', 'application/json'),
                            ('Content-Length', str(output_len))]
    status = "200 OK"
    start_response(status, response_headers)
    return [r.text.encode('utf-8')]
The actual JSON starts "{"from":{"address":"Россия
I thought those \x's were escape sequences, so I tried ast.literal_eval and codecs.getdecoder("unicode_escape"), but it didn't help. I can't properly google the case, because I feel like I've misunderstood what is actually happening here. Maybe I have to somehow change the $.post() call in the .js file that sends the POST to the wsgi script?
UPD: my bro said that it's totally unclear what I need, so I'll clarify. I need to get the string that represents the received JSON in its initial form, with Cyrillic letters, quotes, curly braces, etc. What I DO get after decoding the received byte sequence is 'from%5Baddress%5D=%D0%'. If I unquote it, it turns into 'from[address]=\xd0\xa0', but that's still not what I want.
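The percent-escapes suggest the browser is sending the body as application/x-www-form-urlencoded rather than as raw JSON, so there is no JSON string to recover until the form fields are decoded. A sketch of decoding such a body; the value below is made up, since the one in the post is truncated:
import urllib.parse

# Hypothetical form-encoded body of the kind wsgi.input would hand over.
raw = b'from%5Baddress%5D=%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D1%8F'

fields = urllib.parse.parse_qs(raw.decode('utf-8'))
print(fields)  # {'from[address]': ['Россия']}
Alternatively, if the $.post() call on the JavaScript side were changed to send JSON.stringify(payload) with an application/json content type, the body read from wsgi.input could be passed straight to json.loads.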