How to write r.headers from different URLs into one JSON file?

I would like to crawl several URLs using the requests library in Python, examining both the GET requests and the response headers. However, when crawling different URLs I don't know in advance which 'key: value' pairs will come in, so writing the data to a valid CSV file is not really feasible, in my view. Therefore I want to write the data to a JSON file.
The problem is similar to the following thread from 2014, but not the same:
Get a header with Python and convert in JSON (requests - urllib2 - json)
import requests, json

urls = ['http://www.example.com/', 'http://github.com']
with open('test.json', 'w') as f:
    for url in urls:
        r = requests.get(url)
        rh = r.headers
        f.write(json.dumps(dict(rh), sort_keys=True, separators=(',', ':'), indent=4))
I expect a JSON file with the headers for each URL. I do get a JSON file with that data, but my IDE (PyCharm) shows an error stating that the JSON standard allows only one top-level value. I have read the documentation: https://docs.python.org/3/library/json.html#repeated-names-within-an-object, but did not understand it. Any hint would be appreciated.
EDIT: The only thing missing in the output is another comma. But where do I enter it, and what command do I need for this?

You need to collect the headers in a list and then do a single json dump to the file at the end. This will work.
import requests, json

urls = ['http://www.example.com/', 'http://github.com']
headers = []
for url in urls:
    r = requests.get(url)
    header_dict = dict(r.headers)
    header_dict['source_url'] = url   # keep track of which URL each header set came from
    headers.append(header_dict)

with open('test.json', 'w', encoding='utf-8') as f:
    json.dump(headers, f, sort_keys=True, separators=(',', ':'), indent=4)
You can still write it to a CSV:
import pandas as pd
df = pd.DataFrame(headers)
df.to_csv('test.csv')
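Regarding the EDIT: instead of hunting for the missing comma, you could also write one header object per line (JSON Lines). Each line is then a complete JSON document, which sidesteps the one-top-level-value rule. A minimal sketch, assuming a .jsonl file is acceptable:
import requests, json

urls = ['http://www.example.com/', 'http://github.com']

# JSON Lines: one complete JSON object per line. The file as a whole is not
# a single JSON document, but pandas (read_json(..., lines=True)) and jq
# consume this format directly.
with open('test.jsonl', 'w', encoding='utf-8') as f:
    for url in urls:
        r = requests.get(url)
        f.write(json.dumps(dict(r.headers), sort_keys=True) + '\n')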

Get information out of large JSON file

I am new to JSON files and I'm struggling to get any information out of this one.
The structure of the JSON file is as follows:
[screenshot: JSON file structure]
What I need is to access "batches", to get the data from each variable.
I did try the code snippets shown below, which I found for reaching deeper keys, but somehow I still didn't get any results.
1.
def safeget(dct, *keys):
    for key in keys:
        try:
            dct = dct[key]
        except KeyError:
            return None
    return dct

safeget(mydata, "batches")
2.
def dict_depth(mydata):
    if isinstance(mydata, dict):
        return 1 + (max(map(dict_depth, mydata.values()))
                    if mydata else 0)
    return 0

print(dict_depth(mydata))
The final goal would then be to create a loop to extract all the information, but that's something for the future.
Any help is highly appreciated, and also any recommendations on how I should ask things here in the future to get the best answers!
As far as I understood, you simply want to extract all the data without any ordering?
Then this should work out:
# Python program to read a json file
import json

# Opening the JSON file
f = open('data.json')

# json.load returns the JSON document as a dictionary
data = json.load(f)

# Iterating through the list under a top-level key
# (replace 'emp_details' with the key from your file, e.g. 'batches')
for i in data['emp_details']:
    print(i)

# Closing the file
f.close()
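For the stated end goal of looping over everything under "batches", here is a minimal sketch; the assumption that data['batches'] is a list of dicts is mine, since the actual structure was only posted as a screenshot:
import json

with open('data.json') as f:
    data = json.load(f)

# Assumption: 'batches' maps to a list of dicts, one per batch.
for batch in data.get('batches', []):
    for key, value in batch.items():
        print(key, value)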

Python API JSON dict in dataframe

I want to scrape data at the county level from https://apidocs.covidactnow.org
However, I could only get a dataframe with one line per county, where the data for each date is stored inside a dictionary in each row/county. I would like to access this data and store it in long format (= one row per county-date).
import requests
import pandas as pd
import os

if __name__ == '__main__':
    os.chdir('/home/username/Desktop/')
    url = 'https://api.covidactnow.org/v2/counties.timeseries.json?apiKey=ENTER_YOUR_KEY'
    response = requests.get(url).json()
    data = pd.DataFrame(response)
This seems like a trivial question, but I've tried for hours. What would be the best way to achieve that?
Do you mean something like this?
import requests

url = 'https://api.covidactnow.org/v2/states.timeseries.csv?apiKey=YOURAPIKEY'
response = requests.get(url)
csv_response = response.text
# Then you can transform the string to CSV
Check this for string to CSV --> python parsing string to csv format
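If you do want the long county-date format from the JSON endpoint itself, a sketch using pd.json_normalize could look like the following; the record and meta key names ('actualsTimeseries', 'fips', 'county') are assumptions based on the API docs, not verified here:
import requests
import pandas as pd

url = 'https://api.covidactnow.org/v2/counties.timeseries.json?apiKey=YOUR_KEY'
counties = requests.get(url).json()   # a list with one dict per county

# Explode each county's timeseries list into rows, repeating the identifying
# columns on every row, giving one row per county-date.
df = pd.json_normalize(
    counties,
    record_path='actualsTimeseries',
    meta=['fips', 'county'],
    errors='ignore',
)
print(df.head())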

Attempting to parse a JSON file with Python

So I've been beating my head against a wall for days now and diving down the Google/SO rabbit hole in search of answers. I debated how to phrase this question, as the API that I am pulling from may or may not contain some sensitive information that gets uncomfortably close to HIPAA territory for my liking. For that reason I will not be providing the direct link/auth for my code. That said, I will provide a made-up JSON snippet to help with the explanation.
import requests
import json
import urllib3
r = requests.get('https://madeup.url.com/api/vi/information here', auth=('123456789', '1111111111222222222223333333333444444455555555'))
payload = {'query': 'firstName'}
response = requests.get(r, params=payload)
json_response = response.json()
print(json.dumps(json_response))
The JSON file that I'm trying to parse looks in part like this:
"{\"id\": 123456789, \"firstName\": \"NAME\", \"lastName\": \"NAME\", \"phone\": \"NUMBER\", \"email\": \"EMAIL#gmail.com\", \"date\": \"December 16, 2021\", \"time\": \"9:50am\", \"endTime\": \"10:00am\",.....
When I run the code I get a "urllib3.exceptions.LocationParseError: Failed to parse: <Response [200]>" traceback, and I cannot for the life of me figure out what is going on. urllib3 is installed and up to date according to the console.
Any help would be much appreciated. TIA
That is not a JSON file. It is a string containing escaped characters. It needs to be unescaped before parsing can work.
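A minimal sketch of that double encoding, with made-up data:
import json

data = {"id": 123456789, "firstName": "NAME"}

# json.dumps() already returns a string; passing that string to json.dump()
# or json.dumps() again encodes it a second time, producing the escaped output.
double = json.dumps(json.dumps(data))
print(double)   # "{\"id\": 123456789, \"firstName\": \"NAME\"}"

# Undoing it therefore takes two loads: the first one returns a str.
print(json.loads(json.loads(double))["firstName"])   # NAME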
You're passing r to requests.get() (line 9), but r is the response to another requests.get() (line 5)... shouldn't you be passing params=payload in line 5 and getting the response from there, in one single request?
import requests
import json
import urllib3
payload = {'query': 'firstName'}
response = requests.get('{YOUR_URL}', auth=('{USER}', '{PASS}'), params=payload)
json_response = response.json()
print(json.dumps(json_response))
Well, now I'm even more confused. I'm trying to teach myself Python and clearly struggling. To get the "JSON" I posted, I used the following code:
r = requests.get('URL', auth=('user', 'pass'))
Data = r.json()
packages_str = json.dumps(Data[0])
with open('Data.json', 'w') as f:
    json.dump(packages_str, f)
So basically I'm even more lost now...
Okay, update: good news! Kinda... my code now reads as follows:
import requests
import json
import urllib3

payload = {
    'query1': 'firstName',
    'query2': 'lastName'
}
response = requests.get("url", auth=("user", "pass"), params=payload)
Data = response.json()
packages_str = json.dumps(Data, ensure_ascii=False, indent=2)
with open('Data.json', 'w') as f:
    json.dump(packages_str, f)
    f.write(packages_str)
And when I then open the JSON file, the first line is the entire API response as a string, but below that is a properly formatted JSON document. Unfortunately it's the entire API response, not a parsed subset with just the information that I need...
Continuing down the Google/YouTube/SO rabbit hole; I will update at a later date if I find a workaround.
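For what it's worth, the two-part file described above follows directly from the last two lines of the snippet: json.dump(packages_str, f) writes the escaped string, then f.write(packages_str) appends the readable one. A minimal sketch of writing the parsed data once (Data here stands in for the parsed response):
import json

Data = {"firstName": "NAME", "lastName": "NAME"}   # stand-in for response.json()

with open('Data.json', 'w', encoding='utf-8') as f:
    # Dump the dict itself, exactly once; no json.dumps() beforehand.
    json.dump(Data, f, ensure_ascii=False, indent=2)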

Parse Twitter JSON Content in Python3

I searched all the similar questions and still couldn't resolve the issue below.
Here's my json file content:
https://objectsoftconsultants.s3.amazonaws.com/tweets.json
Code to get a particular element is as below:
import json

testsite_array = []
with open('tweets.json') as json_file:
    testsite_array = json_file.readlines()

for text in testsite_array:
    json_text = json.dumps(text)
    resp = json.loads(json_text)
    print(resp["created_at"])
I keep getting the error below:
print(resp["created_at"])
TypeError: string indices must be integers
Many thanks in advance for your time and help.
I have to guess what you're trying to do and can only hope that this will help you:
with open('tweets.json') as f:
    tweets = json.load(f)
print(tweets['created_at'])
It doesn't make sense to read a JSON file with readlines, because it is unlikely that each line of the file represents a complete JSON document.
Also, I don't get why you're dumping the string only to load it again immediately.
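Concretely, dumping a str and immediately loading it back just returns the original str, which is exactly what triggers the TypeError (the sample line is hypothetical):
import json

line = '{"created_at": "Thu Apr 06 15:24:15 +0000 2017"}'   # a str, e.g. from readlines()

roundtrip = json.loads(json.dumps(line))
print(type(roundtrip))    # <class 'str'> -- still a string, not a dict
# roundtrip["created_at"] # would raise: TypeError: string indices must be integers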
Update:
Try this to parse your file line by line:
with open('tweets.json') as f:
    lines = f.readlines()

for line in lines:
    try:
        tweet = json.loads(line)
        print(tweet['created_at'])
    except json.decoder.JSONDecodeError:
        print('Error')
I want to point out however, that I do not recommend this approach. A file should contain only one json document. If the file does not contain a valid json document, the source for the file should be fixed.

How do I get Python requests.get "application/json" from a HTML page?

Is there any way to get JSON from an HTML website? If I use code like this:
r = requests.get(url)
if r.status_code == 200:
    r.json()
result = json.loads(r)
I will always get an error on HTML pages. What modules should I use to get an HTML page into a Python dictionary?
You only have one error in your code. When you call
r.json()
you don't assign the result to anything. To fix this, just replace that line with the line below and you should be good :).
r = r.json()
Not all webpages respond with JSON data, but you can use json.loads to parse a JSON string. You can also use r.content or r.text to see what kind of data is coming from the webpage. Most of the time it will be plain HTML content.
import requests
import json

r = requests.get('http://www.google.com')

# r.content holds the raw response body
print(r.content)

# json.loads parses a JSON string into a Python object
print(json.loads(r.content))
json.loads raises a ValueError if the data cannot be decoded as a JSON object.
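A small defensive sketch along these lines, checking the declared content type before parsing (the URL is just an example):
import json
import requests

r = requests.get('http://www.google.com')

# Only attempt JSON parsing when the server says it is sending JSON.
if 'application/json' in r.headers.get('Content-Type', ''):
    result = r.json()
else:
    try:
        result = json.loads(r.text)
    except ValueError:   # raised when the body is HTML rather than JSON
        result = None

print(result)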