Python requests.post loop through a set of JSON objects

I am trying to build a script that takes each JSON object I have and executes a requests.post for it until all of them are sent. The problem, I suspect, is in writing a loop that handles that correctly. It has been a while since I wrote any Python, so any insight is helpful. Below are my data and code:
new_df = {
    "id": 1,
    "name": "memeone",
    "smartphoneWidth": 0,
    "isHtmlCompatible": true,
    "instancesPerPage": 1,
    "isArchived": false
}, {
    "id": 1,
    "name": "memetwo",
    "smartphoneWidth": 0,
    "isHtmlCompatible": true,
    "instancesPerPage": 1,
    "isArchived": false
}
I realize it is not in a list or within brackets [], but in my experience this is the only way I can get the data to post successfully. Do I need to put it in a DataFrame?
Below is the code I am using:
test_df = pd.read_csv('dummy_format_no_id_v4.csv').rename_axis('id')
test_two_df = test_df.reset_index().to_json(orient='records')
test_three_df = test_two_df[1:-1]
new_df = test_three_df
for item in new_df:
    try:
        username = 'username'
        password = 'password'
        headers = {"Content-Type": "application/json; charset=UTF-8"}
        response = requests.post('https://api.someurl.com/1234/thispath', data=new_df, headers=headers, auth=(username, password))
        print(response.text)
    except:
        print('ERROR')
The issue is that it posts the first JSON object ("name": "memeone") successfully but never posts the next one ("name": "memetwo"). How can I get it to iterate and post the next JSON object as well? Must it be in a DataFrame?
Thank you for any advice in advance. Apologies if my code is bad.

The requests package itself has a json parameter, and you can use that. There are two problems in your loop: test_two_df is a JSON string (to_json returns one), so for item in new_df iterates over it character by character, and the post sends the whole string new_df every time instead of the current item. Parse the string back into a list of dicts and post each element. Instead of passing data or keeping things in a DataFrame, you can do something like this:
import json
import requests

records = json.loads(test_two_df)  # the full JSON string, brackets included
for item in records:
    try:
        headers = {"Content-Type": "application/json; charset=UTF-8"}
        response = requests.post(
            "https://httpbin.org/anything", json=item, headers=headers
        )
        print(response.text)
    except Exception as e:
        print(f"Request failed: {e}")
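Alternatively, you can skip the JSON string round-trip entirely and take the records straight off the DataFrame with to_dict(orient='records'). A minimal sketch, assuming the same CSV and endpoint as in the question (the URL and credentials are placeholders):

import pandas as pd
import requests

test_df = pd.read_csv('dummy_format_no_id_v4.csv').rename_axis('id')
records = test_df.reset_index().to_dict(orient='records')  # one dict per row

for item in records:
    response = requests.post(
        'https://api.someurl.com/1234/thispath',
        json=item,  # requests serializes the dict and sets the Content-Type header
        auth=('username', 'password'),
    )
    print(response.status_code, response.text)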

Related

Using GraphQL machinery, but return CSV

A normal REST API might let you request the same data in different formats, with a different Accept header, e.g. application/json, or text/html, or a text/csv formatted response.
However, if you're using GraphQL, it seems that JSON is the only acceptable return content type, and I need my API to be able to return CSV data for consumption by less sophisticated clients that won't understand JSON.
Does it make sense for a GraphQL endpoint to return CSV data when given an Accept: text/csv header? If not, is there a better, best-practice way to do this?
This is more of a conceptual question, but I'm specifically using Graphene to implement my API. Does it provide any mechanism for handling custom content types?
Yes, you can, but it's not built in and you have to override some things; it's more of a workaround.
Take these steps and you will get csv output:
Add csv = graphene.String() to your queries and resolve it to whatever you want.
Create a new class inheriting GraphQLView
Override its dispatch function to look like this:
def dispatch(self, request, *args, **kwargs):
    response = super(CustomGraphqlView, self).dispatch(request, *args, **kwargs)
    try:
        data = json.loads(response.content.decode('utf-8'))
        if 'csv' in data['data']:
            data['data'].pop('csv')
            if len(list(data['data'].keys())) == 1:
                model = list(data['data'].keys())[0]
            else:
                raise GraphQLError("can not export to csv")
            data = pd.json_normalize(data['data'][model])
            response = HttpResponse(content_type='text/csv')
            response['Content-Disposition'] = 'attachment; filename="output.csv"'
            writer = csv.writer(response)
            writer.writerow(data.columns)
            for value in data.values:
                writer.writerow(value)
    except GraphQLError as e:
        raise e
    except Exception:
        pass
    return response
Import all necessary modules (a sample import list follows these steps).
Replace the default GraphQLView in your urls.py file with your new view class.
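For reference, here is roughly the import set the dispatch override above relies on. The exact module paths depend on your project, but under Django with graphene-django they would look something like this:

import csv
import json

import pandas as pd
from django.http import HttpResponse
from graphene_django.views import GraphQLView
from graphql import GraphQLError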
Now if you include "csv" in your GraphQL query, it will return raw CSV data, which you can then save to a CSV file on your front end. A sample query looks like this:
query{
  items{
    id
    name
    price
    category{
      name
    }
  }
  csv
}
Remember that this gives you the raw data in CSV format and you still have to save it yourself. You can do that in JavaScript with the following code:
req.then(data => {
    let element = document.createElement('a');
    element.setAttribute('href', 'data:text/csv;charset=utf-8,' + encodeURIComponent(data.data));
    element.setAttribute('download', 'output.csv');
    element.style.display = 'none';
    document.body.appendChild(element);
    element.click();
    document.body.removeChild(element);
})
This approach flattens the JSON data so no data is lost.
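For intuition, pd.json_normalize is what does the flattening: nested objects become dotted column names, so nothing is dropped. A tiny standalone example (the field names mirror the sample query above):

import pandas as pd

rows = [{"id": 1, "name": "widget", "price": 10, "category": {"name": "tools"}}]
df = pd.json_normalize(rows)
print(df.columns.tolist())  # ['id', 'name', 'price', 'category.name']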
I had to implement exporting a list query to a CSV file. Here is how I implemented it, extending @Sina's method.
My GraphQL query for retrieving the list of users (with limit/offset pagination) is:
query userCsv{
  userCsv{
    csv
    totalCount
    results(limit: 50, offset: 50){
      id
      username
      email
      userType
    }
  }
}
Make a CustomGraphQLView by inheriting from GraphQLView, and override its dispatch function to check whether the query contains csv. Also make sure you update your graphql URL to point at this custom GraphQLView.
class CustomGraphQLView(GraphQLView):
    def dispatch(self, request, *args, **kwargs):
        try:
            query_data = super().parse_body(request)
            operation_name = query_data["operationName"]
        except Exception:
            operation_name = None
        response = super().dispatch(request, *args, **kwargs)
        csv_made = False
        try:
            data = json.loads(response.content.decode('utf-8'))
            try:
                csv_query = 'csv' in data['data'][operation_name]
            except Exception:
                csv_query = False
            if csv_query:
                csv_path = f"{settings.MEDIA_ROOT}/csv_{datetime.now()}.csv"
                results = data['data'][operation_name]['results']
                results = json_normalize(results)
                results.to_csv(csv_path, index=False)
                data['data'][operation_name]['csv'] = csv_path
                csv_made = True
        except GraphQLError as e:
            raise e
        except Exception:
            pass
        if csv_made:
            return HttpResponse(
                status=200, content=json.dumps(data), content_type="application/json"
            )
        return response
The operation name is the name of the query you are calling; in the example above it is userCsv, and it is required because the final result in the response is keyed by it. The response obtained is a Django HttpResponse object. Using the operation name, we check whether csv is present in the query: if not, the response is returned as is, but if it is present, we extract the query results, write them to a CSV file, store it, and attach its path to the response.
Here is the GraphQL schema for the query:
class UserListCsvType(DjangoListObjectType):
    csv = graphene.String()

    class Meta:
        model = User
        pagination = LimitOffsetGraphqlPagination(default_limit=25, ordering="-id")


class DjangoListObjectFieldUserCsv(DjangoListObjectField):
    @login_required
    def list_resolver(self, manager, filterset_class, filtering_args, root, info, **kwargs):
        return super().list_resolver(manager, filterset_class, filtering_args, root, info, **kwargs)


class Query(graphene.ObjectType):
    user_csv = DjangoListObjectFieldUserCsv(UserListCsvType)
Here is a sample response:
{
  "data": {
    "userCsv": {
      "csv": "/home/shishir/Desktop/sample-project/media/csv_2021-11-22 15:01:11.197428.csv",
      "totalCount": 101,
      "results": [
        {
          "id": "51",
          "username": "kathryn",
          "email": "candaceallison@gmail.com",
          "userType": "GUEST"
        },
        {
          "id": "50",
          "username": "bridget",
          "email": "hsmith@hotmail.com",
          "userType": "GUEST"
        },
        {
          "id": "49",
          "username": "april",
          "email": "hoffmanzoe@yahoo.com",
          "userType": "GUEST"
        },
        {
          "id": "48",
          "username": "antonio",
          "email": "laurahall@hotmail.com",
          "userType": "PARTNER"
        }
      ]
    }
  }
}
PS: The data above was generated with the faker library, I'm using graphene-django-extras, and json_normalize comes from pandas. The CSV file can be downloaded from the path returned in the response.
GraphQL relies on (and shines because of) returning nested data, while to my understanding CSV can only represent flat key-value pairs. That makes CSV not really suitable for GraphQL responses.
I think the cleanest way to achieve what you want to do would be to put a GraphQL client in front of your clients:
+------+ csv +-------+ http/json +------+
|client|<----->|adapter|<----------->|server|
+------+ +-------+ +------+
The good thing here is that your adapter would only have to be able to translate the queries it specifies to CSV.
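As a rough illustration of that adapter idea, here is a minimal sketch in Python. The endpoint URL and query shape are hypothetical; the point is that the adapter only has to flatten query shapes it already knows:

import csv
import io
import requests

QUERY = "{ items { id name price } }"  # the one query this adapter understands

def items_as_csv(endpoint="https://example.com/graphql"):
    resp = requests.post(endpoint, json={"query": QUERY})
    resp.raise_for_status()
    rows = resp.json()["data"]["items"]  # flat objects by construction
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "name", "price"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()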
Obviously you might not always be able to do that (though if you can't, how are you getting those clients to send GraphQL queries in the first place?). Alternatively, you could build a middleware that translates JSON to CSV, but then you have to deal with the whole GraphQL specification. Good luck translating this response:
{
  "__typename": "Query",
  "someUnion": [
    { "__typename": "UnionA", "numberField": 1, "nested": [1, 2, 3, 4] },
    { "__typename": "UnionB", "stringField": "str" }
  ],
  "otherField": 123.34
}
So if you can't get around transporting CSV over HTTP, GraphQL is simply the wrong choice, because it was not built for that. And if you disallow the GraphQL features that are hard to translate to CSV, you don't have GraphQL anymore, so there is no point in calling it GraphQL.

JSON Payload in Matlab webwrite

I am trying to send the following POST request with Matlab webwrite:
POST https://url.to.com/hello/world
HEADERS {"API_KEY": "abc123"}
JSON PAYLOAD
{
    "return_type": "hello",
    "entities": ["ent1"],
    "events": ["legal"],
    "fields": [],
    "filters": {},
    "start_date": "2015-01-01 00:00:00",
    "end_date": "2016-01-01 00:00:00",
    "format": "csv",
    "compressed": false
}
In Matlab, I tried the following:
API_KEY = 'abc123';
url = 'https://url.to.com/hello/world';
options = weboptions(...
    'MediaType', 'application/json', ...
    'HeaderFields', {...
        'API_KEY', API_KEY; ...
        'Content-Type', 'application/json'});
payload.('return_type') = 'hello';
payload.('entities') = ['ent1'];
payload.('events') = ['legal'];
payload.('fields') = [];
payload.('filters') = {};
payload.('start_date') = '2015-01-01 00:00:00';
payload.('end_date') = '2016-01-01 00:00:00';
payload.('format') = 'csv';
payload.('compressed') = 'false';
response = webwrite(url, payload, options);
However, this returns the error:
The server returned the status 400 with message "Bad Request" in
response to the request to URL
I tried the request above with Postman and it worked. I have also verified that my Matlab headers are properly set up. So it must be the JSON payload part of my Matlab setup. What is wrong there?
Update 1:
I noticed that jsonencode(payload) does not return the desired format; in particular, the "[ .. ]" gets dropped. I think the problem starts there, as the request then indeed becomes invalid. So we need a way to keep the brackets where necessary.
Found the answer on another forum. The problem was indeed the brackets: wrapping the value in a double cell array makes jsonencode keep them. We need to set it as follows:
payload.('entities') = {{'ent1'}};
Read more here: https://nl.mathworks.com/matlabcentral/answers/217716-how-to-pass-single-element-json-arrays-using-webwrite
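For what it's worth, a quick way to sanity-check what the server accepts outside Matlab is to send the same payload with Python's requests library (the URL and API key are the same placeholders as above):

import requests

payload = {
    "return_type": "hello",
    "entities": ["ent1"],
    "events": ["legal"],
    "fields": [],
    "filters": {},
    "start_date": "2015-01-01 00:00:00",
    "end_date": "2016-01-01 00:00:00",
    "format": "csv",
    "compressed": False,
}
response = requests.post(
    "https://url.to.com/hello/world",
    json=payload,
    headers={"API_KEY": "abc123"},
)
print(response.status_code, response.text)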

KeyError reading a JSON file

EDIT: Here's a bit more context on how the JSON is received. I'm using the ApiAI API to generate a request to their platform, and they have a method to retrieve it, like this:
# instantiate ApiAI
ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
# declare a request object, filled in below
request = ai.text_request()
# send ApiAI the request
request.query = "{}".format(textobject.body)
# get response from ApiAI
response = request.getresponse()
response_decode = response.read().decode("utf-8")
response_data = json.loads(response_decode)
I'm coding a webapp in Django and trying to read through a JSON response POSTed to a webhook. The code to read through the JSON, after it has been decoded, is:
if response_data['result']['action'] != "":
    Request.objects.create(
        request = response_data['result']['resolvedQuery']
    )
When I try to run this code, I get this error:
KeyError: 'result'
on the line
if response_data['result']['action'] != "":
I'm confused because it looks to me like 'result' should be a valid key to this JSON that is being read:
{
    'id':'65738806-eb8b-4c9a-929f-28dc09d6a333',
    'timestamp':'2017-07-10T04:59:46.345Z',
    'lang':'en',
    'result':{
        'source':'agent',
        'resolvedQuery':'Foobar',
        'action':'Baz'
    },
    'alternateResult':{
        'source':'domains',
        'resolvedQuery':'abcdef',
        'actionIncomplete':False,
    },
    'status':{
        'code':200,
        'errorType':'success'
    }
}
Is there another way I should be reading this JSON in my program?
Try guarding the lookups so a missing key cannot raise, and make sure you are working with the parsed object rather than the raw string:
import json

parsed_data = json.loads(response_decode)  # parse the decoded body
result = parsed_data.get('result', {})
if result.get('action'):
    Request.objects.create(request=result['resolvedQuery'])
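As a general debugging tip for this kind of KeyError: when the pretty-printed payload looks right but the lookup fails, the object you are indexing is often not what you think (still a raw string, or an error payload with different top-level keys). A quick check:

print(type(response_data))   # should be dict if json.loads succeeded
print(response_data.keys())  # the top-level keys actually present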
Thanks for everyone's thoughts. It turned out there was another error in how I was trying to implement the ApiAI API, and that was causing this error. It now reads through the JSON fine, and I'm using @sasuke's suggestion.

Get JSON in POST in kemal

What I want is a POST request in kemal where the body has a certain number of keys/values that I want to access, plus an arbitrary JSON object that I just want to stringify, pass on, and later parse back into JSON.
My problem is that I apparently can't get the types right.
Think of a potential JSON body like this:
{
  "endpoint": "http://example.com",
  "interval": 500,
  "payload": {
    "something": "else",
    "more": {
      "embedded": 1
    }
  }
}
Now what I've been trying to do is the following:
require "kemal"
post "/schedule" do |env|
endpoint = env.params.json["endpoint"].as(String)
interval = env.params.json["interval"].as(Int64)
payload = String.from_json(env.params.json["payload"].as(JSON::Any))
# ... move things along
env.response.content_type = "application/json"
{ id: id }.to_json
end
Kemal.run
Now apparently what I seem to be getting when accessing "payload" is something of type Hash(String, JSON::Type), which confuses me a bit.
Any ideas how I'd be able to just get a sub-JSON from the request body, transform it to String and back to JSON?
Update: payload is of type JSON::Type. Casting it and then calling .to_json does the trick.
require "kemal"
post "/schedule" do |env|
endpoint = env.params.json["endpoint"].as(String)
interval = env.params.json["interval"].as(Int64)
payload = env.params.json["payload"].as(JSON::Type)
env.response.content_type = "application/json"
payload.to_json
end
Kemal.run

Tornado POST request not detecting json input as argument

I have written a service that takes JSON as input. I am using the website hurl.it to send POST requests to test it. Below is my code snippet:
class BatchSemanticSimilarityHandler(tornado.web.RequestHandler):
    def post(self):
        self.set_header('Access-Control-Allow-Origin', '*')
        self.set_header('Access-Control-Allow-Credentials', 'true')
        self.set_header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS')
        self.set_header('Access-Control-Allow-Headers', 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token')
        data = json.loads(self.request.body)
        apikey = data["apikey"]
        try:
            UA = self.request.headers["User-Agent"]
        except:
            UA = "NA"
        if bool(usercoll.find_one({"apikey": apikey})) == True:
            sentence = data["sentence"]
            sentence_array = data["sentence_array"]
            n = data["num_of_results"]
            if sentence is None or sentence_array is [] or apikey is None or n is None:
                self.set_status(200)
                output = {"error": [{"code": 334, "message": "Bad Input data"}]}
                misscoll.insert({"apitype": "batchsemanticsimilarity", "timestamp": datetime.datetime.now(), "ip": self.request.remote_ip, "useragent": UA, "uri": self.request.uri, "apikey": apikey, "output": output, "input": {"s1": sentence, "s2": sentence_array}})
                self.write(output)
                return
            results = nb.get_similar(sentence, sentence_array, apikey, n)
            print "results is", results
            output = {"similar_sentences": results, 'credits': 'ParallelDots'}
            hitscoll.insert({"apitype": "batchsemanticsimilarity", "timestamp": datetime.datetime.now(), "ip": self.request.remote_ip, "useragent": UA, "uri": self.request.uri, "apikey": apikey, "output": output, "input": {"s1": sentence, "s2": sentence_array}})
            self.write(output)
            return
        else:
            rejectcoll.insert({"apitype": "batchsemanticsimilarity", "apikey": apikey, "timestamp": datetime.datetime.now(), "ip": self.request.remote_ip, "useragent": UA, "url": self.request.uri})
            self.write({"error": [{"code": 333, "message": "Bad Authentication data"}]})
            return
The JSON that I am sending as the body of the request is below:
{
    "sentence": "BJP leads in Bengaluru civic body`s poll, all eyes on JD(S)",
    "sentence_array": [
        "Narendra Modi is the prime minister",
        "Sonia Gandhi runs Congress",
        "Sachin is a good batsman"
    ],
    "apikey": "DyMe1gSNhvMV1I1b20a7KARYIwuQX5GAQ",
    "num_of_results": 2
}
I have verified on jsonlint that this is valid JSON.
However, sending the request gives me the error below:
ValueError: No JSON object could be decoded
Can anyone please help me sort this out!!
The JSON object that you are passing in the POST request is being URL-encoded (form-encoded) into the request body, and the json library cannot read the encoded data, so you need to decode it first. Decoding can be done with the urlparse library in Python, so you need something like this:
post_data = urlparse.parse_qsl(self.request.body)
Depending on the final format you need, urlparse has various methods; see the urlparse documentation.
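To make that concrete, here is a small sketch of what parse_qsl returns for a form-encoded body (in Python 3 the function lives in urllib.parse; the field names below are just examples):

from urllib.parse import parse_qsl  # urlparse.parse_qsl in Python 2

body = "apikey=abc123&num_of_results=2"
print(dict(parse_qsl(body)))
# {'apikey': 'abc123', 'num_of_results': '2'} -- note the values arrive as strings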
or
As specified in the docs, you can override a method to enable JSON parsing:
def prepare(self):
    if self.request.headers["Content-Type"].startswith("application/json"):
        self.json_args = json.loads(self.request.body)
    else:
        self.json_args = None
See the Tornado documentation on RequestHandler for details.
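For illustration, here is a minimal sketch of how that prepare() hook fits into a handler (the handler name, route, and response shape are hypothetical):

import json

import tornado.ioloop
import tornado.web

class EchoHandler(tornado.web.RequestHandler):
    def prepare(self):
        # Parse the body once, before post() runs.
        content_type = self.request.headers.get("Content-Type", "")
        if content_type.startswith("application/json"):
            self.json_args = json.loads(self.request.body)
        else:
            self.json_args = None

    def post(self):
        if self.json_args is None:
            self.set_status(400)
            self.write({"error": "expected a JSON body"})
            return
        self.write({"received": self.json_args})

if __name__ == "__main__":
    app = tornado.web.Application([(r"/echo", EchoHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()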