Handling application/json data with bottle - json

I'm trying to write a simple server frontend to a python3 application, using a RESTful JSON-based protocol. So far, bottle seems the best-suited framework for the task (it supports python3, handles method dispatching in a nice way, and easily returns JSON).
The problem is parsing the JSON in the input request. The documentation only mentions request.fields and request.files, both of which I assume refer to multipart/form-data. There is no mention of accessing the request data directly.
Peeking at the source code, I can see a request.body object of type BytesIO. json.load refuses to act on it directly, dying in the json lib with "can't use a string pattern on a bytes-like object". The proper way is probably to first decode the bytes to unicode characters, according to whichever charset was specified in the Content-Type HTTP header. I don't know how to do that; I can see a StringIO class and assume it may hold a buffer of characters instead of bytes, but I see no way of decoding a BytesIO into a StringIO, if that is even possible at all.
Of course, it may also be possible to read the BytesIO object into a bytestring and then decode it into a string before passing it to the JSON decoder, but if I understand correctly, that breaks the nice buffering behavior of the whole thing.
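For concreteness, a minimal sketch of that read-everything-then-decode approach (assuming the body is UTF-8, which is only my assumption here):

import json

body_bytes = request.body.read()              # pulls the whole body into memory
data = json.loads(body_bytes.decode('utf-8'))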
Or is there a better way to do it?

It seems that io.TextIOWrapper from the standard library does the trick!
import json
from io import TextIOWrapper

def parse(request):
    encoding = ...  # get encoding from the headers
    return json.load(TextIOWrapper(request.body, encoding=encoding))
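One way to fill in that blank, as a hedged sketch (the header parsing and the UTF-8 fallback are my own additions, not part of Bottle's API):

import json
from io import TextIOWrapper

def parse(request):
    # Pull an optional charset=... parameter out of the Content-Type header;
    # JSON defaults to UTF-8 when no charset is given.
    encoding = 'utf-8'
    content_type = request.headers.get('Content-Type', '')
    for param in content_type.split(';')[1:]:
        key, _, value = param.strip().partition('=')
        if key.lower() == 'charset' and value:
            encoding = value.strip('"')
    return json.load(TextIOWrapper(request.body, encoding=encoding))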

Here's what I do to read in JSON on a RESTful service with Python 3 and Bottle:

import bottle
import bson.json_util as bson_json

app = bottle.Bottle()  # the app object (not shown in the original snippet)

@app.post('/location/API')
def post_json_example():
    """
    param: _id, value
    return: I usually return something like {"status": "successful", "message": "description"}
    """
    query_string = bottle.request.query.json
    query_dict = bson_json.loads(query_string)
    _id = query_dict['_id']
    value = query_dict['value']
Then, to test from the python3 interpreter:

import requests

s = requests.Session()
r = s.post('http://youserver.com:8080/location/API?json={"_id":"540a16663dafb492a0a7626c","value":"test"}')

Use r.text to verify what was returned.
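Depending on your Bottle version, there may also be a request.json property that parses an application/json request body for you; a minimal sketch of that style (my own example, not part of the answer above; the route and payload are hypothetical):

from bottle import post, request, run

@post('/location/API')
def post_json_example():
    # request.json holds the parsed body when the client sends
    # Content-Type: application/json, and is None otherwise.
    data = request.json or {}
    return {"status": "successful", "echo": data}  # dicts are returned as JSON

run(host='localhost', port=8080)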

I wrote a helper building on b0fh's good idea.
After two weeks of analyzing response.json, I came back to Stack Overflow and understood that we need a workaround.
Here it is:
import json
from io import TextIOWrapper

from bottle import request, response

def json_app_rqt():
    # about the request
    request.accept = 'application/json, text/plain; charset=utf-8'

def json_app_resp():
    # about the response (_allow_origin and _allow_methods are defined elsewhere)
    response.headers['Access-Control-Allow-Origin'] = _allow_origin
    response.headers['Access-Control-Allow-Methods'] = _allow_methods
    # response.headers['Access-Control-Allow-Headers'] = _allow_headers
    response.headers['Content-Type'] = 'application/json; charset=utf-8'

def json_app():
    json_app_rqt()
    json_app_resp()

def get_json_request(rqt):
    with TextIOWrapper(rqt.body, encoding="UTF-8") as json_wrap:
        json_text = ''.join(json_wrap.readlines())
        json_data = json.loads(json_text)
    return json_data
To use it, we can do:

if __name__ == "__main__":
    json_app()

@post("/train_control/:control")
def do_train_control(control):
    json_app_resp()
    data = get_json_request(request)
    print(json.dumps(data))
    return data
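A quick, hypothetical way to exercise a route like the one above from another shell (the port, route value, and payload are only examples):

import requests

r = requests.post('http://localhost:8080/train_control/start',
                  json={"speed": 3})  # json= sets Content-Type: application/json
print(r.status_code, r.text)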
Thanks to all

Related

POSTed JSON encoding problems

I receive POSTed JSON with mod_wsgi on Apache. I have to forward the JSON to some API (using POST), take the API's response, and respond back to where the initial POST came from.
Here is the Python code:
import requests
import urllib.parse

def application(environ, start_response):
    url = "http://texchange.nowtaxi.ru/api/secret_api_key/"
    query = environ['QUERY_STRING']
    if query == "get":
        url += "tariff/list"
        r = requests.get(url)
        response_headers = [('Content-type', 'application/json')]
    else:
        url += "order/put"
        input_len = int(environ.get('CONTENT_LENGTH', '0'))
        data = environ['wsgi.input'].read(input_len)
        decoded = data.decode('utf-8')
        unquoted = urllib.parse.unquote(decoded)
        print(decoded)   # 'from%5Baddress%5D=%D0%'
        print(unquoted)  # 'from[address]=\xd0\xa0'
        r = requests.post(url, data)
        output_len = sum(len(line) for line in r.text)
        response_headers = [('Content-type', 'application/json'),
                            ('Content-Length', str(output_len))]
    status = "200 OK"
    start_response(status, response_headers)
    return [r.text.encode('utf-8')]
The actual JSON starts with {"from":{"address":"Россия
I thought those \x's are called escaped symbols, so I tried ast.literal_eval and codecs.getdecoder("unicode_escape"), but it didn't help. I can't properly google the case, because I feel like I have misunderstood what is happening here. Maybe I have to somehow change the $.post() call in the .js file that sends the POST to the wsgi script?
UPD: my bro said that it's totally unclear what I need, so I'll clarify. I need to get the string that represents the received JSON in its initial form, with Cyrillic letters, "s, {}s, etc. What I DO get after decoding the received byte sequence is 'from%5Baddress%5D=%D0%'. If I unquote it, it converts into 'from[address]=\xd0\xa0', but that's still not what I want.
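For what it's worth, the %5B/%D0% escapes look like URL (form) encoding, which suggests the client is sending application/x-www-form-urlencoded data rather than a raw JSON body. If that is the case, here is a sketch of decoding such a body on the Python side (the body shown is a made-up example):

import urllib.parse

raw = b'from%5Baddress%5D=%D0%A0%D0%BE%D1%81%D1%81%D0%B8%D1%8F'
fields = urllib.parse.parse_qs(raw.decode('utf-8'))  # unquotes percent-escapes
print(fields)  # {'from[address]': ['Россия']}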

Django request Post json

I'm trying to test a view. I receive a JSON request from the iPad; the format is:
req = {"custom_decks": [
{
"deck_name": "deck_test",
"updates_last_applied": "1406217357",
"created_date": 1406217380,
"slide_section_ids": [
1
],
"deck_id": 1
}
],
"custom_decks_to_delete": []
}
I checked this in jsonlint and it passed.
I post the req via:
response = self.client.post('/library/api/6.0/user/' + uuid + '/store_custom_dec/',
                            content_type='application/json', data=req)
The view returns "creation_success": false.
The problem is that the post method in the view doesn't find the key custom_decks, because my dict gets converted to a QueryDict with a single key:
<QueryDict: {u'{"custom_decks": [{"deck_id": 1, "slide_section_ids": [1],
"created_date": 1406217380, "deck_name": "deck_test"}],
"custom_decks_to_delete": []}': [u'']}>
I appreciate all the help.
Thanks
You're posting JSON, which is not the same as form-encoded data. You need to get the value of request.body and deserialize it:
data = json.loads(request.body)
custom_decks = data['custom_decks']
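Depending on the Django version, the test client may not serialize a dict for you when content_type='application/json', so it is safest to dump it explicitly on the client side as well; a sketch:

import json

response = self.client.post(
    '/library/api/6.0/user/' + uuid + '/store_custom_dec/',
    data=json.dumps(req),
    content_type='application/json',
)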
As I was having problems getting JSON data from HttpRequest directly with the code of the other answer:
data = json.loads(request.body)
custom_decks = data['custom_decks']
error:
the JSON object must be str, not 'bytes'
Here is an update of the other answer for Python 3 (before 3.6):
json_str = request.body.decode('utf-8')
json_obj = json.loads(json_str)
Regarding decode('utf-8'), as mentioned in RFC 4627:
"JSON text shall be encoded in Unicode. The default encoding is
UTF-8."
Here is the Python issue-tracker entry for this specific problem in Python 3:
http://bugs.python.org/issue10976
Python 3.6 and Django 2.0:
post_json = json.loads(request.body)
custom_decks = post_json.get("custom_decks")
json.loads(s, *, encoding=None,...)
Changed in version 3.6: s can now be of type bytes or bytearray. The input encoding should be UTF-8, UTF-16 or UTF-32.
From Python 3.6 there is no need for request.body.decode('utf-8').
Since HttpRequest has a read() method, loading JSON from the request is actually as simple as:
import json

from django.http import JsonResponse

def post(self, request, *args, **kwargs):
    data = json.load(request)
    return JsonResponse(data=data)
If you put this up as a view, you can test it, and it will echo back any JSON you send.
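For instance, a hypothetical check with Django's test client (the /echo/ URL is only an example and assumes the view above is routed there):

from django.test import Client

c = Client()
r = c.post('/echo/', data='{"ping": "pong"}', content_type='application/json')
print(r.json())  # {'ping': 'pong'}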

HTTPResponse object -- JSON object must be str, not 'bytes'

I've been trying to update a small Python library called libpynexmo to work with Python 3.
I've been stuck on this function:
def send_request_json(self, request):
    url = request
    req = urllib.request.Request(url=url)
    req.add_header('Accept', 'application/json')
    try:
        return json.load(urllib.request.urlopen(req))
    except ValueError:
        return False
When it gets to this, json responds with:
TypeError: the JSON object must be str, not 'bytes'
I read in a few places that for json.load you should pass objects (in this case an HTTPResponse object) with a .read() method, but that doesn't work on HTTPResponse objects.
I'm at a loss as to where to go next, but given that my entire 1500-line script has been freshly converted to Python 3, I don't feel like going back to 2.7.
Facing the same problem, I solved it using decode():
...
rawreply = connection.getresponse().read()
reply = json.loads(rawreply.decode())
I recently wrote a small function to send Nexmo messages. Unless you need the full functionality of the libpynexmo code, this should do the job for you. And if you want to continue overhauling libpynexmo, just copy this code. The key is UTF-8 encoding.
If you want to send any other fields with your message, the full documentation for what you can include with a Nexmo outbound message is here.
Python 3.4-tested Nexmo outbound (JSON):
import json
import urllib.parse
import urllib.request

def nexmo_sendsms(api_key, api_secret, sender, receiver, body):
    """
    Sends a message using Nexmo.

    :param api_key: Nexmo provided api key
    :param api_secret: Nexmo provided secret key
    :param sender: The number used to send the message
    :param receiver: The number the message is addressed to
    :param body: The message body
    :return: Returns the msgid received back from Nexmo after the message has been sent.
    """
    msg = {
        'api_key': api_key,
        'api_secret': api_secret,
        'from': sender,
        'to': receiver,
        'text': body,
    }
    nexmo_url = 'https://rest.nexmo.com/sms/json'
    data = urllib.parse.urlencode(msg)
    binary_data = data.encode('utf8')
    req = urllib.request.Request(nexmo_url, binary_data)
    response = urllib.request.urlopen(req)
    # read() instead of readall() for compatibility across Python 3 versions
    result = json.loads(response.read().decode('utf-8'))
    return result['messages'][0]['message-id']
I met the problem as well, and now it passes:
import json
import urllib.request as ur
import urllib.parse as par
html = ur.urlopen(url).read()
print(type(html))
data = json.loads(html.decode('utf-8'))
Since you are getting an HTTPResponse, you can use tornado.escape and its json_decode() to convert the JSON string into a dictionary:
from tornado import escape
body = escape.json_decode(body)
From the manual:
tornado.escape.json_decode(value)
Returns Python objects for the given JSON string.

Parsing HTTP Response in Python

I want to manipulate the information at the URL below. I can successfully open it and read its contents. But what I really want to do is throw out all the stuff I don't want, and manipulate the stuff I want to keep.
Is there a way to convert the string into a dict so I can iterate over it? Or do I just have to parse it as is (str type)?
from urllib.request import urlopen
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
print(response.read()) # returns string with info
When I printed response.read() I noticed that b was prepended to the string (e.g. b'{"a":1,..). The "b" stands for bytes and serves as a declaration of the type of the object you're handling. Since I knew that a string could be converted to a dict using json.loads('string'), I just had to convert the byte type to a string type. I did this by decoding the response as utf-8 with decode('utf-8'). Once it was a string type, my problem was solved and I was easily able to iterate over the dict.
I don't know if this is the fastest or most 'pythonic' way of writing this, but it works, and there's always time later for optimization and improvement! Full code for my solution:
from urllib.request import urlopen
import json
# Get the dataset
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
# Convert bytes to string type and string type to dict
string = response.read().decode('utf-8')
json_obj = json.loads(string)
print(json_obj['source_name']) # prints the string with 'source_name' key
You can also use python's requests library instead.
import requests

url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = requests.get(url)
data = response.json()

Now you can manipulate data like a normal Python dictionary.
json works with Unicode text in Python 3 (the JSON format itself is defined only in terms of Unicode text) and therefore you need to decode the bytes received in the HTTP response. r.headers.get_content_charset('utf-8') gets you the character encoding:
#!/usr/bin/env python3
import io
import json
from urllib.request import urlopen

with urlopen('https://httpbin.org/get') as r, \
        io.TextIOWrapper(r, encoding=r.headers.get_content_charset('utf-8')) as file:
    result = json.load(file)

print(result['headers']['User-Agent'])
It is not necessary to use io.TextIOWrapper here:
#!/usr/bin/env python3
import json
from urllib.request import urlopen

with urlopen('https://httpbin.org/get') as r:
    result = json.loads(r.read().decode(r.headers.get_content_charset('utf-8')))

print(result['headers']['User-Agent'])
TL&DR: When you typically get data from a server, it is sent in bytes. The rationale is that these bytes will need to be 'decoded' by the recipient, who should know how to use the data. You should decode the binary upon arrival to not get 'b' (bytes) but instead a string.
Use case:
import requests

def get_data_from_url(url):
    response = requests.get(url)
    response_data_split_by_line = response.content.decode('utf-8').splitlines()
    return response_data_split_by_line
In this example, I decode the received content as UTF-8. For my purposes, I then split it by line, so I can loop through each line with a for loop.
I guess things have changed in Python 3.4. This worked for me:
print("resp:" + json.dumps(resp.json()))

How to get Slurpable data from REST client in Groovy?

I have code that looks like this:
def client = new groovyx.net.http.RESTClient('myRestFulURL')
def json = client.get(contentType: JSON)
net.sf.json.JSON jsonData = json.data as net.sf.json.JSON
def slurper = new JsonSlurper().parseText(jsonData)
However, it doesn't work! :( The code above gives an error in parseText because the JSON elements are not quoted. The overriding issue is that the "data" is coming back as a Map, not as real JSON. Not shown, but on my first attempt I just passed json.data to parseText, which gives an error about not being able to parse a HashMap.
So my question is: how do I get JSON returned from the RESTClient to be parsed by JsonSlurper?
The RESTClient class automatically parses the content and it doesn't seem possible to keep it from doing so.
However, if you use HTTPBuilder you can overload the behavior. You want to get the information back as text, but if you only set the contentType as TEXT, it won't work, since HTTPBuilder uses the contentType parameter of the HTTPBuilder.get() method to determine both the Accept HTTP header to send, as well as the parsing to do on the object which is returned. In this case, you need application/json in the Accept header, but you want the parsing for TEXT (that is, no parsing).
The way you get around that is to set the Accept header on the HTTPBuilder object before calling get() on it. That overrides the header that would otherwise be set on it. The below code runs for me.
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.6')
import static groovyx.net.http.ContentType.TEXT

def client = new groovyx.net.http.HTTPBuilder('myRestFulURL')
client.setHeaders(Accept: 'application/json')
def json = client.get(contentType: TEXT)
def slurper = new groovy.json.JsonSlurper().parse(json)
The type of response from RESTClient will depend on the version of :
org.codehaus.groovy.modules.http-builder:http-builder
For example, with version 0.5.2, I was getting a net.sf.json.JSONObject back.
In version 0.7.1, it now returns a HashMap as per the question's observations.
When it's a map, you can simply access the JSON data using the normal map operations :
def jsonMap = restClientResponse.getData()
def user = jsonMap.get("user")
....
The solution posted by jesseplymale works for me, too.
HttpBuilder has dependencies on some Apache libraries, so to avoid adding those dependencies to your project, you can use this solution, which does not make use of HttpBuilder:
def jsonSlurperRequest(urlString) {
    def url = new URL(urlString)
    def connection = (HttpURLConnection) url.openConnection()
    connection.setRequestMethod("GET")
    connection.setRequestProperty("Accept", "application/json")
    connection.setRequestProperty("User-Agent", "Mozilla/5.0")
    new JsonSlurper().parse(connection.getInputStream())
}