How to display PIL Image dynamically - jinja2

I want to crop my image file using PIL library and display it using Flask and Jinja.
I have tried this code:
@bp.route('/media/<fname>')
def fetch_media(fname):
    ...
    image = Image.open(path)
    cropped_image = image.crop(box)
    return cropped_image
This gives a TypeError:
The view function did not return a valid response. The return type
must be a string, dict, tuple, Response instance, or WSGI callable,
but it was a Image.
How can I return the Image to html page?

Untested, but certainly pretty close to this:
import io
from PIL import Image
from flask import Response
....
....
buffer = io.BytesIO()
cropped_image.save(buffer, format="PNG")
return Response(buffer.getvalue(), mimetype='image/png')
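For completeness, here is a minimal sketch of how the whole view and the template reference might fit together; the blueprint name, the upload path, and the crop box are assumptions for illustration, not part of the original question:
import io

from flask import Blueprint, Response, abort
from PIL import Image

bp = Blueprint('media', __name__)

@bp.route('/media/<fname>')
def fetch_media(fname):
    # Placeholder path handling and crop box; substitute your own logic.
    path = f'/path/to/uploads/{fname}'
    box = (0, 0, 200, 200)  # left, upper, right, lower
    try:
        image = Image.open(path)
    except FileNotFoundError:
        abort(404)
    cropped_image = image.crop(box)
    # Serialize the cropped image to PNG bytes in memory and return those
    # bytes as an image/png response instead of the PIL object itself.
    buffer = io.BytesIO()
    cropped_image.save(buffer, format="PNG")
    return Response(buffer.getvalue(), mimetype='image/png')
In the Jinja template the cropped image can then be referenced like any other URL, e.g. <img src="{{ url_for('media.fetch_media', fname=fname) }}">.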

Related

How to not only open a HTML file from python CGI script but also pass data like strings and JSON files to HTML script?

I've been trying to figure out how to do this for a while without using a framework (I can make it work with Flask, for example), but I haven't found anything yet. I have two HTML files and a Python CGI script. In essence, the user enters a string in the first HTML file, which I read in my CGI script; the script then does a number of things and finally gives me a bunch of strings and a JSON file that I need to pass to a second HTML file and be able to read there as well.
So far the first half works, and I can open the second HTML file with a redirect, which is not elegant, but nothing else has worked. Here is the code:
#!/Users/<username>/opt/anaconda3/bin/python
import pandas as pd
import numpy as np
import cgi
import cgitb
import sys

cgitb.enable()

# Create instance of FieldStorage
form = cgi.FieldStorage()

# Get data from fields
protein_name = form.getvalue('protein_name')

####### function search_results takes in protein_name and gives me the data ###
####### I need to pass to the html file: results.html #########################
if ((search_results(protein_name) != "No protein entered") & (search_results(protein_name) != "No results found")):
    all_vars = search_results(protein_name)
    ##### all_vars is a tuple of strings like gene_name, json files and integers

print("Content-type: text/html", "\n\n")
print('''
<head><meta http-equiv="refresh" content="0;URL='http://localhost/results.html'" /></head>
''')
Any suggestions on how to proceed? Any help is appreciated, thanks!
As you have probably discovered, your current technique issues a redirect to 'results.html', but otherwise discards any results. I don't know exactly what your goals are, but one approach would be to treat 'results.html' as a simple template. Your script will populate it and return it in response to each request. In the example below, 'results.html' can contain arbitrary HTML, along with the line '##RESULTS##', which will be replaced with your output.
#!/Users/<username>/opt/anaconda3/bin/python
import sys
import html
import pandas as pd
import numpy as np
import cgi
import cgitb

cgitb.enable()

def process_results(results):
    if results == 'No protein entered' or results == 'No results found':
        return results
    # else do something with results, e.g., format into an HTML table
    buf = '<table>\n'
    for result in results:
        # html.escape replaces the deprecated cgi.escape (removed in Python 3.8)
        buf += f'<tr><td>{html.escape(str(result))}</td></tr>\n'
    buf += '</table>\n'
    return buf

form = cgi.FieldStorage()
protein_name = form.getvalue('protein_name')
results = search_results(protein_name)  # your existing lookup function

print("Content-type: text/html\n")

with open('results.html') as template:
    for line in (x.rstrip() for x in template):
        if line == '##RESULTS##':
            print(process_results(results))
        else:
            print(line)
Good luck.

How to navigate through a json file with Python 3? TypeError: list indices must be integers or slices, not str

I am trying to get as many profile links as I can on khanacademy.org. I am using their API.
I am struggling to navigate through the JSON response to get the desired data.
Here is my code :
from urllib.request import urlopen
import json

with urlopen("https://www.khanacademy.org/api/internal/discussions/video/what-are-algorithms/questions?casing=camel&limit=10&page=0&sort=1&lang=en&_=190422-1711-072ca2269550_1556031278137") as response:
    source = response.read()

data = json.loads(source)

for item in data['feedback']:
    print(item['authorKaid'])
    profile_answers = item['answers']['authorKaid']
    print(profile_answers)
My goal is to get as many authorKaid values as possible and then store them (to create a database later).
When I run this code I get this error :
TypeError: list indices must be integers or slices, not str
I don't understand why; in this tutorial video (https://www.youtube.com/watch?v=9N6a-VLBa2I, at 16:10) it works.
The issue is that item['answers'] is a list, and you are trying to index it with a string rather than an integer. So when you try to get item['answers']['authorKaid'], you get that error.
What you really want is
print (item['answers'][0]['authorKaid'])
print (item['answers'][1]['authorKaid'])
print (item['answers'][2]['authorKaid'])
etc...
So you actually want to iterate through those lists. Try this:
from urllib.request import urlopen
import json

with urlopen("https://www.khanacademy.org/api/internal/discussions/video/what-are-algorithms/questions?casing=camel&limit=10&page=0&sort=1&lang=en&_=190422-1711-072ca2269550_1556031278137") as response:
    source = response.read()

data = json.loads(source)

for item in data['feedback']:
    print(item['authorKaid'])
    for each in item['answers']:
        profile_answers = each['authorKaid']
        print(profile_answers)
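Since the stated goal is to collect as many authorKaid values as possible for later storage, a small extension of the loop above could gather the question authors and the answer authors into a set. This is only a sketch: paging through results by changing the page query parameter is an assumption based on the URL, and the cache-busting _= parameter has been dropped.
from urllib.request import urlopen
import json

BASE_URL = ("https://www.khanacademy.org/api/internal/discussions/video/"
            "what-are-algorithms/questions?casing=camel&limit=10&sort=1&lang=en&page={page}")

author_kaids = set()

for page in range(3):  # assumption: fetch the first few pages
    with urlopen(BASE_URL.format(page=page)) as response:
        data = json.loads(response.read())

    for item in data['feedback']:
        author_kaids.add(item['authorKaid'])
        for answer in item['answers']:
            author_kaids.add(answer['authorKaid'])

print(len(author_kaids), "unique authors collected")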

Read JSON response in Python

I am trying to read the JSON response from this link, but it's not working! I get the following error:
ValueError: No JSON object could be decoded.
Here is the code I've tried:
import urllib2, json
a = urllib2.urlopen('https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?key=AIzaSyDkEX-f1JNLQLC164SZaobALqFv4PHV-kA&screenshot=true&snapshots=true&locale=en_US&url=https://www.economicalinsurance.com/en/&strategy=mobile&filter_third_party_resources=false&callback=_callbacks_._DElanZU7Xh1K')
data = json.loads(a)
I made these changes:
import requests, json
r=requests.get('https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?key=AIzaSyDkEX-f1JNLQLC164SZaobALqFv4PHV-kA&screenshot=true&snapshots=true&locale=en_US&url=https://www.economicalinsurance.com/en/&strategy=mobile&filter_third_party_resources=false')
json_data = json.loads(r.text)
print json_data['ruleGroups']['USABILITY']['score']
A quick question about constructing the image link.
I was able to get this far:
from selenium import webdriver

txt = json_data['screenshot']['data']
txt = str(txt).replace('-', '/').replace('_', '/')
# then, in order to construct the image link, I tried:
image_link = 'data:image/jpeg;base64,' + txt
driver = webdriver.Firefox()
driver.get(image_link)
The problem is I am not getting the image; also, len(object_original) differs from len(image_link). Could anybody please advise what is missing in my constructed image link? Thank you.
Here is the API link - https://www.google.co.uk/webmasters/tools/mobile-friendly/ (sorry, I added it late).
Two corrections need to be made to your code:
The URL needs to be corrected (as mentioned by Felix Kling here): you have to remove the callback parameter from the GET request you were sending.
Also, if you check the type of the response you were fetching earlier, you'll notice that it wasn't a string; it was <type 'instance'>. And since json.loads() accepts a string as its parameter, you would've got another error. Therefore, use a.read() to fetch the response data as a string.
Hence, this should be your code:
import urllib2, json
a = urllib2.urlopen('https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?key=AIzaSyDkEX-f1JNLQLC164SZaobALqFv4PHV-kA&screenshot=true&snapshots=true&locale=en_US&url=https://www.economicalinsurance.com/en/&strategy=mobile&filter_third_party_resources=false')
data = json.loads(a.read())
The answer to your second query (regarding the image) is:
from base64 import decodestring

arr = json_data['screenshot']['data']
arr = arr.replace("_", "/")
arr = arr.replace("-", "+")

with open("imageToSave.jpeg", "wb") as fh:
    fh.write(decodestring(arr))
Here is the image you were trying to fetch - Link
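For reference, if this is done in Python 3 instead, the same URL-safe base64 decoding can be written with the standard base64 module. This is only a sketch; the padding fix-up is an assumption in case the API returns unpadded data:
import base64

arr = json_data['screenshot']['data']
# urlsafe_b64decode understands '-' and '_' directly, so no manual replace is needed
arr += '=' * (-len(arr) % 4)  # pad to a multiple of 4 in case padding was stripped
with open("imageToSave.jpeg", "wb") as fh:
    fh.write(base64.urlsafe_b64decode(arr))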
Felix Kling is right about the address, but I also created a variable that holds the URL. You can try this out too, and it should work:
import urllib2, json
url = "https://www.googleapis.com/pagespeedonline/v3beta1/mobileReady?key=AIzaSyDkEX-f1JNLQLC164SZaobALqFv4PHV-kA&screenshot=true&snapshots=true&locale=en_US&url=https://www.economicalinsurance.com/en/&strategy=mobile&filter_third_party_resources=false"
response = urllib2.urlopen(url)
data = json.loads(response.read())
print data

Django send object as json

Is there a way to send an object_list made with Paginator as JSON (or anything else other than render)? The browser makes a jQuery getJSON request, and the views.py function is supposed to return the object. The reason I want to return a JSON object rather than render a new page is that I don't want the page to reload.
The following views.py code:
searchresults = form.search()  # this is a call to a haystack form template
results = Paginator(searchresults, 20)
page = results.page(1)
return HttpResponse(json.dumps(page), content_type='application/json')
Gets this error:
TypeError: <Page 1 of 1> is not JSON serializable
Just use Django serialization: https://docs.djangoproject.com/en/dev/topics/serialization/
from django.core import serializers
...
return HttpResponse(serializers.serialize("json", [q.object for q in results.page(1).object_list]), content_type='application/json')
You need to create a dictionary that is serializable, as @evilx commented, or build your own JSON by hand.
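For example, here is a minimal sketch of the hand-built-dictionary approach using Django's JsonResponse (available since Django 1.7). The fields pulled off each search result are assumptions for illustration, and form is assumed to be the bound haystack search form from earlier in the view:
from django.core.paginator import Paginator
from django.http import JsonResponse

def search_view(request):
    searchresults = form.search()  # the same haystack search as in the question
    results = Paginator(searchresults, 20)
    page = results.page(1)

    payload = {
        'page': page.number,
        'num_pages': results.num_pages,
        'objects': [
            # Keep only plain, JSON-serializable fields from each result.
            {'pk': r.pk, 'title': str(r.object)}
            for r in page.object_list
        ],
    }
    return JsonResponse(payload)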

Parsing HTTP Response in Python

I want to manipulate the information at THIS url. I can successfully open it and read its contents. But what I really want to do is throw out all the stuff I don't want, and to manipulate the stuff I want to keep.
Is there a way to convert the string into a dict so I can iterate over it? Or do I just have to parse it as is (str type)?
from urllib.request import urlopen
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
print(response.read()) # returns string with info
When I printed response.read() I noticed that b was prepended to the string (e.g. b'{"a":1,..). The "b" stands for bytes and indicates the type of the object you're handling. Since I knew that a string could be converted to a dict using json.loads('string'), I just had to convert the bytes to a string, which I did by decoding the response as UTF-8 with decode('utf-8'). Once it was a string, my problem was solved and I was easily able to iterate over the dict.
I don't know if this is the fastest or most 'pythonic' way of writing this, but it works, and there's always time later for optimization and improvement! Full code for my solution:
from urllib.request import urlopen
import json
# Get the dataset
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = urlopen(url)
# Convert bytes to string type and string type to dict
string = response.read().decode('utf-8')
json_obj = json.loads(string)
print(json_obj['source_name']) # prints the string with 'source_name' key
You can also use Python's requests library instead.
import requests
url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = requests.get(url)
data = response.json()  # parse the JSON body directly (and avoid shadowing the built-in dict)
Now you can manipulate data like a Python dictionary.
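As a small follow-up usage note, it can be worth checking the HTTP status before parsing; a short sketch along the same lines:
import requests

url = 'http://www.quandl.com/api/v1/datasets/FRED/GDP.json'
response = requests.get(url)
response.raise_for_status()  # raises requests.HTTPError for 4xx/5xx responses
data = response.json()
print(data['source_name'])  # same key used in the urllib example above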
json works with Unicode text in Python 3 (the JSON format itself is defined only in terms of Unicode text), and therefore you need to decode the bytes received in the HTTP response. r.headers.get_content_charset('utf-8') gets you the character encoding:
#!/usr/bin/env python3
import io
import json
from urllib.request import urlopen

with urlopen('https://httpbin.org/get') as r, \
     io.TextIOWrapper(r, encoding=r.headers.get_content_charset('utf-8')) as file:
    result = json.load(file)

print(result['headers']['User-Agent'])
It is not necessary to use io.TextIOWrapper here:
#!/usr/bin/env python3
import json
from urllib.request import urlopen

with urlopen('https://httpbin.org/get') as r:
    result = json.loads(r.read().decode(r.headers.get_content_charset('utf-8')))

print(result['headers']['User-Agent'])
TL;DR: When you get data from a server, it is typically sent as bytes. The rationale is that these bytes will need to be decoded by the recipient, who should know how to use the data. You should decode the binary data upon arrival so that you end up with a string rather than bytes (the b'' prefix).
Use case:
import requests

def get_data_from_url(url):
    # Decode the response bytes as UTF-8, then split into lines.
    response = requests.get(url)
    response_data_split_by_line = response.content.decode('utf-8').splitlines()
    return response_data_split_by_line
In this example, I decode the content I received from UTF-8 bytes into a string. For my purposes, I then split it by line so I can loop through each line with a for loop.
I guess things have changed in Python 3.4. This worked for me:
print("resp:" + json.dumps(resp.json()))