As per the qrcode docs, I generate an SVG QR code:
import qrcode
import qrcode.image.svg
def get_qrcode_svg(uri):
    img = qrcode.make(uri, image_factory=qrcode.image.svg.SvgImage)
    return img
And that's all fine. I get a <qrcode.image.svg.SvgImage object at 0x7f94d84aada0> object.
I'd like to display this now in Django HTML. I pass this object as context (as qrcode_svg) and try to display it with <img src="{{qrcode_svg}}"/>, but I don't really get anywhere with this. The error shows it's trying to fetch the image by URL, but isn't there a way I can do this without saving the image to disk? Terminal output:
>>> UNKNOWN ?????? 2020-06-16 07:38:28.295038 10.0.2.2 GET
/user/<qrcode.image.svg.SvgImage object at 0x7f94d84aada0>
Not Found: /user/<qrcode.image.svg.SvgImage object at 0x7f94d84aada0>
"GET /user/%3Cqrcode.image.svg.SvgImage%20object%20at%200x7f94d84aada0%3E HTTP/1.1" 404 32447
You can write it to the response stream:
import qrcode
from qrcode.image.svg import SvgImage
from django.http import HttpResponse
from io import BytesIO

def get_qrcode_svg(uri):
    # Save the SVG into an in-memory buffer instead of a file
    stream = BytesIO()
    img = qrcode.make(uri, image_factory=SvgImage)
    img.save(stream)
    return stream.getvalue().decode()
This returns the SVG source itself, not a URI pointing to it. In the template, you thus render it with:
{{ qrcode_svg|safe }}
To solve this I transformed the qrcode.image.svg.SvgImage into a base64 string, which can then be used as <img src="{{variable}}"> in HTML.
import io
import base64
from qrcode import make as qr_code_make
from qrcode.image.svg import SvgPathFillImage
def get_qr_image_for_user(qr_url: str) -> str:
    svg_image_obj = qr_code_make(qr_url, image_factory=SvgPathFillImage)
    image = io.BytesIO()
    svg_image_obj.save(stream=image)
    base64_image = base64.b64encode(image.getvalue()).decode()
    return 'data:image/svg+xml;base64,' + base64_image
If this is not a good solution I would love some feedback. Cheers
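The encoding step itself can be sanity-checked without qrcode or Django installed; here is a minimal stdlib sketch, using a stand-in SVG string rather than real qrcode output (svg_to_data_uri is a made-up helper name):

```python
import base64

def svg_to_data_uri(svg_bytes: bytes) -> str:
    # Base64-encode the raw SVG and prepend the data-URI header
    return 'data:image/svg+xml;base64,' + base64.b64encode(svg_bytes).decode()

# Stand-in for the bytes a real SvgImage.save() would produce
svg = b'<svg xmlns="http://www.w3.org/2000/svg"></svg>'
uri = svg_to_data_uri(svg)
```

The resulting string can be dropped straight into an img src attribute; the browser decodes the base64 payload back into the SVG.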
Related
I want to load the images using PyTorch.
I have a dataset of image URLs with their corresponding labels (the offer_id values are the labels).
Is there an efficient way of doing this in PyTorch?
This should work if the image URL is public, using Pillow, requests, and torchvision together:
from PIL import Image
import requests
import torchvision.transforms as transforms

url = "https://example.jpg"  # replace with a real image URL
image = Image.open(requests.get(url, stream=True).raw)
transform = transforms.Compose([
    transforms.PILToTensor()])
torch_image = transform(image)
You can use the requests package:
import requests
from PIL import Image
import io

response = requests.get(df1.URL[0]).content
im = Image.open(io.BytesIO(response))
You can convert your image URLs to files first by downloading them into a folder per label. You will certainly find a way to do so. Then you can do the following to check what you have:
%%time
import glob
f=glob.glob('/content/imgs/**/*.png')
print(len(f), f)
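Laying the downloaded files out as one folder per label (the layout DatasetFolder expects) can be sketched with the stdlib alone; arrange_by_label and the label name below are made up for illustration:

```python
import os
import shutil

def arrange_by_label(files_with_labels, root):
    # Copy each file into root/<label>/, the directory layout
    # torchvision.datasets.DatasetFolder expects
    for path, label in files_with_labels:
        label_dir = os.path.join(root, str(label))
        os.makedirs(label_dir, exist_ok=True)
        shutil.copy(path, label_dir)
```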
You need to create an image loader that reads the image from disk; here, the pil_loader:
from PIL import Image
import torchvision

def pil_loader(path):
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGB')

ds = torchvision.datasets.DatasetFolder('/content/imgs',
                                        loader=pil_loader,
                                        extensions=('.png',),
                                        transform=t)
print(ds)
You may check how I did that for Cifar10.
Check the section "From PNGs to dataset".
For some reason this code says it has downloaded my picture, but nothing shows up in the directory. I thought it might be because i.redd.it files can't be accessed where I live, so I used a proxy, but this still did not fix the problem.
This is my code:
import json
import urllib.request
proxy = urllib.request.ProxyHandler({'http': '176.221.34.7'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)
with open('/Users/eucar/Documents/Crawler/Crawler/Crawler/image_links.json') as images:
    images = json.load(images)
    for idx, image_url in enumerate(images):
        try:
            image_url = image_url.strip()
            file_name = '/Users/eucar/Desktop/Instagrammemes/{}.{}'.format(
                idx, image_url.strip().split('.')[-1])
            print('About to download {} to file {}'.format(image_url, file_name))
            urllib.request.urlretrieve(image_url, file_name)
        except:
            print("All done")
This is the json file:
["https://i.redd.it/9q7r48kd2dh21.jpg",
"https://i.redd.it/yix3rq5t5dh21.jpg",
"https://i.redd.it/1vm3bd2vvch21.jpg",
"https://i.redd.it/wy7uszuigch21.jpg",
"https://i.redd.it/4gunzkkghch21.jpg",
"https://i.redd.it/4sd2hbe5sch21.jpg", "https://i.redd.it/bv3qior3ybh21.jpg"]
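A likely culprit in the code above is the bare except, which swallows every failure and prints "All done". Here is a sketch of the same loop with real error reporting (download_images is a made-up helper name; the destination directory is a parameter):

```python
import urllib.request

def download_images(urls, dest_dir):
    # Download each URL into dest_dir; report failures instead of hiding them
    saved = []
    for idx, image_url in enumerate(urls):
        image_url = image_url.strip()
        ext = image_url.rsplit('.', 1)[-1]
        file_name = '{}/{}.{}'.format(dest_dir, idx, ext)
        try:
            urllib.request.urlretrieve(image_url, file_name)
            saved.append(file_name)
        except Exception as exc:  # surface the real error instead of "All done"
            print('Failed to download {}: {}'.format(image_url, exc))
    return saved
```

With the error surfaced, a blocked host or a bad proxy shows up immediately instead of failing silently.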
I have a Django view that renders a list of uploaded files, and the user can click on them to begin the download.
When the project was deployed, we found there is one file that browsers open instead of downloading. It seems related to the .dxf extension.
This is how the link is created:
...
As a result:
http://localhost:8003/media/folder/whatever.dxf
So why does the same browser behave differently? If I run it on localhost, it downloads the file, but on the real server it opens it. Can I prevent the server from opening these files in the browser?
You can try adding a new Django view that handles the download.
urls.py
from django.conf.urls import url
import views

urlpatterns = [
    url(r'^download/$', views.DownloadView.as_view(), name='download')
]
views.py
import urllib.request
from django.http import HttpResponse
from django.views.generic.base import View

class DownloadView(View):
    def get(self, request):
        location = request.GET.get('location')
        # urlretrieve returns (local_filename, headers)
        file_name, headers = urllib.request.urlretrieve(location)
        contents = open(file_name, 'rb')
        content_type = headers.get_content_type()
        response = HttpResponse(contents, content_type=content_type)
        response['Content-Disposition'] = 'attachment; filename="%s"' % location.split('/')[-1]
        return response
template.html
...
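The piece that actually forces the download in the view above is the Content-Disposition header; building its value can be isolated into a tiny helper (attachment_header is a made-up name for illustration):

```python
def attachment_header(location: str) -> str:
    # 'attachment' tells the browser to save the file rather than render it
    filename = location.split('/')[-1]
    return 'attachment; filename="%s"' % filename
```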
I'm trying to search a webpage (http://www.phillyhistory.org/historicstreets/). I think the relevent source html is this:
<input name="txtStreetName" type="text" id="txtStreetName">
You can see the rest of the source HTML at the website. I want to go into that text box, enter a street name, and download the output (e.g. enter 'Jefferson' in the search box of the page and see historic street names with Jefferson). I have tried using requests.post, and tried appending ?get=Jefferson to the URL to test if that works, with no luck. Anyone have any ideas how to get this page? Thanks,
Cameron
Code that I currently tried (some imports are unused, as I plan to parse the results later):
import requests
from bs4 import BeautifulSoup
import csv
from string import ascii_lowercase
import codecs
import os.path
import time

arrayofstreets = []
arrayofstreets = ['Jefferson']
for each in arrayofstreets:
    url = 'http://www.phillyhistory.org/historicstreets/default.aspx'
    payload = {'txtStreetName': each}
    r = requests.post(url, data=payload).content
    outfile = "raw/" + each + ".html"
    with open(outfile, "w") as code:
        code.write(r)
    time.sleep(2)
This did not work and only gave me the default webpage (i.e. as if 'Jefferson' had never been entered in the search bar).
I'm guessing your reference to 'requests.post' relates to the requests module for Python.
As you have not specified what you want to scrape from the search results I will simply give you a snippet to get the html for a given search query:
import requests
query = 'Jefferson'
url = 'http://www.phillyhistory.org/historicstreets/default.aspx'
post_data = {'txtStreetName': query}
html_result = requests.post(url, data=post_data).content
print(html_result)
If you need to further process the html file to extract some data, I suggest you use the Beautiful Soup module to do so.
UPDATED VERSION:
#!/usr/bin/python
import requests
from bs4 import BeautifulSoup
import csv
from string import ascii_lowercase
import codecs
import os.path
import time

def get_post_data(html_soup, query):
    # The ASP.NET hidden fields must be echoed back with every POST
    view_state = html_soup.find('input', {'name': '__VIEWSTATE'})['value']
    event_validation = html_soup.find('input', {'name': '__EVENTVALIDATION'})['value']
    btn_search = 'Find'
    return {'__VIEWSTATE': view_state,
            '__EVENTVALIDATION': event_validation,
            'Textbox1': '',
            'txtStreetName': query,
            'btnSearch': btn_search
            }

arrayofstreets = ['Jefferson']
url = 'http://www.phillyhistory.org/historicstreets/default.aspx'
html = requests.get(url).content
for each in arrayofstreets:
    payload = get_post_data(BeautifulSoup(html, 'lxml'), each)
    r = requests.post(url, data=payload).content
    outfile = "raw/" + each + ".html"
    with open(outfile, "wb") as code:
        code.write(r)
    time.sleep(2)
The problem in my/your first version was that we weren't posting all the required parameters. To find out what you need to send, open the network monitor in your browser (Ctrl+Shift+Q in Firefox) and perform the search as you normally would. If you select the POST request in the network log, on the right you should see a 'Parameters' tab listing the POST parameters your browser sent.
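Extracting the hidden ASP.NET fields does not strictly require BeautifulSoup; here is a stdlib sketch with html.parser, run on a made-up form snippet (the field values 'abc' and 'xyz' are invented):

```python
from html.parser import HTMLParser

class HiddenInputCollector(HTMLParser):
    """Collect name/value pairs of <input type="hidden"> fields."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'input' and a.get('type') == 'hidden':
            self.fields[a.get('name')] = a.get('value', '')

html = ('<form><input type="hidden" name="__VIEWSTATE" value="abc"/>'
        '<input type="hidden" name="__EVENTVALIDATION" value="xyz"/></form>')
collector = HiddenInputCollector()
collector.feed(html)
```

The collected dict can then be merged with the visible form fields before posting.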
I am using this code with url=http://money.moneygram.com.au/forex-tools/currency-converter-widget-part
from __future__ import absolute_import
#import __init__
#from scrapy.spider import BaseSpider
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import Request
from scrapy.http import FormRequest
from scrapy.http import Response
from scrapy.selector import HtmlXPathSelector
import MySQLdb

class DmozSpider(CrawlSpider):
    name = "moneygram"
    allowed_domains = ["moneygram.com"]
    start_urls = ["http://money.moneygram.com.au/forex-tools/currency-converter-widget-part"]

    def parse(self, response):
        # yield FormRequest.from_response(response, formname='firstSelector',
        #     formdata="FromCurrency=USD&ToCurrency=INR&FromCurrency_dropDown=USD&ToCurrency_dropDown=INR",
        #     callback=self.parse1)
        # request_with_cookies = Request(url="http://money.moneygram.com.au",
        #     cookies={'FromCurrency': 'USD', 'ToCurrency': 'INR'}, callback=self.parse1)
        yield FormRequest.from_response(
            response, formname=None, formnumber=0, formpath=None,
            formdata="FromCurrency=AED&ToCurrency=VND&FromCurrency_dropDown=AED&ToCurrency_dropDown=VND&FromAmount=2561&ToAmount=&X-Requested-With=XMLHttpRequest",
            callback=self.parse1)
to send the form data as required, but it gives this error:
raise ValueError("No <form> element found in %s" % response)
exceptions.ValueError: No <form> element found in <200 http://money.moneygram.com.au/forex-tools/currency-converter-widget-part>
How can I convert from USD to INR?
The page you mentioned is not valid HTML. It looks like an SSI block that should be part of another page.
You get the error because this page uses JavaScript heavily and does not contain a <form> element.
FormRequest.from_response tries to build the request from an existing form, but the form is not there.
You should either work with the whole page, or build the FormRequest manually, filling its attributes yourself and submitting it to the URL taken from the whole page.
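Note also that formdata should be a dict, not a query string. If you end up building the POST manually, encoding the fields looks like this (field names are taken from the question; stdlib only, no scrapy required):

```python
from urllib.parse import urlencode

# Form fields as a dict, the same shape FormRequest's formdata expects
form_fields = {
    'FromCurrency': 'USD',
    'ToCurrency': 'INR',
    'FromCurrency_dropDown': 'USD',
    'ToCurrency_dropDown': 'INR',
    'FromAmount': '2561',
}
body = urlencode(form_fields)
```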