My database contains an image (BLOB) column named LOGO. I want to display the image in a Python Tkinter window.
import Tkinter
from Tkinter import Tk, PhotoImage  # Tk and PhotoImage are used unqualified below
import MySQLdb

window = Tk()
db = MySQLdb.connect("localhost", "root", "anup", "NursecallDB")
cursor=db.cursor()
sql= "SELECT LOGO FROM SYSTEMDETAILS"
cursor.execute(sql)
logo=cursor.fetchone()
img =PhotoImage(logo)
panel = Tkinter.Label(window, image = img)
panel.grid(row=0,rowspan=5,columnspan=2)
window.mainloop()
When I run this program it shows an error at
panel = Tkinter.Label(window, image = img)
TypeError: __str__ returned non-string (type tuple)
Upload the image to a folder and insert its path into the database instead. If logo is the path of the image, then the program will work.
PhotoImage takes a string argument, which is the filename of the image you want to load, or a PIL image object. It cannot take a blob as an argument. What you need is to load the image from a buffer (see the method here: http://effbot.org/imagingbook/image.htm) and then pass the image to the PhotoImage constructor.
If you, like me, don't want to pass the image through the disk, this is what the code looks like for Python 3 (following DARK_DUCK's suggestion):
from io import BytesIO
from PIL import Image, ImageTk
...
row = cursor.fetchone()  # fetchone returns a tuple, so take the first column
img = Image.open(BytesIO(row[0]))
phimg = ImageTk.PhotoImage(img)
panel = tkinter.Label(window, image = phimg)
panel.grid(row=0, rowspan=5, columnspan=2)
I think that for python 2.7 you'll have to use StringIO instead of BytesIO. See:
Python PIL reading PNG from STDIN
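For reference, a minimal self-contained sketch of the buffer-loading step, leaving out the Tk window so it runs anywhere; the synthetic red image below stands in for the BLOB bytes you would fetch from the database:

```python
from io import BytesIO
from PIL import Image

def blob_to_image(blob_bytes):
    """Decode raw image bytes (e.g. a BLOB column value) into a PIL Image."""
    return Image.open(BytesIO(blob_bytes))

# Demo with synthetic bytes: encode a small image to PNG in memory,
# then decode it back as if it had come from the database.
buf = BytesIO()
Image.new("RGB", (4, 4), "red").save(buf, format="PNG")
img = blob_to_image(buf.getvalue())
print(img.size)  # (4, 4)
```

To show it in a window you would then wrap the result in ImageTk.PhotoImage exactly as in the answer above.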
Related
I am trying to create a PDF document using a series of PDF images and a series of CSV tables using the python package reportlab. The tables are giving me a little bit of grief.
This is my code so far:
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from reportlab.lib.pagesizes import letter
from reportlab.platypus import SimpleDocTemplate
from reportlab.pdfgen.canvas import Canvas
from reportlab.platypus import *
from reportlab.platypus.tables import Table
from PIL import Image
from matplotlib.backends.backend_pdf import PdfPages
# Set the path to the folder containing the images and tables
folder_path = 'Files'
# Create a new PDF document
pdf_filename = 'testassessment.pdf'
canvas = Canvas(pdf_filename)
# Iterate through the files in the folder
for file in os.listdir(folder_path):
    file_path = os.path.join(folder_path, file)
    # If the file is an image, draw it on the PDF
    if file.endswith('.png'):
        canvas.drawImage(file_path, 105, 148.5, width=450, height=400)
        canvas.showPage()  # ends the page
    # If the file is a table, draw it on the PDF
    elif file.endswith('.csv'):
        df = pd.read_csv(file_path)
        table = df.to_html()
        canvas.drawString(10, 10, table)
        canvas.showPage()
# Save the PDF
canvas.save()
The tables are not working. When I use .drawString it ends up looking like this:
Does anyone know how I can get the table to be properly inserted into the PDF?
According to the reportlab docs (page 14), "The draw string methods draw single lines of text on the canvas." You might want to have a look at "The text object methods" on the same page.
You might want to consider using PyMuPDF with Stories; it allows for more flexibility of layout from a data input. For an example of something very similar to what you are trying to achieve see: https://pymupdf.readthedocs.io/en/latest/recipes-stories.html#how-to-display-a-list-from-json-data
Recently I have wanted to make a tool for table recognition. I have tried Tesseract OCR, but I can't get any output. Can anyone give me an answer?
Highly recommend paddleocr for table recognition! It can output a text file and an Excel file using just a few lines of code.
import os
import cv2
from paddleocr import PPStructure, save_structure_res

table_engine = PPStructure(layout=False, show_log=True, use_gpu=False)
save_folder = './output'
img_path = 'PaddleOCR_pub/ppstructure/docs/table/table.jpg'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])
for line in result:
    line.pop('img')
    print(line)
The output files are as follows, which can help you more.
you can experience it here: https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.6/ppstructure/docs/quickstart_en.md#214-table-recognition
I have a Python script where I extract data from a website
https://www.ema.europa.eu/en/search/search/field_ema_web_topics%253Aname_field/Scientific%20guidelines/field_ema_web_categories%253Aname_field/Human?sort=field_ema_computed_date_field&order=desc
and save only the "ecl-list-item__title ecl-heading" and "small" into excel for all 670 results.
Currently I have a script that is meant to save the collected information in an Excel file after looping over the URL, but no results are appended. Please find below the script I'm currently stuck on, and help me with changes to make it work.
import requests
import bs4
from bs4 import BeautifulSoup
import pandas as pd

# Input web URL
URL = "https://www.ema.europa.eu/en/search/search/field_ema_web_topics%253Aname_field/Scientific%20guidelines/field_ema_web_categories%253Aname_field/Human?sort=field_ema_computed_date_field&order=desc"
result = requests.get(URL)
# Creating soup object
soup = bs4.BeautifulSoup(result.text, 'lxml')
# Searching div tags having the sitewide-search result class
cases = soup.find_all('div', class_='view view-search-solr-sitewide-search view-id-search_solr_sitewide_search view-display-id-ema_sitewide_search view-dom-id-99ccfcd90732eb90b270257a1c29fd39 jquery-once-1-processed')
data = []
# Get the title text out of each result block
for i in cases:
    span = i.find('div', class_='ecl-list-item__title ecl-heading')
    data.append(span.text)
# Display the collected titles
print(data)
Please let me know if you need further clarification.
Thanks
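A minimal offline sketch of the extraction step may help here. The inline HTML below is an assumption mimicking the EMA result markup, using the class names from the question (the live page's structure may differ), and ema_results.csv is a hypothetical output name:

```python
import pandas as pd
from bs4 import BeautifulSoup

# Inline sample standing in for the fetched page (hypothetical markup).
html = """
<div class="ecl-list-item__title ecl-heading">Guideline A</div>
<p class="small">First published: 01/01/2020</p>
<div class="ecl-list-item__title ecl-heading">Guideline B</div>
<p class="small">First published: 02/02/2021</p>
"""
soup = BeautifulSoup(html, "html.parser")
# Collect the two fields the question asks for.
titles = [t.get_text(strip=True)
          for t in soup.find_all(class_="ecl-list-item__title ecl-heading")]
dates = [d.get_text(strip=True) for d in soup.find_all("p", class_="small")]
# Pair them up and write them out.
df = pd.DataFrame({"title": titles, "date": dates})
df.to_csv("ema_results.csv", index=False)
print(df)
```

For all 670 results you would repeat this per page of the paginated search (e.g. by appending a page parameter to the URL) and concatenate the frames before saving.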
I am trying to make sure that the relative links are saved as absolute links into this CSV. (URL parse) I am also trying to remove duplicates, which is why I created the variable "ddupe".
I keep getting all the relative URLs saved when I open the CSV on the desktop.
Can someone please help me figure this out? I thought about calling the "set" just like this page: How do you remove duplicates from a list whilst preserving order?
#Importing the request library to make HTTP requests
#Importing the bs4 library to extract / parse html and xml files
#utlize urlparse to change relative URL to absolute URL
#import csv (built in package) to read / write to Microsoft Excel
from bs4 import BeautifulSoup
import requests
from urllib.parse import urlparse
import csv
#create the page variable
#associate page to request to obtain the information from raw_html
#store the html information in a text
page = requests.get('https://www.census.gov/programs-surveys/popest.html')
parsed = urlparse(page)
raw_html = page.text # declare the raw_html variable
soup = BeautifulSoup(raw_html, 'html.parser') # parse the html
#remove duplicate htmls
ddupe = open('page.text', 'r').readlines()
ddupe_set = set(ddupe)
out = open('page.text', 'w')
for ddupe in ddupe_set:
    out.write(ddupe)
T = [["US Census Bureau Links"]] #Title
#Finds all the links
links = map(lambda link: link['href'], soup.find_all('a', href=True))
with open("US_Census_Bureau_links.csv", "w", newline="") as f:
    cw = csv.writer(f, quoting=csv.QUOTE_ALL)  # Create a csv writer on the file handle
    cw.writerows(T)  # Writes the title
    for link in links:  # Writes the links to the csv
        cw.writerow([link])
f.close()  # closes the file
The function you're looking for is urljoin, not urlparse (both from the same package urllib.parse). It should be used somewhere after this line:
links = map(lambda link: link['href'], soup.find_all('a', href=True))
Use a list comprehension or map + lambda like you did here to join the relative URLs with base paths.
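A minimal sketch of that joining-plus-dedup step (the href list below is made up for illustration):

```python
from urllib.parse import urljoin

base = "https://www.census.gov/programs-surveys/popest.html"
hrefs = ["/data/tables.html", "https://example.com/x", "popest/about.html"]

# urljoin resolves relative hrefs against the page URL and
# leaves absolute URLs untouched.
absolute = [urljoin(base, h) for h in hrefs]
# Deduplicate while preserving order (dict keys keep insertion order).
unique = list(dict.fromkeys(absolute))
print(unique)
```

In the script above, `absolute` would be built from the `links` map before the csv-writing loop.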
I'm extracting a certain part of an HTML document (to be fair: the basis for this is an iXBRL document, which means there is a lot of formatting code inside) and writing my output, the original file without the extracted part, to a .txt file. My aim is to measure the difference in document size (how many KB of the original document the extracted part accounts for). As far as I know there shouldn't be any difference between HTML and text format, so my difference should be reliable even though I am comparing two different document formats. My code so far is:
import glob
import os
import contextlib
import re

@contextlib.contextmanager
def stdout2file(fname):
    import sys
    f = open(fname, 'w')
    sys.stdout = f
    yield
    sys.stdout = sys.__stdout__
    f.close()

def extractor():
    os.chdir(r"F:\Test")
    with stdout2file("FileShortened.txt"):
        for file in glob.iglob('*.html', recursive=True):
            with open(file) as f:
                contents = f.read()
            extract = re.compile(r'(This is the beginning of).*?Until the End', re.I | re.S)
            cut = extract.sub('', contents)
            print(file.split(os.path.sep)[-1], end="| ")
            print(cut, end="\n")

extractor()
Note: I am NOT using BS4 or lxml because I am not only interested in HTML text but actually in ALL lines between my start and end-RegEx incl. all formatting code lines.
My code works without problems; however, as I have a lot of files, my FileShortened.txt document quickly becomes massive. My problem is not with the file or the extraction, but with redirecting my output to separate txt files. For now I am getting everything in one file; what I need is some kind of "for each file searched, create a new txt file with the same name as the original document" condition (arcpy module?!).
Something like:
File1.html --> File1Short.txt
File2.html --> File2Short.txt
...
Is there an easy way (without changing my code too much) to invert my code in the sense of printing the "RegEx Match" to a new .txt file instead of "everything except my RegEx match"?
Any help appreciated!
Ok, I figured it out.
Final Code is:
import glob
import os
import re
from os import path
def extractor():
    os.chdir(r"F:\Test")  # the directory containing my html files
    for file in glob.glob("*.html"):  # iterates over all files in the directory ending in .html
        with open(file) as f, open((file.rsplit(".", 1)[0]) + ".txt", "w") as out:
            contents = f.read()
            extract = re.compile(r'Start.*?End', re.I | re.S)
            cut = extract.sub('', contents)
            out.write(cut)  # the with statement closes both files afterwards

extractor()
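To also invert the logic, writing only the regex match instead of everything except it, a sketch along the same lines (the Start/End pattern is the same placeholder used above, and the folder is a throwaway temp directory for the demo):

```python
import glob
import os
import re
import tempfile

def extract_matches(folder, pattern=r'Start.*?End'):
    """Write only the regex match of each .html file in folder
    to a sibling <name>Match.txt file."""
    rx = re.compile(pattern, re.I | re.S)
    for path in glob.glob(os.path.join(folder, "*.html")):
        with open(path) as f:
            m = rx.search(f.read())
        if m:  # only create an output file when the pattern was found
            with open(path.rsplit(".", 1)[0] + "Match.txt", "w") as out:
                out.write(m.group(0))

# Demo on a throwaway directory with one sample file
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "File1.html"), "w") as f:
    f.write("<p>junk</p> Start keep this End <p>more junk</p>")
extract_matches(tmp)
with open(os.path.join(tmp, "File1Match.txt")) as f:
    print(f.read())  # Start keep this End
```

For the size-measurement use case, re.search is enough since the pattern occurs once per document; use rx.findall and join the pieces if a file can contain several matches.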