I want to display a picture I already saved in the table img, but I get the error
cannot identify image file
when PIL tries to open file_like. cursor and connection are an open cursor and connection to the MySQL database (created with the usual host/user/password arguments).
With the following code I wanted to display the picture. What is wrong with it, or is there a better/easier way?
import cStringIO
import PIL.Image

sql1 = 'select * from img'
connection.commit()
cursor.execute(sql1)
data2 = cursor.fetchall()
file_like = cStringIO.StringIO(data2[0][0])
img1 = PIL.Image.open(file_like, mode='r').convert('RGB')
img1.show()
cursor.close()
When using io.BytesIO instead of cStringIO it works fine, and without any decoding or encoding. I also changed the column type from BLOB to MEDIUMBLOB, which allows bigger pictures (a sketch of that change follows the code below).
import pymysql
import io
from PIL import Image

connection = pymysql.connect(host="localhost",
                             user="root",
                             passwd="root",
                             db="test")
cursor = connection.cursor()

sql1 = 'select * from table'
cursor.execute(sql1)
data2 = cursor.fetchall()

# wrap the raw bytes of the first row's first column in a file-like object
file_like2 = io.BytesIO(data2[0][0])
img1 = Image.open(file_like2)
img1.show()

cursor.close()
connection.close()
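The BLOB to MEDIUMBLOB change mentioned above can be applied from the same kind of connection. A minimal sketch, assuming the table and column are both named img as in the question:

import pymysql

connection = pymysql.connect(host="localhost", user="root",
                             passwd="root", db="test")
cursor = connection.cursor()
# MEDIUMBLOB holds up to 16 MB per value, versus 64 KB for BLOB
cursor.execute("ALTER TABLE img MODIFY img MEDIUMBLOB")
connection.commit()
cursor.close()
connection.close()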
I tested your code and got the same error. So first I saved an image to my db; when saving it I used base64 encoding, and then got the same error when I tried to read it back. To save I used the code from Inserting and retrieving images into mysql through python, and your code looks like it came from the same question/answer.
In this case the solution is simple: you have to decode the data. That is the part missing in the other answer.
So do a base64.b64decode(data2[0][0]):
import MySQLdb
import base64
from PIL import Image
import cStringIO

db = MySQLdb.connect(host="localhost",
                     user="root",
                     passwd="root",
                     db="test")

# select statement with an explicit select list and where clause instead of select * ...
sql1 = 'select img from images where id=1'
cursor = db.cursor()
cursor.execute(sql1)
data2 = cursor.fetchall()
cursor.close()
db.close()

# decode before handing the bytes to PIL
file_like = cStringIO.StringIO(base64.b64decode(data2[0][0]))
img1 = Image.open(file_like, mode='r').convert('RGB')
img1.show()
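For context, the save side in that linked answer base64-encodes the bytes before inserting, which is why the decode is needed on the way out. A minimal sketch of that insert (the file name photo.jpg is illustrative; the table and column follow the example above):

import MySQLdb
import base64

db = MySQLdb.connect(host="localhost", user="root",
                     passwd="root", db="test")
cursor = db.cursor()
# read the raw image bytes and base64-encode them before storing
with open('photo.jpg', 'rb') as f:
    blob = base64.b64encode(f.read())
cursor.execute("INSERT INTO images (id, img) VALUES (%s, %s)", (1, blob))
db.commit()
cursor.close()
db.close()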
Related
I have a Python database file with MySQL pooling set up as below:
import mysql.connector
from mysql.connector import Error
from mysql.connector.connection import MySQLConnection
from mysql.connector import pooling
import pandas as pd
import datetime as dt
from contextlib import closing

# Outside of any function:
connection_pool = mysql.connector.pooling.MySQLConnectionPool(pool_name="database_pool",
                                                              pool_size=25,
                                                              pool_reset_session=True,
                                                              host='XXXXX',
                                                              database='XXXXX',
                                                              user='XXXXX',
                                                              password='XXXX')
In order to get a pooled connection I use the below function, located within the same file:
def getDBConnection():
    try:
        connection_obj = connection_pool.get_connection()
        cursor = connection_obj.cursor()
        return connection_obj, cursor
    except Error as e:
        print(f"Error while connecting to MySQL using Connection pool : {e}")
Now let's say I want to perform a simple select using a pooled connection (still within the same database file) and then return the connection:
def dbGetDataHeadersForRunMenuBySubSet(strSubset):
    connection_object, cursor = getDBConnection()
    if connection_object.is_connected():
        query = 'SELECT * FROM someTable'
        cursor.execute(query)
        # Now close the connection
        closeDBConnection(connection_object, cursor)
And the code that attempts to return the connection to the pool:
def closeDBConnection(connection_obj, cursor):
    if connection_obj.is_connected():
        connection_obj.close()
        cursor.close()
However, after 25 runs I get an error saying
Error while connecting to MySQL using Connection pool : Failed getting connection; pool exhausted
Using the debugger I can see that closeDBConnection is being run, and that it appears to hit every step with no errors.
So my question is:
Why am I running out of pooled connections if I am closing them each time?
All in all, I am actually looking to make a persistent connection, but in Python I can't find any real examples of persistence; all the examples I have looked at seem to point towards pooling. I am new(ish) to Python, so I have no issue saying that I know I have made a mistake somewhere.
Having played with this further:
Adding connection_object.close() at the end of each individual function will free the connection back to the pool, e.g.
def dbGetDataHeadersForRunMenuBySubSet(strSubset):
    connection_object, cursor = getDBConnection()
    if connection_object.is_connected():
        query = 'SELECT * FROM someTable'
        cursor.execute(query)
        # Now close the connection
        # closeDBConnection(connection_object, cursor)  <-- not working for me
        connection_object.close()  # <-- this WILL work instead
But the DB access is still so slow in comparison to the Excel MySQL connector (Excel is almost 3 times faster doing the same thing). I think this is because it is easy to get a persistent connection in Excel, something which I can't do in Python (I am new, remember ;-) ).
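One pattern worth noting here (a sketch, not from the original post): for a pooled connection from MySQLConnectionPool, close() does not destroy the connection but returns it to the pool, so closing the cursor first and returning the connection in a finally block guarantees the pool is replenished even if the query raises. The helper name dbFetchAll is illustrative; connection_pool is the pool defined above:

from contextlib import closing

def dbFetchAll(query):
    # get_connection() raises PoolError once the pool is exhausted
    connection_obj = connection_pool.get_connection()
    try:
        # closing() makes sure the cursor is closed before the connection
        with closing(connection_obj.cursor()) as cursor:
            cursor.execute(query)
            return cursor.fetchall()
    finally:
        # for a pooled connection, close() returns it to the pool
        connection_obj.close()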
I work as a Business Analyst and am new to Python.
In one of my projects, I want to extract data from a .csv file and load that data into my MySQL DB (staging).
Can anyone guide me with sample code and the frameworks I should use?
Here is a simple program that creates an SQLite database. You can read the CSV file and use dynamic_data_entry to insert each row into your desired target table.
import sqlite3
import time
import datetime
import random

conn = sqlite3.connect('test.db')
c = conn.cursor()

def create_table():
    c.execute('create table if not exists stuffToPlot(unix REAL, datestamp TEXT, keyword TEXT, value REAL)')

def data_entry():
    c.execute("INSERT INTO stuffToPlot VALUES(1452549219,'2016-01-11 13:53:39','Python',6)")
    conn.commit()

def dynamic_data_entry():
    unix = time.time()
    date = str(datetime.datetime.fromtimestamp(unix).strftime('%Y-%m-%d %H:%M:%S'))
    keyword = 'python'
    value = random.randrange(0, 10)
    c.execute("INSERT INTO stuffToPlot(unix,datestamp,keyword,value) values(?,?,?,?)",
              (unix, date, keyword, value))
    conn.commit()

create_table()
read_from_db()
c.close()
conn.close()
You can iterate through the rows of the CSV and load them into SQLite (a sketch follows below). Please also refer to this link:
Quick easy way to migrate SQLite3 to MySQL?
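A minimal sketch of that CSV loop, assuming a file data.csv whose columns line up with the stuffToPlot table above (the file name and header handling are assumptions):

import csv
import sqlite3

conn = sqlite3.connect('test.db')
c = conn.cursor()
with open('data.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row, if the file has one
    # executemany consumes the remaining rows as parameter tuples
    c.executemany("INSERT INTO stuffToPlot(unix,datestamp,keyword,value) VALUES (?,?,?,?)",
                  reader)
conn.commit()
conn.close()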
If it is a properly formatted CSV file you can use MySQL's LOAD DATA INFILE command and you won't need any Python at all. Then, after it is loaded into the staging area (without processing), you can continue transforming it with the SQL/ETL tool of your choice.
https://dev.mysql.com/doc/refman/8.0/en/load-data.html
A drawback is that you need to map all the columns, but even if the file has data you don't need, you might prefer to load everything into staging anyway.
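If you do want to drive that command from Python, a sketch using pymysql could look like this (the file path, database, table name, and CSV dialect options are assumptions, and local_infile must be enabled on both client and server):

import pymysql

# local_infile=True lets the client send a local file to the server
connection = pymysql.connect(host="localhost", user="root", passwd="root",
                             db="staging", local_infile=True)
cursor = connection.cursor()
cursor.execute("""
    LOAD DATA LOCAL INFILE 'data.csv'
    INTO TABLE staging_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
""")
connection.commit()
cursor.close()
connection.close()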
I have a MySQL database with a table in it. I want to convert this table into XML and be able to access the data using the GET method. What I want to know is whether it is possible to use Python Flask for this, converting the table to XML and keeping it in memory (like JSON). Can someone give me an idea of how to achieve this?
Thanks,
First get your results as a dictionary. Then apply dicttoxml to it, as in the following example:
import MySQLdb
import MySQLdb.cursors
import dicttoxml

conn = MySQLdb.Connect(
    host='localhost', user='user',
    passwd='secret', db='test')
cursor = conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)
cursor.execute("SELECT col1, col2 FROM tbl")
rows = cursor.fetchall()
xml = dicttoxml.dicttoxml({'results': rows}, attr_type=False)
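Since the question also asks about serving the result over GET with Flask, here is a minimal sketch of that part (the route path /table.xml is an assumption; a real app might cache the XML instead of re-querying per request):

from flask import Flask, Response
import MySQLdb
import MySQLdb.cursors
import dicttoxml

app = Flask(__name__)

@app.route('/table.xml', methods=['GET'])
def table_xml():
    # query the table and serialize it to XML on each request
    conn = MySQLdb.Connect(host='localhost', user='user',
                           passwd='secret', db='test')
    cursor = conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)
    cursor.execute("SELECT col1, col2 FROM tbl")
    rows = cursor.fetchall()
    conn.close()
    xml = dicttoxml.dicttoxml({'results': rows}, attr_type=False)
    return Response(xml, mimetype='application/xml')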
Hope that helps...
I am trying to export the output of a query against a MySQL database to a CSV file on the local system using Python. There are two issues. First, using fetchall() I am not getting any data (the same query in the database produces more than 5000 rows), though I did get data output initially. Second, I would like to know how to put the username and password in a separate file which the user cannot access but which will be imported by this script when it runs.
import os
import csv
import pymysql
import pymysql.cursors

d = open('c:/Users/dasa17/Desktop/pylearn/Roster.csv', 'w')
c = csv.writer(d)
Connection = pymysql.connect(host='xxxxx', user='xxxxx', password='xxxx',
                             db='xxxx', charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)
a = Connection.cursor()
a.execute("select statement")
data = a.fetchall()
for item in data:
    c.writerow(item)
a.close()
d.close()
Connection.close()
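For the second issue, one common pattern (a sketch, not from the original thread; the file name credentials.ini and the section name are assumptions) is to keep the credentials in a separate config file readable only by the script's account, and load it with configparser:

import configparser
import pymysql
import pymysql.cursors

# credentials.ini, stored outside the project/repo, would contain:
# [mysql]
# host = xxxxx
# user = xxxxx
# password = xxxx
# db = xxxx
config = configparser.ConfigParser()
config.read('credentials.ini')
cfg = config['mysql']

Connection = pymysql.connect(host=cfg['host'], user=cfg['user'],
                             password=cfg['password'], db=cfg['db'],
                             charset='utf8mb4',
                             cursorclass=pymysql.cursors.DictCursor)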
I am trying to connect to a MySQL database. It works fine with Option 1:
from sqlalchemy import create_engine
engine = create_engine('mysql://root:root@localhost/lend', echo=True)
cnx = engine.connect()
x = cnx.execute("SELECT * FROM user")
but breaks down here:
from pandas.io import sql
xx = sql.read_frame("SELECT * FROM user", cnx)
cnx.close()
with
AttributeError: 'Connection' object has no attribute 'rollback'
You need a raw database connection, not an instance of Connection. To get one, call either engine.raw_connection() or engine.connect().connection:
from pandas.io import sql
# cnx = engine.connect().connection  # option 1
cnx = engine.raw_connection()        # option 2
xx = sql.read_frame("SELECT * FROM user", cnx)
cnx.close()
Use the MySQLdb module to create the connection. There is ongoing progress toward better SQL support, including sqlalchemy, but it's not ready yet.
If you are comfortable installing the development version of pandas, you might want to keep an eye on that linked issue and switch to using the development version of pandas as soon as it is merged. While pandas' SQL support is usable, there are some bugs around data types, missing values, etc., that are likely to come up if you use Pandas + SQL extensively.
This is an old question but apparently still relevant. As of 2018, the way to solve this is to simply use the engine directly:
xx = sql.read_sql("SELECT * FROM user", engine)
(originally posted by Midnighter in a comment)
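Putting it together with the modern top-level pandas API, here is a self-contained sketch (pd.read_sql accepts the engine directly; the connection URL is the same placeholder as above):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql://root:root@localhost/lend')
# pandas opens and closes the connection itself when given an engine
xx = pd.read_sql("SELECT * FROM user", engine)
print(xx.head())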