I have a MySQL database with a table in it. I want to convert this table into XML and be able to access the data via a GET request. Is it possible to use Python Flask for this, i.e. to convert the table to XML and keep it in memory (like JSON)? Can someone give me an idea of how to achieve this?
Thanks,
First get your results as a dictionary. Then apply dicttoxml to it, as in the following example:
import MySQLdb
import MySQLdb.cursors
import dicttoxml

conn = MySQLdb.Connect(
    host='localhost', user='user',
    passwd='secret', db='test')

# DictCursor returns each row as a dict keyed by column name
cursor = conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)
cursor.execute("SELECT col1, col2 FROM tbl")
rows = cursor.fetchall()

# wrap the rows in a top-level key and convert to XML
xml = dicttoxml.dicttoxml({'results': rows}, attr_type=False)
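If the goal is to expose that XML over a GET endpoint with Flask, a minimal sketch could look like the following (the route path /table.xml is my own placeholder, and it reuses conn from above; dicttoxml returns bytes, so the response's MIME type is set explicitly):

from flask import Flask, Response

app = Flask(__name__)

@app.route('/table.xml', methods=['GET'])
def table_xml():
    # re-query on each request; cache the bytes instead if the table rarely changes
    cursor = conn.cursor(cursorclass=MySQLdb.cursors.DictCursor)
    cursor.execute("SELECT col1, col2 FROM tbl")
    xml = dicttoxml.dicttoxml({'results': cursor.fetchall()}, attr_type=False)
    return Response(xml, mimetype='application/xml')

If you really want it "in memory like JSON", you can compute xml once at startup and have the route return the cached bytes instead.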
Hope that helps...
So I have a MySQL database, let's call it "MySQLDB". When trying to create a new table (let's call it datatable) and insert data from a pandas dataframe, my code keeps adding rows to the SQL table, and I'm not sure if they are duplicates or not. For reference, there are around 50,000 rows in my pandas dataframe, but after running my code the SQL table contains over 1 million rows. Note that I am using XAMPP to run a local MySQL server, on which the database "MySQLDB" is stored. Below is a simplified/generic version of what I am running; I have removed the port number and replaced it with a generic [port] in this post.
import pandas as pd
from sqlalchemy import create_engine
import mysql.connector
pandas_db = pd.read_csv('filename.csv', index_col = [0])
engine = create_engine('mysql+mysqlconnector://root:#localhost:[port]/MySQLDB', echo=False)
pandas_db.to_sql(name='datatable', con=engine, if_exists = 'replace', chunksize = 100, index=False)
Is something wrong with the code? Or could it be something to do with XAMPP or the way I set up my database? If there is anything I could improve, please let me know.
I haven't found any other good posts describing the same issue.
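One hedged way to narrow this down is to compare the dataframe's row count with the table's row count immediately after the load; the connection string below mirrors the question's placeholders (the password and [port] are stand-ins):

from sqlalchemy import create_engine, text

engine = create_engine('mysql+mysqlconnector://root:password@localhost:[port]/MySQLDB', echo=False)
with engine.connect() as conn:
    count = conn.execute(text('SELECT COUNT(*) FROM datatable')).scalar()
print(len(pandas_db), count)  # a clean multiple of len(pandas_db) suggests repeated appends

Note that if_exists='replace' drops and recreates the table on each run, so repeated runs should not accumulate rows by themselves; if the counts disagree right after a single load, the mismatch more likely comes from the dataframe itself (e.g. the CSV read) than from to_sql.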
I work as a Business Analyst and am new to Python.
In one of my projects, I want to extract data from a .csv file and load that data into my MySQL DB (staging).
Can anyone guide me with sample code and the frameworks I should use?
Here is a simple program that creates an SQLite database. You can read the CSV file and use dynamic_data_entry to insert rows into your desired target table.
import sqlite3
import time
import datetime
import random

conn = sqlite3.connect('test.db')
c = conn.cursor()

def create_table():
    c.execute('CREATE TABLE IF NOT EXISTS stuffToPlot(unix REAL, datestamp TEXT, keyword TEXT, value REAL)')

def data_entry():
    # insert one hard-coded row
    c.execute("INSERT INTO stuffToPlot VALUES(1452549219,'2016-01-11 13:53:39','Python',6)")
    conn.commit()

def dynamic_data_entry():
    # build the row values at runtime and bind them with ? placeholders
    unix = time.time()
    date = str(datetime.datetime.fromtimestamp(unix).strftime('%Y-%m-%d %H:%M:%S'))
    keyword = 'python'
    value = random.randrange(0, 10)
    c.execute("INSERT INTO stuffToPlot(unix,datestamp,keyword,value) VALUES(?,?,?,?)",
              (unix, date, keyword, value))
    conn.commit()

def read_from_db():
    c.execute('SELECT * FROM stuffToPlot')
    for row in c.fetchall():
        print(row)

create_table()
dynamic_data_entry()
read_from_db()
c.close()
conn.close()
You can iterate through the rows of the CSV and load them into SQLite3, as sketched below. Please also refer to this link:
Quick easy way to migrate SQLite3 to MySQL?
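A minimal sketch of that CSV loop, assuming a file data.csv whose columns line up with stuffToPlot (the file name and column order are assumptions):

import csv
import sqlite3

conn = sqlite3.connect('test.db')
c = conn.cursor()
with open('data.csv', newline='') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    # executemany binds each CSV row to the ? placeholders
    c.executemany("INSERT INTO stuffToPlot(unix,datestamp,keyword,value) VALUES(?,?,?,?)", reader)
conn.commit()
conn.close()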
If it's a properly formatted CSV file, you can use the LOAD DATA INFILE MySQL command and you won't need any Python. After the data is loaded into the staging area (without processing), you can continue transforming it using the SQL/ETL tool of your choice.
https://dev.mysql.com/doc/refman/8.0/en/load-data.html
One drawback is that you need to map all of the file's columns, but even if the file contains data you don't need, you might prefer to load everything into staging anyway.
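If you still want to drive it from Python, here is a hedged sketch using mysql.connector (the file name, table name, and credentials are placeholders, and LOCAL INFILE has to be enabled on both client and server):

import mysql.connector

conn = mysql.connector.connect(host='localhost', user='root', password='secret',
                               database='staging', allow_local_infile=True)
cursor = conn.cursor()
# IGNORE 1 LINES skips the CSV header; lines end with the default newline
cursor.execute("""
    LOAD DATA LOCAL INFILE 'filename.csv'
    INTO TABLE staging_table
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    IGNORE 1 LINES
""")
conn.commit()
conn.close()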
I'm not familiar with SQL (or Python, to be honest), and I was wondering how to add variables into tables. This is what I tried:
import sqlite3
conn = sqlite3.connect('TESTDB.db')
c = conn.cursor()
c.execute('''CREATE TABLE IF NOT EXISTS table(number real, txt text)''')
r=4
c.execute('''INSERT INTO table(r,'hello')''')
conn.commit()
conn.close()
This doesn't work. I get the error "TypeError: 'str' object is not callable". How do I make the table insert the variable r?
Thanks
You have to bind values to placeholders in a prepared statement. See examples in the documentation.
Trying to build a query string on the fly with unknown values inserted directly into it is a great way to get hit by a SQL injection attack and should be avoided.
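For example, here is a minimal sketch of the placeholder approach (table is a reserved word in SQLite, so this sketch uses a hypothetical table name stuff instead):

import sqlite3

conn = sqlite3.connect('TESTDB.db')
c = conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS stuff(number real, txt text)')
r = 4
# the ? placeholders are filled from the tuple, which quotes each value safely
c.execute('INSERT INTO stuff(number, txt) VALUES (?, ?)', (r, 'hello'))
conn.commit()
conn.close()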
Your problem is that you don't insert r into the query string properly (and note that table is a reserved word in SQLite, so the table name needs changing in any case). This should work, although the parameter binding shown above is the safer approach:
c.execute('''INSERT INTO stuff VALUES(''' + str(r) + ''','hello')''')
cheers
I'm creating a Shiny app in which I need to create a plot from the data returned by a SQL query. I'm trying to do this by storing the result in a dataframe, but when I run the app it gives me the error: cannot coerce class "structure("MySQLResult", package = "RMySQL")" to a data.frame.
How can I store the database query result in a dataframe?
If you work with dplyr, you can use dplyr::collect() to pull query results into a data frame. Please visit the RStudio website on working with databases to see more ways to do it.
I am not sure I got your question right, so correct me if I am wrong.
This is what I am thinking:
frame <- dbGetQuery(con, statement = paste("select col1
from table1"))
Here con is your DB connection.
Convert year into a dataframe:
year_new <- data.frame(year)
If I have not answered your question, let me know.
Also, kindly post how you are doing it so it will be easier to understand what the problem is.
I want to display a picture I already saved in the table img, but I get the error
cannot identify image file
when I try to open the file_like object.
cursor and connection hold the connection (with the password) to the MySQL database.
With the following code I wanted to display the picture. What's wrong with it, or is there an even better/easier way?
sql1='select * from img'
connection.commit()
cursor.execute(sql1)
data2=cursor.fetchall()
file_like=cStringIO.StringIO(data2[0][0])
img1=PIL.Image.open(file_like,mode='r').convert('RGB')
img1.show()
cursor.close()
When using io.BytesIO instead of cStringIO it works fine, also without any decoding and encoding. I also changed the column type from BLOB to MEDIUMBLOB, which allows bigger pictures.
import pymysql
import io
from PIL import Image

connection = pymysql.connect(host="localhost",
                             user="root",
                             passwd="root",
                             db="test")
cursor = connection.cursor()
sql1 = 'select * from table'
cursor.execute(sql1)
data2 = cursor.fetchall()
# wrap the raw BLOB bytes in a file-like object for PIL
file_like2 = io.BytesIO(data2[0][0])
img1 = Image.open(file_like2)
img1.show()
cursor.close()
connection.close()
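For completeness, a hedged sketch of the matching write path, storing the raw image bytes without any base64 step (the table img with a single MEDIUMBLOB column pic is an assumption):

import pymysql

connection = pymysql.connect(host="localhost", user="root", passwd="root", db="test")
cursor = connection.cursor()
with open('picture.jpg', 'rb') as f:
    blob = f.read()
# the %s placeholder binds the binary data directly; no encoding needed
cursor.execute("INSERT INTO img(pic) VALUES (%s)", (blob,))
connection.commit()
cursor.close()
connection.close()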
I tested your code and got the same error. So first I saved an image to my db; when saving it I used base64 encoding, and then got the same error when I tried to read it back. To save, I used the code from Inserting and retrieving images into mysql through python, and your code looks like it came from the same question/answer.
In this case the solution is simple: you have to decode the data. That's the part missing from the other answer.
So do a base64.b64decode(data2[0][0]):
import MySQLdb
import base64
from PIL import Image
import cStringIO
db = MySQLdb.connect(host="localhost",
user="root",
passwd="root",
db="test")
# select statement with explicit select list and where clause instead of select * ...
sql1='select img from images where id=1'
cursor = db.cursor()
cursor.execute(sql1)
data2=cursor.fetchall()
cursor.close()
db.close()
file_like=cStringIO.StringIO(base64.b64decode(data2[0][0]))
img1=Image.open(file_like,mode='r').convert('RGB')
img1.show()