I have a problem creating tables. I use the code below, and on one machine it works perfectly well. On another machine it does not give any error, but it also does not create the tables. I believe it has something to do with the conda environment, but I made a new environment and I still get the same result. There is no difference in library versions between the machine where it works and the one where it does not:
python=3.7
mysql-connector-python=8.0.18
The funny thing is that if I execute a SELECT statement I get valid results.
import mysql.connector
import configparser

config = configparser.RawConfigParser()
config.read('config.ini')

conn = mysql.connector.connect(host=config['mysql report server 8']['host'],
                               port=config['mysql report server 8']['port'],
                               user=config['mysql report server 8']['user'],
                               password=config['mysql report server 8']['password'],
                               allow_local_infile=True,
                               autocommit=1)

mycursor = conn.cursor()

def create_tables(mycursor, name_of_import: str):
    with open(r"../SupportFiles/Table_Create_Query.sql") as f:
        create_tables_str = f.read()
    create_tables_str = create_tables_str.replace("xxx_replaceme", name_of_import)
    mycursor.execute(create_tables_str, multi=True)

create_tables(mycursor, "my_test_import")
conn.commit()
conn.close()
The file Table_Create_Query.sql has the following contents:
use cb_bht3_0_20_048817_raw;

create table xxx_replaceme_categories (
    cid      int,
    variable varchar(255),
    name     varchar(255),
    value    int,
    ordr     int,
    label    varchar(255)
);
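One thing worth checking, offered as an observation about mysql-connector-python rather than a confirmed diagnosis of this exact setup: with multi=True, cursor.execute() returns an iterator of result sets, and the individual statements are only sent to the server as that iterator is consumed. A minimal sketch of the same function with the iterator drained:

def create_tables(mycursor, name_of_import: str):
    with open(r"../SupportFiles/Table_Create_Query.sql") as f:
        create_tables_str = f.read()
    create_tables_str = create_tables_str.replace("xxx_replaceme", name_of_import)
    # With multi=True the statements execute lazily, one per iteration,
    # so the returned iterator must be consumed for the DDL to actually run.
    for result in mycursor.execute(create_tables_str, multi=True):
        pass

Whether this explains the difference between the two machines is not certain, but it is a cheap thing to rule out.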
Has anyone ever set up a SQL connection for Orange? The API (https://docs.biolab.si//3/data-mining-library/reference/data.sql.html) does not provide any decent examples, from my read of things. If you could point me to a link or show me an example connection object in Python, that would be great. I am trying to do some CN2 classification on a table in my MySQL database.
It is possible using an ODBC connector:
from pyodbc import connect

connector = connect('Driver={MySQL ODBC 5.3 Unicode Driver};'
                    'Server=server name or IP;'
                    'Database=database name;'
                    'UID=User;'
                    'PWD=password;')

cursor = connector.cursor()  # Create the cursor used to run queries and fetch data.

# Execute the SQL query.
cursor.execute("SELECT id, data1, data2 FROM table1")

# All rows of "table1" are saved in "data".
data = cursor.fetchall()
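Building on that answer, here is a sketch of getting the result into an Orange Table for CN2. It assumes the same ODBC details as above and assumes your Orange installation provides the pandas bridge (Orange.data.pandas_compat); treat it as a starting point rather than a confirmed recipe:

import pandas as pd
from pyodbc import connect

connector = connect('Driver={MySQL ODBC 5.3 Unicode Driver};'
                    'Server=server name or IP;'
                    'Database=database name;'
                    'UID=User;'
                    'PWD=password;')

# Read the query result straight into a dataframe.
df = pd.read_sql("SELECT id, data1, data2 FROM table1", connector)

# If available in your Orange version, convert the dataframe to an Orange Table,
# which can then be fed to learners such as CN2.
from Orange.data.pandas_compat import table_from_frame
table = table_from_frame(df)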
I am trying to write a script to populate a MySQL database with multiple pandas dataframes. For the sake of simplicity, I will demonstrate here the code to populate the db with a single pandas df.
I am connecting to the db as follows:
import mysql.connector
import pandas as pd
import sqlalchemy                      # used below to build the engine
from sqlalchemy import create_engine   # used below to build the engine

# create the cursor and the connector
conn = mysql.connector.connect(
    host='localhost',
    user='root',
    password='my_password')
c = conn.cursor(buffered=True)

# Create the database
c.execute('CREATE DATABASE IF NOT EXISTS ss_json_interop')

# Connect now to the ss_json_interop database
conn = mysql.connector.connect(
    host='localhost',
    user='root',
    password='my_password',
    database='ss_json_interop')
c = conn.cursor(buffered=True)
#### Create the table
c.execute("""CREATE TABLE IF NOT EXISTS sample_sheet_stats_json (
    ss_ID int NOT NULL AUTO_INCREMENT,
    panel text,
    run_ID text,
    sample_ID text,
    i7_index_ID text,
    i7_index_seq text,
    i5_index_ID text,
    i5_index_seq text,
    number_reads_lane1 varchar(255),
    number_reads_lane2 varchar(255),
    total_reads varchar(255),
    PRIMARY KEY (ss_ID)
)""")
#### Create the engine
# more here: https://stackoverflow.com/questions/16476413/how-to-insert-pandas-dataframe-via-mysqldb-into-database
database_username = 'root'
database_password = 'my_password'
database_ip = '127.0.0.1'
database_name = 'ss_json_interop'
# note the '@' between the credentials and the host
database_connection = sqlalchemy.create_engine('mysql+mysqlconnector://{0}:{1}@{2}/{3}'.
                                               format(database_username, database_password,
                                                      database_ip, database_name))

# define the engine
engine = create_engine("mysql+mysqldb://root:my_password@localhost/sample_sheet_stats_json")
I am trying to populate my df into a table called sample_sheet_stats_json. If I do:
df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='replace')
the command works and the table in the db is correctly populated. However, if I replace if_exists='replace' with if_exists='append':
df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='append')
I get a long error message like this (the error message is not complete; it continues, replicating the structure of my df):
(mysql.connector.errors.ProgrammingError) 1054 (42S22): Unknown column 'index' in 'field list' [SQL: 'INSERT INTO sample_sheet_stats_json
Strangely enough, I can do df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='append') as long as I first run df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='replace'), i.e. if the table is already populated.
The same problem was already reported here. However, if I do:
df.to_sql('sample_sheet_stats_json', engine, if_exists='append')
I get the following error message:
(_mysql_exceptions.OperationalError) (2002, "Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)") (Background on this error at: http://sqlalche.me/e/e3q8)
which does not make much sense, as I could already connect to the database with other commands, as shown above.
Does anyone know how I can fix it?
I have figured out what happened. The error message is saying that there is no column called index in the target table: by default pandas tries to write the dataframe's index as an extra column, and the table I created has no such column.
Therefore I simply have to pass the argument index=False with the command df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='append'):
df.to_sql('sample_sheet_stats_json', con=database_connection, if_exists='append', index=False)
And that solves the problem.
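For reference, a minimal end-to-end sketch of the working flow, using the same credentials, database and table names as above (the dataframe here is a made-up placeholder):

import pandas as pd
import sqlalchemy

# Hypothetical example dataframe; replace with your real data.
df = pd.DataFrame({'panel': ['p1'], 'run_ID': ['r1'], 'total_reads': ['1000']})

# Note the '@' between the password and the host in the connection URL.
engine = sqlalchemy.create_engine(
    'mysql+mysqlconnector://root:my_password@127.0.0.1/ss_json_interop')

# index=False stops pandas from trying to write the dataframe index
# as an extra 'index' column that the existing table does not have.
df.to_sql('sample_sheet_stats_json', con=engine, if_exists='append', index=False)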
I am trying to utilize dbWriteTable to write from R to MySQL. When connecting to MySQL I created an ODBC connection, so I can just use the command:
abc <- DBI::dbConnect(odbc::odbc(),
dsn = "mysql_conn")
With this I can see all my schemas for the MySQL instance. This is great when I want to read in data, such as:
test_query <- dbSendQuery(abc, "SELECT * FROM test_schema.test_file")
test_query <- dbFetch(test_query)
The problem I have is when I want to create a new table in one of the schemas: how do I declare the schema I want to write to in
dbWriteTable(abc, value = new_file, name = "new_file", overwrite=T)
I imagine I have to define the test_schema in the dbWriteTable portion but haven't been able to get it to work. Thoughts?
This line of code says that it works (green check) but I don't see an image inserted. The file path should be correct because I got it from the file data.
UPDATE `inventory`
SET bookImage = LOAD_FILE('C:\xampp\htdocs\1059\homework\books\wuthering.jpg')
WHERE isbn = '978-0141040356';
One thing you should know: if you're connecting to a remote database server, the path is relative to the server the DB is on, not your local machine.
UPDATE inventory
SET bookImage =
(SELECT BulkColumn FROM OPENROWSET(BULK N'C:\wuthering.jpg', SINGLE_BLOB) AS x)
WHERE isbn = '978-0141040356';
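If LOAD_FILE keeps returning NULL (for example because of the server's secure_file_priv setting, a missing FILE privilege, or because the image lives on the client machine rather than the MySQL server; also note that backslashes inside a MySQL string literal normally need to be doubled), an alternative sketch is to read the file on the client and send it as a query parameter. The connection details here are placeholders; the table, column and ISBN are taken from the question:

import mysql.connector

# Assumed credentials/database -- adjust to your setup.
conn = mysql.connector.connect(host='localhost', user='root',
                               password='my_password', database='my_db')
cur = conn.cursor()

with open(r'C:\xampp\htdocs\1059\homework\books\wuthering.jpg', 'rb') as f:
    img = f.read()

# Parameter binding avoids backslash-escaping issues in the path and works
# even when the image file is on the client rather than the MySQL server.
cur.execute("UPDATE inventory SET bookImage = %s WHERE isbn = %s",
            (img, '978-0141040356'))
conn.commit()
conn.close()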
I want to create the DB structure for my application in MySQL. I have some 100 scripts which will create tables, stored procedures, and functions in different schemas.
Please suggest how I can run the scripts one after the other, and how I can stop if a previous script fails. I am using MySQL 5.6.
I am currently running them using a text file:
mysql> source /mypath/CreateDB.sql
which contains
tee /logout/session.txt
source /mypath/00-CreateSchema.sql
source /mypath/01-CreateTable1.sql
source /mypath/01-CreateTable2.sql
source /mypath/01-CreateTable3.sql
But they appear to be running simultaneously, and I have foreign keys in these tables, due to which I am getting errors.
The scripts are not running simultaneously. The mysql client does not execute in a multi-threaded manner.
But it's possible that you are sourcing the scripts in an order that causes foreign keys to reference tables that you haven't defined yet, and this is a problem.
You have two possible fixes for this problem:
Create the tables in an order that avoids this problem.
Create all the tables without their foreign keys, then run another script that contains ALTER TABLE ADD FOREIGN KEY... statements.
I wrote a Python function to execute SQL files:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Download it at http://sourceforge.net/projects/mysql-python/?source=dlp
# Tutorials: http://mysql-python.sourceforge.net/MySQLdb.html
#            http://zetcode.com/db/mysqlpython/
import MySQLdb as mdb
import datetime, time

def run_sql_file(filename, connection):
    '''
    The function takes a filename and a connection as input
    and will run the SQL query on the given connection
    '''
    start = time.time()
    with open(filename, 'r') as f:
        sql = " ".join(f.readlines())
    print "Start executing: " + filename + " at " + str(datetime.datetime.now().strftime("%Y-%m-%d %H:%M")) + "\n" + sql
    cursor = connection.cursor()
    cursor.execute(sql)
    connection.commit()
    end = time.time()
    print "Time elapsed to run the query:"
    print str((end - start) * 1000) + ' ms'

def main():
    connection = mdb.connect('127.0.0.1', 'root', 'password', 'database_name')
    run_sql_file("my_query_file.sql", connection)
    connection.close()

if __name__ == "__main__":
    main()
I haven't tried it with stored procedures or large SQL statements. Also, if you have SQL files containing several SQL queries, you might have to split on ";" to extract each query and call cursor.execute(sql) for each one; see the sketch below. Feel free to edit this answer to incorporate these improvements.
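A minimal sketch of that multi-statement variant. Splitting on ";" is naive and will break on semicolons inside string literals or stored-procedure bodies, so treat it only as a starting point:

def run_sql_file_multi(filename, connection):
    # Read the whole file and split it into individual statements.
    with open(filename) as f:
        statements = f.read().split(';')
    cursor = connection.cursor()
    for statement in statements:
        if statement.strip():  # skip empty fragments, e.g. after the last ';'
            cursor.execute(statement)
    connection.commit()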