I want to write a pandas DataFrame to an IBM i Series / AS400. I have already done a lot of research, but now I am stuck.
I have already run many queries using pyodbc. For df.to_sql(), as I read in other answers here, I should use SQLAlchemy with the ibm_db_sa dialect.
My current code is:
import urllib.parse
from sqlalchemy import create_engine

CONNECTION_STRING = (
    "driver={iSeries Access ODBC Driver};"
    "System=111.111.111.111;"
    "database=TESTDB;"
    "uid=USER;"
    "pwd=PASSW;"
)
quoted = urllib.parse.quote_plus(CONNECTION_STRING)
engine = create_engine('ibm_db_sa+pyodbc:///?odbc_connect={}'.format(quoted))
create_statement = df.to_sql("TABLETEST", engine, if_exists="append")
The following packages are installed:
python 3.9
ibm-db 3.1.3
ibm-db-sa 0.3.7
ibm-db-sa-py3 0.3.1.post1
pandas 1.3.5
pip 22.0.4
setuptools 57.0.0
SQLAlchemy 1.4.39
When I run this, I get the following error:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42S02', '[42S02] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0204 - COLUMNS in SYSCAT type *FILE not found. (-204) (SQLPrepare)')
[SQL: SELECT "SYSCAT"."COLUMNS"."COLNAME", "SYSCAT"."COLUMNS"."TYPENAME", "SYSCAT"."COLUMNS"."DEFAULT", "SYSCAT"."COLUMNS"."NULLS", "SYSCAT"."COLUMNS"."LENGTH", "SYSCAT"."COLUMNS"."SCALE", "SYSCAT"."COLUMNS"."IDENTITY", "SYSCAT"."COLUMNS"."GENERATED"
FROM "SYSCAT"."COLUMNS"
WHERE "SYSCAT"."COLUMNS"."TABSCHEMA" = ? AND "SYSCAT"."COLUMNS"."TABNAME" = ? ORDER BY "SYSCAT"."COLUMNS"."COLNO"]
[parameters: ('USER', 'TABLETEST')]
(Background on this error at: https://sqlalche.me/e/14/f405)
I think the dialect could be wrong, because the parameters in the failing query are my username and the table name for the ODBC connection?
Also, I am not really sure what the difference between ibm_db_sa and ibm_db is.
I tried for a few days; before anyone else spends that time trying to do this via SQLAlchemy, do it via pyodbc instead.
Here is my working example; the df_to_sql_bulk_insert function is taken from this (and I am now using my system DSN):
import re
import pandas as pd
import pyodbc

def df_to_sql_bulk_insert(df: pd.DataFrame, table: str, **kwargs) -> str:
    # Build one multi-row INSERT statement from the DataFrame.
    df = df.copy().assign(**kwargs)
    columns = ", ".join(df.columns)
    tuples = map(str, df.itertuples(index=False, name=None))
    # Replace Python's nan/None literals with SQL NULL.
    values = re.sub(r"(?<=\W)(nan|None)(?=\W)", "NULL", (",\n" + " " * 7).join(tuples))
    return f"INSERT INTO {table} ({columns})\nVALUES {values}"

cnxn = pyodbc.connect("DSN=XXX")
cursor = cnxn.cursor()
sqlstr = df_to_sql_bulk_insert(df, "DBXXX.TBLXXX")
cursor.execute(sqlstr)
cnxn.commit()
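As an aside, if hand-building the VALUES string ever becomes a quoting or NULL-handling headache, a parameterized insert is a common alternative, since the driver then handles escaping. A minimal sketch, assuming the same cnxn and DataFrame df as above (df_to_sql_executemany is a hypothetical helper, not part of any library):

import pandas as pd
import pyodbc

def df_to_sql_executemany(df: pd.DataFrame, table: str, cnxn: pyodbc.Connection) -> None:
    # One "?" marker per column; the driver handles quoting and escaping.
    columns = ", ".join(df.columns)
    markers = ", ".join(["?"] * len(df.columns))
    sql = f"INSERT INTO {table} ({columns}) VALUES ({markers})"
    # Convert NaN to None so missing values arrive as SQL NULL.
    rows = df.where(pd.notnull(df), None).values.tolist()
    cursor = cnxn.cursor()
    cursor.executemany(sql, rows)
    cnxn.commit()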
This question expands on this question.
Here, I'm using the custom function created by @Simon.S.A. shown in the answer to this question. I'm attempting to save a tbl_sql object in R to MySQL as a new table, without first saving it locally. Here, the database and schema in my MySQL instance are both named "test". The tbl_sql object in R is my_data, and I want to save it as a new table in MySQL named "car_data".
library(DBI)
library(tidyverse)
library(dbplyr)

# establish connection and import data from MySQL
con <- DBI::dbConnect(RMariaDB::MariaDB(),
                      dbname = "test",
                      host = "127.0.0.1",
                      user = "user",
                      password = "password")

my_data <- tbl(con, "mtcars")
my_data <- my_data %>% filter(mpg >= 22)
# write function to save tbl_sql as a new table in SQL
write_to_database <- function(input_tbl, db, schema, tbl_name){
  # connection
  tbl_connection <- input_tbl$src$con
  # SQL query
  sql_query <- glue::glue(
    "SELECT *\n",
    "INTO {db}.{schema}.{tbl_name}\n",
    "FROM (\n",
    dbplyr::sql_render(input_tbl),
    "\n) AS sub_query"
  )
  result <- dbExecute(tbl_connection, as.character(sql_query))
}

# execute function
write_to_database(my_data, "test", "test", "car_data")
After running the final line, I get the following error, and I'm not sure how I can fix it.
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.test.car_data
FROM (
SELECT *
FROM `mtcars`
WHERE (`mpg` >= 22.0)
) AS sub_quer' at line 2 [1064]
12.
stop(structure(list(message = "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.test.car_data\nFROM (\nSELECT *\nFROM `mtcars`\nWHERE (`mpg` >= 22.0)\n) AS sub_quer' at line 2 [1064]",
call = NULL, cppstack = NULL), class = c("Rcpp::exception",
"C++Error", "error", "condition")))
11.
result_create(conn@ptr, statement, is_statement)
10.
initialize(value, ...)
9.
initialize(value, ...)
8.
new("MariaDBResult", sql = statement, ptr = result_create(conn#ptr,
statement, is_statement), bigint = conn#bigint, conn = conn)
7.
dbSend(conn, statement, params, is_statement = TRUE)
6.
.local(conn, statement, ...)
5.
dbSendStatement(conn, statement, ...)
4.
dbSendStatement(conn, statement, ...)
3.
dbExecute(tbl_connection, as.character(sql_query))
2.
dbExecute(tbl_connection, as.character(sql_query))
1.
write_to_database(my_data, "test", "test", "car_data")
Creating a table with the INTO clause is SQL Server (and MS Access) specific syntax and is not supported in MySQL. Instead, consider the counterpart statement, CREATE TABLE ... SELECT. Also, schema semantics differ between RDBMSs; in MySQL, database is synonymous with schema.
Therefore, consider this adjusted version of the SQL build:
sql_query <- glue::glue(
  "CREATE TABLE {db}.{tbl_name}\n AS \n",
  "SELECT * \n",
  "FROM (\n",
  dbplyr::sql_render(input_tbl),
  "\n) AS sub_query"
)
I'm using the R package monitoR and getting an error message that I can't figure out.
I'm trying to upload a correlation template list ("bithTemps") to a MySQL database ("noh") using the dbUploadTemplate command.
dbUploadTemplate(templates = bithTemps,
                 uid = "root",
                 pwd = "****",
                 db.name = "noh",
                 analyst = 1,
                 locationID = "2",
                 date.recorded = "2017/09/07",
                 recording.equip = "Unknown",
                 species.code = "BITH",
                 type = "COR")
This returns:
Error: $ operator is invalid for atomic vectors
I have confirmed that the ODBC connection is working, that the template list is functional (i.e., it works when passed to other functions in the package), and that the SQL database has the required entries for analyst, location, and species code.
It seems that this error was actually triggered by a non-functional ODBC connection. This part of the dbUploadTemplate function
species <- RODBC::sqlQuery(dbCon, paste("SELECT `pkSpeciesID`, `fldSpeciesCode` FROM `tblSpecies` WHERE `fldSpeciesCode` = '",
                                        paste(species.code, sep = "", collapse = "' OR `fldSpeciesCode` = '"),
                                        "'", sep = ""))
queries a table in the SQL database and returns an object called species. If the query fails (e.g., because RODBC can't connect), then species holds RODBC's error messages (a character vector) instead of a data frame, and the following operation
speciesID <- NULL
for (i in 1:length(species.code)) {
  speciesID[i] <- species$pkSpeciesID[species$fldSpeciesCode ==
                                        species.code[i]]
}
triggers the error, since the $ operator cannot be applied to an atomic vector. Fixing the ODBC connection resolves the error.
I am using the RMySQL package to write (append) data to an existing table.
I am using R version 3.3.2.
My code looks like this:
library(RMySQL)

df_final <- some_data
m <- dbDriver("MySQL")
mydb <- dbConnect(m, user = 'odvjet12_mislav',
                  password = 'my_pass',
                  host = '91.234.46.219',
                  dbname = 'odvjet12_fina_pn')
dbWriteTable(mydb, value = df_final, name = "fina_pn", append = TRUE, row.names = FALSE)
This code worked fine for some time, but for the last ten days it has always returned an error:
Error in .local(conn, statement, ...) :
could not run statement: The used command is not allowed with this MySQL version
I don't understand how it is possible for the code to work for some time and now return an error.
I would kindly ask for feedback on this issue.
You could also use dbGetQuery from the RMySQL package and iterate over the rows, which was my solution when I hit a similar error with a dataframe I wanted to write to a MySQL DB:
mydb <- dbConnect(MySQL(), user = 'user', password = 'password', dbname = 'databasename', host = 'hostname')

for (i in 1:nrow(df)) {
  dbGetQuery(mydb, paste0("INSERT INTO MYTABLE (COL1,COL2) VALUES (", df$col1[i], ",", df$col2[i], ")"))
}
I'm trying to get data from a MySQL DB into RStudio Server. My code looks like this:
mydb <- dbConnect(MySQL(), user = 'user', password = 'password', dbname = 'dbname', host = 'localhost')

query <- stri_paste(
  'SELECT sellings.updated_at AS Up_Date,
          CONCAT(item_parameters.title, " ", ad_attributes.int_value) AS Class,
          CONCAT(geos.name, " ", geos.kind) AS place,
          geos.lon, geos.lat,
          sellings.price AS price,
          ((geo_routes.distance*2/1000 + 100)) AS delivery_cost
   FROM sellings, users, item_parameters, ad_attributes, geos, geo_routes
   WHERE users.encrypted_password != ""
     && item_parameters.title = "Класс"
     && sellings.price IS NOT NULL
     && ad_attributes.int_value IS NOT NULL
     AND users.id = sellings.user_id
     AND item_parameters.id = ad_attributes.item_parameter_id
     AND sellings.id = ad_attributes.ad_id
     AND sellings.geo_guid = geos.guid
     AND geos.routable_guid = geo_routes.src_guid
     AND geo_routes.distance = (SELECT geo_routes.distance
                                FROM geo_routes, geos
                                WHERE geos.guid = sellings.geo_guid
                                  AND geo_routes.src_guid = geos.routable_guid
                                  AND geo_routes.dst_guid = (SELECT geos.routable_guid
                                                             FROM geos
                                                             WHERE geos.name = "Воронеж" && geos.kind = "г"))
   ORDER BY Up_Date;')

rs <- dbGetQuery(mydb, query)
And I get an empty dataframe. But when I do the same with my local DB, everything is OK. The query takes a pretty long time, about 3 minutes, but it works properly. Moreover, the same query works fine from the MySQL command line; on the server it takes about 4 seconds. The server OS is Debian 7; the local machine runs Windows 8. Any ideas?
Sometimes, when querying from the command line, the default schema has been set by a previous command. That setting doesn't carry over to R, so the exact same query that works from the command line might not work in an R session. Maybe check the dbname.
Insert the statements below into your SQL query:
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
It worked for me.
Can you do such a thing? I have the following, but cursor.execute does not like the syntax of selectSQL. Ultimately, I'm looking to iterate through all tables in a .accdb and insert the records from each table into another .accdb with the same tables and fields. The reason: bringing over new records from field data collection on tablet PCs to the master database on a server.
import pyodbc

connOtherDB = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to my dbase;")
otherDBtbls = connOtherDB.cursor().tables()
for t in otherDBtbls:
    if t.table_name.startswith("tbl"):  # ignores MS sys tables
        cursor = connOtherDB.cursor()
        #selectSQL = '"SELECT * from '+ str(t.table_name) + '"'
        cursor.execute("SELECT * from tblDatabaseComments")  # this works if I refer to a table by name
        cursor.execute(selectSQL)  # this does not work and throws a SQL syntax error
        row = cursor.fetchone()
        if row:
            print t.table_name
            print row
Use str.format() to ease building of SQL statements:
import pyodbc

connOtherDB = pyodbc.connect("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=path to my dbase;")
otherDBtbls = connOtherDB.cursor().tables()
for t in otherDBtbls:
    if t.table_name.startswith("tbl"):  # ignores MS sys tables
        cursor = connOtherDB.cursor()
        selectSQL = 'SELECT * FROM {}'.format(t.table_name)
        cursor.execute(selectSQL)
        row = cursor.fetchone()
        if row:
            print t.table_name
            print row
As an aside, take a look at PEP 8 -- Style Guide for Python Code for guidance on maximum line length and variable naming, among other coding conventions.
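Since the stated end goal is moving records from each table in one .accdb into the matching table of another, here is a minimal sketch of that outer loop. The file paths are hypothetical, it assumes identical table and column layouts on both sides, and it naively copies every row (filtering down to only the new records is left out):

import pyodbc

DRIVER = "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ="
SRC = r"C:\data\field.accdb"   # hypothetical tablet-side database
DST = r"C:\data\master.accdb"  # hypothetical master database

src = pyodbc.connect(DRIVER + SRC)
dst = pyodbc.connect(DRIVER + DST)

for t in src.cursor().tables(tableType='TABLE'):
    if not t.table_name.startswith("tbl"):
        continue  # skip MS sys tables
    rows = src.cursor().execute("SELECT * FROM [{}]".format(t.table_name)).fetchall()
    if rows:
        markers = ", ".join(["?"] * len(rows[0]))
        # Relies on identical column order in source and destination tables.
        dst.cursor().executemany(
            "INSERT INTO [{}] VALUES ({})".format(t.table_name, markers), rows)
dst.commit()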