Errors from SQLCMD when using a sequence of scripts - sqlcmd

As a beginner, I am trying to load many scripts in sequence using the SQLCMD :r command, as follows:
:r c:\SQL_scripts\script_1.sql
:r c:\SQL_scripts\script_2.sql
:r c:\SQL_scripts\script_3.sql
.
.
.
The order in the sequence is important. Also, these scripts include many variables with common names which I can't change. Upon execution, SQLCMD generates many errors since these variables are already declared in the previous scripts. Is there a way to create a new query for each file to avoid this problem, or to reset all variable names before running the next script? Thank you. Menachem
As explained, the errors are:
Msg 134, Level 15, State 1, Line 204
The variable name '@path_in' has already been declared. Variable names must be unique within a query batch or stored procedure.
.
.
.
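Since the error message says variable names must be unique within a query batch, one possible way to give each file a fresh scope is to end each include with a GO batch separator, because a DECLAREd variable only lives until the end of its batch. A rough sketch of the calling script, assuming the individual scripts do not need to share variables with one another:
:r c:\SQL_scripts\script_1.sql
GO
:r c:\SQL_scripts\script_2.sql
GO
:r c:\SQL_scripts\script_3.sql
GO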

Related

Python hangs on fetchall using MySQL connector

I am fairly new to Python and MySQL. I am writing code that queries 60 different tables, each containing records for each second in a five minute period. The code executes every five minutes. A few of the queries can reach 1/2 MB of data but most are in the 50 KB range. I am running on a workstation running Windows 7, 64-bit, using MySQL Connector/Python. I am testing my code using PowerShell windows but the code will eventually run as a scheduled task. The workstation has plenty of RAM (8 GB). Other processes are running, but according to the Task Manager only half of memory is being used. Mostly everything performs as expected, but sometimes processing hangs. I have inserted print statements in the code (I've also used debugger tracing) to determine where the hang occurs. It is occurring on a call to fetchall. Below are the germane parts of the code. All CAPS are (pseudo)constants.
mncdb = mysql.connector.connect(
    option_files=ENV_MCG_MYSQL_OPTION_FILE,
    option_groups=ENV_MCG_MYSQL_OPTION_GROUP,
    host=ut_get_workstation_hostname(),
    database=ENV_MNC_DATABASE_NAME
)
for generic_table_id in DBR_TABLE_INDEX:
    site_table_id = DBR_SITE_TABLE_NAMES[site_id][generic_table_id]
    db_cursor = mncdb.cursor()
    db_command = (
        "SELECT *"
        + " FROM "
        + site_table_id
        + " WHERE "
        + DBR_DATETIME_FIELD
        + " >= '"
        + query_start_time + "'"
        + " AND "
        + DBR_DATETIME_FIELD
        + " < '"
        + query_end_time + "'"
    )
    try:
        db_cursor.execute(db_command)
        print "selected data for table " + site_table_id
        try:
            table_info = db_cursor.fetchall()
            print "extracted data for table " + site_table_id
        except:
            print "DB exception " + formatExceptionInfo()
            print "FETCH failed to return any rows..."
            table_info = []
            raise
    except:
        print "uncaught DB error " + formatExceptionInfo()
        raise
.
.
.
other processing that uses the data
.
.
.
db_cursor.close()
mncdb.close()
.
.
.
No exceptions are being raised. In a separate PowerShell window I can access the data being processed by the code. For my testing all data in the database is loaded before the code is executed. No processes are updating the database while the code is being tested. The hanging can occur on the first execution of the code or after several hours of execution.
My question is what could be causing the code to hang on the fetchall statement?
You can alleviate this by setting the fetch size:
mncdb = mysql.connector.connect(
    option_files=ENV_MCG_MYSQL_OPTION_FILE,
    option_groups=ENV_MCG_MYSQL_OPTION_GROUP,
    host=ut_get_workstation_hostname(),
    database=ENV_MNC_DATABASE_NAME,
    cursorclass=MySQLdb.cursors.SSCursor
)
But before you do this, you should also use MySQL's support for prepared statements (parameterized queries) instead of string concatenation when building your statement.
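A minimal sketch of the parameterized form, reusing the names from the question (the table and column names still have to be concatenated, since placeholders only work for values):
db_command = (
    "SELECT * FROM " + site_table_id
    + " WHERE " + DBR_DATETIME_FIELD + " >= %s"
    + " AND " + DBR_DATETIME_FIELD + " < %s"
)
# The connector substitutes and escapes the two datetime values itself.
db_cursor.execute(db_command, (query_start_time, query_end_time))
table_info = db_cursor.fetchall()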
Hanging could involve the MySQL tables themselves and not specifically the Python code. Do they contain many records? Are they very wide tables? Are they indexed on the datetime_field?
Consider various strategies:
Specifically select the needed columns instead of the asterisk, which pulls all columns.
Add an index on the DBR_DATETIME_FIELD column used in the WHERE clause.
Diagnose further with printed timers, print(datetime.datetime.now()), to see which tables are the bottleneck (a sketch follows below). In doing so, be sure to import the datetime module.
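A rough illustration of the timing idea, assuming the same loop and names as in the question:
import datetime  # needed for the timestamps

for generic_table_id in DBR_TABLE_INDEX:
    site_table_id = DBR_SITE_TABLE_NAMES[site_id][generic_table_id]
    print "starting " + site_table_id + " at " + str(datetime.datetime.now())
    # ... execute the query and fetchall() exactly as in the question ...
    print "finished " + site_table_id + " at " + str(datetime.datetime.now())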

Pass parameter from Batch file to MYSQL script

I'm having a really hard time believing this question has never been asked before, it MUST have been! I'm working on a batch file that needs to run some SQL commands. All tutorials explaining this DO NOT WORK (referring to this link: Pass parameters to sql script, which someone will undoubtedly mention)! I've tried other posts on this site verbatim and still nothing is working.
The way I see it, there are two ways I can approach this:
1. Either figure out how to call my basic MYSQL script and specify a parameter or..
2. Find an equivalent "USE ;" command that works in batch
My Batch file so far:
:START
@ECHO off
:Set_User
set usrCode = 0
mysql -u root SET @usrCode = '0'; \. caller.sql
Simply put, I want to pass 'usrCode' to my MYSQL script 'caller.sql' which looks like this:
USE `my_db`;
CALL collect_mismatch(@usrCode);
I know that procedures are a whole other topic to get into, but assume that the procedure is working just fine. I just can't get my parameter from Batch to MYSQL.
Ideally I would like to have the 'USE' & 'CALL' commands in my Batch file, but I can't find anything that lets me select a database in Batch before CALLing my procedure. That's when I tried the above link, which boasts a simple command line entry and you're off to the races, but that isn't the case.
Any help would be greatly appreciated.
This will work:
echo SET @usrCode = '0'; > params.sql
type params.sql caller.sql | mysql -u root dbname
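For clarity, what mysql ends up reading is simply the two files concatenated, so the user variable is set in the same session before the procedure call, roughly:
SET @usrCode = '0';
USE `my_db`;
CALL collect_mismatch(@usrCode);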

SSIS - Export multiple SQL Server tables to multiple text files

I have to move data between two SQL Server DBs. My task is to export the data as text (.dat) files, move the files and import into the destination. I have to migrate over 200 tables.
This is what I tried
1) I used an Execute SQL task to fetch my tables.
2) Used a For each loop to loop through the table names from the collection.
3) Used a script task inside the for each loop to build the text file destination path.
4) Called a DFT with the table name in a variable for the source ole db and the path name in a variable for the destination flat file.
The first table extracts fine but the second table bombs with a synchronization error. I see this in numerous posts but could not find one that matches my scenario. Hence posting here.
Even if I get the package to work with multiple DFTs, the second table from the second DFT does not export columns because the flat file connection manager still remembers the first table columns. Is there a way to get it to forget the columns?
Any thoughts on how I can export multiple tables to multiple text files using one DFT with dynamic source and destination variables?
Thanks and appreciate your help.
Unfortunately the Bulk Import Task only enables us to use format files effectively to map the columns between source and destination. The Bulk Import Task uses the BULK INSERT T-SQL command to import the data; to execute it, the user should have the BULKADMIN server privilege.
Most companies would not allow the BULKADMIN server privilege to be enabled, due to security reasons.
Hence using a Script Task to construct BCP statements is a good and simple option for the export.
You do not need to construct a .bat file, as the script itself can execute DOS commands, which run under the .NET security account.
I figured out a way to do this. I thought I would share it in case anybody is stuck in the same situation.
So, in summary, I needed to export and import data via files. I also wanted to use a format file if at all possible for various reasons.
What I did was
1) Construct a DFT which gets me a list of table names from the DB that I need to export. I used 'oledb' as a source and 'recordset destination' as the target and stored the table names inside an object variable.
A DFT is not really necessary. You can do it any other way. Also, in our application, we store the table names in a table.
2) Add a 'For each loop container' with a 'For Each ADO Enumerator' which takes my object variable from the previous step into the collection.
3) Parse the variable one by one and construct BCP statements like below inside a Script task. Create variables as necessary. The BCP statement will be stored in a variable.
I loop through the tables and construct multiple BCP statements like this.
BCP "DBNAME.DBO.TABLENAME1" out "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -f "PATH\filename.fmt"
BCP "DBNAME.DBO.TABLENAME1" out "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -f "PATH\filename.fmt"
The statements are put inside a .bat file. This is also done inside the script task.
4) An Execute Process Task will next execute the .bat file. I had to do this because I do not have the option to use the 'master..xp_cmdshell' command or the 'BULK INSERT' command in my company. If I had the option to execute cmdshell, I could have run the command directly from the package.
5) Again add a 'For each loop container' with a 'For Each ADO Enumerator' which takes my object variable from the previous step into the collection.
6) Parse the variable one by one and construct BCP statements like this inside a Script task. Create variables as necessary. The BCP statement will be stored in a variable.
I loop through the tables and construct multiple BCP statements like this.
BCP "DBNAME.DBO.TABLENAME1" in "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -b10000 -f "PATH\filename.fmt"
BCP "DBNAME.DBO.TABLENAME1" in "PATH\FILENAME2.dat" -S SERVERNAME -T -t"|" -r$\n -b10000 -f "PATH\filename.fmt"
The statements are put inside a .bat file. This is also done inside the script task.
The -b10000 was added so I can import in batches. Without this, many of my large tables could not be copied due to limited space in tempdb.
7) Run this .bat file to import the files on the destination side.
I am not sure if this is the best solution. I still thought I would share what satisfied my requirement. If my answer is not clear, I would be happy to explain if you have any questions. We can also optimize this solution. The same can be done purely via VB scripts, but you have to write some code to do that.
I also created a package configuration file where I can change the DB name, server name, the data and format file locations dynamically.
Thanks.

Second RMySQL operation fails - why?

I am running a script that stores different datasets to a MySQL database. This works so far, but only when run line by line, e.g.:
# write table1
replaceTable(con,tbl="table1",dframe=dframe1)
# write table2
replaceTable(con,tbl="table2",dframe=dframe2)
If I select both (I use StatET / Eclipse) and run the selection, I get an error:
Error in function (classes, fdef, mtable) :
unable to find an inherited method for function "dbWriteTable",
for signature "MySQLConnection", "data.frame", "data.frame".
I guess this has to do with the fact that my con is still busy or so when the second request is started. When I run the script line after line it works just fine. Hence I wonder: how can I tell R to wait until the first request is finished and then go ahead? How can I make R scripts interactive (just console, like the plot examples - no tcl/tk)?
EDIT:
require(RMySQL)
replaceTable <- function(con, tbl, dframe){
  if(dbExistsTable(con, tbl)){
    dbRemoveTable(con, tbl)
    dbWriteTable(con, tbl, dframe)
    cat("Existing database table updated / overwritten.")
  }
  else {
    dbWriteTable(con, tbl, dframe)
    cat("New database table created")
  }
}
dbWriteTable has two important arguments:
overwrite: a logical specifying whether to overwrite an existing table
or not. Its default is ‘FALSE’.
append: a logical specifying whether to append to an existing table
in the DBMS. Its default is ‘FALSE’.
For past projects I have successfully achieved appending, overwriting, creating, ... of tables with the proper combinations of these.
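As an untested sketch, the question's replaceTable() could lean on the overwrite argument instead of dropping the table first:
# overwrite = TRUE creates the table if it is missing and replaces it if it exists,
# so the dbExistsTable() branch is no longer needed.
replaceTable <- function(con, tbl, dframe){
  dbWriteTable(con, tbl, dframe, overwrite = TRUE)
  cat("Table", tbl, "written (created or overwritten).")
}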

Using shell script to insert data into remote MYSQL database

I've been trying to get a shell (bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, HASH, and a file size, connect to a remote mysql database, and insert the data into an existing table. I've gotten it working up until the remote MYSQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$#"
do
echo -n *****
ident= *****
data= ****
size=` ****
hash=`****
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets as I was getting errors. As it is right now, the script works fine until it connects to the mysql database. It just connects to the mysql database and doesn't run the insert command (and I don't even know if the insert command would work in bash).
PS: I've tried both the mysql commands from the command line one by one, and they worked, though I defined the hash/file/size and didn't have the escaping "".
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)
The insert statement has to be sent to mysql as its input, not written as another line in the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.site.com/$hash','$file','$size');
EOF
The << EOF means take everything before the next line that contains nothing but EOF (no whitespace at the beginning) as standard input to the program.
This might not be exactly what you are looking for but it is an option.
If you want to bypass the annoyance of actually including your query in the sh script, you can save the query as a .sql file (useful sometimes when the query is REALLY big and complicated). This can be done with simple file IO in whatever language you are using.
Then you can simply include in your sh script something like:
mysql -u youruser -pyourpass -h remoteHost < query.sql &
This is called batch mode execution. Optionally, you can include the ampersand at the end to ensure that that line of the sh script does not block.
Also if you are concerned about the same data getting entered multiple times and your rdbms getting inconsistent, you should explore MySql transactions (commit, rollback, etc).
Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
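A rough sketch of that CSV idea, using the question's placeholder table and column names and a hypothetical /tmp/upload.csv path (LOAD DATA LOCAL needs local_infile enabled on both the client and the server):
# inside the loop: append one row per file
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> /tmp/upload.csv

# after the loop: bulk-load all rows in one go
mysql --host=randomhost --user=randomuser --password=randompass --local-infile=1 randomdb <<'EOF'
LOAD DATA LOCAL INFILE '/tmp/upload.csv'
INTO TABLE `table`
FIELDS TERMINATED BY ','
(field1, field2, field3);
EOF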