CSV insertion using SQLCMD, how to do it?

I have been reading a lot of forums and tutorials where other users ask the same question, and the answer is almost always the same: use BCP.EXE (BULK INSERT) to import instead, and also export with BCP.EXE using native format.
But in some posts I read that there is a (complicated) way to do it with SQLCMD, and my situation now strongly requires me to do it through SQLCMD, as the target is an Azure SQL DB. Does anyone know the list of steps to follow? Or any useful resource you can share with me?
Thank you very much in advance.
I'm expecting to use only SQLCMD to insert the CSV data into an Azure SQL DB.

No. You can use SQLCMD to export data:
sqlcmd -S ipdb -U sa -P "passwordsa" -d dbname -Q "select * from tablename" -o "directory/file.csv" -s"," -W -w 700
Or BCP to import data:
bcp Tablename in ~/filename.txt -S localhost -U sa -P <your_password> -d Databasename -c -t ','
Why don't you use SSMS to import the CSV?
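If SQLCMD really is the only tool available (as on a client machine targeting an Azure SQL DB), one workaround is to turn the CSV into a script of INSERT statements and run that script with sqlcmd. A minimal sketch, assuming a hypothetical two-column table dbo.TargetTable, a header row in data.csv, and no embedded commas or quotes in the values:
#!/bin/bash
# Turn each CSV row into an INSERT statement. dbo.TargetTable, col1/col2
# and data.csv are placeholder names; values containing commas or quotes
# would need real CSV parsing instead of this naive split.
{
  echo "SET NOCOUNT ON;"
  tail -n +2 data.csv | while IFS=',' read -r c1 c2; do
    echo "INSERT INTO dbo.TargetTable (col1, col2) VALUES ('$c1', '$c2');"
  done
} > insert.sql
# Run the generated script against the Azure SQL DB.
sqlcmd -S yourserver.database.windows.net -d dbname -U user -P 'password' -i insert.sql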

Related

How to structure a cron job and script to execute sql command

I have a MySQL database accessible through CPANEL. I want to execute a SQL command to DELETE from dbtable where eventdate = 'YYYY-MM-DD'. This is my cron job.
curl -L --max-redirs 1000 -v "https://ottawaoc.ca/test/files/delete_dates.sh" 1>/dev/null
and here is the shell script
#!/bin/bash
mysql --user = "ottawaoc_test" --password = "test ps" --database = "ottawaoc_test" --execute ="DELETE FROM `h8be5_eventregistration` WHERE `eventdate` = '2020-09-27'"
(I do insert the correct password.)
I get the output mailed to me, and it seems to fetch the shell script, but nothing happens in the database.
Could someone give me the correct commands and/or tell me how I can get errors back from MySQL?
I used to run mysql crons by putting this in the shell:
#!/bin/bash
echo "mysql statement;" | mysql -B -hHOST -uUSER -pPASS DBNAME

Importing from a MySQL dump to ClickHouse

I want to import a MySQL dump into ClickHouse. I've tried going through the official docs but cannot find anything. I've tried importing via CSV following a Stack Overflow answer. Any help appreciated. I'm on Ubuntu 16.04 LTS.
For small data sets, exporting to TSV works, but at scale it won't, because the export alone takes a long time.
In that case you should stream the data straight from MySQL's stdout; clickhouse-client knows how to consume it perfectly.
Example code:
mysql -u user -ppass --compress -ss -e "SELECT * FROM table WHERE id >0 AND id <=1000000" db_name | sed 's/\"//g;s/\t/","/g;s/^/"/;s/$/"/' | clickhouse-client --query="INSERT INTO db_name.table FORMAT CSV"
Using this method, I imported 500 GB and 1.9 billion rows into ClickHouse in 7-10 hours.
You can export data from MySQL into TSV file using MySQL command line:
mysql -Bse "select * from TABLE_NAME" > table.tsv
And then import data to ClickHouse:
cat table.tsv | clickhouse-client --query="INSERT INTO TABLE_NAME FORMAT TabSeparated"
A MySQL data dump can also be loaded by replaying it through ClickHouse's MySQL-compatible interface (port 9004 by default):
mysql --protocol tcp -u clickhouse_user_name -p -P 9004 your_db_name < data.sql

Incorrect syntax when importing an sql file from MySQL to MS SQL via SQLCMD

I have large .sql files exported from MySQL, and I am trying to import them into MS SQL (LocalDB) via SQLCMD. But when I type the following into the command prompt:
sqlcmd.exe -S (localdb)\MSSQLLocaldb -i C:\Users\Administrator\Desktop\1\SQLQuery4.sql
I got the following error message:
Incorrect syntax near 'tblo'
I checked my .sql file, and it seems SQLCMD can't handle the double-quoted identifiers,
e.g.
INSERT INTO "tblo" VALUES (2,'DTT','10000286','Dp','y',2,38,'2010-02-22 11:03:51','2010-02-22 11:03:51');
However, it's fine in SSMS.
Any idea how to solve this problem?
I found a solution myself:
I can add the --skip-quote-names flag when I dump the data from MySQL,
e.g.
mysqldump.exe -hlocalhost -uUserName -pPassword --compatible=mssql --no-create-info --skip-quote-names --skip-add-locks DataBase tblo > D:\Test\dump.sql
The result in dump.sql will look like:
INSERT INTO tblo VALUES (2,'DTT','10000286','Dp','y',2,38,'2010-02-22 11:03:51','2010-02-22 11:03:51');
So I can use this .sql file to import the data directly into MS SQL Server via SQLCMD:
sqlcmd -S (localdb)\MSSQLLocaldb -i D:\Test\dump.sql
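Alternatively, if re-exporting from MySQL isn't convenient: sqlcmd has an -I switch that turns on SET QUOTED_IDENTIFIER, which should make the double-quoted "tblo" parse as an identifier (assuming the rest of the dump is already T-SQL-compatible):
sqlcmd -I -S (localdb)\MSSQLLocaldb -i C:\Users\Administrator\Desktop\1\SQLQuery4.sql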

How can I export a SQL Server #temp table to a .csv file within a script?

I'd like to write a script that lets me export .csv files from the 15-20 temporary tables I created, instead of copying and pasting each result into a separate .csv file and saving it by hand.
:!!sqlcmd -S server -d database -E -Q "SET NOCOUNT ON; SELECT * FROM TABLE" -o "C:\Users\name\Documents\folder\filename.csv" -W -w 1024 -s ","
I've tried this, and it works (though the formatting isn't right), but it doesn't seem to work at all for a temp table; the .csv file contains only this error:
Msg 208 Level 16 State 1 Server SERVERNAME
Invalid object name '#TEMPTABLE'.
I cannot obtain "elevated privileges" to be able to use BCP export, because I cannot write a stored procedure, create a new database, or access the command line. Is there a workaround for this?
Temp tables are ephemeral; they do not persist across sessions, and each sqlcmd invocation is a separate session, which is why the export step can't see #TEMPTABLE. Instead of creating temp tables, create actual tables, either in the database you're working with or in tempdb, then export the data from tempdb.
An example:
sqlcmd -S server -d database -E -Q "If Exists (select * FROM tempdb.sys.tables WHERE name = 'Tmp_DataExport1') drop TABLE tempdb..Tmp_DataExport1;"
sqlcmd -S server -d database -E -Q "SELECT TOP 5 * INTO tempdb..Tmp_DataExport1 FROM T_SourceTable"
sqlcmd -S server -d database -E -Q "SELECT * FROM tempdb..Tmp_DataExport1" -o "c:\temp\filename.csv" -W -w 1024 -s ","

Dump a mysql database to a plaintext (CSV) backup from the command line

I'd like to avoid mysqldump since it outputs in a form that is only convenient for mysql to read. CSV seems more universal (one file per table is fine). But if there are advantages to mysqldump, I'm all ears. Also, I'd like something I can run from the command line (Linux). If that's a mysql script, pointers on how to make such a thing would be helpful.
If you can cope with table-at-a-time output, and your data is not binary, use the -B option to the mysql command. With this option it'll generate TSV (tab-separated) files which can be imported into Excel, etc., quite easily:
% echo 'SELECT * FROM table' | mysql -B -uxxx -pyyy database
Alternatively, if you've got direct access to the server's file system, use SELECT INTO OUTFILE which can generate real CSV files:
SELECT * INTO OUTFILE 'table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM table
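Note that on many installations SELECT INTO OUTFILE is restricted (or disabled) by the server's secure_file_priv setting, so it's worth checking where, if anywhere, the server is allowed to write:
mysql -uxxx -pyyy -e "SHOW VARIABLES LIKE 'secure_file_priv'"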
In MySQL itself, you can specify CSV output like:
SELECT order_id,product_name,qty
FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
From http://www.tech-recipes.com/rx/1475/save-mysql-query-results-into-a-text-or-csv-file/
You can dump a whole database in one go with mysqldump's --tab option. You supply a directory path and it creates one .sql file per table with the DROP TABLE IF EXISTS / CREATE TABLE statements, and a .txt file with the contents, tab-separated. To create comma-separated files instead you could use the following:
mysqldump --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' --tab /tmp/path_to_dump/ database_name
That path needs to be writable by both the mysql user and the user running the command, so for simplicity I recommend chmod 777 /tmp/path_to_dump/ first.
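A nice side effect of --tab is that the .txt files can be loaded back with mysqlimport using the matching options (a sketch; mysqlimport derives the table name from the file name):
mysqlimport --password --fields-optionally-enclosed-by='"' --fields-terminated-by=',' database_name /tmp/path_to_dump/table_name.txt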
The SELECT INTO OUTFILE option wouldn't work for me, but the roundabout way below, piping the tab-delimited output through sed, did:
mysql -uusername -ppassword -e "SELECT * from tablename" dbname | sed 's/\t/","/g;s/^/"/;s/$/"/' > /path/to/file/filename.csv
Here is the simplest command for it:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' | sed 's/\t/,/g' > output.csv
If there are commas in the column values, we can generate a .tsv instead of a .csv with the following command:
mysql -h<hostname> -u<username> -p<password> -e 'select * from databaseName.tableName' > output.tsv
If you really need a "backup" then you also need the database schema: table definitions, view definitions, stored procedures and so on. A backup of a database isn't just the data.
The value of the mysqldump format for backup is specifically that it is very EASY to use to restore MySQL databases. A backup that isn't easily restored is far less useful. If you are looking for a method to reliably back up MySQL data so you can restore to a MySQL server, then I think you should stick with the mysqldump tool.
MySQL is free and runs on many different platforms. Setting up a new MySQL server that I can restore to is simple. I am not at all worried about not being able to set up MySQL so I can do a restore.
I would be far more worried about a custom backup/restore based on a fragile format like CSV/TSV failing. Are you sure that all the quotes, commas, or tabs in your data would get escaped correctly and then parsed correctly by your restore tool?
If you are looking for a method to extract the data then see several in the other answers.
You can use the script below to get the output into CSV files, one file per table, with headers:
for tn in `mysql --batch --skip-pager --skip-column-names --raw -uuser -ppassword -e "show tables from mydb"`
do
  mysql -uuser -ppassword mydb -B -e "select * from \`$tn\`;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > "$tn.csv"
done
Here user is your username, password is your password (passed inline so you aren't prompted for each table), and mydb is the database name.
Explanation of the script: the first expression in sed replaces the tabs with "," so the fields end up enclosed in double quotes and separated by commas; the second inserts a double quote at the beginning of the line and the third at the end; the final one takes care of any \n.
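To see what the sed quoting does in isolation, feed it a single tab-separated line:
# prints: "1","some value"
echo -e "1\tsome value" | sed 's/\t/","/g;s/^/"/;s/$/"/'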
If you want to dump the entire database, one TSV file per table:
#!/bin/bash

host=hostname
uname=username
pass=password
port=portnr
db=db_name
s3_url=s3://buckera/db_dump/
DATE=`date +%Y%m%d`

rm -rf $DATE

echo 'show tables' | mysql -B -h${host} -u${uname} -p${pass} -P${port} ${db} > tables.txt
awk 'NR>1' tables.txt > tables_new.txt

while IFS= read -r line
do
  mkdir -p $DATE/$line
  echo "select * from $line" | mysql -B -h"${host}" -u"${uname}" -p"${pass}" -P"${port}" "${db}" > $DATE/$line/dump.tsv
done < tables_new.txt

touch $DATE/$DATE.fin
rm -rf tables_new.txt tables.txt
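The s3_url variable in the script above is defined but never used; presumably an upload step was intended. A sketch of what that last step might look like, assuming the AWS CLI is installed and configured:
# Upload the dated dump directory to the bucket (hypothetical final step).
aws s3 sync "$DATE" "${s3_url}${DATE}/"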
Check out mk-parallel-dump, which is part of the ever-useful Maatkit suite of tools. It can dump comma-separated files with the --csv option.
This can do your whole db without specifying individual tables, and you can specify groups of tables in a backupset table.
Note that it also dumps table definitions, views and triggers into separate files. In addition to providing a complete backup in a more universally accessible form, it is also immediately restorable with mk-parallel-restore.
Two line PowerShell answer:
# Store in variable
$Global:csv = (mysql -uroot -p -hlocalhost -Ddatabase_name -B -e "SELECT * FROM some_table") `
| ConvertFrom-Csv -Delimiter "`t"
# Out to csv
$Global:csv | Export-Csv "C:\temp\file.csv" -NoTypeInformation
Boom-bata-boom
-D = the name of your database
-e = query
-B = tab-delimited
There's a slightly simpler way to get all the tables into tab-delimited files fast:
#!/bin/bash
tablenames=$(mysql your_database -e "show tables;" -B | sed "1d")
IFS=$'\n'
tables=($tablenames)
for table in ${tables[@]}; do
  mysql your_database -e "select * from ${table}" -B > "${table}.tsv"
done
Here's a basic Python script that does the work! You can choose to export only the headers (column names) or both headers and data.
Just change the database credentials and run the script. It will output all the data to the output folder.
To run the script:
Run: pip install mysql-connector-python
Change the database credentials in the "INPUT" section
Run: python filename.py
import mysql.connector
from pathlib import Path
import csv

#========INPUT===========
databaseHost=""
databaseUsername=""
databasePassword=""
databaseName=""
outputDirectory="./WITH-DATA/"
exportTableData=True #MAKING THIS FIELD FALSE WILL STORE ONLY THE TABLE HEADERS (COLUMN NAMES) IN THE CSV FILE
#========INPUT END===========

Path(outputDirectory).mkdir(parents=True, exist_ok=True)

mydb = mysql.connector.connect(
    host=databaseHost,
    user=databaseUsername,
    password=databasePassword
)
mycursor = mydb.cursor()
mycursor.execute("USE "+databaseName)
mycursor.execute("SHOW TABLES")
tables = mycursor.fetchall()
tableNames=[table[0] for table in tables]

print("================================")
print("Total number of tables: "+ str(len(tableNames)))
print(tableNames)
print("================================")

for tableName in tableNames:
    print("================================")
    print("Processing: "+ str(tableName))
    # Reconnect per table so each query starts from a fresh cursor.
    mydb = mysql.connector.connect(
        host=databaseHost,
        user=databaseUsername,
        password=databasePassword
    )
    mycursor = mydb.cursor()
    mycursor.execute("USE "+databaseName)
    if exportTableData:
        mycursor.execute("SELECT * FROM "+tableName)
    else:
        mycursor.execute("SELECT * FROM "+tableName+" LIMIT 1")
    print(mycursor.column_names)
    # Write one CSV per table: header row first, then (optionally) the data.
    with open(outputDirectory+tableName+".csv", 'w', newline='') as csvfile:
        csvwriter = csv.writer(csvfile)
        csvwriter.writerow(mycursor.column_names)
        if exportTableData:
            myresult = mycursor.fetchall()
            csvwriter.writerows(myresult)