Oneliner pipe to PostgreSQL from MySQL with bash - mysql

So I am piping quite a lot of data with bash every day between 3 servers:
Server A is MySQL (connection over SSH).
Server B is just a CentOS server where I run the bash script.
Server C is PostgreSQL 9.6.
All was good until one table got a row with a double quote in the middle of a varchar. This breaks my pipe at the insertion step (on the PG side).
Indeed, when getting the data this way from MySQL, it is not quoted. So I believe, in the end, it comes down to the basic behaviour of COPY and its QUOTE parameter.
Here is the bash code:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' NULL AS 'NULL';"
I tried playing with the COPY parameter QUOTE, but without success.
Should I put some sed in the middle of the pipeline?
I also tried double quoting when getting the data out of mysql but could not find the relevant parameter when mysql is used in a pipe like this.
I'd like to keep things in one pipe (no MySQL->CSV then CSV->PG, please).
Thanks!

Here's a working sample of importing CSV to Postgres:
t=# create table so10 (i int,t text);
CREATE TABLE
t=# \q
postgres@vao-VirtualBox:~$ echo "1,Bro" | psql -d t -c "copy so10 from stdin with csv"
COPY 1
postgres@vao-VirtualBox:~$ psql t -c "select * from so10"
i | t
---+-----
1 | Bro
(1 row)
You can open an SSH tunnel to MySQL and run mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA" locally (instead of the echo in my example).
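If sed really has to go in the middle of the pipe, one way around the quote problem (an untested sketch, keeping the same variables and ssh quoting as in the question) is to drop CSV mode altogether: mysql's batch output is tab-separated with backslash escapes, which is what COPY's plain text format expects, so a stray double quote is just an ordinary character there. The only job left for sed is removing the header line, because HEADER is CSV-only in 9.6:
# sed 1d drops the column-name header that mysql prints in batch mode;
# without CSV, COPY has no QUOTE handling at all, so embedded " characters load as-is
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" \
  'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' | \
  sed 1d | \
  psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH DELIMITER E'\t' NULL AS 'NULL';"
If you would rather stay with CSV, another commonly used trick is to give COPY a QUOTE character that can never occur in the data, for example QUOTE E'\b'.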

Related

connect to mysql db and execute query and export result to variable - bash script

I want to connect to a MySQL database, execute some queries and export the result to a variable, and all of this needs to be done entirely by a bash script.
I have a code snippet but it does not work.
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
mysqlUser=n_userdb
mysqlPass=d2FVR0NA3
mysqlDb=n_datadb
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1")
echo "${result}" >> a.txt
What's the problem?
The issue was resolved in the chat by using the correct password.
If you further want to get only the data, use mysql with -NB (or --skip-column-names and --batch).
Also, the script needs to quote the variable expansions, or there will be issues with usernames/passwords containing characters that are special to the shell. Additionally, uppercase variable names are usually reserved for system variables.
#!/bin/sh
basedir=$(dirname "$0")
mysqlUser='n_userdb'
mysqlPass='d2FVR0NA3'
mysqlDb='n_datadb'
cd "$basedir" &&
mysql -NB -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" \
-e 'select * from confs limit 1' >a.txt 2>a-err.txt
Ideally though, you'd use a my.cnf file to configure the username and password.
See e.g.
MySQL Utilities - ~/.my.cnf option file
mysql .my.cnf not reading credentials properly?
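For completeness, a minimal sketch of such an option file, written from the shell (the credentials are simply the ones from the question; back up any existing ~/.my.cnf first):
# create a client option file so the password no longer appears on the command line
cat > ~/.my.cnf <<'EOF'
[client]
user=n_userdb
password=d2FVR0NA3
EOF
chmod 600 ~/.my.cnf
# afterwards the call shrinks to:
mysql -NB -D "$mysqlDb" -e 'select * from confs limit 1' >a.txt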
Do this:
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1" | grep '^\|' | tail -1)
Bash's $() command substitution is awkward with values that span multiple lines, so the hack above greps only the interesting part: the data row.

How can I export a mySQL #temp table to a .csv file within script?

I'd like to write a script that lets me export .csv files from the 15-20 temporary tables I created, instead of having to copy and paste into a separate .csv file and then save them down.
:!!sqlcmd -S server -d database -E -Q "SET NOCOUNT ON
SELECT * FROM TABLE" -o "C:\Users\name\Documents\folder\filename.csv"
-W -w 1024 -s ","
I've tried this, which works (though the formatting isn't right), but it doesn't seem to work at all for a temp table; the .csv file contains only this:
Msg 208 Level 16 State 1 Server SERVERNAME
Invalid object name '#TEMPTABLE'.
I cannot obtain "elevated privileges" to be able to use BCP export, because I cannot write a stored procedure, create a new database, or access the command line. Is there a workaround for this?
Temp tables are ephemeral; they do not persist across sessions. Instead of creating temp tables, create actual tables, either in the database that you're working with or in tempdb, then export the data from tempdb.
An example:
sqlcmd -S server -d database -E -Q "If Exists (select * FROM tempdb.sys.tables WHERE name = 'Tmp_DataExport1') drop TABLE tempdb..Tmp_DataExport1;"
sqlcmd -S server -d database -E -Q "SELECT TOP 5 * INTO tempdb..Tmp_DataExport1 FROM T_SourceTable"
sqlcmd -S server -d database -E -Q "SELECT * FROM tempdb..Tmp_DataExport1" -o "c:\temp\filename.csv" -W -w 1024 -s ","
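If you can run sqlcmd from a shell, that export step loops easily over all of the staging tables; a rough sketch (the table names and the output folder are placeholders):
# one .csv per staging table previously filled with SELECT ... INTO tempdb..<name>
for tbl in Tmp_DataExport1 Tmp_DataExport2 Tmp_DataExport3; do
  sqlcmd -S server -d database -E \
    -Q "SELECT * FROM tempdb..${tbl}" \
    -o "C:\\temp\\${tbl}.csv" -W -w 1024 -s ","
done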

gnu parallel mysql LOAD DATA LOCAL INFILE

I am trying to use GNU parallel to execute several LOAD DATA LOCAL INFILE mysql commands where:
{1} is the name of the file which I obtain from a UNIX find command pipe
{2} is the result of a chop.pl script that prints out a certain token from the file string according to certain rules
It seems that I am calling GNU parallel the correct way, except that it does not keep the double quotes around the mysql command after the -e, and that causes it not to work.
E.g.
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"
The command it ends up attempting, lacking the double quotes after -e, looks like this:
mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e LOAD DATA LOCAL INFILE '/my/file/name/yadda_yadda-12345678.txt' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='yadda_yadda', col6='foo'
Any ideas how to add back the double-quotes after the -e?
The lazy and effective way: put the mysql command in a function, then have parallel call that, passing {1} and {2}.
Using functions is actually suggested by the parallel man page:
https://www.gnu.org/software/parallel/man.html#QUOTING
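A sketch of that approach, assuming bash and that the function is made visible to parallel with export -f (env_parallel is the alternative the manual describes):
# the tricky quoting now lives inside the function body instead of on the parallel command line
load_one() {
    # $1 = token printed by chop.pl, $2 = file name found by find
    mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname \
      -e "LOAD DATA LOCAL INFILE '$2' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='$1', col6='foo'"
}
export -f load_one
find /my/folder/ -name "*.txt" | while read -r i; do chop.pl "$i"; echo "$i"; done | \
    parallel -t -N 2 load_one {1} {2}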
Answering my own question: the solution was to escape all double quotes, single quotes and parentheses:
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e \"LOAD DATA LOCAL INFILE \'{2}\' IGNORE INTO TABLE tblname IGNORE 1 LINES \(col1,col2,col3,col4\) set col5=\'{1}\', col6=\'foo\'\"

pass parameter from bash to mysql script

I'm trying to pass a parameter from a bash script to a mysql script. The bash script is:
#!/bin/bash
for file in `ls *.symbol`
do
path=/home/qz/$file
script='/home/qz/sqls/load_eval.sql'
mysql -u qz -h compute-0-10 -pabc -e "set @pred = '$path'; source $script;"
done
The load_eval.sql is
use biogrid;
load data local infile @pred into table lasp
fields terminated by ','
lines terminated by '\n'
(score, symbols);
When running the bash script, I got error the message:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@pred into table lasp ..
It seems the value of the parameter @pred is not passed into the mysql script.
MySQL doesn't support session variables in a LOAD DATA INFILE statement like that. This has been recognized as a feature request for quite some time (http://bugs.mysql.com/bug.php?id=39115), but the feature has never been implemented.
I would recommend using mysqlimport instead of doing the complex steps with mysql that you're doing. The file's name must match the table's name, but you can trick this with a symbolic link:
#!/bin/bash
for file in *.symbol
do
path="/home/qz/$file"
ln -s -f "$path" /tmp/lasp.txt
mysqlimport -u qz -h compute-0-10 -pabc \
--local --fields-terminated-by=',' --columns "score,symbols" biogrid /tmp/lasp.txt
done
rm -f /tmp/lasp.txt
PS: No need to use `ls`. As you can see above, filename expansion works fine.

How to ignore certain MySQL tables when importing a database?

I have a large SQL file with one database and about 150 tables. I would like to use mysqlimport to import that database; however, I would like the import process to ignore or skip over a couple of tables. What is the proper syntax to import all tables but ignore some of them? Thank you.
The accepted answer by RandomSeed could take a long time! Importing the table (just to drop it later) could be very wasteful depending on size.
For a file created using
mysqldump -u user -ppasswd --opt --routines DBname > DBdump.sql
I currently get a file of about 7GB, 6GB of which is data for a log table that I don't 'need' to be there; reloading this file takes a couple of hours. If I need to reload (for development purposes, or if ever required for a live recovery), I skim the file like this:
sed '/INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql
And reload with:
mysql -u user -ppasswd DBname < reduced.sql
This gives me a complete database, with the "unwanted" table created but empty. If you really don't want the tables at all, simply drop the empty tables after the load finishes.
For multiple tables you could do something like this:
sed '/INSERT INTO `TABLE1_TO_SKIP`/d' DBdump.sql | \
sed '/INSERT INTO `TABLE2_TO_SKIP`/d' | \
sed '/INSERT INTO `TABLE3_TO_SKIP`/d' > reduced.sql
There IS a 'gotcha' - watch out for procedures in your dump that might contain "INSERT INTO TABLE_TO_SKIP".
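One way to reduce that risk (a sketch, not bulletproof): anchor the pattern to the start of the line, since mysqldump writes its bulk INSERT statements at column one, while INSERTs inside routine bodies are often indented:
sed '/^INSERT INTO `TABLE_TO_SKIP`/d' DBdump.sql > reduced.sql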
mysqlimport is not the right tool for importing SQL statements. This tool is meant to import formatted text files such as CSV. What you want to do is feed your sql dump directly to the mysql client with a command like this one:
bash > mysql -D your_database < your_sql_dump.sql
Neither mysql nor mysqlimport provide the feature you need. Your best chance would be importing the whole dump, then dropping the tables you do not want.
If you have access to the server where the dump comes from, then you could create a new dump with mysqldump --ignore-table=database.table_you_dont_want1 --ignore-table=database.table_you_dont_want2 ....
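Spelled out, that dump command could look something like this (the user, password and table names are the same placeholders used above; note that --ignore-table wants the table qualified with its database):
mysqldump -u user -ppasswd --opt --routines \
  --ignore-table=DBname.table_you_dont_want1 \
  --ignore-table=DBname.table_you_dont_want2 \
  DBname > DBdump.sql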
Check out this answer for a workaround to skip importing some tables.
For anyone working with .sql.gz files; I found the following solution to be very useful. Our database was 25GB+ and I had to remove the log tables.
gzip -cd "./mydb.sql.gz" | sed -r '/INSERT INTO `(log_table_1|log_table_2|log_table_3|log_table_4)`/d' | gzip > "./mydb2.sql.gz"
Thanks to Don's answer, Xosofox's comment and this related post:
Use zcat and sed or awk to edit compressed .gz text file
A little old, but I figure it might still come in handy...
I liked @Don's answer (https://stackoverflow.com/a/26379517/1446005) but found it very annoying that you'd have to write to another file first...
In my particular case this would take too much time and disk space.
So I wrote a little bash script:
#!/bin/bash
tables=(table1_to_skip table2_to_skip ... tableN_to_skip)
tableString=$(printf "|%s" "${tables[@]}")
trimmed=${tableString:1}
grepExp="INSERT INTO \`($trimmed)\`"
zcat $1 | grep -vE "$grepExp" | mysql -uroot -p
This does not generate a new SQL script but pipes it directly to the database.
Also, it does create the tables, it just doesn't import the data (which was the problem I had with huge log tables).
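Usage is then something like this (the script name is arbitrary; call it skip_tables.sh for the sake of the example):
chmod +x skip_tables.sh
./skip_tables.sh dump.sql.gz   # prompts for the MySQL root password, then imports directly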
Unless you have ignored the tables during the dump with mysqldump --ignore-table=database.unwanted_table, you have to use some script or tool to filter out the data you don't want to import from the dump file before passing it to mysql client.
Here is a bash/sh function that would exclude the unwanted tables from a SQL dump on the fly (through pipe):
# Accepts one argument, the list of tables to exclude (case-insensitive).
# Eg. filt_exclude '%session% action_log %_cache'
filt_exclude() {
local excl_tns;
if [ -n "$1" ]; then
# trim & replace /[,;\s]+/ with '|' & replace '%' with '[^`]*'
excl_tns=$(echo "$1" | sed -r 's/^[[:space:]]*//g; s/[[:space:]]*$//g; s/[[:space:]]+/|/g; s/[,;]+/|/g; s/%/[^\`]\*/g');
grep -viE "(^INSERT INTO \`($excl_tns)\`)|(^DROP TABLE (IF EXISTS )?\`($excl_tns)\`)|^LOCK TABLES \`($excl_tns)\` WRITE" | \
sed 's/^CREATE TABLE `/CREATE TABLE IF NOT EXISTS `/g'
else
cat
fi
}
Suppose you have a dump created like so:
MYSQL_PWD="my-pass" mysqldump -u user --hex-blob db_name | \
pigz -9 > dump.sql.gz
And want to exclude some unwanted tables before importing:
pigz -dckq dump.sql.gz | \
filt_exclude '%session% action_log %_cache' | \
MYSQL_PWD="my-pass" mysql -u user db_name
Or you could pipe into a file or any other tool before importing to DB.
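For instance, to just produce a smaller compressed dump instead of importing right away (the output file name is arbitrary):
pigz -dckq dump.sql.gz | \
  filt_exclude '%session% action_log %_cache' | \
  pigz -9 > reduced.sql.gz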
If desired, you can do this one table at a time:
mysqldump -p sourceDatabase tableName > tableName.sql
mysql -p -D targetDatabase < tableName.sql
Here is my script to exclude some tables from a mysql dump.
I use it to restore a DB when I need to keep orders and payments data.
exclude_tables_from_dump.sh
#!/bin/bash
if [ ! -f "$1" ];
then
echo "Usage: $0 mysql_dump.sql"
exit
fi
declare -a TABLES=(
user
order
order_product
order_status
payments
)
CMD="cat $1"
for TBL in "${TABLES[@]}";do
CMD+="|sed 's/DROP TABLE IF EXISTS \`${TBL}\`/# DROP TABLE IF EXIST \`${TBL}\`/g'"
CMD+="|sed 's/CREATE TABLE \`${TBL}\`/CREATE TABLE IF NOT EXISTS \`${TBL}\`/g'"
CMD+="|sed -r '/INSERT INTO \`${TBL}\`/d'"
CMD+="|sed '/DELIMITER\ \;\;/,/DELIMITER\ \;/d'"
done
eval $CMD
It avoids the DROP and re-CREATE of those tables and skips inserting data into them.
It also strips all FUNCTIONS and PROCEDURES stored between DELIMITER ;; and DELIMITER ;.
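Usage is then a matter of piping its output wherever you need it, for example (the database name is a placeholder):
./exclude_tables_from_dump.sh mysql_dump.sql > reduced.sql
# or pipe it straight into mysql, leaving the existing user/order/payment data untouched:
./exclude_tables_from_dump.sh mysql_dump.sql | mysql -u user -p db_name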
I would not use it in production, but if I had to quickly import a backup that contains many smaller tables and one big monster table that might take hours to import, I would most probably do
grep -v unwanted_table_name original.sql > reduced.sql
and then
mysql -f < reduced.sql