I am trying to use GNU parallel to execute several LOAD DATA LOCAL INFILE mysql commands where:
{1} is the result of a chop.pl script that prints out a certain token from the file name according to certain rules
{2} is the name of the file, which I obtain from a UNIX find command pipe
It seems that I am calling GNU parallel the correct way, except that it does not keep the double quotes around the mysql command after the -e, which causes the command to fail.
E.g.
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"
The command it ends up running, lacking the double quotes after -e, looks like this:
mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e LOAD DATA LOCAL INFILE '/my/file/name/yadda_yadda-12345678.txt' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='yadda_yadda', col6='foo'
Any ideas how to add back the double-quotes after the -e?
The lazy and effective way: put the mysql command in a function, then have parallel call that, passing {1} and {2}.
Using functions is actually suggested by the parallel man pages:
https://www.gnu.org/software/parallel/man.html#QUOTING
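For example, here is a minimal sketch of that approach (the function name do_load is mine; the credentials, table and column names are the placeholders from the question, and the function is exported with export -f so the bash instances parallel spawns can see it):

do_load() {
    # $1 = token printed by chop.pl, $2 = file name coming from find
    mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname \
        -e "LOAD DATA LOCAL INFILE '$2' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='$1', col6='foo'"
}
export -f do_load

find /my/folder/ -name "*.txt" | while read i; do chop.pl "$i"; echo "$i"; done | parallel -t -N 2 do_load {1} {2}

Because the quoting now lives inside the function, parallel only has to pass two plain arguments.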
Answering my own question: the solution was to escape all double quotes, single quotes and parentheses:
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e \"LOAD DATA LOCAL INFILE \'{2}\' IGNORE INTO TABLE tblname IGNORE 1 LINES \(col1,col2,col3,col4\) set col5=\'{1}\', col6=\'foo\'\"
I want to connect to a MySQL database, execute some queries, and export the result to a variable, and all of this needs to be done entirely in a bash script.
I have a code snippet, but it does not work.
#!/bin/bash
BASEDIR=$(dirname $0)
cd $BASEDIR
mysqlUser=n_userdb
mysqlPass=d2FVR0NA3
mysqlDb=n_datadb
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1")
echo "${result}" >> a.txt
What's the problem?
The issue was resolved in the chat by using the correct password.
If you further want to get only the data, use mysql with -NB (or --skip-column-names and --batch).
Also, the script needs to quote the variable expansions, or there will be issues with usernames/passwords containing characters that are special to the shell. Additionally, uppercase variable names are usually reserved for system variables.
#!/bin/sh
basedir=$(dirname "$0")
mysqlUser='n_userdb'
mysqlPass='d2FVR0NA3'
mysqlDb='n_datadb'
cd "$basedir" &&
mysql -NB -u "$mysqlUser" -p"$mysqlPass" -D "$mysqlDb" \
-e 'select * from confs limit 1' >a.txt 2>a-err.txt
Ideally though, you'd use a my.cnf file to configure the username and password (a minimal example follows the links below).
See e.g.
MySQL Utilities - ~/.my.cnf option file
mysql .my.cnf not reading credentials properly?
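For completeness, a minimal ~/.my.cnf sketch (reusing the placeholder credentials from the question; keep it readable only by you, e.g. chmod 600 ~/.my.cnf) would be:

[client]
user=n_userdb
password=d2FVR0NA3

With that in place, the -u/-p options can be dropped, e.g. mysql -NB -D n_datadb -e 'select * from confs limit 1'.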
Do this:
result=$(mysql -u $mysqlUser -p$mysqlPass -D $mysqlDb -e "select * from confs limit 1" | grep '^\|' | tail -1)
Bash's $(...) command substitution is awkward with output that spans multiple lines, so the above hack greps only the interesting part: the data.
I want to make a bash script that connects to my MySQL server and inserts some values from a txt file.
I have written this down:
#!/bin/bash
echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('cat test.txt');" | mysql -uroot -ptest test;
but I'm receiving the following error:
ERROR 1136 (21S01) at line 1: Column count doesn't match value count at row 1
I suppose the error is in my txt file, but I've tried many variations and still had no success.
My txt file looks like this:
10.16.54.29 00:f8:e5:33:22:3f marsara
Try this one:
#!/bin/bash
inputfile="test.txt"
cat $inputfile | while read ip mac server; do
echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('$ip', '$mac', '$server');"
done | mysql -uroot -ptest test;
This way you stream the file read as well as the mysql command execution.
Assuming you have many rows to add, you probably need the LOAD DATA INFILE statement, not INSERT. The source file has to be on the server, but that seems to be the case here.
Something like that:
#!/bin/bash
mysql -uroot -ptest test << EOF
LOAD DATA INFILE 'test.txt'
INTO TABLE tbl_name
FIELDS TERMINATED BY ' ';
EOF
LOAD DATA INFILE has many options as you will discover by reading the doc.
You are trying to insert the literal value "cat test.txt" as a string in an INSERT statement that requires 3 values (IP, MAC and SERVER), which is why you get this error message.
You need to read the text file first, extract the IP, MAC and server values, and then use these in the query, which would look like this once filled:
#!/bin/bash
echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('10.16.54.29', '00:f8:e5:33:22:3f', 'marsara');" | mysql -uroot -ptest test;
I use this and it works:
mysql -uroot -proot < infile
or select the database first
./mysql -uroot -proot db_name < infile
or copy the whole SQL into the clipboard and paste it with
pbpaste > temp_infile && mysql -uroot -proot < temp_infile && rm temp_infile
#!/bin/bash
username=root
password=root
dbname=myDB
host=localhost
TS=$(date +%s)
echo $1
mysql -h$host -D$dbname -u$username -p$password -e"INSERT INTO dailyTemp (UTS, tempF) VALUES ($TS, $1);"
exit 0
So I am piping quite a lot of data using bash every day between 3 servers:
Server A is MySQL (connection over SSH).
Server B is just a CentOS server where I run the bash script.
Server C is PostgreSQL 9.6.
All was good until one table got a row with a double quote in the middle of a varchar. This breaks my pipe at the insertion step (on the PG side).
Indeed, when getting the data out of MySQL this way, it is not quoted, so I believe in the end it comes down to the basic behaviour of COPY and its QUOTE parameter.
Here is the bash code:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA "' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' NULL AS 'NULL';"
I tried playing with the COPY parameter QUOTE but unsuccessfully.
Should I put some sed in the middle of the pipeline?
I also tried double quoting when getting the data out of mysql but could not find the relevant parameter when mysql is used in a pipe like this.
I'd like to keep things in one pipe (no MySQL->CSV then CSV->PG, please).
Thanks!
Here's a working sample of importing CSV into Postgres:
t=# create table so10 (i int,t text);
CREATE TABLE
t=# \q
postgres@vao-VirtualBox:~$ echo "1,Bro" | psql -d t -c "copy so10 from stdin with csv"
COPY 1
postgres@vao-VirtualBox:~$ psql t -c "select * from so10"
i | t
---+-----
1 | Bro
(1 row)
You can open an SSH tunnel to MySQL and run mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA" locally (instead of the echo in my example).
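Regarding the double quote that breaks the COPY: since the mysql -B output is tab-separated and unquoted, one workaround (my suggestion, untested against the original pipeline) is to give COPY a QUOTE character that can never appear in the data, such as a backspace, so embedded double quotes are read literally:

psql -h "$DWH_IP" "$PG_DB" \
  -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' QUOTE E'\b' NULL AS 'NULL';"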
I need to dump all tables in MySQL in CSV format.
Is there a command using mysqldump to just output every row for every table in CSV format?
First, I can give you the answer for one table:
The trouble with all these INTO OUTFILE or --tab=tmpfile (and -T/path/to/directory) answers is that they require running mysqldump on the same server as the MySQL server, and having those access rights.
My solution was simply to use mysql (not mysqldump) with the -B parameter, inline the SELECT statement with -e, then massage the ASCII output with sed, and wind up with CSV including a header field row:
Example:
mysql -B -u username -p password database -h dbhost -e "SELECT * FROM accounts;" \
| sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g"
"id","login","password","folder","email"
"8","mariana","xxxxxxxxxx","mariana",""
"3","squaredesign","xxxxxxxxxxxxxxxxx","squaredesign","mkobylecki#squaredesign.com"
"4","miedziak","xxxxxxxxxx","miedziak","miedziak#mail.com"
"5","Sarko","xxxxxxxxx","Sarko",""
"6","Logitrans
Poland","xxxxxxxxxxxxxx","LogitransPoland",""
"7","Amos","xxxxxxxxxxxxxxxxxxxx","Amos",""
"9","Annabelle","xxxxxxxxxxxxxxxx","Annabelle",""
"11","Brandfathers and
Sons","xxxxxxxxxxxxxxxxx","BrandfathersAndSons",""
"12","Imagine
Group","xxxxxxxxxxxxxxxx","ImagineGroup",""
"13","EduSquare.pl","xxxxxxxxxxxxxxxxx","EduSquare.pl",""
"101","tmp","xxxxxxxxxxxxxxxxxxxxx","_","WOBC-14.squaredesign.atlassian.net#yoMama.com"
Add a > outfile.csv at the end of that one-liner, to get your CSV file for that table.
Next, get a list of all your tables with
mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"
From there, it's only one more step to make a loop, for example, in the Bash shell to iterate over those tables:
for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
echo .....;
done
Between the do and ; done, insert the long command I wrote in Part 1 above, but substitute $tb for your table name; a possible assembled version is sketched below.
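For illustration, the assembled loop (same placeholder credentials and the same sed expression as in Part 1, writing one CSV file per table into the current directory) could look like this:

for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -B -u username -ppassword dbname -h dbhost -e "SELECT * FROM ${tb};" \
        | sed "s/\"/\"\"/g;s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > "${tb}.csv"
done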
mysqldump -u username -p -t -T/path/to/directory dbname table_name --fields-terminated-by=','
This command will create two files in /path/to/directory: table_name.sql and table_name.txt.
The SQL file will contain the table creation schema and the txt file will contain the records of the table_name table, with fields delimited by a comma.
If you are using MySQL or MariaDB, the easiest and most performant way to dump a single table to CSV is:
SELECT customer_id, firstname, surname INTO OUTFILE '/exportdata/customers.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM customers;
Now you can use other techniques to repeat this command for multiple tables (a shell-loop sketch follows the links below). See more details here:
https://mariadb.com/kb/en/the-mariadb-library/select-into-outfile/
https://dev.mysql.com/doc/refman/5.7/en/select-into.html
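One possible way to repeat it for every table (a sketch only: the credentials and the /exportdata path are placeholders, SELECT * replaces the explicit column list, and the server must be allowed to write to that directory, cf. secure_file_priv):

for tb in $(mysql -u username -ppassword dbname -sN -e "SHOW TABLES;"); do
    mysql -u username -ppassword dbname -e "SELECT * INTO OUTFILE '/exportdata/${tb}.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
        LINES TERMINATED BY '\n'
        FROM ${tb};"
done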
mysqldump has options for CSV formatting:
--fields-terminated-by=name
    Fields in the output file are terminated by the given string.
--fields-enclosed-by=name
    Fields in the output file are enclosed by the given character.
--lines-terminated-by=name
    Lines in the output file are terminated by the given string.
Here name is the separator itself, e.g. \t or "," for --fields-terminated-by, "\"" for --fields-enclosed-by, and \r, \n or \r\n for --lines-terminated-by.
Naturally you should mysqldump each table individually.
I suggest you gather all table names in a text file. Then, iterate through all tables running mysqldump. Here is a script that will dump and gzip 10 tables at a time:
MYSQL_USER=root
MYSQL_PASS=rootpassword
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS}"
SQLSTMT="SELECT CONCAT(table_schema,'.',table_name)"
SQLSTMT="${SQLSTMT} FROM information_schema.tables WHERE table_schema NOT IN "
SQLSTMT="${SQLSTMT} ('information_schema','performance_schema','mysql')"
mysql ${MYSQL_CONN} -ANe"${SQLSTMT}" > /tmp/DBTB.txt
COMMIT_COUNT=0
COMMIT_LIMIT=10
TARGET_FOLDER=/path/to/csv/files
for DBTB in `cat /tmp/DBTB.txt`
do
    DB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $1}'`
    TB=`echo "${DBTB}" | sed 's/\./ /g' | awk '{print $2}'`
    DUMPFILE=${DB}-${TB}.csv.gz
    # background each dump so up to COMMIT_LIMIT of them run in parallel
    mysqldump ${MYSQL_CONN} -T ${TARGET_FOLDER} --fields-terminated-by="," --fields-enclosed-by="\"" --lines-terminated-by="\r\n" ${DB} ${TB} | gzip > ${DUMPFILE} &
    (( COMMIT_COUNT++ ))
    if [ ${COMMIT_COUNT} -eq ${COMMIT_LIMIT} ]
    then
        COMMIT_COUNT=0
        wait
    fi
done
if [ ${COMMIT_COUNT} -gt 0 ]
then
    wait
fi
This worked well for me:
mysqldump <DBNAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
Or if you want to only dump a specific table:
mysqldump <DBNAME> <TABLENAME> --fields-terminated-by ',' \
--fields-enclosed-by '"' --fields-escaped-by '\' \
--no-create-info --tab /var/lib/mysql-files/
I'm dumping to /var/lib/mysql-files/ to avoid this error:
mysqldump: Got error: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement when executing 'SELECT INTO OUTFILE'
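If you are not sure which directory the server allows (this check is my addition, not part of the original answer), you can ask MySQL directly:

mysql -u username -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"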
It looks like others have had this problem too, and there is now a simple Python script for converting the output of mysqldump into CSV files.
wget https://raw.githubusercontent.com/jamesmishra/mysqldump-to-csv/master/mysqldump_to_csv.py
mysqldump -u username -p --host=rdshostname database table | python mysqldump_to_csv.py > table.csv
You can also do it using the Data Export tool in dbForge Studio for MySQL.
It will allow you to select some or all tables and export them into CSV format.