I'm trying to pass a parameter from a bash script to a MySQL script. The bash script is:
#!/bin/bash
for file in `ls *.symbol`
do
path=/home/qz/$file
script='/home/qz/sqls/load_eval.sql'
mysql -u qz -h compute-0-10 -pabc -e "set @pred = '$path'; source $script;"
done
The load_eval.sql is
use biogrid;
load data local infile @pred into table lasp
fields terminated by ','
lines terminated by '\n'
(score, symbols);
When running the bash script, I got the error message:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@pred into table lasp ..
It seems the value of the parameter @pred is not passed into the mysql script.
MySQL doesn't support session variables in a LOAD DATA INFILE statement like that. This has been recognized as a feature request for quite some time (http://bugs.mysql.com/bug.php?id=39115), but the feature has never been implemented.
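If you want to keep using the mysql client, a minimal workaround sketch is to let bash expand the path into the statement itself, so no session variable is needed (this assumes the file names contain no single quotes; depending on your client/server configuration you may also need --local-infile=1):
#!/bin/bash
for file in *.symbol
do
  path="/home/qz/$file"
  # bash substitutes $path before mysql ever parses the statement
  mysql -u qz -h compute-0-10 -pabc --local-infile=1 \
    -e "LOAD DATA LOCAL INFILE '$path' INTO TABLE lasp
        FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'
        (score, symbols);" biogrid
done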
I would recommend using mysqlimport instead of the complex steps you're doing with mysql. The file's name must match the table's name, but you can trick this with a symbolic link:
#!/bin/bash
for file in *.symbol
do
path="/home/qz/$file"
ln -s -f "$path" /tmp/lasp.txt
mysqlimport -u qz -h compute-0-10 -pabc \
--local --fields-terminated-by=',' \
--columns "score,symbols" biogrid /tmp/lasp.txt
done
rm -f /tmp/lasp.txt
PS: No need to use `ls`. As you can see above, filename expansion works fine.
Related
I need to upload data from CSV to my MySQL server. I've used mysqlsh to do it using jobs:
"C:\Program Files (x86)\MySQL\MySQL Shell\bin\mysqlsh.exe" --sql -h x.x.x.x -u user -password -D database -e "LOAD DATA LOCAL INFILE 'file.csv' INTO TABLE table FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 ROWS (field1, field2)
But when I execute the command I get this error:
The used command is not allowed with this MySQL version
I read that I need to set local_infile to TRUE; I did that, but I still can't get it to work.
What am I doing wrong?
You need to enable the local_infile option on the client side as well. The only way to do that in MySQL Shell is to pass it as a connection option:
mysqlsh.exe mysql://user@x.x.x.x/database?local-infile=1 -e "LOAD DATA..."
You can get more information about connection options by calling mysqlsh -i -e "\? connection".
If you want to load a big input CSV file, you can use MySQL Shell's Parallel data import.
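For instance, a sketch (util.importTable has been in MySQL Shell since 8.0.17; the exact option names here are assumptions, check them with \? importTable):
mysqlsh mysql://user@x.x.x.x/database?local-infile=1 --js -e "util.importTable('file.csv', {table: 'table', dialect: 'csv', fieldsTerminatedBy: ';', skipRows: 1, threads: 4})"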
So I am piping quite a lot of data using bash every day between 3 servers:
Server A is MySQL (connection over SSH)
Server B is just a CentOS server where I run the bash script.
Server C is PostgreSQL 9.6.
All was good until one table got a row with a double quote in the middle of a varchar. This breaks my pipe at the insertion level (on the pg side).
Indeed, when getting the data this way from MySQL, it is not quoted. So I believe in the end it comes down to the basic behaviour of COPY and its QUOTE parameter.
Here is the bash code:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER"-p"$PWD" prod -e "SELECT * FROM tableA "' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' NULL AS 'NULL';"
I tried playing with the COPY parameter QUOTE but unsuccessfully.
Should I put some sed in the middle of the pipeline?
I also tried double quoting when getting the data out of mysql but could not find the relevant parameter when mysql is used in a pipe like this.
I'd like to keep things in one pipe (no MySQL->CSV then CSV->PG, please).
Thanks!
Here's a working sample of importing CSV into Postgres:
t=# create table so10 (i int,t text);
CREATE TABLE
t=# \q
postgres@vao-VirtualBox:~$ echo "1,Bro" | psql -d t -c "copy so10 from stdin with csv"
COPY 1
postgres@vao-VirtualBox:~$ psql t -c "select * from so10"
i | t
---+-----
1 | Bro
(1 row)
You can open an SSH tunnel to MySQL and run mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA" locally (instead of the echo in my example).
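As for the embedded double quote breaking COPY: since mysql's tab-separated output never quotes fields, one trick (a sketch, untested against your schema) is to point COPY's QUOTE at a character that cannot occur in the data, e.g. backspace, so a stray " is treated as plain data:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' QUOTE E'\b' NULL AS 'NULL';"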
I'm trying to load some CSV files by calling mysql from the terminal, without entering the mysql interpreter.
I created the following function, which I call when I'm ready to load all the CSV files mentioned in "$@":
function sqlConn {
sqlLoad="$sqlConnBase $@ $dbName"
`"$sqlLoad"`
#I tried simply with $sqlLoad too but same problem occurs,
#although everything needed for the query is present in either
#$sqlLoad or "$sqlLoad"
}
sqlConnBase and dbName are global variables defined at the beginning of my bash script like this:
sqlConnBase="mysql -h localhost -u group8 --password=toto123"
dbName="cs322"
I call sqlConn like this:
sqlConn " --local-infile=1 < sqlLoadFile.sql"
The content of sqlLoadFile.sql is the following:
LOAD DATA LOCAL INFILE 'CSV/notes_rem.csv'
INTO TABLE Notes
CHARACTER SET UTF8
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n' STARTING BY '';
The problem I get is the following:
./loadAll.bash: line 31: mysql -h localhost -u group8
--password=toto123 --local-infile=1 < sqlLoadFile.sql cs322: command not found
The strange thing is that when I simply execute
mysql -h localhost -u group8 --password=toto123
--local-infile=1 < sqlLoadFile.sql cs322
on my terminal it does populate my cs322 database, i.e. all the rows of my csv are present in my cs322 database.
What could be the source of the error in my script?
The whole string mysql -h localhost ... is treated as a single command name, rather than mysql with the rest as arguments.
You need to use eval instead of the backticks:
eval "$sqlLoad"
That said, you should be really careful with escapes, word splitting, and globbing; the above approach should be avoided.
A recommended approach is to populate an array with arguments:
declare -a args
args+=("-h" "localhost")
args+=("-u" "group")
# ...
mysql "${args[#]}"
I am trying to use GNU parallel to execute several LOAD DATA LOCAL INFILE mysql commands where:
{1} is the result of a chop.pl script that prints out a certain token from the file string according to certain rules
{2} is the name of the file, which I obtain from a UNIX find command pipe
It seems that I am calling GNU parallel the correct way, except that it does not keep the double quotes around the mysql command after the -e, which causes it not to work.
E.g.
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"
The command it is attempting, lacking the double quotes after -e, is like so:
mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e LOAD DATA LOCAL INFILE '/my/file/name/yadda_yadda-12345678.txt' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='yadda_yadda', col6='foo'
Any ideas how to add back the double-quotes after the -e?
The lazy and effective way: put the mysql command in a function, then have parallel call that, passing {1} and {2}.
Using functions is actually suggested by the parallel man pages:
https://www.gnu.org/software/parallel/man.html#QUOTING
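For instance, a minimal sketch (the function body is just the command from the question; export -f makes the function visible to the shells parallel spawns, which assumes bash):
doit() {
  # $1 = token printed by chop.pl, $2 = file name (the order the pipe emits them)
  mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname \
    -e "LOAD DATA LOCAL INFILE '$2' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='$1', col6='foo'"
}
export -f doit
find /my/folder/ -name "*.txt" | while read i; do chop.pl "$i"; echo "$i"; done | parallel -t -N 2 doit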
Answering my own question: the solution was to escape all double quotes, single quotes, and parentheses:
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e \"LOAD DATA LOCAL INFILE \'{2}\' IGNORE INTO TABLE tblname IGNORE 1 LINES \(col1,col2,col3,col4\) set col5=\'{1}\', col6=\'foo\'\"
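Alternatively, GNU parallel's -q (--quote) option quotes the command for you, which avoids the manual escaping (worth checking against your version's man page):
find /my/folder/ -name "*.txt" | while read i; do chop.pl "$i"; echo "$i"; done | parallel -t -q -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"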
I'm trying to write the results of a query to a file using mysql. I've seen some information on the outfile construct in a few places but it seems that this only writes the file to the machine that MySQL is running on (in this case a remote machine, i.e. the database is not on my local machine).
Alternatively, I've also tried to run the query and grab (copy/paste) the results from the mysql workbench results window. This worked for some of the smaller datasets, but the largest of the datasets seems to be too big and causing an out of memory exception/bug/crash.
Any help on this matter would be greatly appreciated.
You could try executing the query from your local CLI and redirecting the output to a local file:
mysql -u user -ppass -e "select cols from table where cols is not null" > /tmp/output
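Note that when stdout is redirected like this, the mysql client prints tab-separated rows instead of the ASCII table; add -N (--skip-column-names) if you also want to drop the header row:
mysql -u user -ppass -N -e "select cols from table where cols is not null" > /tmp/output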
This is dependent on the SQL client you're using to interact with the database. For example, you could use the mysql command line interface in conjunction with the "tee" operator to output to a local file:
http://dev.mysql.com/doc/refman/5.1/en/mysql-commands.html
tee [file_name], \T [file_name]
Execute the command above before executing the SQL and the result of the query will be output to the file.
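An illustrative session (notee turns logging off again; the table and file names here are made up):
mysql> tee /tmp/query_results.txt
mysql> SELECT id, name FROM customers;
mysql> notee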
Specifically for MySQL Workbench, here's an article on Execute Query to Text Output. Although I don't see any documentation, there are indications that there should also be an "Export" option under Query, though that is almost certainly version dependent.
You could try this if you want to write a MySQL query result to a file.
This example writes the query result into a CSV file in comma-separated format:
SELECT id,name,email FROM customers
INTO OUTFILE '/tmp/customers.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
If you are running MySQL queries on the command line, and you have the list of queries in a text file and want the output in another text file, you can use this. [ test_2 is the database name ]
COMMAND 1
mysql -vv -u root -p test_2 < query.txt > /root/results.txt 2>&1
Where -vv is for the verbose output.
If you use the above statement as
COMMAND 2
mysql -vv -u root -p test_2 < query.txt 2>&1 > /root/results.txt
It will redirect STDERR to the normal location (i.e. the terminal) and STDOUT to the output file, which in my case is results.txt.
The first command executes query.txt until it faces an error and stops there.
That's how the redirection works. You can try:
# ls key.pem asdf > /tmp/output_1 2> /tmp/output_2
Here the key.pem file exists and asdf doesn't exist. So when you cat the files you get the following:
# cat /tmp/output_1
key.pem
# cat /tmp/output_2
ls: cannot access asdf: No such file or directory
But if you modify the previous statement like this:
ls key.pem asdf > /tmp/output_1 > /tmp/output_2 2>&1
Then you get both the error and the output in output_2:
cat /tmp/output_2
ls: cannot access asdf: No such file or directory
key.pem
mysql -v -c -u root -p < /media/sf_Share/Solution2.sql 2>&1 > /media/sf_Share/results.txt
This worked for me. Since I wanted the comments in my script to be reflected in the report as well, I added the -c flag.