Bash script to insert values in MySQL

I want to make a bash script that connects to my MySQL server and inserts some values from a txt file.
I have written this down:
#!/bin/bash
echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('cat test.txt');" | mysql -uroot -ptest test;
but I'm receiving the following error:
ERROR 1136 (21S01) at line 1: Column count doesn't match value count at row 1
I suppose the error is in my txt file, but I've tried many variations without success.
My txt file looks like this:
10.16.54.29 00:f8:e5:33:22:3f marsara

Try this one:
#!/bin/bash
inputfile="test.txt"
while read -r ip mac server; do
    echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('$ip', '$mac', '$server');"
done < "$inputfile" | mysql -uroot -ptest test
This way you stream the file reading and the mysql command execution together.
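One caveat: the values are interpolated straight into the SQL, so a field containing a single quote would break the statement. A minimal sketch that doubles embedded quotes first (esc is a helper introduced here for illustration, not part of the original answer):
#!/bin/bash
# Sketch: double any embedded single quotes so the generated SQL stays valid
esc() { printf '%s' "$1" | sed "s/'/''/g"; }

while read -r ip mac server; do
    echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('$(esc "$ip")', '$(esc "$mac")', '$(esc "$server")');"
done < test.txt | mysql -uroot -ptest test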

Assuming you have many rows to add, you probably want the LOAD DATA INFILE statement rather than individual INSERTs. The source file has to be on the server, but that seems to be the case here.
Something like this:
#!/bin/bash
mysql -uroot -ptest test << EOF
LOAD DATA INFILE 'test.txt'
INTO TABLE tbl_name
FIELDS TERMINATED BY ' ';
EOF
LOAD DATA INFILE has many options as you will discover by reading the doc.
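If the file lives on the client machine rather than on the server, the LOCAL variant is the usual workaround. A sketch, assuming local_infile is enabled on both client and server and the column layout matches test.txt:
#!/bin/bash
# Sketch: load a client-side file; --local-infile=1 enables LOCAL on the client
mysql --local-infile=1 -uroot -ptest test << EOF
LOAD DATA LOCAL INFILE 'test.txt'
INTO TABLE test
FIELDS TERMINATED BY ' '
(IP, MAC, SERVER);
EOF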

You are trying to insert the literal string 'cat test.txt' into an INSERT statement that expects three values (IP, MAC and SERVER), which is why you get this error message.
You need to read the text file first, extract the IP, MAC and server values, and then use them in the query, which would look like this once filled in:
#!/bin/bash
echo "INSERT INTO test (IP,MAC,SERVER) VALUES ('10.16.54.29', '00:f8:e5:33:22:3f', 'marsara');" | mysql -uroot -ptest test;

I use this and it works:
mysql -uroot -proot < infile
or select the database first
./mysql -uroot -proot db_name < infile
or copy the whole SQL into the clipboard and paste it with
pbpaste > temp_infile && mysql -uroot -proot < temp_infile && rm temp_infile
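On macOS you could equally skip the temp file and pipe the clipboard straight in:
pbpaste | mysql -uroot -proot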

#!/bin/bash
username=root
password=root
dbname=myDB
host=localhost
TS=$(date +%s)
echo "$1"
mysql -h"$host" -D"$dbname" -u"$username" -p"$password" -e "INSERT INTO dailyTemp (UTS, tempF) VALUES ($TS, $1);"
exit 0
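Assuming the script above is saved as log_temp.sh (a name chosen here for illustration), it could be invoked with the temperature reading as its single argument:
chmod +x log_temp.sh
./log_temp.sh 72.4   # inserts one Fahrenheit reading with the current Unix timestamp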

Related

Bash Scripting for inserting a .csv file in mysql with particular columns [duplicate]


Oneliner pipe to PostgreSQL from MySQL with bash

So I am piping quite a lot of data using bash every day between 3 servers:
Server A is MySQL (connection over SSH).
Server B is just a CentOS server where I run the bash script.
Server C is PostgreSQL 9.6.
All was good until one table got one row with a double quote in the middle of a varchar. This is breaking my pipe at the insertion level (on pg side).
Indeed, when getting the data this way from MySQL, it is not quoted, so I believe the failure comes down to the basic behaviour of COPY and its QUOTE parameter.
Here is the bash code:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" 'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' | \
psql -h "$DWH_IP" "$PG_DB" -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' NULL AS 'NULL';"
I tried playing with the COPY parameter QUOTE but unsuccessfully.
Should I put some sed in the middle of the pipeline?
I also tried double quoting when getting the data out of mysql but could not find the relevant parameter when mysql is used in a pipe like this.
I'd like to keep things in one pipe (no MySQL->CSV then CSV->PG, please).
Thanks!
Here's a working sample of importing CSV into Postgres:
t=# create table so10 (i int,t text);
CREATE TABLE
t=# \q
postgres@vao-VirtualBox:~$ echo "1,Bro" | psql -d t -c "copy so10 from stdin with csv"
COPY 1
postgres@vao-VirtualBox:~$ psql t -c "select * from so10"
i | t
---+-----
1 | Bro
(1 row)
You can open an SSH tunnel to MySQL and run mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA" locally (instead of the echo in my example).
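If tunneling isn't an option, one way to keep the single pipe is to stop COPY from interpreting quotes at all, by picking a QUOTE character that can never occur in the data (E'\b', a backspace, is a common choice). A sketch, under the assumption that the MySQL output is tab-separated with no embedded tabs or newlines:
ssh -o ConnectTimeout=5 -i "$SSH_KEY" "$SSH_USER"@"$SSH_IP" \
  'mysql -h "$MYHOST" -u "$USER" -p"$PWD" prod -e "SELECT * FROM tableA"' | \
psql -h "$DWH_IP" "$PG_DB" \
  -c "COPY tableA FROM stdin WITH CSV HEADER DELIMITER E'\t' QUOTE E'\b' NULL AS 'NULL';"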

Export as csv in beeline hive

I am trying to export my hive table as a csv in beeline hive. When I run the command !sql select * from database1 > /user/bob/output.csv it gives me a syntax error.
I have successfully connected to the database at this point using the command below. The query outputs the correct results on the console.
beeline -u 'jdbc:hive2://[databaseaddress]' --outputformat=csv
Also, it's not very clear where the file ends up. It should be a file path in HDFS, correct?
When the Hive version is at least 0.11.0, you can execute:
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/directoryWhereToStoreData'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY "\n"
SELECT * FROM yourTable;
from hive/beeline to store the table into a directory on the local filesystem.
Alternatively, with beeline, save your SELECT query in yourSQLFile.sql and run:
beeline -u 'jdbc:hive2://[databaseaddress]' --outputformat=csv2 -f yourSQLFile.sql > theFileWhereToStoreTheData.csv
This, too, stores the result in a file on the local filesystem.
From hive, to store the data somewhere into HDFS:
CREATE EXTERNAL TABLE output
LIKE yourTable
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 'hdfs://WhereDoYou/Like';
INSERT OVERWRITE TABLE output SELECT * from yourTable;
then you can collect the data to a local file using:
hdfs dfs -getmerge /WhereDoYou/Like localOutput.csv
This is another option to get the data using beeline only:
env HADOOP_CLIENT_OPTS="-Ddisable.quoting.for.sv=false" beeline -u "jdbc:hive2://your.hive.server.address:10000/" --incremental=true --outputformat=csv2 -e "select * from youdatabase.yourtable"
Working on:
Connected to: Apache Hive (version 1.1.0-cdh5.10.1)
Driver: Hive JDBC (version 1.1.0-cdh5.10.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.10.1 by Apache Hive
You can use this command to save output in CSV format from beeline:
beeline -u 'jdbc:hive2://bigdataplatform-dev.nam.nsroot.net:10000/;principal=hive/bigdataplatform-dev.net@NAMUXDEV.NET;ssl=true' --outputformat=csv2 --verbose=false --fastConnect=true --silent=true -f "$query_file" > out.csv
Save your SQL query file into $query_file.
Result will be in out.csv.
I have a complete example here: hivehoney
The following worked for me:
hive --silent=true --verbose=false --outputformat=csv2 -e "use <db_name>; select * from <table_name>" > table_name.csv
One advantage over using beeline is that you don't have to provide the hostname or user/password if you are running on the hive node.
When some of the columns contain string values with commas, TSV (tab-separated) works better:
hive --silent=true --verbose=false --outputformat=tsv -e "use <db_name>; select * from <table_name>" > table_name.tsv
Output format in CSV:
$ beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=csv2 -e "select * from t1";
Output format in custom delimiter:
$ beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=dsv --delimiterForDSV='|' -e "select * from t1";
Running command in background and redirect out to file:
$ nohup beeline -u jdbc:hive2://192.168.0.41:10000/test_db -n user1 -p password --outputformat=csv2 -e "select * from t1" > output.csv 2> log &
Reference URLs:
https://dwgeek.com/export-hive-table-into-csv-format-using-beeline-client-example.html/
https://dwgeek.com/hiveserver2-beeline-command-line-shell-options-examples.html/
From Beeline
beeline -u 'jdbc:hive2://123.12.4132:345/database_name' --outputformat=csv2 -e "select col1, col2, col3 from table_name" > /path/to/dump.csv
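Wrapped up as a small reusable script (a sketch; the JDBC URL is a placeholder and the authentication options would need to match your cluster):
#!/bin/bash
# Usage: ./hive_export.sh "select col1, col2 from db.table" dump.csv
query="$1"
outfile="$2"
beeline -u 'jdbc:hive2://your.hive.server:10000/' \
        --outputformat=csv2 --silent=true --verbose=false \
        -e "$query" > "$outfile"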

gnu parallel mysql LOAD DATA LOCAL INFILE

I am trying to use GNU parallel to execute several LOAD DATA LOCAL INFILE mysql commands where:
{1} is the name of the file which I obtain from a UNIX find command pipe
{2} is the result of a chop.pl script that prints out a certain token from the file string according to certain rules
It seems that I am calling GNU parallel the correct way, except it does not keep the double quotes around the mysql command after the -e, which causes it not to work.
E.g.
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e "LOAD DATA LOCAL INFILE '{2}' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='{1}', col6='foo'"
The command it attempts, lacking the double quotes after -e, looks like this:
mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e LOAD DATA LOCAL INFILE '/my/file/name/yadda_yadda-12345678.txt' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) set col5='yadda_yadda', col6='foo'
Any ideas how to add back the double-quotes after the -e?
The lazy and effective way: put the mysql command in a function, then have parallel call that, passing {1} and {2}.
Using functions is actually suggested by the parallel man pages:
https://www.gnu.org/software/parallel/man.html#QUOTING
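A minimal sketch of that approach, with loadfile as a name introduced here for illustration (under bash, export -f makes the function visible to the shells parallel spawns):
loadfile() {
  # $1 = token from chop.pl, $2 = file name
  mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname \
    -e "LOAD DATA LOCAL INFILE '$2' IGNORE INTO TABLE tblname IGNORE 1 LINES (col1,col2,col3,col4) SET col5='$1', col6='foo'"
}
export -f loadfile
find /my/folder/ -name "*.txt" | while read -r i; do chop.pl "$i"; echo "$i"; done | parallel -t -N 2 loadfile {1} {2}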
Answering my own question, the solution was to escape all double quotes, single quotes and parentheses:
find /my/folder/ -name "*.txt" | while read i; do chop.pl $i; echo $i; done | parallel -t -N 2 mysql -h localhost -uuser -pxxxxxxx --local-infile=1 -D dbname -e \"LOAD DATA LOCAL INFILE \'{2}\' IGNORE INTO TABLE tblname IGNORE 1 LINES \(col1,col2,col3,col4\) set col5=\'{1}\', col6=\'foo\'\"

saving blob field to disk from bash

I have a mysql database with a blob field containing a zip and I need to save it as a file on disk, from bash. I'm doing the following but the end result doesn't read as a zip... Am I doing something wrong or is the file stored not actually a zip (the entry in the database is actually created by a seismological station, so I have no control over it)?
echo "USE database; SELECT blobcolumn FROM table LIMIT 1" | mysql -u root > file.zip
Then I open file.zip with a file editor and remove the first line, which contains the column header. Even then, unzip doesn't recognize it as a zip file.
For a gzipped blob you can use:
echo "use db; select blob from table where id=blah" | mysql -N --raw -uuser -ppass > mysql.gz
I have not tried this with a zip file.
The proper way to do this is to use INTO DUMPFILE; otherwise mysql will mangle your binary data.
mysql -uroot -e "SELECT blobcolumn INTO DUMPFILE '/tmp/file.zip' FROM table LIMIT 1" database
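Worth noting: INTO DUMPFILE writes the file on the database server host, requires the FILE privilege, and if secure_file_priv is set the target path must live under that directory.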
I know this is an old question, but I needed the answer myself, so this is what worked for me.
I found that mysql appends a newline character at the end, which needs to be removed before the correct binary value remains.
echo "USE database; SELECT blobcolumn FROM table LIMIT 1" | mysql -N --raw -u root | head -c -1 > file.zip
You would need to skip the column header, like:
sql="USE database; SELECT blobcolumn FROM table LIMIT 1"
mysql -u root -N <<< "$sql" > file.zip
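Whichever variant you use, it's worth sanity-checking the result before trying to unzip it:
file file.zip        # should report something like "Zip archive data" for a real zip
unzip -l file.zip    # lists the archive contents without extracting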