How do I pass a numeric value to a mysql script - mysql

How do I pass a parameter (numeric) from the command line to MySQL query file?
mysql --host=<hostname> --user=<username> -p --port=#### < Query.sql > Output.csv

You probably need to transform your Query.sql file, then pass it to mysql. The mysql program itself doesn't have any parameter substitution capability.
For example, on Linux or FreeBSD:
echo 'SET @numericParameter := 42;' >/tmp/param$$
cat /tmp/param$$ Query.sql | mysql --host=<hostname> --user=<username> -p --port=#### > Output.csv
rm /tmp/param$$
The first line of this shell script writes one line of SQL, setting a parameter, into a temp file.
The second line sticks that one line at the beginning of your SQL file and pipes the result to the mysql client program.
The third line deletes the temp file.
This leaves your SQL file unmodified when it's done. That's probably what you want.
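If you'd rather avoid the temp file entirely, the same prepend can be done with a brace group. A minimal sketch, in which the Query.sql contents and the parameter name are just illustrations and the real mysql invocation is left as a comment:

```shell
#!/bin/sh
# Sketch: prepend a parameter-setting line to Query.sql without a temp
# file. In real use, pipe the combined stream into the mysql client.
PARAM=42                                          # value from the command line

printf 'SELECT @numericParameter;\n' > Query.sql  # stand-in query file

# The brace group concatenates both sources on stdout; in real use append:
#   | mysql --host=<hostname> --user=<username> -p > Output.csv
{ echo "SET @numericParameter := ${PARAM};"; cat Query.sql; }
```

As with the temp-file version, Query.sql itself is never modified.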

Related

Add date in Mysqldump when running cmd from Java and Shell

I would like to append the date to the file name when taking a backup using mysqldump. I store the command in a properties file and run it via ProcessBuilder and a shell script. I have tried multiple ways to add the date (BTW, the answers I found only work if you run the command directly on Linux):
mysqldump -u <user> -p <database> | gzip > <backup>$(date +%Y-%m-%d-%H.%M.%S).sql.gz
Got the error: No table found for "+%Y-%m-%d-%H.%M.%S"
mysqldump -u root -ppassword dbName --result-file=/opt/backup/`date -Iminutes`.dbName.sql
Got the error: unknow option -I
Is there a way around for this to add date in the command itself? I cannot append the date in java method or shell script.
I can't tell from your question whether you're running in a shell. If so, try these three lines to generate your backup file.
DATE=`date +%Y-%m-%d-%H.%M.%S`
FILENAME=backup${DATE}.sql.gz
mysqldump -u user -p database | gzip > ${FILENAME}
Notice that the date command in the first line is surrounded with backticks (command substitution; $(date ...) is the equivalent modern form), not ${}, to get its output into the DATE shell variable.
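To see the difference between the two substitution forms, here is a short runnable sketch (the backup filename is illustrative):

```shell
#!/bin/sh
# Command substitution: backticks and $( ) both capture a command's
# output; ${ } would only expand an already-set variable.
DATE=`date +%Y-%m-%d-%H.%M.%S`      # backtick form, as in the answer
DATE2=$(date +%Y-%m-%d-%H.%M.%S)    # equivalent modern form
FILENAME=backup${DATE}.sql.gz
echo "${FILENAME}"
```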

replacing sqlplus commands with mysql commands

I am trying to rewrite a C-shell script that uses the sqlplus command to get information from an Oracle database. I am replacing Oracle with MySQL, and I would like to replace all the sqlplus syntax with mysql syntax. I am asking the C-shell gurus to explain to me what this command means:
set SQLPLUS=${ORACLE_HOME}/bin/sqlplus
set REPORT=${MYBD_HOME}/Scripts/report.sql
So somewhere along the line I invoke the sqlplus command using the following:
${SQLPLUS} ${MYDBUSER} @${REPORT}
I am able to say I understand what the right-hand values mean: ${ORACLE_HOME}/bin/sqlplus is the path to where my sqlplus command is located, which I need in order to invoke the command, and ${MYBD_HOME}/Scripts/report.sql is the path to the SQL script to be run by invoking sqlplus, correct?
What I don't understand is what the set command is initializing. Is SQLPLUS a variable, so I don't have to type the path when I use it in my .csh script?
If so, then all I need to do to run this script against a MySQL database is set SQLPLUS (probably changing it to MYSQL) to point to the path where my mysql executable is, right?
set MYSQL=${MYSQL_HOME}/bin/mysql
then just invoke mysql and run the SQL statement:
${MYSQL} ${MYDBUSER} @${REPORT}
Is this what I need to do to run the same .csh script to get data from a MySQL table?
You'll need something like this:
${MYSQL} -u $username -p$password -D $database < ${REPORT}
(The username and password are passed to the mysql executable differently than they are passed to SQLPlus. You'll need to parse the username and the password out of ${MYDBUSER}; likely it contains a string such as "scott/tiger". The equivalent on the mysql command line client would be "-u scott -ptiger -D scott".)
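Parsing that connect string can be done with POSIX parameter expansion. A sketch in POSIX sh (the csh syntax differs), assuming MYDBUSER holds a SQLPlus-style "scott/tiger" value:

```shell
#!/bin/sh
# Sketch: split a SQLPlus-style "user/password" connect string into the
# flags the mysql client expects. MYDBUSER's value is an assumption.
MYDBUSER="scott/tiger"
DBUSER=${MYDBUSER%%/*}   # everything before the first slash
DBPASS=${MYDBUSER#*/}    # everything after the first slash
echo "-u ${DBUSER} -p${DBPASS} -D ${DBUSER}"
```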
That @ (at sign) is a SQLPlus thing; it tells SQLPlus to read input from the specified filename. The equivalent in mysql would be the source command, e.g.
${MYSQL} -u $username -p$password <<_EOD!
use $database
source ${REPORT}
_EOD!
Also, your report.sql file likely includes spool and other SQLPlus-specific commands. The mysql command line client is nowhere near as powerful a reporting tool as SQLPlus.
Addendum:
Q: what exactly does the spool do?
The SQLPlus spool command directs output to a file. It's frequently used to create a log file from a SQLPlus session, and is also useful for creating report files.
set trimspool on
spool /tmp/file.lis
select 'foo' as foo from dual;
spool off
Q: Why can't I set the username and password to a variable and use that?
You could set a variable; the end result of the command line sent to the OS would be the same.
set MYDBUSER="-u username -ppassword -D database"
${MYSQL} ${MYDBUSER} <${REPORT}
Q: It seems like mysql is more verbose than sqlplus.
The mysql command line client takes unix-style options. These are equivalent:
mysql -u myusername -pmypassword -D mydatabase
mysql --user=myusername --password=mypassword --database=mydatabase

How to insert over a million records into a MySQL database?

I have an SQL file which contains over a million INSERT statements. The official tool for MySQL administration is not able to open it because of its size. Is it possible to insert the records using a Bash script?
I tried this so far but it doesn't work.
while read line
do
mysql -u root --password=root << eof
use dip;
$line;
eof
done < $1
mysql -u root --password=root < mysqlfile.sql
Try this:
while read line
do
mysql -u root --password=root dip -e "$line"
done < $1
Notes:
If the SQL contains double quotes ("), you'll have to escape them
If the SQL statements go over multiple lines, you'll have to figure that out
The advantage of this method is that each line gets its own transaction, whereas feeding in the whole file at once could blow out the logs, since it is such a large change set
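A middle ground between one transaction per statement and one giant file is to split the file into fixed-size chunks and feed each chunk to mysql separately. A sketch, with a tiny generated file standing in for the million-line one and the mysql call left as a comment:

```shell
#!/bin/sh
# Sketch: load a huge insert file in fixed-size chunks. The file here is
# generated as a stand-in; the real mysql call is shown as a comment.
seq 1 10 | sed 's/.*/INSERT INTO t VALUES (&);/' > inserts.sql
split -l 4 inserts.sql chunk_       # 4 statements per chunk for the demo
for f in chunk_*; do
    # real use: mysql -u root --password=root dip < "$f"
    echo "$f: $(grep -c INSERT "$f") statements"
done
```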

write results of sql query to a file in mysql

I'm trying to write the results of a query to a file using mysql. I've seen some information on the OUTFILE construct in a few places, but it seems that it only writes the file to the machine MySQL is running on (in this case a remote machine, i.e. the database is not on my local machine).
Alternatively, I've also tried to run the query and grab (copy/paste) the results from the MySQL Workbench results window. This worked for some of the smaller datasets, but the largest dataset seems to be too big and causes an out-of-memory exception/crash.
Any help on this matter would be greatly appreciated.
You could try executing the query from your local CLI and redirecting the output to a local file:
mysql -u user -ppass -e "SELECT cols FROM table WHERE cols IS NOT NULL" > /tmp/output
This depends on the SQL client you're using to interact with the database. For example, you could use the mysql command line interface in conjunction with its tee command to output to a local file:
http://dev.mysql.com/doc/refman/5.1/en/mysql-commands.html
tee [file_name], \T [file_name]
Execute the command above before executing the SQL and the result of the query will be output to the file.
Specifically for MySQL Workbench, here's an article on Execute Query to Text Output. Although I don't see any documentation, there are indications that there should also be an "Export" option under Query, though that is almost certainly version-dependent.
You could try this if you want to write a MySQL query result to a file.
This example writes the query result into a CSV file in comma-separated format:
SELECT id,name,email FROM customers
INTO OUTFILE '/tmp/customers.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
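Note that INTO OUTFILE writes the file on the server host. If you need the file locally, one alternative is to convert the client's tab-separated batch output instead. A sketch in which printf stands in for the actual mysql call (credentials, query, and file names are placeholders):

```shell
#!/bin/sh
# Sketch: build a local CSV without INTO OUTFILE by converting the tab-
# separated output that `mysql -B -e "SELECT ..."` prints. The printf
# below stands in for that mysql call so the demo runs anywhere.
printf 'id\tname\temail\n1\tbob\tbob@example.com\n' |
    tr '\t' ',' > /tmp/customers.csv
cat /tmp/customers.csv
```

Real data containing embedded commas or tabs would need proper quoting, which this simple tr pass does not do.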
If you are running mysql queries on the command line: here I suppose you have the list of queries in a text file and you want the output in another text file. Then you can use this (test_2 is the database name):
COMMAND 1
mysql -vv -u root -p test_2 < query.txt > /root/results.txt 2>&1
Where -vv is for the verbose output.
If you use the above statement as
COMMAND 2
mysql -vv -u root -p test_2 < query.txt 2>&1 > /root/results.txt
It will redirect STDERR to its normal destination (i.e. the terminal) and STDOUT to the output file, which in my case is results.txt.
The first command executes query.txt until it hits an error and stops there.
That's how the redirection works. You can try:
# ls key.pem asdf > /tmp/output_1 2> /tmp/output_2
Here the key.pem file exists and asdf doesn't. So when you cat the files you get the following:
# cat /tmp/output_1
key.pem
# cat /tmp/output_2
ls: cannot access asdf: No such file or directory
But if you modify the previous statement to this:
ls key.pem asdf > /tmp/output_1 > /tmp/output_2 2>&1
Then you get the both error and output in output_2
cat /tmp/output_2
ls: cannot access asdf: No such file or directory
key.pem
mysql -v -c -u root -p < /media/sf_Share/Solution2.sql 2>&1 > /media/sf_Share/results.txt
This worked for me. Since I wanted the comments in my script to be reflected in the report as well, I added the -c flag (which stops the client from stripping comments).

Insert data into mysql table data from a FIFO pipe in linux continuously

I want to insert data from a FIFO pipe into a MySQL table. Right now, this only works until the FIFO pipe process is killed.
The commands:
$>mkfifo /path/to/pipe
$>sudo chmod 666 /path/to/pipe
$>find / -ls > /path/to/pipe & mysql db1 -e "LOAD DATA INFILE '/path/to/pipe' INTO TABLE T1" &
The data in the FIFO pipe is only inserted once the mysql process is killed.
Is it possible to insert the data without killing the process feeding the pipe?
Thanks!!
To clarify @JulienPalard's comment above, you should be able to achieve your aim with the following commands.
(I use two different shell processes, whereas he uses one. For my description, try having both shells visible at once so that you can read output in one shell and write input in the other. If you know what you're doing, you can put the mysql process into the background and thus use only one shell.)
Shell 1: output
$ mkfifo mypipe # create a named pipe
$ chmod 666 mypipe # Give all users read-write access to the pipe
$ tail -f mypipe | mysql -umyName -p mySchema # pipe mypipe into mysql
The last line above tells the named pipe to perpetually feed into the mysql process. Whenever you echo something into mypipe, it will be sent to the mysql process as standard input.
After this, you won't get a new prompt because your tail command will run until you kill its process.
Keep this shell open and its tail process running while you use your other shell process (Shell 2: input) to send commands to mysql.
Shell 2: input
$ echo 'show tables;' > mypipe # this will print output onto your *other* shell (Shell 1: output)
$ echo 'insert into mytable values (1,2,3);' > mypipe # this performs an insertion
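The whole two-shell setup can be exercised in a single script, with cat output standing in for the mysql client so it runs anywhere. The key point is that tail -f keeps the pipe's read end open across successive writers:

```shell
#!/bin/sh
# Sketch: tail -f keeps the named pipe open, so each writer closing its
# end does not terminate the reader. A log file stands in for mysql.
mkfifo /tmp/mypipe
tail -f /tmp/mypipe > /tmp/received.log &       # stand-in for: | mysql ...
TAILPID=$!
echo 'show tables;' > /tmp/mypipe               # first writer
echo 'insert into mytable values (1,2,3);' > /tmp/mypipe  # second writer
sleep 1                                          # give tail time to read
kill "$TAILPID"
cat /tmp/received.log
```

Both lines arrive at the reader even though each echo opened and closed the pipe separately; piping into mysql instead of a log file gives the behavior described above.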
Does the mysql system log report any errors? I would look at http://www.mysqlperformanceblog.com/2008/07/03/how-to-load-large-files-safely-into-innodb-with-load-data-infile/