Using a shell script to insert data into a remote MySQL database

I've been trying to get a shell (bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, hash, and file size, connect to a remote MySQL database, and insert the data into an existing table. I've gotten it working up until the remote MySQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$@"
do
echo -n *****
ident=`*****`
data=`****`
size=`****`
hash=`****`
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table \(field1,field2,field3\) values\('http://www.example.com/$hash','$file','$size'\);
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets as I was getting errors. As it is right now, the script works fine until it connects to the mysql database. It just connects to the mysql database and doesn't run the insert command (and I don't even know if the insert command would even work from bash).
PS: I've tried the mysql commands from the command line one by one and they worked, though I defined the hash/file/size manually and didn't use the escaping backslashes.
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)

The insert statement has to be sent to mysql on its standard input, not written as another line of the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
EOF
The << EOF means: take everything up to the next line containing nothing but EOF (with no whitespace at the beginning of the line) and feed it to the program as standard input.
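Putting it together, a minimal sketch of the whole loop with the here document in place (host, user, and table names are placeholders, and stat/sha1sum stand in for whatever the redacted commands were; note that table itself is a reserved word in MySQL, so the literal insert into table from the question would fail):
#!/bin/bash
for file in "$@"
do
size=$(stat -c%s "$file")                 # file size in bytes (GNU stat)
hash=$(sha1sum "$file" | cut -d' ' -f1)   # content hash of the file
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into mytable (field1,field2,field3)
values ('http://www.example.com/$hash','$file','$size');
EOF
echo "done"
done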

This might not be exactly what you are looking for but it is an option.
If you want to avoid the annoyance of embedding your query in the sh script itself, you can save the query as a .sql file (useful sometimes when the query is REALLY big and complicated). The file can be generated with simple file IO in whatever language you are using.
Then you can simply include in your sh script something like:
mysql -u youruser -pyourpass -h remoteHost < query.sql &
This is called batch mode execution. Note that -p and the password must have no space between them, or mysql will prompt for a password and treat the next word as a database name. Optionally, you can include the ampersand at the end so that this line of the sh script does not block.
Also, if you are concerned about the same data being entered multiple times and your RDBMS becoming inconsistent, you should look into MySQL transactions (COMMIT, ROLLBACK, etc.).
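For instance, a sketch of wrapping a batch in a transaction so it is applied all-or-nothing (table, column, and connection names are hypothetical; this requires a transactional engine such as InnoDB):
mysql -u youruser -pyourpass -h remoteHost yourdb << EOF
START TRANSACTION;
INSERT INTO sometable (field1, field2) VALUES ('a', 1);
INSERT INTO sometable (field1, field2) VALUES ('b', 2);
COMMIT;
EOF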

Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
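A sketch of that approach, assuming a hypothetical table files(url, name, size) and that LOCAL INFILE is enabled on both client and server (a real CSV writer should also escape commas and quotes inside the values):
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> upload.csv
mysql --host=randomhost --user=randomuser --password=randompass --local-infile=1 randomdb \
-e "LOAD DATA LOCAL INFILE 'upload.csv' INTO TABLE files FIELDS TERMINATED BY ',' (url, name, size)"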

Related

MYSQL codes not being read

When I input code or anything into MySQL and hit "enter", it moves down and "->" appears. It is as if the code is not going through or not being read.
I have attempted to download "add-ons" but I am really not sure what I am doing. This is for school and I am having trouble getting in touch with the professor.
I am new to this and can't figure out what I am doing wrong. Please help!
Please see image of what it looks like to me.
Add a semicolon (;) at the end of your SQL statement.
Problem 1: Be aware of the prompt. Either of these
MariaDB >
mysql >
means that you are inside the MySQL commandline tool. You can enter only SQL statements. Most SQL queries need to be terminated by a ; or \G (but not both). To exit that tool:
exit
Or, if you get stuck in certain ways
CTRL-C
exit
Each of these prompts indicates an operating-system shell:
$
#
mymachine$
/usr/home/rj $
C:\Users\rj>
and many others
Problem 2: mysqldump is a command, not SQL, so it needs to be run from the operating-system shell, not from inside the mysql tool.
Problem 3: When the tool suggested typing 'help;', it did not mean for you to include the quotes. Type just help; instead.
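To illustrate the two prompts side by side (a sketch; the database and file names are made up):
$ mysql -u root -p                       # OS shell: start the client here
mysql> SHOW DATABASES;                   # mysql prompt: SQL only, ends with ;
mysql> exit
$ mysqldump -u root -p mydb > mydb.sql   # OS shell again: mysqldump runs here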

What is the correct syntax for the SOURCE command in SQL

In codeAnywhere I'm trying to run pre-written script files to create a table. When using codeAnywhere one must first import the file containing the code into the shell, as I have done. However I have been unable to use the SOURCE command to run these files. I have currently attempted this syntax:
USE exams SOURCE students.txt;
What is the correct syntax here? Do I need to name the database in the syntax?
Are there other commands which run text files containing code?
EDIT: I tried using this syntax, to the following result:
ERROR: Failed to open file 'exams(question5.txt)', error: 2
Put the commands on separate lines, without semicolons (they are shell commands, not SQL), and if this doesn't work, prefix them with \ as well (I don't need to on my setup, but it's in the docs):
USE exams
SOURCE students.txt
https://dev.mysql.com/doc/mysql-shell-excerpt/5.7/en/mysql-shell-commands.html
On the shell you can use the following command to execute the queries from a text file:
mysql db_name < text_file
Hint: If the USE command (with the correct database name) is specified at the top of the text file, you don't need to specify the database on the command line. The SOURCE command is not available on the shell; there you need the < redirection instead.
You can find more information about executing queries from text files here:
https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html
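For example, if students.txt itself begins with the USE line (hypothetical contents):
-- students.txt
USE exams;
CREATE TABLE students (id INT PRIMARY KEY, name VARCHAR(100));
then from the shell you would run:
mysql -u youruser -p < students.txt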

Pass parameter from Batch file to MYSQL script

I'm having a really hard time believing this question has never been asked before; it MUST have been! I'm working on a batch file that needs to run some SQL commands. All the tutorials explaining this DO NOT WORK (referring to this link: Pass parameters to sql script, which someone will undoubtedly mention)! I've tried other posts on this site verbatim and still nothing works.
The way I see it, there are two ways I can approach this:
1. Either figure out how to call my basic MySQL script and specify a parameter, or
2. Find an equivalent "USE <database>;" command that works in batch
My Batch file so far:
:START
@ECHO off
:Set_User
set usrCode = 0
mysql -u root SET @usrCode = '0'; \. caller.sql
Simply put, I want to pass 'usrCode' to my MYSQL script 'caller.sql' which looks like this:
USE `my_db`;
CALL collect_mismatch(@usrCode);
I know that procedures are a whole other topic to get into, but assume that the procedure is working just fine. I just can't get my parameter from Batch to MYSQL.
Ideally I would like to have the 'USE' & 'CALL' commands in my batch file, but I can't find anything that lets me select a database in batch before CALLing my procedure. That's when I tried the above link, which boasts a simple command-line entry and you're off to the races, but that isn't the case.
Any help would be greatly appreciated.
This will work:
echo SET @usrCode = '0'; > params.sql
type params.sql caller.sql | mysql -u root dbname
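type concatenates the two files, so mysql receives the SET statement first and then the contents of caller.sql in the same session, which is why @usrCode is still defined when the CALL runs. On Linux/macOS the equivalent concatenation would be:
cat params.sql caller.sql | mysql -u root dbname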

How do you update/add to SQL tables with bash commands?

Need to add SNMP information to a SQL database and update it on a regular schedule. SNMP info can be queried from bash commands.
You can use bash commands to write insert statements to a file, then pipe the file into the mysql program.
Say you have a file that looks like this:
key1,1.0
key2,1.4
key3,1.9
key4,2.0
key5,3.5
you can pipe it into a bash script that looks something like:
#!/bin/bash
# read comma-separated key/value pairs from standard input
while IFS=, read -r key value; do
echo "insert into sometable(\`key\`, value) values('$key', $value);"
done > /tmp/inserts.sql
mysql < /tmp/inserts.sql > /tmp/inserts.out
If your data comes from somewhere else, the same principle applies: generate SQL commands into a file and pipe them into mysql.
This strategy isn't as kludgy as it might seem at first. MySQL's own mysqldump backup utility dumps the database to a file in the form of SQL statements.
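For the SNMP case in the question, a sketch along the same lines, assuming net-snmp's snmpget and a hypothetical table snmp_data(host, oid, value); run it from cron for the regular schedule:
#!/bin/bash
# poll one OID per host and feed the generated inserts to a single mysql session
for host in host1 host2; do
value=$(snmpget -v2c -c public -Ovq "$host" sysUpTime.0)
echo "insert into snmp_data(host, oid, value) values('$host','sysUpTime.0','$value');"
done | mysql -u someuser -psomepass somedb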

Using mysqldump to format one insert per line?

This has been asked a few times, but I cannot find a resolution to my problem. Basically, when using mysqldump (the backup tool built into the MySQL Workbench administration tool) with extended inserts, I get massively long lines of data. I understand why it does this: it speeds up inserts by inserting the data as one command (especially on InnoDB). But the formatting makes it REALLY difficult to actually look at the data in a dump file, or to compare two files with a diff tool if you are storing them in version control, etc. In my case I am storing them in version control, as we use the dump files to keep track of our integration test database.
Now I know I can turn off extended inserts, so I will get one insert per line, which works, but any time you do a restore with the dump file it will be slower.
My core problem is that in the OLD tool we used to use (MySQL Administrator) when I dump a file, it does basically the same thing but it FORMATS that INSERT statement to put one insert per line, while still doing bulk inserts. So instead of this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES (887,'0.0000'),(191607,'1.0300');
you get this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES
(887,'0.0000'),
(191607,'1.0300');
No matter what options I try, there does not seem to be any way to get a dump like this, which is really the best of both worlds. Yes, it takes a little more space, but in situations where you need a human to read the files, it makes them MUCH more useful.
Am I missing something and there is a way to do this with MySQLDump, or have we all gone backwards and this feature in the old (now deprecated) MySQL Administrator tool is no longer available?
Try using the following option:
--skip-extended-insert
It worked for me.
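For example (connection details and names are placeholders):
mysqldump --skip-extended-insert -u someuser -p somedb > somedb.sql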
With the default mysqldump format, each record dumped will generate an individual INSERT command in the dump file (i.e., the sql file), each on its own line. This is perfect for source control (e.g., svn, git, etc.) as it makes the diff and delta resolution much finer, and ultimately results in a more efficient source control process. However, for significantly sized tables, executing all those INSERT queries can potentially make restoration from the sql file prohibitively slow.
Using the --extended-insert option fixes the multiple INSERT problem by wrapping all the records into a single INSERT command on a single line in the dumped sql file. However, the source control process becomes very inefficient. The entire table's contents are represented on a single line in the sql file, and if a single character changes anywhere in that table, source control will flag the entire line (i.e., the entire table) as the delta between versions. And, for large tables, this negates many of the benefits of using a formal source control system.
So ideally, for efficient database restoration, in the sql file, we want each table to be represented by a single INSERT. For an efficient source control process, in the sql file, we want each record in that INSERT command to reside on its own line.
My solution to this is the following back-up script:
#!/bin/bash
cd my_git_directory/
ARGS="--host=myhostname --user=myusername --password=mypassword --opt --skip-dump-date"
/usr/bin/mysqldump $ARGS --databases mydatabase | sed 's$VALUES ($VALUES\n($g' | sed 's$),($),\n($g' > mydatabase.sql
git fetch origin master
git merge origin/master
git add mydatabase.sql
git commit -m "Daily backup."
git push origin master
The result is a sql file INSERT command format that looks like:
INSERT INTO `mytable` VALUES
(r1c1value, r1c2value, r1c3value),
(r2c1value, r2c2value, r2c3value),
(r3c1value, r3c2value, r3c3value);
Some notes:
password on the command line ... I know, not secure, different discussion.
--opt: Among other things, turns on the --extended-insert option (i.e., one INSERT per table).
--skip-dump-date: mysqldump normally puts a date/time stamp in the sql file when created. This can become annoying in source control when the only delta between versions is that date/time stamp. The OS and source control system will date/time stamp the file and version anyway; it's not really needed in the sql file.
The git commands are not central to the fundamental question (formatting the sql file), but they show how I get my sql file back into source control; something similar can be done with svn. When combining this sql file format with your source control of choice, you will find that when your users update their working copies, they only need to move the deltas (i.e., changed records) across the internet, and they can take advantage of diff utilities to easily see which records in the database have changed.
If you're dumping a database that resides on a remote server, run this script on that server if possible, to avoid pushing the entire contents of the database across the network with each dump.
If possible, establish a working source control repository for your sql files on the same server you are running this script from; check them into the repository from there. This will also help prevent having to push the entire database across the network with every dump.
As others have said, using sed to replace "),(" is not safe, as that sequence can appear as content in the database.
There is a way to do this however:
if your database name is my_database then run the following:
$ mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database > my_database.sql
$ sed ':a;N;$!ba;s/)\;\nINSERT INTO `[A-Za-z0-9$_]*` VALUES /),\n/g' my_database.sql > my_database2.sql
You can also use "sed -i" to edit the file in place.
Here is what this code is doing:
--skip-extended-insert will create one INSERT INTO for every row you have.
Now we use sed to clean up the data. Note that an ordinary search/replace with sed applies to one line at a time, so we cannot match the "\n" character directly. That is why we put ":a;N;$!ba;" at the start, which tells sed to keep appending the next line to the pattern space so the search/replace can work across the whole multi-line input.
Hope this helps
What about storing the dump in a tab-separated text file, using mysqldump's --tab option, like this?
mysqldump --tab=/path/to/serverlocaldir --single-transaction <database> table_a
This produces two files:
table_a.sql that contains only the table create statement; and
table_a.txt that contains tab-separated data.
RESTORING
You can restore your table via LOAD DATA:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_a FIELDS TERMINATED BY '\t' ...
LOAD DATA is usually 20 times faster than using INSERT statements.
If you have to restore your data into another table (e.g. for review or testing purposes) you can create a "mirror" table:
CREATE TABLE table_for_test LIKE table_a;
Then load the CSV into the new table:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_for_test FIELDS TERMINATED BY '\t' ...
COMPARE
A CSV file is simplest for diffs or for looking inside, or for non-SQL technical users who can use common tools like Excel, Access, or the command line (diff, comm, etc.).
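For example, comparing two dated dumps of the data file is then an ordinary text diff (the paths here are hypothetical):
diff /backups/2024-01-01/table_a.txt /backups/2024-01-02/table_a.txt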
I'm afraid this won't be possible. In the old MySQL Administrator I wrote the code for dumping db objects, which was completely independent of the mysqldump tool and hence offered a number of additional options (like this formatting, or progress feedback). For MySQL Workbench it was decided to use the mysqldump tool instead, which, besides being a step backwards in some regards and producing version problems, has the advantage of always staying up to date with the server.
So the short answer is: formatting is currently not possible with mysqldump.
Try this:
mysqldump -c -t --add-drop-table=FALSE --skip-extended-insert -uroot -p<Password> databaseName tableName >c:\path\nameDumpFile.sql
I found this tool very helpful for dealing with extended inserts: http://blog.lavoie.sl/2014/06/split-mysqldump-extended-inserts.html
It parses the mysqldump output and inserts linebreaks after each record, but still using the faster extended inserts. Unlike a sed script, there shouldn't be any risk of breaking lines in the wrong place if the regex happens to match inside a string.
I liked Ace.Di's solution with sed, until I got this error:
sed: Couldn't re-allocate memory
Thus I had to write a small PHP script
mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database | php mysqlconcatinserts.php > db.sql
The PHP script also starts a new INSERT every 10,000 rows, again to avoid memory problems.
mysqlconcatinserts.php:
#!/usr/bin/php
<?php
/* assuming a mysqldump made with --skip-extended-insert: merge runs of
   single-row INSERTs for the same table into one multi-row INSERT,
   starting a fresh statement every $maxinserts rows */
$last = '';
$count = 0;
$maxinserts = 10000;
while ($l = fgets(STDIN)) {
    if (preg_match('/^(INSERT INTO .* VALUES) (.*);/', $l, $s)) {
        // new table or row limit reached: close the open statement, start another
        if ($last != $s[1] || $count > $maxinserts) {
            if ($last != '')
                echo ";\n";
            echo "$s[1] ";
            $comma = '';
            $last = $s[1];
            $count = 0;
        }
        echo "$comma$s[2]";
        $comma = ",\n";
        $count++;
    } else {
        if ($last != '') { // close the open INSERT before non-INSERT content
            $last = '';
            echo ";\n";
        }
        echo $l; // pass CREATE TABLE statements, comments, etc. through unchanged
    }
}
if ($last != '') // terminate a trailing INSERT at end of input
    echo ";\n";
Add
set autocommit=0;
as the first line of your sql script file (and COMMIT; at the end, otherwise the pending transaction is rolled back when the client disconnects), then import it with:
mysql -u<user> -p<password> --default-character-set=utf8 db_name < <path>\xxx.sql
It can easily be 10x faster.
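If you'd rather not edit the dump file itself, the same wrapping can be done on a Unix shell (a sketch; the file name and credentials are placeholders):
{ echo "SET autocommit=0;"; cat xxx.sql; echo "COMMIT;"; } | mysql -u<user> -p<password> --default-character-set=utf8 db_name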