MySQL import from stdin - mysql

I am generating a CSV on stdout using awk.
Is there a way to import that content directly into MySQL without writing it to a file?

As the answer from #xdazz says, just use LOAD DATA LOCAL INFILE. I assume it was downvoted out of ignorance or laziness. A quick perusal of the MySQL manuals would have shown that to be a perfectly viable answer.
In 2016 and for MariaDB, which will be most relevant to most users, you do this:
awk '{ ... }' | mysql --local-infile=1 -u user -ppassword mydatabase -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE mytable FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"';"
(where '{ ... }' is whatever awk program spits out your CSV)
Obviously, do bother to read the manual and change the LOAD DATA INFILE options as required to suit your specific case.

MySQL supports getting data in via extended inserts that look like this:
insert into table (col1, col2, col3) values
(a,b,c),
(d,e,f),
(g,h,i);
So you can modify your CSV so that each row is wrapped in parentheses and followed by a comma, prepend the whole thing with insert into table ..., append a semicolon, and then pipe that directly to the MySQL command line.
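A minimal sketch of that transformation, assuming a table named mytable with three numeric columns and a CSV in a file called data.csv (names are placeholders; string values would additionally need quoting and escaping):
awk 'BEGIN { print "insert into mytable (col1, col2, col3) values" }
     NR > 1 { print line "," }
     { line = "(" $0 ")" }
     END { print line ";" }' data.csv | mysql -u user -p mydatabase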
You can also use named pipes, a Unix construct, to pipe a TSV (tab-separated, not comma-separated) to a load data infile like this:
mkfifo /tmp/mysqltsv
cat file.csv | sed -e 's/,/\t/g' > /tmp/mysqltsv
mysql -e "load data infile '/tmp/mysqltsv' into table tblname"
That is pseudocode. You need to run the cat in one process and the mysql command in another. Easiest is to use two different terminals. More advanced is to background the cat|sed.
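For example, a single-terminal sketch that backgrounds the writer (the database name, mydb, is a placeholder the original command omitted):
mkfifo /tmp/mysqltsv
sed -e 's/,/\t/g' file.csv > /tmp/mysqltsv &
mysql mydb -e "load data infile '/tmp/mysqltsv' into table tblname"
rm /tmp/mysqltsv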

It does not seem that you can import CSV from stdin directly.
You have to save it to a file; mysqlimport uses the file name (without the extension) as the table name. You can use it as in:
mysqlimport -uUSER -pPASS DB FILE

#xdazz was quicker than me, but I would consider writing the result to a file. Why? Because that way, if something goes wrong, you can check the file and track the issue down. This is very helpful if you face intermittent problems that don't always occur. Of course, to preserve disk space, I'd zip the files up after the import is done so they don't consume too much.

Yes, just use pipe.
$ your_command | mysql -u user -p
Sorry, this answer is not enough. You can't pipe the CSV output directly to mysql.
You have to do extra work to turn the result into valid SQL.
Alternatively, you can use MySQL's native LOAD DATA INFILE syntax, which supports loading a CSV file into the database.

Related

Load data infile MySQL with absolute URL

I am trying to load a CSV into a table.
I have my CSV in a folder on my server. (wwww.myweb.com/temp/file.csv)
I use this statement:
LOAD DATA INFILE 'http://wwww.myweb.com/temp/file.csv' INTO TABLE ga_tmpActivosDocumentos FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 LINES (idTipoSuelo,C_Latitud,C_Longitud,Referencia,Zona,idProvincia,Poblacion,TituloActivo,Descripcion,Superficie,Gastos,Equipamiento,EquipamientoEN,GestionDocumental,PrecioVenta,CampoLibre1_Texto,CampoLibre1_Titulo,CampoLibre1_TextoEN,CampoLibre1_TituloEN,Activo, IMG1,IMG_Desc1,IMG_Desc1EN,IMG2,IMG_Desc2,IMG_Desc2EN,IMG3,IMG_Desc3,IMG_Desc3EN,IMG4,IMG_Desc4,IMG_Desc4EN,DOC1,DOC_Desc1,DOC_Desc1EN,DOC2,DOC_Desc2,DOC_Desc2EN,DOC3,DOC_Desc3,DOC_Desc3EN,DOC4,DOC_Desc4,DOC_Desc4EN,URL1,URL_Desc1,URL_Desc1EN,URL2,URL_Desc2,URL_Desc2EN) SET idCliente = 23
The statement does not work for me. I tried changing the path to .../temp/file.csv and other combinations, but it does not work.
I also tried "LOAD DATA LOCAL INFILE", but it does not work.
I have read other topics, but I have only seen examples with a relative URL, never an absolute one.
Thanks, and sorry for my English.
#vadym-tyemirov's answer works, but if you don't want to create a temporary file, one solution is to load it from '/dev/stdin' and pipe it to the mysql CLI:
wget -O - 'http://wwww.myweb.com/temp/file.csv' |
mysql \
--user=root \
--password=password \
--execute="LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE table_name"
Save the CSV file on your LOCAL computer.
Connect to the DB from your LOCAL computer.
Issue the following command: load data LOCAL infile '/tmp/file.csv' INTO TABLE table_name;
You can also load data files by using the mysqlimport utility; it operates by sending a LOAD DATA INFILE statement to the server. The --local option causes mysqlimport to read data files from the client host.
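For example, a minimal mysqlimport invocation for a comma-separated file (user, database, and file names are placeholders; remember that the file name, minus its extension, must match the table name):
mysqlimport --local --fields-terminated-by=',' -u user -p database_name /tmp/table_name.csv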
MySQL cannot access the file in that location. Try moving it somewhere simple like /tmp (or copy it) on the local filesystem, and not via a URL parameter.
The MySQL process likely cannot read the directories leading up to temp/file.csv.

How to import sql file to database ignoring X lines in file?

I'm trying to import a pretty large .sql file into a MySQL database. However, after some time importing it, I encountered an error, so I want to fix it and continue importing from the specific line of the file where I stopped last time. Is this possible?
You can use tail along with the MySQL command-line tool if you are on a Unix-like system, as Himanshu mentioned. The command would be:
tail -n +100 dump.sql | mysql -u user -p -D database
Note: You may run into problems starting part of the way thru a file because there may be references to values set at a previous point in the file.
Next time, you can try importing the MySQL file with -f to ignore the errors:
-f, --force Continue even if we get an SQL error.
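For example (user and database names are placeholders):
mysql -f -u user -p database_name < dump.sql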
You can use the IGNORE number LINES syntax to do this:
The IGNORE number LINES option can be used to ignore lines at the start of the file. For example, you can use IGNORE 1 LINES to skip over an initial header line containing column names:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
See also:
LOAD DATA INFILE Syntax in official documentation

Import Multiple .sql dump files into mysql database from shell

I have a directory with a bunch of .sql files that are mysqldumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them, actually.
I need to create a shell script, or probably a one-liner, that will import each database.
I'm running on a Debian Linux machine.
I'm thinking there is some way to pipe the results of an ls into some find command or something...
Any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
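For illustration, the awk stage turns find's output into mysql "source" commands, one per dump file, something like:
source ./database1-2011-01-15.sql
source ./database2-2011-01-15.sql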
A one-liner that reads in all the .sql files and imports them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
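For illustration, with the dated filenames from the question the replacement just drops the extension, so this loop assumes each file is named exactly after its database:
SQL=database1-2011-01-15.sql
DB=${SQL/\.sql/}
echo "$DB"    # prints database1-2011-01-15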
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the syntax of mysqldump, but it will be something like this:
find . -name '*.sql' | xargs mysql ...
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward; it allows you to specify MySQL connection parameters, and will decompress gzipped SQL files on the fly. It assumes you have one file per database, and the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create databases called "foo" and "bar" (if they don't already exist) and import the SQL file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).

Using mysqlimport where the filename is different from the table name

I've been playing with mysqlimport, and I've run into the restriction that the filename has to be the same as the table name. Is there any way to work around this?
I can't rename the file as it is used by other processes and I don't want to copy the file as there will be many of them, some being very large.
I want to use mysqlimport, not LOAD INFILE.
EDIT: Unfortunately this needs to run on Windows, so no tricks with symbolic links, I'm afraid.
You didn't say what platform you are on. On Unix you can create a symbolic link to the file:
ln -s filename.txt tablename.txt
Then use that in the mysqlimport command.
But mysqlimport is just a command line interface to LOAD INFILE so you could also do this on the command line:
mysql -e "load data infile 'filename' into table TBL_NAME" dbname
mysqlimport uses the filename to determine the name of the table into which the data should be loaded. The program does this by stripping off any filename extension (the last period and anything following it); the result is then used as the table name. For example, mysqlimport treats a file named City.txt or City.dat as input to be loaded into a table named City.
Have you tried using the alias command, assuming you are on a Linux system?
Just create a symbolic link:
ln -s /tmp/real_file.txt /tmp/your_table_name.txt

MySQL Memory engine + init-file

I'm trying to set up a MySQL database so that the tables use the MEMORY engine. I don't really care about losing some of the data that gets populated, but I would like to dump it daily (via mysqldump in a cronjob) and point the init-file at that dump. However, I can't seem to figure out how to make the mysqldump output compatible with how the init-file wants the SQL statements to be formatted.
Am I just missing something completely obvious in trying to set up a database this way?
MySQL dumps are exactly that -- dumps of the MySQL database contents as SQL. So, there isn't any way to read this directly as a database file.
What you can do, is modify your init script for MySQL to automatically load the last dump (via the command line) every time MySQL starts.
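For instance, a rough sketch of such a startup wrapper (the service name, credentials, and paths are placeholders, not from this answer):
#!/bin/sh
# start the server, then replay the most recent dump into it
service mysql start
mysql -u root -p'secret' mydatabase < /var/backups/mydatabase-latest.sql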
An even better solution would be to use a ramdisk to hold the entire contents of your database in memory, and then periodically copy this to a safe location as your backup.
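A rough sketch of the ramdisk idea (mount point, size, and paths are assumptions, and the backup copy should happen while the server is stopped or the tables are locked):
mount -t tmpfs -o size=512M tmpfs /var/lib/mysql-ram
rsync -a /var/lib/mysql/ /var/lib/mysql-ram/
# point datadir at /var/lib/mysql-ram in my.cnf, then back it up periodically:
rsync -a /var/lib/mysql-ram/ /var/backups/mysql-snapshot/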
Although, if you want to maintain the contents of your databases at all, you're better off just using one of the disk-based storage engines (InnoDB or MyISAM), and just giving your server a lot of RAM to use as a cache.
This solution is almost great, but it causes problems when string values in the table data contain semicolons: all of them get replaced with a newline character.
Here is how I implemented this:
mysqldump --comments=false --opt dbname Table1 Table2 > /var/lib/mysql/mem_tables_init.tmp1
#Format dump file - each statement into single line; semicolons in table data are preserved
grep -v -- ^-- /var/lib/mysql/mem_tables_init.tmp1 | sed ':a;N;$!ba;s/\n/THISISUNIQUESTRING/g' | sed -e 's/;THISISUNIQUESTRING/;\n/g' | sed -e 's/THISISUNIQUESTRING//g' > /var/lib/mysql/mem_tables_init.tmp2
#Add "USE database_name" instruction
cat /var/lib/mysql/mem_tables_init.tmp2 |sed -e 's/DROP\ TABLE/USE\ `dbname`;\nDROP\ TABLE/' > /var/lib/mysql/mem_tables_init.sql
#Cleanup
rm -f /var/lib/mysql/mem_tables_init.tmp1 /var/lib/mysql/mem_tables_init.tmp2
My understanding is that --init-file expects each SQL statement to be on a single line and that there are no comments in the file.
You should be able to clear up the comments with:
mysqldump --comments=false
As for putting each SQL statement on one line, I'm not familiar with a mysqldump option to do that, but what you can do is use a line of Perl to remove all of the newlines:
perl -pi -w -e 's/\n//g;' theDumpFilename
I don't know if --init-file will like it or not, but it's worth a shot.
The other thing you could do is launch mysql from a script that also loads in a regular mysqldump file. Not the solution you were looking for, but it might accomplish the effect you're after.
I stumbled onto this, so I'll tell you what I do. First, I have an ip->country db in a memory table. There is no reason to try to "save" it; it's easily and regularly dropped and recreated, but it may be unpredictable how the PHP will act when it's missing, and it's only scheduled to be updated weekly. Second, I have a bunch of other memory tables. There is no reason to save these, as they are even more volatile, with lifespans in minutes. They will be refreshed very quickly, but stale data is better than none at all. Also, if you are using any separate key caches, they may (in some cases) need to be loaded first or you will be unable to load them. And finally, be sure to put a "use" statement in there if you're not dumping complete databases, as there is no other interface (like the mysql client) to open the database at startup. So:
cat << EOF > /var/lib/mysql/initial_load.tmp
use fieldsave_db;
cache index fieldsave_db.search in search_cache;
EOF
mysqldump --comments=false -udrinkin -pbeer# fieldsave_db ip2c \
>> /var/lib/mysql/initial_load.tmp
mysqldump --comments=false -ufields -pavenue -B memtables \
>> /var/lib/mysql/initial_load.tmp
grep -v -- ^-- /var/lib/mysql/initial_load.tmp |tr -d '\012' \
|sed -e 's/;/;\n/g' > /var/lib/mysql/initial_load.sql
As always, YMMV, but it works for me.