I've been playing with mysqlimport and I've run into the restriction that the filename has to be the same as the table name. Is there any way to work around this?
I can't rename the file as it is used by other processes and I don't want to copy the file as there will be many of them, some being very large.
I want to use mysqlimport, not LOAD DATA INFILE.
EDIT: Unfortunately this needs to run on windows so no tricks with symbolic links I'm afraid.
You didn't say what platform you are on. On unix you can create a symbolic link to the file:
ln -s filename.txt tablename.txt
Then use that in the mysqlimport command.
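For example, a minimal sketch (the database name dbname is just a placeholder; mysqlimport derives the table name from the link's base name):
mysqlimport --local dbname tablename.txt
Add or drop --local depending on whether the file lives on the client host or on the server.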
But mysqlimport is just a command-line interface to LOAD DATA INFILE, so you could also do this on the command line:
mysql -e "load data infile 'filename' into table TBL_NAME" dbname
mysqlimport uses the filename to determine the name of the table into which the data should be loaded. The program does this by stripping off any filename extension (the last period and anything following it); the result is then used as the table name. For example, mysqlimport treats a file named City.txt or City.dat as input to be loaded into a table named City.
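To illustrate with a quick sketch (the database name world is just an example here):
mysqlimport world City.txt   # loaded into table City
mysqlimport world City.dat   # also loaded into table City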
Have you tried using the alias command, assuming you are on a Linux system?
Just create a symbolic link:
ln -s /tmp/real_file.txt /tmp/your_table_name.txt
I'm trying to load a CSV into a table.
I have my CSV in a folder on my server (wwww.myweb.com/temp/file.csv).
I am using this statement:
LOAD DATA INFILE 'http://wwww.myweb.com/temp/file.csv' INTO TABLE ga_tmpActivosDocumentos FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 LINES (idTipoSuelo,C_Latitud,C_Longitud,Referencia,Zona,idProvincia,Poblacion,TituloActivo,Descripcion,Superficie,Gastos,Equipamiento,EquipamientoEN,GestionDocumental,PrecioVenta,CampoLibre1_Texto,CampoLibre1_Titulo,CampoLibre1_TextoEN,CampoLibre1_TituloEN,Activo, IMG1,IMG_Desc1,IMG_Desc1EN,IMG2,IMG_Desc2,IMG_Desc2EN,IMG3,IMG_Desc3,IMG_Desc3EN,IMG4,IMG_Desc4,IMG_Desc4EN,DOC1,DOC_Desc1,DOC_Desc1EN,DOC2,DOC_Desc2,DOC_Desc2EN,DOC3,DOC_Desc3,DOC_Desc3EN,DOC4,DOC_Desc4,DOC_Desc4EN,URL1,URL_Desc1,URL_Desc1EN,URL2,URL_Desc2,URL_Desc2EN) SET idCliente = 23
The statement does not work for me. I tried changing the path to .../temp/file.csv and other combinations, but it does not work.
I also tried LOAD DATA LOCAL INFILE, but that does not work either.
I have read other topics, but I only see examples with a relative path, never an absolute URL.
Thanks, and sorry for my English.
@vadym-tyemirov's answer works, but if you don't want to create a temporary file, one solution is to read from '/dev/stdin' and pipe the download into the mysql CLI:
wget -O - 'http://wwww.myweb.com/temp/file.csv' |
mysql \
--user=root \
--password=password \
--execute="LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE table_name"
Save the CSV file on your LOCAL computer.
Connect to the DB from your LOCAL computer.
Issue the following command: load data LOCAL infile '/tmp/file.csv' INTO TABLE table_name;
You can also load data files by using the mysqlimport utility; it operates by sending a LOAD DATA INFILE statement to the server. The --local option causes mysqlimport to read data files from the client host.
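For example, a rough sketch (database, path, and delimiter are placeholders; the base name of the file must match the table name):
mysqlimport --local --fields-terminated-by=',' dbname /path/to/table_name.csv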
MySQL cannot access the file in that location. Try moving it somewhere simple like /tmp (or copy it) on the local filesystem, and not via a URL parameter.
The MySQL process likely cannot access the directories leading up to "temp/file.csv", and LOAD DATA INFILE cannot fetch a URL at all.
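One way to do that, sketched with placeholder credentials and an abbreviated statement (keep the column list and SET clause from the question; they are omitted here for brevity):
wget -O /tmp/file.csv 'http://wwww.myweb.com/temp/file.csv'
mysql --local-infile=1 -u user -p dbname -e "LOAD DATA LOCAL INFILE '/tmp/file.csv' INTO TABLE ga_tmpActivosDocumentos FIELDS TERMINATED BY ';' LINES TERMINATED BY '\n' IGNORE 1 LINES"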
I'm trying to import a pretty large .sql file into a MySQL database. However, after importing for some time I ran into an error, so I want to fix the problem and continue importing from the specific line of the file where it stopped last time. Is this possible?
You can use tail along with the MySQL command-line tool if you are on a Unix-like system, as Himanshu mentioned. The command would be:
tail -n +100 dump.sql | mysql -u user -p -D database
Note: You may run into problems starting part of the way thru a file because there may be references to values set at a previous point in the file.
Next time you can try importing the MySQL file with -f to ignore errors:
-f, --force Continue even if we get an SQL error.
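For example (user and file names are placeholders):
mysql -f -u user -p database_name < dump.sql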
You can use IGNORE number LINES syntax to do this:
The IGNORE number LINES option can be used to ignore lines at the start of the file. For example, you can use IGNORE 1 LINES to skip over an initial header line containing column names:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test IGNORE 1 LINES;
See also:
LOAD DATA INFILE Syntax in official documentation
I am generating a CSV on stdout using awk.
Is there a way to import that content directly into MySQL without writing it to a file?
As the answer from @xdazz says, just use LOAD DATA LOCAL INFILE. I assume it was downvoted out of ignorance or laziness. A quick perusal of the MySQL manual would have shown it to be a perfectly viable answer.
In 2016 and for MariaDB, which will be most relevant to most users, you do this:
awk '{ ... your script that emits CSV ... }' | mysql --local-infile=1 -u user -ppassword mydatabase -e "LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE mytable FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"';"
Obviously, do bother to read the manual and change the options to LOAD DATA INFILE as required to suit your specific case.
MySQL supports getting data in via extended inserts that look like this:
insert into table (col1, col2, col3) values
(a,b,c),
(d,e,f),
(g,h,i);
So you can wrap each CSV line in parentheses, join the rows with commas, prepend an insert into table ... line, append a semicolon, and then pipe that directly to the MySQL command line, as in the sketch below.
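A minimal sketch of that idea (mytable and its columns are placeholders, and it assumes plain numeric fields that need no quoting or escaping):
{
  echo "INSERT INTO mytable (col1, col2, col3) VALUES"
  sed 's/.*/(&),/' data.csv | sed '$ s/,$/;/'   # wrap each row in parens, end the last one with ;
} | mysql -u user -p mydatabase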
You can also use named pipes, a Unix construct, to pipe a TSV (tab-separated, not comma-separated) to a load data infile like this:
mkfifo /tmp/mysqltsv
cat file.csv | sed -e 's/,/\t/g' > /tmp/mysqltsv
mysql -e "load data infile '/tmp/mysqltsv' into table tblname"
That is pseudocode. You need to run the cat in one process and the mysql command in another; the easiest way is to use two different terminals. A more advanced option is to background the cat | sed, as in the sketch below.
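A slightly more concrete sketch of the backgrounded version (dbname is a placeholder; note this is not a LOCAL load, so the FIFO must be readable by the MySQL server process on the same host):
mkfifo /tmp/mysqltsv
sed -e 's/,/\t/g' file.csv > /tmp/mysqltsv &   # writer blocks until mysql opens the pipe
mysql dbname -e "load data infile '/tmp/mysqltsv' into table tblname"
rm /tmp/mysqltsv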
It does not seem that you can import CSV from stdin directly.
You have to save it to a file whose name (without the extension) is the name of the table; then you can use mysqlimport as in:
mysqlimport -uUSER -pPASS DB FILE
@xdazz was quicker than me, but I would consider writing the result to a file. Why? Because that way, if something goes wrong, you can check the file and trace the issue back. This is very helpful if you face intermittent problems that don't always occur. Of course, to preserve disk space, I'd zip the files up after the import is done so they don't consume too much.
Yes, just use pipe.
$ your_command | mysql -u user -p
Sorry, this answer is not enough. You can't pipe the CSV output directly to mysql;
you have to do extra work to turn the result into valid SQL.
Alternatively, you may consider using MySQL's native LOAD DATA INFILE syntax, which supports loading a CSV file into the database.
I have a directory with a bunch of .sql files that are mysql dumps of each database on my server.
e.g.
database1-2011-01-15.sql
database2-2011-01-15.sql
...
There are quite a lot of them actually.
I need to create a shell script, or probably a one-liner, that will import each database.
I'm running on a Linux Debian machine.
I'm thinking there is some way to pipe the results of ls into some find command or something...
any help and education is much appreciated.
EDIT
So ultimately I want to automatically import one file at a time into the database.
E.g. if I did it manually on one it would be:
mysql -u root -ppassword < database1-2011-01-15.sql
cat *.sql | mysql? Do you need them in any specific order?
If you have too many to handle this way, then try something like:
find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch
This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads each file itself instead of having it read from stdin.
One-liner that reads in all .sql files and imports them:
for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done
The only trick is the bash substring replacement to strip out the .sql to get the database name.
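If the substitution looks opaque, this is all it does (example filename only):
SQL=database1-2011-01-15.sql
echo "${SQL/.sql/}"   # prints database1-2011-01-15
echo "${SQL%.sql}"    # suffix removal works just as well here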
There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script that will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:
for i in *.sql
do
echo "file=$i"
mysql -u admin_privileged_user --password=whatever your_database_here < $i
done
mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first and then running it on the uncompressed file.
I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
I don't remember the syntax of mysqldump, but it will be something like this:
find . -name '*.sql'|xargs mysql ...
I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.
Here it is on GitHub
It's simple and straightforward: it allows you to specify MySQL connection parameters and will decompress gzipped SQL files on the fly. It assumes you have one file per database, and that the base of the filename is the desired database name.
So:
myload foo.sql bar.sql.gz
This will create (if they don't exist) databases called "foo" and "bar" and import the SQL file into each.
For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
I have looked all over and found no solution; any help on this would be great.
Query:
LOAD DATA INFILE '/Users/name/Desktop/loadIntoDb/loadIntoDB.csv'
INTO TABLE `tba`.`tbl_name`
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(
field1, field2, field3
)
Error:
Can't get stat of '/Users/name/Desktop/loadIntoDb/loadIntoDB.csv' (Errcode:2)
NOTE:
I'm running MySQL Query Browser on OS X 10.6.4, connecting to MySQL 5.x
Things I've tried:
Drag-n-drop
Chmod 777
Put in a folder with 777 permissions, as well as the file itself having 777 permissions
Try using LOAD DATA LOCAL INFILE instead of LOAD DATA INFILE;
otherwise, check whether AppArmor is active for your directory.
I had a similar problem. The resolution was a mildly ugly hack, but it is much easier to remember than the AppArmor workarounds, provided that you can sudo. First, I had to put the input file in the mysql sub-directory for the database I was using:
sudo cp myfile.txt /var/lib/mysql/mydatabasename
This does a copy and leaves 'root' as the file owner. After getting into mysql and doing a USE mydatabasename, I was able to populate the appropriate table using
LOAD DATA INFILE 'mytabdelimitedtextfile.txt' INTO TABLE mytablename;
Using the --local parameter will help with this.
Example: mysqlimport --local databasename file.txt -p
source:
http://dev.mysql.com/doc/refman/5.1/en/load-data.html
"The --local option causes mysqlimport to read data files from the client host"
For me, copying the contents to /tmp and using that as the source folder did the trick.
I use MariaDB, and my version does not allow using the "LOCAL" modifier.
Interestingly, giving read-write access to the CSV folder did not work either.
I had the same problem while populating a table in MySQL on an AWS instance.
In my case I had the CSV file on the instance itself.
Using the absolute path solved my problem.
Here's the line from MySQL documentation
If LOCAL is specified, the file is read by the client program on the client host and sent to the server. The file can be given as a full path name to specify its exact location. If given as a relative path name, the name is interpreted relative to the directory in which the client program was started.
http://dev.mysql.com/doc/refman/5.7/en/load-data.html
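So, as a rough illustration (user, database, table, and path names are all placeholders):
cd /home/me/data   # with LOCAL, relative paths are resolved from wherever the client is started
mysql --local-infile=1 -u user -p dbname -e "LOAD DATA LOCAL INFILE 'file.csv' INTO TABLE tbl_name"
mysql -u user -p dbname -e "LOAD DATA INFILE '/tmp/file.csv' INTO TABLE tbl_name"   # without LOCAL, give an absolute path readable by the server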