The MySQL documentation says LOAD DATA INFILE can be up to 20 times faster than a bulk insert (INSERT ... VALUES), so I am trying to replace INSERT ... VALUES with LOAD DATA INFILE in my application.
When I try LOAD DATA INFILE with ADO.NET, it ends in an error. It seems there is no way to load a file from the client machine when the MySQL server is running on a different host.
But the documentation also says that is not a problem for mysqlimport:
You can also load data files by using the mysqlimport utility; it
operates by sending a LOAD DATA INFILE statement to the server. The
--local option causes mysqlimport to read data files from the client host.
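For reference, a typical invocation that exercises this client-side path might look like the following (host, credentials, database, and file name are placeholders; mysqlimport derives the table name from the file name, here mytable):
mysqlimport --local --host=db.example.com --user=myuser --password --fields-terminated-by='\t' mydb /tmp/mytable.txt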
I am curious which protocol mysqlimport uses when it imports data into MySQL on a remote machine.
Does it parse the data file and construct SQL such as INSERT ... VALUES, or does it have a special protocol for sending the data file directly to the server?
Related
I had a 56 GB local CSV file with more than 450,000 columns and wanted to use the LOAD DATA LOCAL INFILE command to import the data into a remote MySQL server. It ran for a long time and then raised the error "Lost connection to MySQL server during query". Is the file so big that the LOCAL upload simply takes too long, or is there another reason? How can I solve this?
I am testing Google Cloud SQL and I have to test the statements LOAD DATA INFILE and SELECT...INTO OUTFILE.
I understand that LOAD DATA LOCAL INFILE can be used instead of LOAD DATA INFILE, but how can it be used in Cloud SQL?
Can SELECT...INTO OUTFILE be used as is?
Thank you Renzo for your answer, but when I try to execute SELECT...INTO OUTFILE while connected to the Cloud SQL instance, I get the following error message:
ERROR 1045 (28000) at line 1: Access denied for user 'root'@'%'
I tried to use LOAD DATA LOCAL INFILE from my computer while connected to the Cloud SQL MySQL instance, and it worked: I imported CSV data from a file on my computer into a table successfully. But I am wondering whether I can do the same with a file stored in a bucket...
I try to explain what I need to do:
We are migrating our website to GCP App Engine/Cloud SQL. We make extensive use of both statements: LOAD DATA INFILE picks up files from an "ftp" folder and loads them into the databases, and SELECT...INTO OUTFILE exports data to CSV files in a folder of our website. My concern is being able to keep the same process on App Engine/Cloud SQL.
The difference is that the website instance and the database instance are separate on GCP. Should we use buckets on GCP to replace our folders, or should we create folders on the App Engine/Cloud SQL instances instead? What is the best solution in your opinion?
Thanks in advance
According to the page about Importing data into Cloud SQL,
you can also use the LOAD DATA LOCAL INFILE statement in the mysql client, which loads a local file to the database
You can follow the sample steps in this link to start the mysql client by connecting to your Cloud SQL instance through Cloud Shell. After connecting, you will be able to execute the LOAD DATA LOCAL INFILE statement.
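A minimal session might look like this (instance, database, table, and file path are placeholders; if the bundled mysql client rejects LOCAL, you may need to start mysql yourself with --local-infile=1):
gcloud sql connect my-instance --user=root
Then, at the mysql prompt:
LOAD DATA LOCAL INFILE '/home/me/data.csv'
INTO TABLE mydb.mytable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';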
The same steps apply to exporting, and the SELECT...INTO OUTFILE syntax can be used as is. Together with other alternatives, this is also covered in Exporting data from Cloud SQL:
Exporting in CSV format is equivalent to running the following SQL statement:
SELECT <query> INTO OUTFILE ... CHARACTER SET 'utf8mb4'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\' LINES TERMINATED BY '\n'
Once you have started an operation, it is also advisable to check its status, especially if it is long-running or an error has occurred.
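For example, assuming an instance named my-instance, recent operations and their status can be listed with:
gcloud sql operations list --instance=my-instance --limit=10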
I am trying to insert data from a text file into MySQL using the LOAD DATA LOCAL INFILE command.
When I run this command in the MySQL window, it reports the full number of records inserted.
LOAD DATA LOCAL INFILE 'filepath' INTO TABLE tablename FIELDS TERMINATED BY '\t';
However, when I pass the same command through Ruby, fewer records are inserted.
connect.query("LOAD DATA LOCAL INFILE 'filepath' INTO TABLE tablename FIELDS TERMINATED BY '\\t';")
I verified by printing the above query that it is identical.
I am using MySQL Workbench 6.3, version 6.3.10 build 12092614.
If filepath is a Ruby variable, it will not be interpolated inside the string. Try this:
connect.query("LOAD DATA LOCAL INFILE '#{filepath}' INTO TABLE tablename FIELDS TERMINATED BY '\\t';")
If 'filepath' is a literal path, try an absolute path from the root; with LOCAL, a relative path is resolved against the working directory of the client process, which may not be what you expect.
If this query is sent to a remote MySQL server, the server itself cannot open a file on your machine; the LOCAL keyword makes the client read and upload the file, but only if the client library has local-infile support enabled, so you may need to rethink your import setup.
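As a side note, if you are using the mysql2 gem, LOAD DATA LOCAL INFILE is refused unless the client is created with local_infile enabled; here is a minimal sketch (connection details are placeholders):
require 'mysql2'

# LOCAL is only honored when the client opts in; the server must also
# allow local_infile.
connect = Mysql2::Client.new(
  host: 'db.example.com',
  username: 'myuser',
  password: 'secret',
  database: 'mydb',
  local_infile: true
)
connect.query("LOAD DATA LOCAL INFILE '#{filepath}' INTO TABLE tablename FIELDS TERMINATED BY '\\t';")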
I want to be able to use the LOAD DATA INFILE command in MySQL, but instead of loading the data from a local file I want to load it from a CSV file stored in S3.
I.e., if the file were in local storage it would look like:
LOAD DATA INFILE 'C:\\abc.csv' INTO TABLE abc
But if the file is in S3, I am not sure how I could do something like this.
Is this possible?
NOTE: this is not an RDS instance, so the template described here does not seem to apply:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-template-copys3tords.html
The mysql CLI allows you to execute STDIN as a stream of SQL statements.
Using a combination of the AWS S3 CLI and mkfifo, you can stream data out of S3.
Then it's a simple matter of connecting the streams with something that re-formats the CSV into valid SQL.
mkfifo /tmp/mypipe
# stream the object into the fifo in the background (the copy blocks until a reader attaches)
aws s3 cp s3://your/s3/object - > /tmp/mypipe &
python transform_csv_to_sql.py < /tmp/mypipe | mysql target_database
You might be able to drop the Python step entirely by telling MySQL to load the CSV directly from the fifo:
mkfifo /tmp/mypipe
aws s3 cp s3://your/s3/object - > /tmp/mypipe &
mysql target_database --execute "LOAD DATA INFILE '/tmp/mypipe' INTO TABLE your_table"
Good luck!
What is the fastest method to upload a CSV file into MySQL via Perl?
The CSV file is about 20 GB.
Thanks.
Consider using MySQL's LOAD DATA INFILE:
http://dev.mysql.com/doc/refman/5.0/en/load-data.html
You could use Perl and DBI to execute a LOAD DATA INFILE statement if, for some reason, you cannot use the mysql client or mysqlimport.
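A minimal sketch of that approach (host, credentials, table, and CSV layout are assumptions; mysql_local_infile=1 lets the client stream the file when the server is remote):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# mysql_local_infile=1 enables LOAD DATA LOCAL on the client side, so the
# 20 GB file is read from the machine running this script, not the server.
my $dbh = DBI->connect(
    'DBI:mysql:database=mydb;host=db.example.com;mysql_local_infile=1',
    'myuser', 'secret', { RaiseError => 1 },
);

# One statement streams the whole file; do() returns the number of rows loaded.
my $rows = $dbh->do(q{
    LOAD DATA LOCAL INFILE '/path/to/file.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
});
print "Loaded $rows rows\n";
$dbh->disconnect;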