I have a CSV file. It contains 1.4 million rows of data, so I am not able to open that CSV file in Excel because its limit is about 1 million rows.
Therefore, I want to import this file into MySQL Workbench. This CSV file contains columns like
"Service Area Code","Phone Numbers","Preferences","Opstype","Phone Type"
I am trying to create a table in MySQL Workbench named "dummy" containing columns like
ServiceAreaCodes,PhoneNumbers,Preferences,Opstyp,PhoneTyp.
The CSV file is named model.csv. My code in Workbench is like this:
LOAD DATA LOCAL INFILE 'model.csv' INTO TABLE test.dummy FIELDS TERMINATED BY ',' lines terminated by '\n';
but I am getting an error like "model.CSV file not found".
I guess you're missing the ENCLOSED BY clause
LOAD DATA LOCAL INFILE '/path/to/your/csv/file/model.csv'
INTO TABLE test.dummy FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '\n';
And specify the CSV file's full path.
Load Data Infile - MySQL documentation
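If the first line of model.csv holds the column headers, you can skip it with IGNORE 1 LINES. A minimal sketch, assuming the table from the question (the column types here are placeholder guesses):

CREATE TABLE test.dummy (
  ServiceAreaCodes VARCHAR(10),
  PhoneNumbers VARCHAR(20),
  Preferences VARCHAR(50),
  Opstyp VARCHAR(20),
  PhoneTyp VARCHAR(20)
);

LOAD DATA LOCAL INFILE '/path/to/your/csv/file/model.csv'
INTO TABLE test.dummy
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES; -- skip the header row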
In case you have a smaller data set, a way to achieve it via the GUI is:
Open a query window
SELECT * FROM [table_name]
Select Import from the menu bar
Press Apply on the bottom right below the Result Grid
Reference:
http://www.youtube.com/watch?v=tnhJa_zYNVY
In the navigator under SCHEMAS, right-click your schema/database and select "Table Data Import Wizard".
Works for Mac too.
You can use MySQL Table Data Import Wizard
At the moment it is not possible to import a CSV (using MySQL Workbench) on all platforms, nor is it advised if said file does not reside on the same host as the MySQL server.
However, you can use mysqlimport.
Example:
mysqlimport --local --compress --user=username --password --host=hostname \
--fields-terminated-by=',' Acme sales.part_*
In this example mysqlimport is instructed to load all of the files named "sales" with an extension starting with "part_". This is a convenient way to load all of the files created in the "split" example. Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for CSV files and the --local option specifies that the incoming data is located on the client. Without the --local option, MySQL will look for the data on the database host, so always specify the --local option.
There is useful information on the subject in AWS RDS documentation.
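Note that mysqlimport derives the table name from the file name by stripping the extension, so each file must be named after its target table. A minimal single-file sketch (credentials, database name, and path are placeholders):

# loads /tmp/dummy.csv into the table `dummy` in database `test`
mysqlimport --local --fields-terminated-by=',' \
  --user=username --password test /tmp/dummy.csv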
If the server resides on a remote machine, make sure the file is on the remote machine and not on your local machine.
If the file is on the same machine as the MySQL server, make sure the mysql user has permissions to read/write the file, or copy the file into the MySQL schema directory:
In my case, on Ubuntu, it was: /var/lib/mysql/db_myschema/myfile.csv
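For example, a quick sketch of that copy on Ubuntu (the source path is a placeholder; the destination comes from above):

sudo cp /home/youruser/myfile.csv /var/lib/mysql/db_myschema/
sudo chown mysql:mysql /var/lib/mysql/db_myschema/myfile.csv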
Also, not related to this problem: if you have problems with the new lines, use Sublime Text to change the line endings to Windows format, save the file, and retry.
It seems a little tricky, since it had bothered me for a long time.
You just need to open the table (right-click it and choose "Select Rows - Limit 10000") and a new window will open. In this new window you will find the import icon.
https://www.convertcsv.com/csv-to-sql.htm
This helped me a lot. You upload your Excel (or .csv) file and it gives you an .sql file with SQL statements which you can execute, even in the terminal on Linux.
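The generated .sql file typically contains plain INSERT statements, roughly along these lines (table and column names here are illustrative):

INSERT INTO dummy (ServiceAreaCodes, PhoneNumbers, Preferences, Opstyp, PhoneTyp)
VALUES ('029', '1234567890', 'abc', 'xyz', 'mobile');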
Related
I am trying to insert into my database the data contained in a .csv file. I am using XAMPP. I don't know where I have to place the file, or how to configure the path, in order to make MySQL find it.
Go to localhost/phpmyadmin, create a database, and select that database. You will find the "Import" option in the top navbar of phpMyAdmin; then follow the guide and upload your .csv file.
Have a look at this tutorial
Upload the .csv file to public_html via FTP.
Access your phpMyAdmin -> Select database -> Click on SQL tab.
Type the following SQL query:
load data local infile 'Full path for .CSV file' into table city
fields terminated by ','
enclosed by '"'
lines terminated by '\n'
Replace "Full path for .CSV file" with the actual full path (e.g. /home/username/public_html/filename.csv).
Now you are ready to rock by clicking GO.
LOAD DATA LOCAL INFILE "C://Users/sergi/Desktop/file.csv"
I get the following error message (I am using phpMyAdmin):
2000 - Can't find file 'c:/tmp/userlist.csv'.
LOAD DATA LOCAL INFILE 'c:/tmp/userlist.csv'
INTO TABLE users2
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n';
I have Googled it, and that error seems to apply to a lot of other things too.
The file does exist, as I have just pasted the path into Windows and it opens the file.
With LOAD DATA LOCAL the file is read locally, so it needs to be accessible to the client. You mentioned you are using phpMyAdmin (a MySQL client), so the file needs to be accessible from wherever phpMyAdmin is running.
If phpMyAdmin isn't installed on your computer, then you need to upload the file to the server where it's located and change the file path accordingly.
Another solution would be to install a MySQL client on your computer to run the query, so it can read the file locally. I recommend MySQL Workbench.
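For instance, with the stock mysql command-line client you can enable local file loading and run the same query from your own machine (credentials and database name are placeholders; the server's local_infile setting must also allow it):

mysql --local-infile=1 -u username -p your_database

-- then, inside the client:
LOAD DATA LOCAL INFILE 'c:/tmp/userlist.csv'
INTO TABLE users2
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n';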
The only thing I had to do in the end was press IMPORT inside phpMyAdmin. It would have been nicer if somebody had told me that instead of mucking around for two days trying to import a CSV via SQL.
I am trying to import a .csv (or .txt, it doesn't matter) with MySQL Workbench, and I am currently using the following command:
LOAD DATA LOCAL INFILE '/path/to/your/csv/file/model.csv' INTO TABLE test.dummy FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
My .csv file is placed in the mysql/data folder of the server where my database lives (on localhost). MySQL Workbench tells me "Query interrupted".
I previously tried the full path, but the query ran successfully without results. My database was still empty...
Thanks for your help.
Okay so... problem solved. It just seems that MySQL Workbench accepts the request with a direct path (no errors, script executed) but doesn't refresh its table view... So my table had been filled around 3 times before I restarted MySQL Workbench and discovered that it was full.
So... problem solved. Sorry for the wasted time.
I have a csv file that I will be regularly updating through a batch script that calls a Cygwin bash script. I would like to automate the upload of the csv file into a MySQL database so that a table in my database is updated with the csv file at regular intervals. The database is currently running on a Windows Server 2003 machine and administered with phpMyAdmin.
I have looked online and found some ways that I could achieve part of that, but I am confused as to where the code presented in those sources should be placed and how they would be called. For instance, Import CSV file directly into MySQL seems to show how to upload a csv file to a MySQL database from the SQL command line once, but not repeatedly, the latter being what I need.
I would prefer the solution to involve bash scripting (as opposed to batch and PHP) if possible, i.e. a solution that I could integrate with the bash scripts that update the csv file.
Thank you
You can execute a MySQL script from the command line by doing something like:
mysql -uUsername -pPassword database_name < infile.sql
You could invoke that from the command line and in the infile.sql you could have code like:
LOAD DATA INFILE 'filename.csv'
INTO TABLE table_name
FIELDS TERMINATED BY ','
You can use a here document:
# some bash script stuff
mysql ... <<EOF
SQL COMMANDS
GO HERE
EOF
# more bash script stuff
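Putting the two together, a minimal sketch of a script your batch job could call (credentials, paths, and table name are placeholders):

#!/bin/bash
# reload the table from the regenerated CSV on every run
mysql -uUsername -pPassword database_name <<EOF
LOAD DATA INFILE '/path/to/filename.csv'
INTO TABLE table_name
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
EOF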
You can use Quartz to create a cron job for periodically updating your database. With the help of CronMaker (http://www.cronmaker.com/), you get to choose when and how often your database gets updated.
This is a sample SQL Script to import data into your MySQL database:
LOAD DATA INFILE 'c:/.../filename.csv'
INTO TABLE discounts
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
Run the above script from your cron job using your preferred language.
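For example, on a Unix-like host the cron entry itself can be as simple as this sketch (schedule, credentials, and path are placeholders):

# run the import at the start of every hour
0 * * * * mysql -uUsername -pPassword database_name < /path/to/import.sql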
Is there a way I can make the OUTFILE statement in a .sql file point to the path of the current working directory of the .sql file itself, without manually specifying an absolute path name? As it is now, the default location is the data directory of the schema that I'm working with (i.e. C:\progra~1\mysql\etc\etc).
Thanks!
Seems like scripting would be your best bet here: use Perl or PHP to generate the query with the OUTFILE path set to the current working directory.
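As a sketch of the same idea in shell form, you can interpolate the current working directory into the statement; note this only lines up when the server runs on the machine you invoke it from, since OUTFILE is written by the server (credentials are placeholders):

mysql -uUsername -pPassword -e \
  "SELECT * FROM db1.tbl1 INTO OUTFILE '$(pwd)/output.csv' FIELDS TERMINATED BY ','"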
Not directly, but you can get similar functionality by using mysql at the command line.
First, some background: the LOAD DATA INFILE statement has a LOCAL variant, LOAD DATA LOCAL INFILE, that allows you to read data from a file on your local file system; without LOCAL it reads from the specified location on the server.
Unfortunately, the SELECT INTO OUTFILE statement always writes to the specified location on the server's filesystem (which can include server-configured mounts such as NFS or CIFS shares); there is no LOCAL equivalent.
However, you can approximate such functionality by using the mysql client in batch mode:
mysql -e 'SELECT * FROM db1.tbl1' -B > output.tsv
The -B option causes the statement to run in batch mode, so the query results are printed in a tab-delimited way, with no borders. Using > somefile.txt allows you to redirect that output to the specified file, so you have something equivalent to what you want, without the FIELDS or LINES options of SELECT INTO OUTFILE, but close enough!