I have saved my file to CSV using:
df.to_csv('path.csv', encoding='utf-8', index=False)
Then I tried to use the Table Data Import Wizard to import the data into a MySQL table. I have aligned every column, but I get a decoding error.
A screenshot of the error is attached. My question is:
Is there a way to specify MySQL Workbench's encoding setting? My Workbench version is 5.7.
Solutions I have tried that did not work:
Opening and re-saving the file as .csv with Mac Numbers, making sure the encoding is UTF-8.
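If the wizard keeps rejecting the file, one possible workaround is to skip the wizard and load the file with a plain LOAD DATA statement that names the encoding explicitly. A minimal sketch, assuming a placeholder table my_schema.my_table and path (adjust both, plus the column layout, to your data):
-- Placeholder table and path; CHARACTER SET tells MySQL how the
-- incoming file is encoded, independent of any Workbench setting.
LOAD DATA LOCAL INFILE '/path/to/path.csv'
INTO TABLE my_schema.my_table
CHARACTER SET utf8mb4
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;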
I am trying to insert data from a text file into MySQL using the LOAD DATA LOCAL INFILE command.
When I run this command in the MySQL window, I get the total number of records inserted.
LOAD DATA LOCAL INFILE 'filepath' INTO TABLE tablename FIELDS TERMINATED BY '\t';
However, when I pass the same command from Ruby, I see that less data is being inserted.
connect.query("LOAD DATA LOCAL INFILE 'filepath' INTO TABLE tablename FIELDS TERMINATED BY '\\t';")
I have verified by printing the above query that it is identical.
I am using MySQL Workbench 6.3 (version 6.3.10, build 12092614).
If 'filepath' is a variable, it will not be expanded. Try this:
connect.query("LOAD DATA LOCAL INFILE '#{filepath}' INTO TABLE tablename FIELDS TERMINATED BY '\\t';")
If 'filepath' is a literal file path, try using an absolute path from the root. That only applies if MySQL is running locally.
If this query is being submitted to a remote MySQL server and LOCAL INFILE is not enabled on both the client and the server, it cannot open a local file, and you may need to rethink your import strategy completely.
I am facing a problem while importing a MySQL dump into Hive.
I used the Sqoop connector to import data from MySQL into Hive successfully. However, there are more data dumps to import into Hive, and restoring the database first is not feasible: the dump size is 300 GB, so it takes about 3 days to restore, and I can't restore more than two files on MySQL because of disk space issues.
As a result, I am looking to import the data from the MySQL dump directly into Hive without restoring it into MySQL first.
One more problem with the MySQL dump is that it contains many INSERT statements (around 1 billion). Will this create multiple files, one for each insert? If so, how do I merge them?
You can use the "load" command provided by Hive to load data present in your local directory.
Example: this will load the data in the file fileName.csv into your Hive table tableName.
load data local inpath '/tmp/fileName.csv' overwrite into table tableName;
If your data is in HDFS, use the same load command without the LOCAL option.
Example: here /tmp/DataDirectory is an HDFS directory, and all the files in that directory will be loaded into Hive.
load data inpath '/tmp/DataDirectory/*' overwrite into table tableName;
Caution: since Hive is schema-on-read, make sure the line delimiter and field delimiter are the same in both the file and the Hive table you are loading into; a sketch of a matching table definition follows.
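For example, a minimal sketch of a Hive table definition whose delimiters match a comma-separated, newline-terminated file (the column names and types are only illustrative):
-- Illustrative columns; match them to the actual file layout.
CREATE TABLE tableName (
  id     INT,
  name   STRING,
  amount DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
STORED AS TEXTFILE;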
I have a MySQL table with more than 100,000 records and would like to analyse the data in Stata. Is there any way to import the MySQL table into Stata?
You can use ODBC or a plugin to connect Stata directly to MySQL; or you can export the data from MySQL (e.g. to a CSV file, via SELECT ... INTO OUTFILE or mysqldump) and then import it into Stata. A sketch of the export step follows.
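A minimal sketch of the export step, assuming a table called mytable and a /tmp path that the MySQL server process can write to (both are placeholders):
-- The server, not the client, writes this file, so the path must be
-- writable by the MySQL server and allowed by secure_file_priv.
SELECT *
FROM mytable
INTO OUTFILE '/tmp/mytable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
The resulting CSV can then be read in Stata with import delimited.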
I have a CSV file. It contains 1.4 million rows of data, so I am not able to open it in Excel, whose limit is about 1 million rows.
Therefore, I want to import this file with MySQL Workbench. The CSV file contains columns like
"Service Area Code","Phone Numbers","Preferences","Opstype","Phone Type"
I am trying to create a table in MySQL Workbench named "dummy" containing columns like
ServiceAreaCodes, PhoneNumbers, Preferences, Opstyp, PhoneTyp.
The CSV file is named model.csv. My code in Workbench is like this:
LOAD DATA LOCAL INFILE 'model.csv' INTO TABLE test.dummy FIELDS TERMINATED BY ',' lines terminated by '\n';
but I am getting an error saying the model.csv file is not found.
I guess you're missing the ENCLOSED BY clause:
LOAD DATA LOCAL INFILE '/path/to/your/csv/file/model.csv'
INTO TABLE test.dummy FIELDS TERMINATED BY ','
ENCLOSED BY '"' LINES TERMINATED BY '\n';
And specify the full path to the CSV file.
Load Data Infile - MySQL documentation
If you have a smaller data set, a way to achieve this via the GUI is:
Open a query window
SELECT * FROM [table_name]
Select Import from the menu bar
Press Apply on the bottom right below the Result Grid
Reference:
http://www.youtube.com/watch?v=tnhJa_zYNVY
In the navigator under SCHEMAS, right click your schema/database and select "Table Data Import Wizard"
Works on Mac too.
You can use the MySQL Table Data Import Wizard.
At the moment it is not possible to import a CSV (using MySQL Workbench) on all platforms, nor is it advised if the file does not reside on the same host as the MySQL server.
However, you can use mysqlimport.
Example:
mysqlimport --local --compress --user=username --password --host=hostname \
--fields-terminated-by=',' Acme sales.part_*
In this example mysqlimport is instructed to load all of the files named "sales" with an extension starting with "part_". This is a convenient way to load all of the files created in the "split" example. Use the --compress option to minimize network traffic. The --fields-terminated-by=',' option is used for CSV files and the --local option specifies that the incoming data is located on the client. Without the --local option, MySQL will look for the data on the database host, so always specify the --local option.
There is useful information on the subject in the AWS RDS documentation.
If the server resides on a remote machine, make sure the file is on the remote machine and not on your local machine.
If the file is on the same machine as the MySQL server, make sure the mysql user has permission to read/write the file, or copy the file into the MySQL schema directory:
In my case, on Ubuntu, it was /var/lib/mysql/db_myschema/myfile.csv.
Also, not related to this problem, but if you have problems with the newlines, use Sublime Text to change the line endings to Windows format, save the file, and retry.
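Alternatively, instead of converting the file, you can tell LOAD DATA about the Windows line endings directly. A rough sketch, with a placeholder table name and path:
-- '\r\n' matches Windows-style line endings, so the file does not
-- need to be re-saved with Unix newlines first.
LOAD DATA LOCAL INFILE '/path/to/myfile.csv'
INTO TABLE db_myschema.mytable
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n';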
It seems a little tricky, since it bothered me for a long time.
You just need to open the table (right-click it and choose "Select Rows - Limit 10000"), which opens a new window. In this new window you will find the import icon.
https://www.convertcsv.com/csv-to-sql.htm
This helped me a lot. You upload your Excel (or .csv) file and it gives you back a .sql file with SQL statements, which you can execute, even from the terminal on Linux.
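The generated .sql file typically contains a CREATE TABLE statement followed by INSERT statements, roughly along these lines (the table and column names here are purely illustrative, not the converter's actual output):
-- Illustrative only; real names are taken from the CSV header row.
CREATE TABLE my_import (
  col_a VARCHAR(50),
  col_b VARCHAR(50)
);
INSERT INTO my_import (col_a, col_b) VALUES ('value1', 'value2');
INSERT INTO my_import (col_a, col_b) VALUES ('value3', 'value4');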