Quickest way to populate a table in MySQL via phpMyAdmin

I have about 8000 records in an Excel file and wish to add them to a MySQL table. I need to know the quickest way to populate the table.

Save the Excel worksheet as a .CSV file, then use the MySQL LOAD DATA statement to read the .CSV file and insert the rows into your table. That's the quickest way.
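A minimal sketch of that statement, assuming a hypothetical people.csv exported from Excel with a header row and comma-separated fields (table and file names are illustrative; adjust delimiters to your export):

LOAD DATA LOCAL INFILE '/path/to/people.csv'
INTO TABLE people
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'   -- Excel on Windows ends lines with \r\n
IGNORE 1 LINES;              -- skip the header row

If the server rejects LOCAL, enable the local_infile setting or place the file on the server and drop the LOCAL keyword.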

Related

How to import a single CSV file into more than one table in a MySQL database

I've just found that I can import a CSV file into a MySQL table. I tried it in phpMyAdmin, but I also found that the CSV's columns need to match the columns of the MySQL table you are importing into. This means one CSV file maps to exactly one table in the database; correct me if I'm wrong, though.
The problem is that the employee table I'm inserting data into is related to other tables. There is also a rolemap table: whenever an employee is inserted into the employee table, the rolemap table also gets a new row for that employee (it holds only the employee_id generated by the employee table, plus the user's role, i.e. whether or not they are an admin).
The question is: can I achieve this logic by importing a CSV file in phpMyAdmin or any database manager? Maybe there is some formatting that can be done in the CSV file so that it imports into different tables in the database. Or is this not possible, so that I need to parse the CSV file in a backend and handle the inserts into each respective table myself?
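One way to get that kind of multi-table logic without a backend parser is to import the CSV into a staging table first and then fan it out with SQL. A minimal sketch, assuming hypothetical CSV columns name and is_admin, an AUTO_INCREMENT employee_id on employee, and names unique enough to join back on (none of these column names come from the question):

-- 1. Import the CSV into a staging table (phpMyAdmin's CSV import works here)
CREATE TABLE staging_employees (name VARCHAR(100), is_admin TINYINT);

-- 2. Insert the employees; employee_id values are generated here
INSERT INTO employee (name)
SELECT name FROM staging_employees;

-- 3. Insert the rolemap rows, joining back to pick up the generated ids
INSERT INTO rolemap (employee_id, role)
SELECT e.employee_id, s.is_admin
FROM employee e
JOIN staging_employees s ON s.name = e.name;

If no column in the CSV is unique, a backend script that inserts one employee at a time and reads LAST_INSERT_ID() is the safer route.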

BigQuery: append data from a CSV to a column

I have a BigQuery table to which I added a new column, and I am not sure how to populate it.
(Screenshots of the BigQuery table and of the CSV/Excel file were attached to the original question.) I did try to upload the CSV directly as a new table but had errors. I am now trying to update the column named 'Max_Takeoff_kg', which is the last column in the CSV. How do I write a query within BigQuery to update the rows with the data from the last column of the CSV?
If you're loading this data only once, I'd recommend that you save your XLS as CSV and try to create a new table again.
In any case, you can update your table using BigQuery DML (see the BigQuery DML documentation).
It's important to remember that, for this approach to work correctly in your case, you must have a way to identify your rows uniquely.
Example:
UPDATE your_db.your_table
SET your_field = <value>
WHERE <condition_to_identify_row_uniquely>
I hope it helps
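Applied to the question above, the sketch could look like this: first load the CSV into a staging table, then join on whatever column identifies an aircraft uniquely (all names below are assumptions, not taken from the actual table):

UPDATE your_db.aircraft t
SET t.Max_Takeoff_kg = s.Max_Takeoff_kg
FROM your_db.csv_staging s          -- staging table loaded from the CSV
WHERE t.model = s.model;            -- the condition must match each row uniquely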

Removing duplicate CSV files or rows from a BigQuery table

I created a table in BigQuery in a cloud application. By mistake I uploaded two CSV files into the BigQuery table.
How do I delete the data from either one or both CSV files from the BigQuery table?
Thanks
Arvind
Unfortunately, there is currently no way to remove data from a BigQuery table. Your best option is to re-import the data into a new table. (If you no longer have the original CSV, you can export the table and then remove the duplicates before re-importing.)
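If the two uploads were the same file, the duplicates are exact row copies, and with today's BigQuery standard SQL the re-import can be done with a single query (names are placeholders; this assumes the table has only scalar columns):

-- Build a deduplicated copy, then drop the old table and rename the copy
CREATE TABLE your_dataset.your_table_dedup AS
SELECT DISTINCT * FROM your_dataset.your_table;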

In a MySQL database, how to upload modified data from a CSV file

Initially, I created a database called "sample" and loaded the data from a massive CSV file.
Whenever there are small changes in the .csv file (some data added/deleted/modified), I have to apply them in the database too. Re-importing the entire (large) .csv file every time is not efficient.
Is there an efficient way to load only the modified data from the .csv file into the database?
Assuming that you are using LOAD DATA INFILE for importing from CSV, try using this syntax:
LOAD DATA INFILE 'file_name'
IGNORE
INTO TABLE `tbl_name`
...
...
The IGNORE keyword will skip any rows in the CSV that duplicate an existing row in the table via a conflict on a unique key. See the MySQL documentation on LOAD DATA for details.
This will be much quicker and more efficient than importing the complete CSV again.
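Filled in, the statement might look like the sketch below, assuming a customers table with a UNIQUE key on email so that re-imported rows conflict and get skipped (file, table, and column names are illustrative):

LOAD DATA INFILE '/var/lib/mysql-files/customers.csv'
IGNORE                                -- skip rows that collide with a unique key
INTO TABLE customers
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                        -- this IGNORE just skips the header row
(email, name, city);

Note that IGNORE only picks up added rows; rows that were modified in the CSV still collide and get skipped, so for modifications use REPLACE in place of IGNORE, or update through a staging table.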

Updating a SQL table with CSV data?

I am trying to update one of my SQL tables with new columns from my source CSV file. The records in this CSV file are already in the SQL table, but the table is missing some of the new columns from the file.
I already added the new columns to the SQL table's structure via ALTER TABLE. Now I just need to import the data from the CSV file into the new columns. How can I do this? I am trying to use SSIS and SQL Server to accomplish this, but am pretty new to Excel.
This is probably too late to solve salvationishere's problem, but I'm posting it for future readers!
You could just generate the SQL INSERT/UPDATE/etc. commands by parsing the CSV file (a simple Python script will do).
Alternatively, you could use this online converter:
http://www.convertcsv.com/csv-to-sql.htm
(hoping that it's still available when you click!)
to generate your SQL commands. The interface is extremely straightforward and it does the entire job in an awesome way.
You have several options:
1. If you are loading the data into a non-production system where you can edit the target tables, you could load the data into a new table, rename the old table to obsolete, and rename the new table to the old table's name.
2. You can load the data into a staging table and then write a SQL statement to update the target table from the staging table (see the sketch after this list).
3. You can open the CSV file in Excel and write a formula to generate an update script, drag the formula down across all rows so that you get a separate UPDATE statement for each row, and then run the statements in Management Studio.
4. You can truncate the target table and update your existing SSIS package that imports the file to use the new columns, if you have the full history in your CSV file.
There are more options, but any of the above would probably be more than an adequate solution.
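A sketch of option 2 in T-SQL, with placeholder names throughout (dbo.Target, dbo.Staging, a key column Id, and a new column NewCol, none of which come from the question):

-- After bulk-loading the CSV into dbo.Staging (BULK INSERT, SSIS, or the import wizard):
UPDATE t
SET t.NewCol = s.NewCol
FROM dbo.Target AS t
JOIN dbo.Staging AS s
    ON s.Id = t.Id;    -- key that matches CSV rows to the existing rows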