I already have a table in phpMyAdmin that contains user records. Each user has a unique admission number. I now want to add a new column to this table, and I'm wondering how I can import data for this new column using just the admission number and the new data.
Is this possible? I have a CSV, but I can't work out the best way to import the data without overwriting any existing records.
Thanks.
As far as I can see, this is not possible: phpMyAdmin's import features work on whole rows only.
You could write a small PHP script that opens the CSV file using fgetcsv(), walks through every line, and creates an UPDATE statement for each record:
UPDATE tablename SET new_column = "new_value" WHERE admission_number = "number"
You can then either output the commands and copy-paste them into the SQL tab, or execute them directly from the script.
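The same idea as a sketch in Python rather than PHP, assuming the CSV has exactly two columns (admission number, then the new value); tablename and new_column are placeholders for your real names:

```python
import csv

# Build one UPDATE statement per CSV row.
# Assumes each row is: admission_number, new_value
# "tablename" and "new_column" are placeholders for your real names.
def build_updates(csv_path):
    statements = []
    with open(csv_path, newline="") as f:
        for admission_number, new_value in csv.reader(f):
            statements.append(
                'UPDATE tablename SET new_column = "%s" '
                'WHERE admission_number = "%s";'
                % (new_value.replace('"', '\\"'),
                   admission_number.replace('"', '\\"'))
            )
    return statements
```

The quoting here is naive; if you execute the statements from the script rather than pasting them, use parameterized queries instead of string formatting.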
If you want to do it using just CSV, here are the steps you could perform.
In a text editor, make a comma-separated list of all the column names for the final table (including your new column). This will be useful when importing the new data.
Add the new column to your table using phpMyAdmin
Export the current table in CSV format and sort it by admission number in Excel
In your new-data CSV, sort by admission number as well
Copy the new column over from your new-data CSV to your exported CSV and save it for re-import.
Back up your users table (export to CSV)
Truncate the contents of your table (Operations, Truncate)
Import your updated CSV
Optional / recommended: when you import the CSV into phpMyAdmin, use the column names option to specify the columns you are importing, separated by commas (no spaces).
Assumptions:
1. You are using a spreadsheet application such as Excel or OpenOffice to open your CSV files.
Any problems?
Truncate the table again and re-import the backup file.
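If you'd rather avoid the spreadsheet sorting, the sort-and-copy steps above can also be done with a short script that matches rows by admission number. A minimal sketch in Python, assuming the admission number is the first column in both files and the new value is the second column of the new-data file:

```python
import csv

# Merge the new column into the exported table by admission number,
# so no spreadsheet sorting is needed. Column positions are assumptions:
# admission number is the first column in both files, the new value is
# the second column of the new-data file.
def merge_new_column(exported_csv, new_data_csv, out_csv):
    with open(new_data_csv, newline="") as f:
        new_values = {row[0]: row[1] for row in csv.reader(f) if row}
    with open(exported_csv, newline="") as f_in, \
         open(out_csv, "w", newline="") as f_out:
        writer = csv.writer(f_out)
        for row in csv.reader(f_in):
            # Users missing from the new data get an empty field
            writer.writerow(row + [new_values.get(row[0], "")])
```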
I've just found that I can import a CSV file into a MySQL table. I tried it in phpMyAdmin, but I also found that the CSV file's columns need to match the table you are importing into. This means one CSV file maps to exactly one table in the database (correct me if I'm wrong, though).
The problem is that the employee table I'm inserting data into is related to other tables. There's also the rolemap table: whenever an employee is inserted into the employee table, the rolemap table also gets a new row for that employee (it stores only the employee_id generated by the employee table, plus the user's role, admin or not).
The question is: can I achieve this logic by importing a CSV file in phpMyAdmin or any database manager? I'm thinking there may be some formatting that needs to be done in the CSV file so it can be imported into different tables. Or is this not possible, so that I need to parse the CSV file in a backend and handle how to insert it into each respective table myself?
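To make the backend option concrete, here is roughly the logic I have in mind, as a sketch in Python with sqlite3 standing in for MySQL (the table and column names are guesses, not my actual schema):

```python
import sqlite3

# sqlite3 stands in for MySQL here; employee/rolemap schemas are guesses.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (employee_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE rolemap  (employee_id INTEGER, is_admin INTEGER);
""")

def import_row(name, is_admin):
    cur = conn.execute("INSERT INTO employee (name) VALUES (?)", (name,))
    # Use the id generated by the employee insert for the rolemap row
    conn.execute("INSERT INTO rolemap (employee_id, is_admin) VALUES (?, ?)",
                 (cur.lastrowid, int(is_admin)))

# Each tuple would come from one parsed CSV row
for name, is_admin in [("Alice", True), ("Bob", False)]:
    import_row(name, is_admin)
conn.commit()
```

With MySQL the generated id would come from the connector's lastrowid / LAST_INSERT_ID() rather than sqlite's, but the shape is the same.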
I have data in an Excel sheet and have to export it to MySQL. I am using the MySQL for Excel tool in the Data tab to export the data to a new table.
I have a table with id (primary key) and name (unique, with an index), around 1,000 rows. Exporting it takes 3-5 minutes.
Why is it so slow? Any suggestions or tips on how I can speed up the export process?
I have a sheet with 150,000 rows, so I need some help.
Found one way to do it:
Create the table from Excel using the MySQL for Excel plugin. This can also be done manually; just make sure the Excel sheet and the table have the same columns (order and data type).
Before saving the sheet as a CSV, replace all comma (,) characters in the sheet with an empty string, because the comma acts as the column separator. Then save the file as .csv.
Open the MySQL command-line tool and run the SQL below to import the data into the table. Replace the CSV file path and the table name with your own.
LOAD DATA INFILE "C:/data.csv" INTO TABLE mytable COLUMNS TERMINATED BY "," LINES TERMINATED BY "\r\n";
This imported 155,000 rows in a matter of seconds.
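The comma-replacement step can be scripted too, instead of doing it by hand in Excel. A small Python sketch (note this is lossy by design: commas inside the data are simply dropped, matching the manual step above):

```python
import csv

# Remove commas from every field before writing the file, since the
# LOAD DATA statement above uses "," as the column separator.
# Note this is lossy: commas inside the data are simply dropped.
def write_clean_csv(rows, out_path):
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, lineterminator="\r\n")
        for row in rows:
            writer.writerow([str(field).replace(",", "") for field in row])
```

An alternative that preserves the commas would be to quote the fields and add ENCLOSED BY '"' to the LOAD DATA statement.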
I have read access to a database which I want to query. I have a CSV file containing a list of ids, and I want to find all documents whose ids appear in that file. The normal way, I believe, would be to import the CSV file as a table or as a new column. However, I only have read permissions, and I don't believe I can get my permissions changed. As such, I cannot change any of the tables or add any new ones.
There are about 1,000 lines in the CSV file, so typing all the ids into a query manually isn't really an option. But I also can't import the CSV file. Is there some solution that would allow me to find all relevant entries (entries whose id matches an id in the CSV file)?
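What I'm effectively after is generating a single read-only query from the file, something like this sketch in Python ("documents" and "id" are placeholder names, and the ids are assumed to be numeric; string ids would need quoting/escaping):

```python
import csv

# Build a read-only query from the id list instead of importing it.
# "documents" and "id" are placeholder names; ids are assumed numeric
# (string ids would need quoting/escaping).
def build_in_query(csv_path, table="documents", column="id"):
    with open(csv_path, newline="") as f:
        ids = [row[0] for row in csv.reader(f) if row]
    return "SELECT * FROM %s WHERE %s IN (%s);" % (
        table, column, ", ".join(ids))
```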
We have many CSV files in S3, but a new column was added to one of the tables, so when importing those CSV files we get the error "Delimiter not found". The new column is nullable and was added to the end of the table, so I'm hoping there's a way to import the old MySQL exports with NULL for the new column.
Is there a way to do this without editing all the export files to add that column?
You can map columns in a COPY command:
http://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-column-mapping.html
You can specify the columns that exist in your file in the COPY command (in the same order as they appear in your CSV file). With the EMPTYASNULL parameter, columns with no data will get a NULL value.
COPY table_name (column_a, column_b)
FROM 's3://xxx'
CSV
[...]
EMPTYASNULL
;
I have to import CSV files for different clients into my system, some comma-separated [,], some pipe-separated [|], etc. They are always very big files.
While importing I need to filter out duplicates; duplicate records should not be inserted into the DB.
The problem is that the columns can be different for different clients. I have a database for every client for the CSV data, which I import every day, week, or month depending on the client's needs. I keep the data from every import so we can generate reports on what data we receive in each CSV file; our system does processing after the import.
Data structure example:
Client 1 database:
First_Name | Last_Name | Email | Phone | etc.
95% of the data is the same in every new CSV file. Some records are new, and some records are deleted from the CSV, so our system only processes the newly imported records.
Currently we import the data into a new table every time, naming each table with a timestamp so we can keep track of imports. It is an expensive process that duplicates records and tables.
I'm thinking about a solution and I need your suggestions on it.
Keep a single table per import target: every time I import a CSV file, alter the existing table to add a new column named with the current date (byte or Boolean), and set it to true/false on import?
My other question is about the first time I import a CSV file. I need to write a script such that:
While importing CSV data, if the table already exists, my date logic applies; if the table does not exist, the script should create it, using the given client name as the table name. The challenge is that I don't know the columns in advance; it should create them from the CSV file.
If the table already exists, new records should be inserted and existing ones updated.
Is this doable in MySQL?
(I'll have to do something similar for MSSQL as well, but right now I need a solution for MySQL.)
Please help me... I'm not good at MySQL :(
You can certainly do an insert-or-update when importing each record, using INSERT ... ON DUPLICATE KEY UPDATE.
See here:
https://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
I propose you create a script that dynamically creates your table if it doesn't exist.
What language would you use to insert your CSV?
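If Python is an option, for example, a sketch that derives both the CREATE TABLE and the upsert statement from the CSV header might look like this. Assumptions: the first header column is the unique key used for duplicate detection, and every column is stored as VARCHAR(255):

```python
import csv

# Generate MySQL statements from a CSV header. Assumptions: the first
# header column is the unique key, and all columns are VARCHAR(255).
def build_statements(csv_path, table):
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    cols = ", ".join("`%s` VARCHAR(255)" % c for c in header)
    create = ("CREATE TABLE IF NOT EXISTS `%s` (%s, UNIQUE KEY (`%s`));"
              % (table, cols, header[0]))
    placeholders = ", ".join(["%s"] * len(header))
    # Non-key columns are refreshed when the key already exists
    updates = ", ".join("`%s` = VALUES(`%s`)" % (c, c) for c in header[1:])
    upsert = ("INSERT INTO `%s` (%s) VALUES (%s) ON DUPLICATE KEY UPDATE %s;"
              % (table, ", ".join("`%s`" % c for c in header),
                 placeholders, updates))
    return create, upsert
```

You would run the CREATE once, then execute the upsert for every data row with a parameterized cursor; real data would also need proper type inference rather than VARCHAR everywhere.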