I have a table in MySQL with the following columns:
staff_id,dept_id,role_id,username,firstname,lastname,passwd,backend,email,phone,phone_ext,mobile,signature,lang,timezone,locale,notes,isactive,isadmin,isvisible,onvacation,assigned_only,show_assigned_tickets,change_passwd,max_page_size,auto_refresh_rate,default_signature_type,default_paper_size,extra,permissions,created,lastlogin,passwdreset,updated
staff_id is the primary key and is set to AUTO_INCREMENT.
I found the solution by Queue in this post really helpful, but when the data is imported the staff_id column is not being auto-incremented. I am entering the column names in the Format-Specific Options in phpMyAdmin. I can only get it to populate if a staff_id value exists in the CSV file I am trying to import.
So if the data looks like this:
2,1,1,agent,Mista,Busta,NULL,NULL,agent#company.org,,NULL,,,NULL,NULL,NULL,<p>this is an agent; mista busta; agent#company.org</p>,1,0,1,0,0,0,0,0,0,none,Letter,"{""def_assn_role"":true}","{""user.create"":1,""user.delete"":1,""user.edit"":1,""user.manage"":1,""user.dir"":1,""org.create"":1,""org.delete"":1,""org.edit"":1,""faq.manage"":1}",2020-02-04 10:18:42,NULL,NULL,2020-02-04 10:18:42
...note the first '2' is the staff_id. What I would like to do is have this in the CSV:
,1,1,agent,Mista,Busta,NULL,NULL,agent#company.org,,NULL,,,NULL,NULL,NULL,<p>this is an agent; mista busta; agent#company.org</p>,1,0,1,0,0,0,0,0,0,none,Letter,"{""def_assn_role"":true}","{""user.create"":1,""user.delete"":1,""user.edit"":1,""user.manage"":1,""user.dir"":1,""org.create"":1,""org.delete"":1,""org.edit"":1,""faq.manage"":1}",2020-02-04 10:18:42,NULL,NULL,2020-02-04 10:18:42
...leaving the staff_id column blank and allowing MySQL to auto-populate it (AUTO_INCREMENT) as it sees fit.
AUTO_INCREMENT is set on the column definition within MySQL, and it can be overridden if you assign values manually in a CSV or other import. If you simply exclude the entire column from your import, you'll let MySQL do what is set as the default for that column and assign the IDs automatically, since it no longer thinks you want to assign them yourself.
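If you are importing with LOAD DATA INFILE instead of phpMyAdmin, one way to express the same idea is to read the blank leading field into a throwaway user variable so staff_id is never assigned. This is only a sketch: the file path is a placeholder, and the table name staff is an assumption, since the question never names the table.

LOAD DATA INFILE '/path/to/staff.csv'
INTO TABLE staff
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
-- @dummy swallows the empty first field so MySQL auto-increments staff_id
(@dummy, dept_id, role_id, username, firstname, lastname, passwd, backend,
email, phone, phone_ext, mobile, signature, lang, timezone, locale, notes,
isactive, isadmin, isvisible, onvacation, assigned_only, show_assigned_tickets,
change_passwd, max_page_size, auto_refresh_rate, default_signature_type,
default_paper_size, extra, permissions, created, lastlogin, passwdreset,
updated);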
Also, as a side note: if you import more than once without using TRUNCATE TABLE, MySQL will pick up from the last inserted ID and continue from there, even if the table is empty. So if you ever want to start over from 1, you'll have to truncate the table.
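For example (again assuming the table is called staff):

TRUNCATE TABLE staff;
-- or, if the table is already empty, reset just the counter:
ALTER TABLE staff AUTO_INCREMENT = 1;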
I would like to insert records into a new database. I have an old database that contains fields that are no longer in the new database.
So when I execute the INSERT query I get this error: field 'contract' unknown in field list.
I know this is expected, but I would like to ignore the error and force the insert anyway.
There is no way to do this using MySQL alone.
I have a PHP script I use for stripping out data that has no matching column in the destination table.
I use SHOW COLUMNS FROM table to find out which columns are available, strip out the data that does not match those columns, and then build the INSERT query from what remains, minus the unavailable columns.
The downside is that if you want to keep something from the original data and there is no column for it in the new table, it will be discarded.
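If the old and new databases live on the same MySQL server, a rough pure-SQL sketch of the same idea is to inspect the destination's columns and then name only the shared ones explicitly. The database, table, and column names below are made up for illustration:

-- see which columns actually exist in the destination table
SHOW COLUMNS FROM new_db.customers;

-- copy only the columns both tables share, so the dropped
-- 'contract' column is never referenced
INSERT INTO new_db.customers (id, name, email)
SELECT id, name, email
FROM old_db.customers;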
I have a table with 18 columns and I want to update all rows of the column page_count using LOAD DATA INFILE. The file with the data is extremely simple, just \n separated values. I'm not trying to pull off anything impressive here; I just want to update the values in that one single column - about 3000 records. The code I'm using is
LOAD DATA INFILE '/tmp/sbfh_counter_restore' REPLACE INTO TABLE obits FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' (page_count);
And all this does is add one new row with the entire contents of the file dumped into the column page_count. What am I doing wrong? Should I use phpMyAdmin? I'd be happy to use that, as it better fits my skill set ;)
I created the file using SELECT page_count FROM obits INTO OUTFILE '/tmp/sbfh_counter_restore'
Based on what I understand from the MySQL documentation, REPLACE cannot match existing rows from a file that holds only one column; the file also has to contain the table's key columns, in the correct order, for rows to be identified.
At the very least, you should use SELECT * FROM obits INTO OUTFILE and then load that file back, as it ensures the column order is consistent.
Since your entire file content ended up in a single new row, you should check the primary key (or unique key) of your table. Rows from the file are matched against the table by that key and are updated or inserted depending on the result of the match. Most likely, page_count is not your primary or unique key.
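A minimal sketch of that round trip (the dump path is arbitrary, and this assumes obits has a primary key for REPLACE to match on):

-- export every column so the key travels with the data
SELECT * FROM obits INTO OUTFILE '/tmp/obits_dump';

-- edit the page_count values in the file, then load it back;
-- REPLACE matches each line to an existing row via the primary key
LOAD DATA INFILE '/tmp/obits_dump' REPLACE INTO TABLE obits;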
Hope that helps.
Imagine you have a CSV file with only text in it and line ending \r\n.
like this:
abc
def
hij
...
xyz
You want to import this file into an existing multi-column table, where each line of text needs to go into a row of one specific (currently empty) column of the table (let's name it needed). Like this:
| a | b | c |needed|
|foo|bar|baz|______|<-- abc
|foo|bar|baz|______|<-- def
...
|foo|bar|baz|______|<-- xyz
The data from the CSV file does not need to be inserted in any particular order. It really does not matter which row of needed ends up with which line from the CSV; as long as every line gets imported, everything is fine.
I've tried lots of things and it's driving me mad, but I can't figure out how this could be done. Can this be solved somehow with LOAD DATA INFILE and an UPDATE/REPLACE/INSERT statement? What would you do in a case like this? Is it even possible with MySQL, or do I need a custom PHP script for this?
I hope the question is clear enough. Please ask if something is unclear to you.
OK, here you go...
Add a new column to stuff populated with 1-600,000
ALTER TABLE stuff ADD COLUMN newId INTEGER UNIQUE AUTO_INCREMENT;
Create a temp table for your CSV file
CREATE TABLE temp (
id INTEGER PRIMARY KEY AUTO_INCREMENT,
data VARCHAR(32)
);
I'm guessing the required length of the data.
Import into the temporary table
LOAD DATA INFILE '<wherever>' INTO TABLE temp LINES TERMINATED BY '\r\n' (data);
Add the data to stuff
UPDATE stuff AS s
JOIN temp AS t ON t.id=s.newId
SET s.needed=t.data;
Tidy up
DROP TABLE temp;
ALTER TABLE stuff DROP COLUMN newId;
I must admit that I haven't tested the LOAD DATA INFILE or the UPDATE statements, but I have checked all the table fiddling. In particular, MySQL will populate the newId column with consecutive numbers.
I don't think there will be a problem if the number of rows in stuff doesn't match the number of lines in your CSV file; obviously, there will be a few rows with needed still NULL if there are fewer lines than rows.
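As a quick sanity check afterwards (not part of the original steps), you can count how many rows were left unfilled:

SELECT COUNT(*) FROM stuff WHERE needed IS NULL;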
The easiest way I found was with LibreOffice Calc + Base.
Import/export the data in Base.
But be careful: if a column has a NOT NULL constraint and by mistake there is a cell with no data in it, that row will be skipped.
So first disable the NOT NULL constraints on the columns.
There is also a Microsoft Excel way, but I haven't done it, since it required installing a plugin and I was lazy.
Here's my query for loading a MySQL table from a CSV file.
LOAD DATA LOCAL INFILE 'table.csv' REPLACE INTO TABLE table1 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\'' LINES TERMINATED BY 'XXX' IGNORE 1 LINES
SET date_modified = CURRENT_TIMESTAMP;
Suppose my CSV contains 500 records with 15 columns. I changed three rows and terminated them with 'XXX'. I now want to update the MySQL table from this file. My primary key is an auto-incremented value. When I run this query, all 500 rows are re-imported with the old data, and the rows I changed are added as new ones. I don't want the new ones; I want my table to be replaced with the CSV as-is. I tried changing my primary key to non-AI, but it still didn't work. Any pointers, please? Thanks.
I am making some assumptions here.
1) You don't have the autonumber value in your file.
Since your primary key is not in your file, MySQL will not be able to match rows. An autonumber primary key is an artificial key, so it is not part of the data; MySQL adds this artificial primary key when the row is inserted.
Let's assume your file contained some unique identifier; call it Identification_Number. If this number is both in the file and used by your table as its primary key, then MySQL will be able to identify the rows from the file and match them to the rows in the table.
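A hedged sketch of what that would look like, keeping the terminators from your query. Identification_Number and the other column names are invented, and every column that actually appears in the file should be listed:

LOAD DATA LOCAL INFILE 'table.csv'
REPLACE INTO TABLE table1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\''
LINES TERMINATED BY 'XXX'
IGNORE 1 LINES
(Identification_Number, col_a, col_b)
SET date_modified = CURRENT_TIMESTAMP;

-- With Identification_Number declared as the PRIMARY KEY (or UNIQUE) on
-- table1, REPLACE can now match each file row to an existing table row.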
While a lot of people will only use autonumbers in a database, I always check whether there is a natural key in the data. If I identify one, I do some performance testing with that natural key in a table, and based on the performance metrics of both I decide on a key.
Hopefully I did not get your question wrong but I suspect this might be the case.