I am attempting to load data from a csv file into a MySQL database using the LOAD DATA command.
My csv is structured like:
Index,Name,...
0,blah,...
1,blahbla,...
...
But when trying to read my data using
CREATE TABLE data (
id INT NOT NULL AUTO_INCREMENT,
KernelName VARCHAR(255) NOT NULL,
...,
PRIMARY KEY (id)
);
USE myDatabase;
LOAD DATA INFILE '/filepath/myFile.csv'
INTO TABLE myTable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
I receive the error ERROR 1062 (23000): Duplicate entry '1' for key 'myTable.PRIMARY'
I suspect this is happening because AUTO_INCREMENT starts the id at 1 instead of the 0 that I'm reading, causing the duplicate entry error.
I'm new to MySQL and don't care whether indexing starts at 0 or 1; I'm just not sure what the easiest fix would be. Should I skip the index column? Change AUTO_INCREMENT to start at 0?
Update
I was able to fix this issue by omitting AUTO_INCREMENT from my initial CREATE TABLE statement. Then, after importing the data, I ran ALTER TABLE table MODIFY id INTEGER NOT NULL AUTO_INCREMENT; to restore the auto-increment behavior.
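For reference, a rough sketch of that sequence, using the data table from the CREATE TABLE above throughout (note that with the default sql_mode, inserting 0 into an AUTO_INCREMENT column generates the next value instead of storing 0, which is what produced the duplicate '1'):
-- create the table without AUTO_INCREMENT so the 0-based Index values load as-is
CREATE TABLE data (
id INT NOT NULL,
KernelName VARCHAR(255) NOT NULL,
...,
PRIMARY KEY (id)
);
LOAD DATA INFILE '/filepath/myFile.csv'
INTO TABLE data
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
-- restore auto-increment afterwards
ALTER TABLE data MODIFY id INT NOT NULL AUTO_INCREMENT;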
Thanks to everyone who helped with this in the comments!
Related
I'm trying to load data from a .txt file using LOAD DATA INFILE, but I get error 1452 even though the foreign keys it refers to are present in my DB. Other LOAD DATA statements work; I just can't solve this one and one other load.
I've checked the referenced data, and it is present in the DB before I load. The columns are also of the same type. I tried reinstalling MySQL and it still doesn't work (but a friend of mine using the same code/.txt can load the data). I can load the .txt if it consists of a single line, but when I add a second one I get the error.
-- The table referred:
CREATE TABLE Categoria (
Nome VARCHAR(50) NOT NULL,
Immagine MEDIUMBLOB,
PRIMARY KEY (Nome));
-- The table with FK:
CREATE TABLE Sottocategoria_Di (
Categoria1 VARCHAR(50) NOT NULL,
Categoria2 VARCHAR(50) NOT NULL,
PRIMARY KEY (Categoria1, Categoria2),
FOREIGN KEY (Categoria1) REFERENCES Categoria(Nome) ON DELETE NO ACTION,
FOREIGN KEY (Categoria2) REFERENCES Categoria(Nome) ON DELETE CASCADE);
INSERT INTO Categoria VALUES ('Chitarra', NULL);
INSERT INTO Categoria VALUES ('Chitarra Acustica', NULL);
INSERT INTO Categoria VALUES ('Chitarra Classica', NULL);
LOAD DATA INFILE "C:/ProgramData/MySQL/MySQL Server
8.0/Uploads/MusicShop/Sottocategoria.txt" INTO TABLE
Music.Sottocategoria_Di
CHARACTER SET latin1
FIELDS TERMINATED BY '|'
ENCLOSED BY ''
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(Categoria1,Categoria2);
-- Sottocategoria.txt
Categoria1,Categoria2
Chitarra Classica|Chitarra
Chitarra Acustica|Chitarra
A friend of mine reinstalled MySQL and can load the file using the exact same script/.txt, but I still can't.
Error Code: 1452. Cannot add or update a child row: a foreign key constraint fails (music.sottocategoria_di, CONSTRAINT sottocategoria_di_ibfk_2 FOREIGN KEY (Categoria2) REFERENCES categoria (Nome) ON DELETE CASCADE)
The problem was using LINES TERMINATED BY '\n'; switching to '\r\n' fixed it! I still don't know why '\n' works for the other .txt files I'm using (same OS, Windows).
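The likely explanation: when a Windows-edited file uses \r\n line endings but LOAD DATA is told LINES TERMINATED BY '\n', the trailing \r stays attached to the last field, so the value 'Chitarra\r' does not match any Categoria.Nome and the foreign key check fails. A rough sketch of two ways to handle it (same file, table and columns as above):
-- Option 1: declare the Windows line ending explicitly
LOAD DATA INFILE "C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/MusicShop/Sottocategoria.txt"
INTO TABLE Music.Sottocategoria_Di
CHARACTER SET latin1
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(Categoria1, Categoria2);
-- Option 2: keep '\n' but strip a stray trailing \r from the last column
LOAD DATA INFILE "C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/MusicShop/Sottocategoria.txt"
INTO TABLE Music.Sottocategoria_Di
CHARACTER SET latin1
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(Categoria1, @c2)
SET Categoria2 = TRIM(TRAILING '\r' FROM @c2);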
I'm running into an issue while loading data into my MySQL database. This is how I insert the data:
USE database;
ALTER TABLE country
ADD UNIQUE INDEX idx_name (`insee_code`,`post_code`,`city`);
LOAD DATA INFILE 'C:/wamp64/tmp/myfile-csv'
REPLACE
INTO TABLE `country` CHARACTER SET utf8
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
while my table is simply:
CREATE TABLE `country` (`insee_code` VARCHAR(250),
`post_code` VARCHAR(250),
`city` VARCHAR(250));
Previously, using a PHP script to load other tables, it was pretty fast (3 GB in 3 minutes), but with this one it takes 17 minutes to load 1 GB.
I also don't know why, with the index, some rows end up lost or corrupted. If someone has another way to remove duplicate rows while loading data from a CSV, I'd appreciate hearing it.
Thanks in advance.
With a REPLACE you basically delete the row first, then insert the new row. What you want to do is IGNORE instead.
Read more about it here: 13.2.7 LOAD DATA INFILE Syntax
The REPLACE and IGNORE keywords control handling of input rows that duplicate existing rows on unique key values:
If you specify REPLACE, input rows replace existing rows. In other words, rows that have the same value for a primary key or unique index as an existing row. See Section 13.2.9, "REPLACE Syntax".
If you specify IGNORE, rows that duplicate an existing row on a unique key value are discarded. For more information, see Comparison of the IGNORE Keyword and Strict SQL Mode.
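Applied to the statement in the question, a minimal sketch with IGNORE in place of REPLACE:
LOAD DATA INFILE 'C:/wamp64/tmp/myfile-csv'
IGNORE
INTO TABLE `country` CHARACTER SET utf8
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;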
Also, it would be better if you added a primary key. If you don't, InnoDB creates one implicitly for you: a hidden clustered index on an internal 6-byte row ID. It is not visible to you, and it is not optimal performance- and storage-wise. Execute this:
ALTER TABLE country ADD column id int unsigned auto_increment primary key;
I have two tables: tblACTypeCharacteristics and tblAircrafts.
tblACTypeCharacteristics definition:
create table if not exists tblACTypeCharacteristics(
idAC_type varchar(255) not null,
numPassengers int,
primary key(idAC_type));
tblAircrafts definition:
create table if not exists tblAircrafts(
idAC int not null auto_increment,
txtAC_tag varchar(255) not null,
txtAC_type varchar(255) not null,
primary key(idAC, txtAC_tag));
In addition, I have added a foreign key as follows:
alter table tblaircrafts add foreign key(txtAC_type)
references tblactypecharacteristics(idAC_type);
In tblACTypeCharacteristics, the maximum number of passengers is defined for each type of aircraft.
tblAircrafts lists all available aircraft.
I am able to insert a new aircraft by typing for example:
insert into tblaircrafts (txtAC_tag, txtAC_type) values ('OE-LDA','A319-112');
But as there are loads of aircraft around, I don't want to add each one by hand.
I want to import them from a csv file (I do have a list of a few aircraft).
And I import it as follows:
load data local infile 'C:\\Users\\t_lichtenberger\\Desktop\\tblAircrafts.csv'
into table tblaircrafts
character set utf8
fields terminated by ';'
lines terminated by '\n'
ignore 1 lines;
But when I try to import the .csv file into the tblaircrafts table, I get the following error:
15:08:37 alter table tblaircrafts add foreign key(txtAC_type) references tblactypecharacteristics(idAC_type) Error Code: 1452. Cannot add or update a child row: a foreign key constraint fails (`pilotproject`.`#sql-11d0_2da`, CONSTRAINT `#sql-11d0_2da_ibfk_1` FOREIGN KEY (`txtAC_type`) REFERENCES `tblactypecharacteristics` (`idAC_type`)) 0.641 sec
and I cannot explain why. The number of columns is the same and the datatypes of the columns are the same. I have also double-checked the csv for AC_types that aren't in the tblACTypeCharacteristics table, and it should be fine.
The first few rows of the csv file look like this:
Any suggestions why the error still occurs?
Thank you so much in advance!
I finally got the solution. I just disabled the foreign key checks by executing
SET foreign_key_checks = 0;
before setting the foreign keys and it worked! I was able to add the csv records afterwards.
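For reference, a rough sketch of that sequence, re-enabling the checks afterwards (note that while the checks are disabled, rows that violate the constraint are not rejected):
SET foreign_key_checks = 0;
alter table tblaircrafts add foreign key(txtAC_type)
references tblactypecharacteristics(idAC_type);
load data local infile 'C:\\Users\\t_lichtenberger\\Desktop\\tblAircrafts.csv'
into table tblaircrafts
character set utf8
fields terminated by ';'
lines terminated by '\n'
ignore 1 lines;
SET foreign_key_checks = 1;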
I am using an artificial primary key for a table. The table has two columns: the primary key and a Dates column (datatype: Date). When I tried to bulk load data from a file (which contained values for the second column only), the YYYY part of each date ended up in the primary key column (the first column in the table) and the rest of the date was truncated.
So I needed to reset the table. I tried a TRUNCATE TABLE statement, but it failed with an error because the table is referenced in a foreign key constraint of another table. So I had to use DELETE FROM table; instead. I did delete all the records, but when I inserted records again (using INSERT INTO this time), the ID kept incrementing from the year after the last year I had previously inserted (i.e. the counter was not reset).
NOTE:- I am using MySQL 5.5 and InnoDB engine.
MY EFFORT SO FAR:-
I tried ALTER TABLE table1 AUTO_INCREMENT=0; (Reference: second answer) ---> IT DID NOT HELP.
I tried ALTER TABLE table1 DROP column; (Reference: answer 1) ---> Error on rename of table1
Deleted the table data again and tried:
DBCC CHECKIDENT('table1', RESEED, 0);
(Reference) ---> Syntax error at "DDBC" - Unexpected INDENT_QUOTED (this statement came right after the delete statement, if that matters)
In this article, under the section named "Auto Increment Columns for INNODB Tables" and the heading "Update 17 Feb 2009:", it says that in InnoDB truncate does reset the AUTO_INCREMENT index in versions higher than MySQL 4.1... So I want some way to truncate my table, or do something else to reset the AUTO_INCREMENT index.
QUESTION:-
Is there a way to somehow reset the auto_increment when I delete the data in my table?
I need a way to fix the aforementioned DBCC CHECKIDENT error, or to somehow truncate a table that is referenced in a foreign key constraint of another table.
Follow the steps below:
Step 1: Truncate the table after disabling foreign key checks, then re-enable them:
set foreign_key_checks=0;
truncate table mytable;
set foreign_key_checks=1;
Step 2: When bulk uploading, list only those columns of the table that are present in your csv file (i.e. leave out the rest, including the auto id), and make sure the columns in the csv are in the same order as in your table. The auto id column should not be in your csv file.
You can use the command below to upload the data:
LOAD DATA LOCAL INFILE '/root/myfile.csv' INTO TABLE mytable fields terminated by ',' enclosed by '"' lines terminated by '\n' (field2,field3,field5);
Note: If you are working in a Windows environment, change the file path accordingly.
You can only reset the auto-increment value to 1 (not 0). Therefore, unless I am mistaken, you are looking for
alter table a auto_increment = 1;
You can query the next auto-increment value to be used with
select auto_increment from information_schema.tables
where table_name='a' and table_schema=schema();
(Do not forget to replace 'a' with the actual name of your table).
You can play around with a test database (it is likely that your MySQL installation already has a database called test, otherwise create it using create database test;)
use test;
create table a (id int primary key auto_increment, x int); -- auto_increment = 1
insert into a (x) values (1), (42), (43), (12); -- auto_increment = 5
delete from a where id > 1; -- auto_increment = 5
alter table a auto_increment = 2; -- auto_increment = 2
delete from a;
alter table a auto_increment = 1; -- auto_increment = 1
I'm having to import, on a very regular basis, data from a CSV into a MySQL database.
LOAD DATA LOCAL INFILE '/path/to/file.csv' INTO TABLE `tablename` FIELDS TERMINATED BY ','
The data I'm importing doesn't have a primary key column, and I can't alter the structure of the CSV file because I have no control over it.
So I need to import this CSV data into a temporary MySQL table, which is fine, but then I need to take this data and process it line by line. As each row is run through a process, I need to delete that row from the temporary table so that I don't re-process it.
Because the temporary table has no primary key, I can't do DELETE FROM tablename WHERE id=X, which would be the best option; instead I have to match against a bunch of alphanumeric columns (probably up to 5, to avoid accidentally deleting a duplicate).
Alternatively, I was thinking I could alter the table AFTER the CSV import is complete and add a primary key column, then process the data as described above. When complete, alter the table again to remove the primary key column, ready for a new import. Can someone please tell me whether or not this is a stupid idea? What would be most efficient and quick?
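(Roughly, that add-then-drop idea would look something like this, using the placeholder table name tablename; whether it's worth it is exactly the question:)
ALTER TABLE `tablename`
ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
-- ... process rows, deleting each one by id, e.g.:
DELETE FROM `tablename` WHERE id = 42;
-- remove the key again before the next import
ALTER TABLE `tablename` DROP COLUMN id;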
Any ideas or suggestions greatly appreciated!
You can have an auto_increment column in your temporary table from the beginning and populate its values as you load the data:
CREATE TEMPORARY TABLE tablename
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
col1 INT,
col2 VARCHAR(32),
col3 INT,
...
);
Then specify all columns in parentheses, but leave id out
LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE `tablename`
FIELDS TERMINATED BY ','
(col1, col2, col3,...); -- specify all columns, but leave id out
That way you don't need to add and remove the id column before and after each import. Since you're doing the import on a regular basis, you could consider using a permanent table instead of a temporary one and just TRUNCATE it after you're done with the import, to clear the table and reset the id column.
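A rough sketch of that permanent-table variant (same placeholder table and column names as above):
-- one-time setup: a permanent staging table with its own surrogate key
CREATE TABLE tablename
(
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
col1 INT,
col2 VARCHAR(32),
col3 INT
);
-- each import run:
TRUNCATE TABLE tablename;  -- clears the rows and resets the id counter
LOAD DATA LOCAL INFILE '/path/to/file.csv'
INTO TABLE tablename
FIELDS TERMINATED BY ','
(col1, col2, col3);
-- then process and delete rows by id as before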