Here is the SQL query I have so far:
LOAD DATA INFILE 'filename.csv'
INTO TABLE zt_accubid
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 4 LINES
I need to be able to end the process once a field with the value "xyz" is encountered.
Is this possible?
LOAD DATA INFILE has no such option. There are a couple of workarounds, though:
You could solve the problem outside MySQL by manipulating the file first.
As Alain Collins mentioned, if the column containing your marker has only unique values, and you don't have the LOAD DATA inside a transaction, you can use a unique key as a stopper.
You can use a trigger on the table as a stopper (see the sketch after this list).
If the marker is near the end of the file, or the overhead is not important to you, you can do a full LOAD DATA into an interim table, then use INSERT INTO ... SELECT to move only the relevant data into your final table.
Similarly, you can load all data, then delete the irrelevant part.
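For the trigger stopper, here is a minimal sketch (the column name marker_col is hypothetical, and SIGNAL requires MySQL 5.5+):
DELIMITER //
CREATE TRIGGER stop_on_marker BEFORE INSERT ON zt_accubid
FOR EACH ROW
BEGIN
  -- abort the statement as soon as the marker value arrives
  IF NEW.marker_col = 'xyz' THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'marker row reached';
  END IF;
END//
DELIMITER ;
Remember to DROP TRIGGER stop_on_marker once the load is done.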
Sure - put a row in the database before the load, and put a unique key on it. The LOAD should fail when it hits the duplicate.
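A minimal sketch of that unique-key stopper, again assuming the marker arrives in a hypothetical column marker_col:
ALTER TABLE zt_accubid ADD UNIQUE KEY uq_marker (marker_col);
INSERT INTO zt_accubid (marker_col) VALUES ('xyz');
LOAD DATA INFILE 'filename.csv'
INTO TABLE zt_accubid
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 4 LINES;
The duplicate-key error aborts the load at the marker row. Note that on a transactional engine such as InnoDB the failed statement is rolled back as a whole, so the partially loaded rows survive only on a non-transactional engine such as MyISAM.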
There is a table with some data in it, I'm inserting additional data from txt file with this function:
LOAD DATA LOCAL INFILE 'users.txt' IGNORE INTO TABLE users2
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
(@col1, @col2, @col3, @col4)
SET login = @col1, name = @col1, balance = @col2, email = @col3, reg_date = @col4;
It works, but if login already exists with a different balance, it creates a row with a duplicate login and a different balance.
I need the function to ignore the line if the login exists. Can anybody help me with that please?
Create a unique index on the login field:
ALTER TABLE users2 ADD UNIQUE KEY (login);
That's all: since your LOAD DATA statement already uses the IGNORE keyword, lines whose login duplicates an existing row are simply skipped instead of aborting the load.
I have a table with 18 columns and I want to update all rows of the column page_count using LOAD DATA INFILE. The file with the data is extremely simple, just \n-separated values. I'm not trying to pull off anything impressive here; I just want to update the values in that one single column - about 3000 records. The code I'm using is:
LOAD DATA INFILE '/tmp/sbfh_counter_restore' REPLACE INTO TABLE obits FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' (page_count);
And all this does is add one new row with the entire contents of the file dumped into the column page_count. What am I doing wrong? Should I use phpMyAdmin? I'd be happy to use that as it better fits my skill set ;)
I created the file using SELECT page_count FROM obits INTO OUTFILE '/tmp/sbfh_counter_restore'
Based on what I can understand from the MySQL documentation, LOAD DATA does not update a single column in place: it inserts whole rows, and with REPLACE it matches rows by a primary or unique key, so the file must carry that key alongside the values.
At the least, you should use SELECT * FROM obits INTO OUTFILE, then load it back, as that keeps the column order consistent.
As for all your file content landing in one new row: SELECT ... INTO OUTFILE writes lines terminated by '\n' by default, but your LOAD DATA says LINES TERMINATED BY '\r\n', so MySQL read the whole file as a single line. Also check the primary key (or unique key) of your table: rows are matched by the key and updated or inserted based on the matching result, and page_count is likely not your key.
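A minimal sketch of one way to do the update itself, assuming obits has a primary key column named id (hypothetical) and you re-export the key alongside the values to a fresh file:
SELECT id, page_count FROM obits INTO OUTFILE '/tmp/sbfh_counter_with_ids';
CREATE TABLE page_count_staging (id INT PRIMARY KEY, page_count INT);
LOAD DATA INFILE '/tmp/sbfh_counter_with_ids'
INTO TABLE page_count_staging
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
(id, page_count);
-- copy the values across by key without touching the other 17 columns
UPDATE obits o
JOIN page_count_staging s ON s.id = o.id
SET o.page_count = s.page_count;
INTO OUTFILE writes tab-separated, '\n'-terminated lines by default, which is exactly what the LOAD DATA above expects.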
Hope that helps.
I am googling around and reading posts on this site, but I'm still not clear on how I should insert unique records.
I basically have a giant file that has a single column of data in it that needs to be imported into a db table where several thousand of the records from my text file already exist.
There are no IDs in the text file I need to import, and from what I am reading, INSERT IGNORE looks to be solving for duplicate IDs. I want new IDs created for any of the new records added, but obviously I can't have duplicates.
This would ideally be done with LOAD DATA INFILE...but I'm really not sure:
INSERT IGNORE?
ON DUPLICATE KEY?
The easiest way to achieve what you want is to read in the entire file using LOAD DATA, and then insert all non-duplicates into the table which already exists.
LOAD DATA LOCAL INFILE 'large_file.txt' INTO TABLE new_table
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(name);
This assumes that the first line of your giant file does not contain a column header. If it does, then you can add IGNORE 1 LINES to the statement above.
You can now INSERT the names in new_table into your already-existing table using INSERT IGNORE:
INSERT IGNORE INTO your_table_archive (name)
SELECT name
FROM new_table;
Any duplicate records which MySQL encounters will not be inserted, and instead the original ones will be retained. And you can drop new_table once you have what you need from it.
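Note that INSERT IGNORE only skips rows that raise a duplicate-key error, so this relies on name being declared unique in your_table_archive. If it is not yet, something like this adds the constraint (the index name is made up):
ALTER TABLE your_table_archive ADD UNIQUE KEY uq_name (name);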
I have an existing table in MySQL with 7 fields - 4 integer (one acting as the primary key, with auto-increment) and 3 text. This table already has data.
I am collecting new data in Excel and wish to bulk-add it to the MySQL table. Some of the cells in Excel will be blank and need to show as NULL in MySQL, and the primary key won't be part of the Excel sheet; I let MySQL auto-add it. Previously I was manually adding each record through phpMyAdmin, but it is far too time-consuming (I have 1000's of records to add).
How do I go about this in terms of setting up the Excel sheet the right way and making sure I add to the existing data instead of replacing it? I have heard CSV files are the way? I use phpMyAdmin if that helps.
Thanks
If you want to append data to a MySQL table, I would use .csv files to do it.
In MySQL:
load data local infile 'yourFile'
into table yourTable
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n' -- on Windows you may need '\r\n'
ignore 1 lines -- ignore the header line
(field1, field2, field3);
Note that the column list belongs after the FIELDS, LINES, and IGNORE clauses, not after the table name.
Check the reference manual: http://dev.mysql.com/doc/refman/5.0/en/load-data.html
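One point from the question that the statement above does not cover: blank CSV cells load as empty strings, not NULL. A minimal sketch using user variables to convert them (the field names are placeholders):
load data local infile 'yourFile'
into table yourTable
fields terminated by ',' optionally enclosed by '"'
lines terminated by '\n'
ignore 1 lines
(@v1, @v2, @v3)
set field1 = nullif(@v1, ''), -- an empty cell becomes NULL
    field2 = nullif(@v2, ''),
    field3 = nullif(@v3, '');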
I am using the following command:
LOAD DATA INFILE '/sample.txt' INTO TABLE TEST FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
Whenever this query is executed, table TEST gets the values from the sample.txt file added again.
How do I avoid these multiple entries in the table? I want to load the file only if the table doesn't already contain it.
diEcho's comment is nearly right - identify unique column combinations in the data and declare them as a unique index on the table - but importantly you also need to modify your LOAD DATA statement: as it stands, it will fall over at the first occurrence of a duplicate record. You need to add an IGNORE:
LOAD DATA INFILE '/sample.txt' IGNORE INTO TABLE `test`
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
Although, for preference, you should consider loading the full dataset into a staging table and then handling the duplicates a bit more gracefully, as sketched below.
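A minimal sketch of that staging-table approach (the table and column names are made up; adjust to your schema):
CREATE TABLE test_staging LIKE test; -- drop its unique index if the file may contain internal duplicates
LOAD DATA INFILE '/sample.txt' INTO TABLE test_staging
FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n';
-- move over only the rows that are not already present
INSERT INTO test (col1, col2)
SELECT s.col1, s.col2
FROM test_staging s
LEFT JOIN test t ON t.col1 = s.col1 AND t.col2 = s.col2
WHERE t.col1 IS NULL;
DROP TABLE test_staging;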