I have a GridView on an ASP.NET page with a checkbox column, and I need to insert the checked rows from the GridView into a MySQL table.
One of the easiest ways would be to find the selected rows and insert them one by one in a loop.
However, that is time-consuming, since there may be 10,000 rows at any given time. And because the process takes so long, there is a risk of losing the connection partway through the insertion.
Is there any way to speed up the insertion of a huge number of rows?
Thanks,
Balaji G
You can first collect all the checked records, write them out in tab-delimited or comma-delimited format, and then use LOAD DATA INFILE to do the bulk insertion.
Here's a sample:
LOAD DATA INFILE 'data.txt' INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
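If writing an intermediate file from the web application is inconvenient, another option is to batch the checked rows into multi-row INSERT statements, which also avoids a round trip per row. A minimal sketch, with placeholder table and column names:
INSERT INTO my_table (col1, col2, col3)
VALUES
('row1-a', 'row1-b', 'row1-c'),
('row2-a', 'row2-b', 'row2-c'),
('row3-a', 'row3-b', 'row3-c');
Build the VALUES list from the checked GridView rows in chunks (say 500-1000 rows per statement). This is still slower than LOAD DATA INFILE for very large batches, but far faster than one INSERT per row.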
Related
I'm trying to import a CSV file into my database table; the file came from a previous database with the same structure. My issue is that it imports only 1,000 rows instead of the whole 62k+ file. The script I'm using is:
LOAD DATA INFILE 'C:/ProgramData/MySQL/MySQL Server 8.0/Uploads/covid19.csv'
INTO TABLE covid19.covid19
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id,date,iso2,iso3,country,continent,cases,deaths,population);
Some clients have an option that limits the number of returned rows, e.g. by appending LIMIT 1000 to queries, so you may be looking at a truncated result set rather than a failed import.
You should check how many rows you actually have with:
SELECT COUNT(*) FROM covid19.covid19;
You should see the actual number of inserted rows, since the command didn't show any warnings or errors.
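If the count really is short, a quick follow-up (a generic check, not specific to this file) is to look at the warning count immediately after running the LOAD DATA statement:
SHOW COUNT(*) WARNINGS;  -- number of warnings from the last statement
SHOW WARNINGS LIMIT 10;  -- details, e.g. truncated values or skipped lines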
I wanted to import a .csv file with ~14k rows into MySQL. However, I only got a table in MySQL with about 13k rows. I checked and found out that MySQL skips some rows in the middle of my .csv file.
I used LOAD DATA INFILE and I really cannot understand why MySQL skips those rows. I'd really appreciate it if someone could help me with this.
Here is my query:
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
There is no warning message at all.
I assume that the difference in row counts is caused by unique-key duplicates.
From the reference:
With LOAD DATA LOCAL INFILE, data-interpretation and duplicate-key errors become warnings and the operation continues.
In practice, though, no warnings are produced for those duplicates. To check which rows were skipped, you can load the data into a similar table without unique keys and compare the two tables.
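A minimal sketch of that check, assuming the unique key is a single column called id (a placeholder name) and dropping the key on a scratch copy:
CREATE TABLE q.hmm_raw LIKE q.hmm;
ALTER TABLE q.hmm_raw DROP PRIMARY KEY;  -- adjust if the unique key is a secondary index
LOAD DATA LOCAL INFILE 'd:/Data/Tanbinh - gui Tuan/hm2.csv'
INTO TABLE q.hmm_raw
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n' IGNORE 1 LINES;
-- key values that occur more than once in the file are exactly
-- the rows that collapsed into one during the original load
SELECT id, COUNT(*) AS cnt
FROM q.hmm_raw
GROUP BY id
HAVING cnt > 1;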
I am loading multiple text files into a database using the LOAD DATA INFILE statement. There seems to be an issue when loading numeric values into their respective numeric fields. I did some research, and per the MySQL documentation, all data loaded in is treated as text, so the values are being inserted as NULL.
LOAD DATA INFILE regards all input as strings, so you cannot use
numeric values for ENUM or SET columns the way you can with INSERT
statements. All ENUM and SET values must be specified as strings.
I tried casting the specific fields as numeric or decimal and I still get NULL values in the table. For example:
LOAD DATA INFILE 'blabla.txt' INTO TABLE example
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\r\n' IGNORE 1 LINES
(field1, field2, field3, @problemfield)
SET problemfield = CAST(REPLACE(REPLACE(REPLACE(@problemfield, ',', ''), '(', '-'), ')', '') AS DECIMAL(10,0));
I am using the REPLACEs because negative numbers sometimes appear in parentheses in the data.
There were similar questions on Stack Overflow about casting while loading, and many responses (I can't find the links now) suggest loading the value in as text, transferring it to a new numeric field, and then deleting the old field. Is that an optimal solution? How is this issue usually handled? I'm sure this scenario must happen a lot (load text data and then perform operations on it).
Load your data into a staging table. Manipulate as required. Write to your real tables from your staging table.
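A minimal sketch of that workflow, with made-up table and column definitions (the real ones will differ):
-- 1. Staging table: everything as plain text, no constraints
CREATE TABLE example_staging (
    field1       VARCHAR(255),
    field2       VARCHAR(255),
    field3       VARCHAR(255),
    problemfield VARCHAR(64)
);
-- 2. Bulk-load the raw file into the staging table
LOAD DATA INFILE 'blabla.txt' INTO TABLE example_staging
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
-- 3. Clean and convert while copying into the real table
INSERT INTO example (field1, field2, field3, problemfield)
SELECT field1, field2, field3,
       CAST(REPLACE(REPLACE(REPLACE(problemfield, ',', ''), '(', '-'), ')', '') AS DECIMAL(10,0))
FROM example_staging;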
Every day we load around 6GB of CSV files into MySQL using:
LOAD DATA LOCAL INFILE 'file$i.csv' INTO TABLE tableName FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n';
We have six files that go through this process, so it takes some time. Since we generate these files ourselves, we're in control of the output format.
Originally we chose CSV because the process was smaller and we needed the data to be portable and easily readable by a non-developer. Now, however, that's no longer the concern, because the loading time has become dramatic; we're talking hours.
Is it quicker to output each row as an INSERT query into a single file and execute that or is CSV still quicker?
We're using the InnoDB storage engine.
If you use MyISAM tables, try ALTER TABLE table_name DISABLE KEYS; before loading the data and ALTER TABLE table_name ENABLE KEYS; after the import is done. This can greatly reduce the time taken for huge imports.
LOAD DATA is faster than a separate INSERT statement for each row.
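Since the question mentions InnoDB, a commonly used variant (a sketch, assuming the incoming data is already validated so the checks are safe to relax) is to wrap each load in one transaction and temporarily disable the per-row checks:
SET autocommit = 0;          -- commit once at the end instead of per statement
SET unique_checks = 0;       -- skip secondary unique-index checks during the load
SET foreign_key_checks = 0;  -- skip foreign-key checks during the load
LOAD DATA LOCAL INFILE 'file$i.csv' INTO TABLE tableName
FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n';
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;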
I need to insert about 300 million records into MySQL, and I wonder whether it makes sense to use multiple processes to do it.
Situation 1: 300 million records inserted into a single table.
Situation 2: 300 million records inserted into multiple tables.
What are the bottlenecks in these two situations?
The data source is 800+ txt files.
I know there's a LOAD DATA INFILE command; I just want to understand this question. :D
Since you have lots of data, consider using LOAD DATA. According to the MySQL docs, it's the fastest method of importing data from files.
LOAD DATA INFILE
The LOAD DATA INFILE statement reads rows from a text file into a table at a very high speed.
Speed of INSERT Statements
When loading a table from a text file, use LOAD DATA INFILE. This is
usually 20 times faster than using INSERT statements. See Section
13.2.6, “LOAD DATA INFILE Syntax”.
...
INSERT is still much slower for loading data than LOAD DATA INFILE, even when using the strategies just outlined.
LOAD DATA INFILE '/path/to/your/file.csv'
INTO TABLE table_name
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n' -- or '\r\n'
IGNORE 1 LINES; -- use IGNORE if you have a header line in your file