I have a large number of data records that I want to import into a database table. I'm using phpMyAdmin. I typed
LOAD DATA INFILE 'C:/Users/Asus/Desktop/cobacobaa.csv' INTO TABLE akdhis_kelanjutanstudi
but the result came back like this:
I do not know why it says I have a duplicate entry "0" for the primary key; there are no duplicate entries in my data. Here is part of my data:
Could you please help me solve this? What can I do to fix the problem? Thanks in advance.
I would guess that your primary key is a number. The problem would then be that the value starts with a double quote. When converting a string to a number, MySQL converts the leading numeric characters -- with no such characters, the value is zero.
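You can see this conversion behavior directly; evaluated in numeric context, a value that still carries its surrounding double quotes comes out as zero (this is just an illustration of the conversion, not part of the fix):

-- A value still wrapped in double quotes has no leading digits,
-- so MySQL's string-to-number conversion produces 0 (with a warning).
SELECT '"123"' + 0;   -- 0
SELECT '123' + 0;     -- 123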
The following might fix your problem:
LOAD DATA INFILE 'C:/Users/Asus/Desktop/cobacobaa.csv'
INTO TABLE akdhis_kelanjutanstudi
FIELDS TERMINATED BY ',' ENCLOSED BY '"';
I usually load data into a staging table, where all the values are strings, and then convert the staging table into the final version. This may use a bit more space in the database but I find that the debugging process goes much faster when something goes wrong.
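As a rough sketch of that staging approach for this case (the staging columns and the final column names below are only placeholders, since I don't know your real table structure):

CREATE TABLE akdhis_kelanjutanstudi_stage (
    col1 VARCHAR(255),
    col2 VARCHAR(255),
    col3 VARCHAR(255)
);

LOAD DATA INFILE 'C:/Users/Asus/Desktop/cobacobaa.csv'
INTO TABLE akdhis_kelanjutanstudi_stage
FIELDS TERMINATED BY ',' ENCLOSED BY '"';

-- inspect the raw strings, then convert while copying to the real table
INSERT INTO akdhis_kelanjutanstudi (id, name, status)
SELECT CAST(col1 AS UNSIGNED), col2, col3
FROM akdhis_kelanjutanstudi_stage;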
I have a series of CSV files that I want to import into MySQL. To first populate the table I did the following:
mysql -u root -p apn -e "LOAD DATA LOCAL INFILE '/opt/cell.csv' INTO TABLE data FIELDS TERMINATED BY ',';"
where the CSV contents are as follows:
89xx,31xx,88xx,35xx,ACTIVATED,250.0MB,GPRS,96xx,0,0,2,false,DEFAULT
The one unique field is the first one, starting with '89xx' (it goes into the column named 'iccid').
What I want to do now is update the table, but I'm clueless about how to use that first entry in the CSV to update the rest of the row. It is mainly the 4th field that needs to be updated over time, as that is the value that changes (data usage for a specific cellular device). I don't have much of a problem emptying the table before doing a whole new import, but I was thinking it would be better practice to just update, since I will eventually need to update several times a day.
Since I have no practical skills in any language, or MySQL for that matter, would it be best to just insert into a temp table and update from that?
You can use the REPLACE keyword before INTO to update/replace your rows:
mysql -u root -p apn -e "LOAD DATA LOCAL INFILE '/opt/cell.csv' REPLACE INTO TABLE data FIELDS TERMINATED BY ',';"
To skip rows that duplicate an existing unique key instead, use the IGNORE keyword before INTO.
See the LOAD DATA manual for details.
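For completeness, a sketch of the IGNORE form with the same file and table would be (this assumes iccid is declared as a primary or unique key on the table, otherwise neither REPLACE nor IGNORE has anything to match on):

LOAD DATA LOCAL INFILE '/opt/cell.csv'
IGNORE INTO TABLE data
FIELDS TERMINATED BY ',';

With REPLACE, an incoming row whose iccid matches an existing row overwrites that row, so the usage field is refreshed on every import; with IGNORE, the existing row is kept and the incoming duplicate is skipped.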
I am currently building a MySQL database. When I import data from a .csv file holding my company's records, about half of it gets imported normally, while the other half gets changed into the same Chinese characters. At first I thought it was the HeidiSQL tool, but after doing a manual load in MySQL I still had Chinese characters in the data. Here is an example of the load statement (sorry if it doesn't format right):
LOAD DATA LOCAL INFILE 'c:/text.csv'
INTO TABLE test
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
The path above is not the actual path, but it is similar. The path is correct, as no errors were given when the statement was executed.
After I imported the data, I used SELECT and UNION ALL to display my data (together with another table, because that is one of the major goals of this project). Below is that code snippet:
SELECT PartNum, Description, Quantity, Price, '','' FROM Test1
UNION ALL
SELECT * FROM Test2
The two single-quote placeholders are there because the Test1 table has 2 fewer columns than the Test2 table. I was getting an error because of the column mismatch, and I read that this was a workaround. Now to the issue. My first instinct, upon seeing the Chinese characters, was to try a manual load (as opposed to HeidiSQL). After that did not help, my next thought was to check the .csv file. However, the table is arranged alphabetically and my .csv file is not, so I have no start point or end point to use when inspecting the .csv file. As a MySQL noob, I thought it would be quicker to ask if anyone knows of anything to help me get these rows back to English. Examples of the Chinese characters are the following: 䍏䱌䕃呏剓㩍䌱㔲㌰㐱㠺䵃ㄵ㈱㌭㈭㌰㐱㠭啎, 䙌䵉千䕌䱁久何区吱〰䵓, etc.
Had the same problem. Using mysqlimport instead solved the issue for me.
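As a sketch of what that can look like (the database name and credentials here are placeholders; note that mysqlimport derives the target table name from the file name, so the file has to be named test.csv to load into the test table):

mysqlimport --local --fields-terminated-by=',' --fields-enclosed-by='"' --lines-terminated-by='\n' -u root -p your_database c:/test.csv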
I am loading multiple text files into a database using the LOAD DATA INFILE statement. There seems to be an issue when trying to load numeric values into their respective numeric fields. I did some research, and per the MySQL documentation, all data loaded in is treated as text, so all the values are being input as NULL.
LOAD DATA INFILE regards all input as strings, so you cannot use
numeric values for ENUM or SET columns the way you can with INSERT
statements. All ENUM and SET values must be specified as strings.
I tried casting the specific fields as numeric or decimal and I still get NULL values in the table. For example:
LOAD DATA INFILE 'blabla.txt' INTO TABLE example
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(field1, field2, field3, @problemfield)
SET problemfield = CAST(REPLACE(REPLACE(REPLACE(@problemfield, ',', ''), '(', '-'), ')', '') AS DECIMAL(10,0));
I am using the REPLACE calls because negative numbers sometimes appear in parentheses in the data.
There were similar questions on Stack Overflow about casting while loading, and many responses (I can't find the links now) suggest loading the value in as text, then transferring it to a new numeric field and deleting the old field. Is that an optimal solution? How is this issue usually handled? I am sure this scenario must happen a lot (load all this text data and then perform operations on it).
Load your data into a staging table. Manipulate as required. Write to your real tables from your staging table.
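As a rough sketch of that last step, assuming the raw text has already been loaded into a staging table named example_stage whose columns are all VARCHAR (both names are invented for the example):

-- strip thousands separators, turn "(123)" into "-123",
-- then convert while copying from the staging table to the real table
INSERT INTO example (field1, field2, field3, problemfield)
SELECT field1, field2, field3,
       CAST(REPLACE(REPLACE(REPLACE(problemfield, ',', ''), '(', '-'), ')', '') AS DECIMAL(10,0))
FROM example_stage;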
I imported data from CSV files into a MySQL database, but made the mistake of not removing the trailing spaces in the CSV columns. So the spaces are seen as '?' at the end of some values in the database (of type VARCHAR). I want to get rid of these.
Can I somehow delete all these ?s in the database in one go? I know of the REPLACE command, but I think that works on a single column of a single table at a time, which would be very time-consuming for me. Could anyone please suggest something better? Thanks!
You can use the trim function
UPDATE table SET column = TRIM(TRAILING '?' FROM column)
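If many columns across many tables are affected, one common trick is to generate the UPDATE statements from information_schema and then run the generated output (replace your_database with the actual schema name; this is only a sketch, so review the generated statements before executing them):

SELECT CONCAT('UPDATE `', table_name, '` SET `', column_name, '` = TRIM(TRAILING ''?'' FROM `', column_name, '`);') AS stmt
FROM information_schema.columns
WHERE table_schema = 'your_database'
  AND data_type = 'varchar';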
Originally, my question was about the fact that phpMyAdmin's SQL section wasn't working properly. As suggested in the comments, I realized that the amount of input was simply too large to handle. However, this didn't give me a valid solution for how to deal with files that have (in my case) about 35 thousand record lines in the following format (CSV):
...
20120509,126,1590.6,0
20120509,127,1590.7,1
20120509,129,1590.7,6
...
The Import option in phpMyAdmin struggles just as the basic copy-paste input in the SQL section does. This time, as before, it runs for 5 minutes until the max execution time is reached and then stops. What is interesting, though, is that it adds about 6-7 thousand records to the table. So the input actually goes through and almost succeeds. I also tried halving the amount of data in the file; nothing changed, however.
There is clearly something wrong now. It is pretty annoying to have to manipulate the data in a PHP script when a simple data import does not work.
Change your php upload max size.
Do you know where your php.ini file is?
First of all, try putting this file into your web root:
phpinfo.php
( see http://php.net/manual/en/function.phpinfo.php )
containing:
<?php
phpinfo();
?>
Then navigate to http://www.yoursite.com/phpinfo.php
Look for "php.ini".
To upload large files you need to raise max_execution_time, post_max_size, and upload_max_filesize.
Also, do you know where your error.log file is? It would hopefully give you a clue as to what is going wrong.
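As an illustration, the relevant php.ini directives could look something like this (the values below are only placeholders; tune them to your file sizes and server):

; php.ini - placeholder values
max_execution_time = 300
post_max_size = 64M
upload_max_filesize = 64M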
EDIT:
Here is the query I use for the file import:
$query = "LOAD DATA LOCAL INFILE '$file_name' INTO TABLE `$table_name` FIELDS TERMINATED BY ',' OPTIONALLY
ENCLOSED BY '\"' LINES TERMINATED BY '$nl'";
where $file_name is the temporary filename from the PHP global variable $_FILES, $table_name is the table already prepared for the import, and $nl is a variable for the CSV line endings (it defaults to Windows line endings, but I have an option to select Linux line endings).
The other thing is that the table ($table_name) in my script is prepared in advance by first scanning the csv to determine column types. After it determines appropriate column types, it creates the MySQL table to receive the data.
I suggest you try creating the MySQL table definition first, to match what's in the file (data types, character lengths, etc.). Then try the above query and see how fast it runs. I don't know how much of a factor the MySQL table definition is for speed.
Also, I have no indexes defined in the table until AFTER the data is loaded. Indexes slow down data loading.
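Putting those two points together, a sketch for the sample rows shown above might look like this (the table name, column names, and types are guesses based on the four CSV fields, and the file path is a placeholder):

-- table definition matching the four CSV columns, with no indexes yet
CREATE TABLE measurements (
    reading_date INT,           -- e.g. 20120509
    sensor_id    INT,           -- e.g. 126
    reading      DECIMAL(10,2), -- e.g. 1590.6
    flag         INT            -- e.g. 0
);

LOAD DATA LOCAL INFILE '/path/to/data.csv' INTO TABLE measurements
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- add indexes only after the bulk load has finished
ALTER TABLE measurements ADD INDEX idx_sensor (sensor_id);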