I have an SQL file defining a table with two columns, product-id and product-reference. I need to import that file into an existing table that has the same structure (plus some extra columns), where product-id corresponds to the product-id from the backup file. Is there a simple way to do that via phpMyAdmin?
One approach is to use LOAD DATA INFILE (see here) with the SET option to assign column values. Columns that are not being set will be given their default values, which is typically NULL.
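A minimal sketch of that approach, assuming the backup has been exported as a tab-delimited text file; the table name, file path, and extra column are placeholders:

LOAD DATA INFILE '/path/to/backup.txt'
INTO TABLE products
(product_id, product_reference)
SET imported_at = NOW();
-- columns neither listed nor SET receive their default values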
Personally, I would load the data into a staging table with two columns and then insert the data from the staging table into the final table. This makes it easier to validate the data before putting it into the "real" table.
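A sketch of the staging approach, with assumed names throughout:

CREATE TABLE staging_products (
    product_id INT,
    product_reference VARCHAR(255)
);

LOAD DATA INFILE '/path/to/backup.txt'
INTO TABLE staging_products;

-- after validating the staged rows:
INSERT INTO products (product_id, product_reference)
SELECT product_id, product_reference
FROM staging_products;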
Related
I have a series of large .csv files that I need to import. When I import the data, I would like to check that it doesn't duplicate existing rows across all columns in the tables I am about to load. Is there a way to check for, and skip loading, records that match on all columns except for a handful of derived columns that would differ for audit purposes?
Suppose we have a table with a DECIMAL column containing values such as 128.98, 283.98, 21.20.
I want to import some CSV files into this table. However, in the relevant column of these files, I have values like 235,69 and 23,23, with a comma instead of a point.
I know I can REPLACE that column afterwards, but is there some way of doing that before LOAD DATA INFILE?
I do not believe you can replace that column and load the data at the same time. It looks like you will have to do multiple steps to get the results you want.
1. Load the data first into a raw table using the LOAD DATA INFILE command. This table can be identical to the main table; you can use the CREATE TABLE ... LIKE command to create it.
2. Process the data (i.e. change the comma to a point where applicable) in the raw table.
3. Select the data from the raw table and insert it into the main table, either with row-by-row processing or a bulk insert.
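A sketch of those three steps; the file path, delimiter, and table/column names are assumptions, and the raw table declares the value column as VARCHAR rather than DECIMAL so the comma survives the load:

CREATE TABLE raw_import (
    id INT,
    price_text VARCHAR(20)
);

LOAD DATA INFILE '/path/to/file.csv'
INTO TABLE raw_import
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n';

-- step 2: change the comma to a point
UPDATE raw_import
SET price_text = REPLACE(price_text, ',', '.');

-- step 3: bulk insert into the main table
INSERT INTO main_table (id, price)
SELECT id, CAST(price_text AS DECIMAL(10,2))
FROM raw_import;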
This can all be done in a stored procedure (SP) or by a third-party script written in Python, PHP, etc.
If you want to know more about SPs in MySQL, here is a useful link.
I am trying to insert data from a text file (18.9 GB) that looks like this:
as8dyta89sd6892yhgeui2eg
asoidyaos8yd98t2y492g4n2
as8agfuigf98safg82b1hfdy
Each line is 32 characters long. I have a database named hashmasher and a table called combinations with columns named unhashed and sha256. Currently the data is stored only in the unhashed column, like this:
unhashed | sha256
data | (completely empty)
Now I am wondering how I could insert the data into the existing rows, adding it only to the second column, so that, for example, the above would become:
unhashed | sha256
data | firstlineoftextfile
data | secondlineoftextfile
If I use LOAD DATA INFILE, it will load the data into NEW rows (that's what I've been told), and it will load it into the unhashed column as well as the sha256 column.
TL;DR I want to insert data from a text file into the second column of pre-existing rows.
Insert your data with LOAD DATA INFILE into a new table. It may be a temporary table, to speed things up a bit. Then use INSERT ... SELECT ... JOIN to merge the two tables.
I understand this can take a few hours with a 19 GB table.
Things are more complicated here, since your original file contains one value per row. You may want to fix it up with a sed/awk script so that there are two values per row, which lets LOAD DATA INFILE work.
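Assuming you have fixed the file up so that each line carries the unhashed value and its hash separated by a tab, the load-and-merge might look like this (shown here as an UPDATE with a join, since the goal is to fill in existing rows; table and column sizes are assumptions):

CREATE TABLE staging (
    unhashed VARCHAR(255),
    sha256 CHAR(32)
);

LOAD DATA INFILE '/path/to/hashes.txt'
INTO TABLE staging
FIELDS TERMINATED BY '\t';

-- fill the empty column by matching on unhashed
UPDATE combinations c
JOIN staging s ON s.unhashed = c.unhashed
SET c.sha256 = s.sha256;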
The other approach is to stay with sed/awk scripting: convert your original file into a file containing a bunch of UPDATE statements, and then pipe the result to MySQL.
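For illustration, the generated file would then contain one statement per input line, something like this (the WHERE condition is an assumption, since the original file carries no key to match on):

UPDATE combinations
SET sha256 = 'as8dyta89sd6892yhgeui2eg'
WHERE unhashed = 'matching_value';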
I use a Python program which inserts many new entries into a database;
these new entries are spread across multiple tables.
I'm using LOAD DATA INFILE to load the file, but this solution works for only one table, and I don't feel like doing this multiple times.
I found http://forge.mysql.com/worklog/task.php?id=875 but I'm not quite
sure if it's already implemented or not.
I am doing exactly what you are trying to do as follows:
Step 1: Create a temp table (holding all the fields of the import file)
Step 2: LOAD DATA LOCAL INFILE -> into the temp table
Step 3: INSERT INTO Table1 ( fieldlist ) SELECT ( matching fieldlist ) FROM TempTable ... include JOINs, WHERE clauses, and ON DUPLICATE KEY UPDATE as necessary
Step 4: Repeat step 3 with the second table insert query and so on.
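A condensed sketch of those four steps, with made-up field and table names:

-- step 1: temp table matching the import file
CREATE TEMPORARY TABLE TempTable (
    field1 VARCHAR(50),
    field2 VARCHAR(50),
    field3 VARCHAR(50)
);

-- step 2: load the file
LOAD DATA LOCAL INFILE '/path/to/import.csv'
INTO TABLE TempTable
FIELDS TERMINATED BY ',';

-- step 3: parse out into the first destination table
INSERT INTO Table1 (col_a, col_b)
SELECT field1, field2
FROM TempTable
ON DUPLICATE KEY UPDATE col_b = VALUES(col_b);

-- step 4: repeat for the next table
INSERT INTO Table2 (col_c)
SELECT DISTINCT field3
FROM TempTable;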
Using this method I am currently importing each of my 22 MB data files and parsing them out into multiple tables (6 tables, including 2 audit/changes tables).
Without knowing your table structure and data file structure, it is difficult to give you a more detailed explanation, but I hope this helps get you started.
Loading data from a local file to insert new data across multiple tables isn't yet supported (as of v5.1).
I don't think LOAD DATA can do that, but why not duplicate the table after importing?
See Duplicating table in MYSQL without copying one row at a time.
Or, if you can go outside MySQL, Easiest way to copy a MySQL database?
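The gist of the first link is a two-statement copy (names are placeholders):

CREATE TABLE new_table LIKE source_table;   -- copies structure and indexes
INSERT INTO new_table SELECT * FROM source_table;   -- copies the rows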
As the title says: I've got a bunch of tab-separated text files containing data.
I know that if I use CREATE TABLE statements to set up all the tables manually, I can then import the files into the waiting tables using LOAD DATA or mysqlimport.
But is there any way in MySQL to create tables automatically based on the tab files? Seems like there ought to be. (I know that MySQL might have to guess the data type of each column, but you could specify that in the first row of the tab files.)
No, there isn't. You need to CREATE a TABLE first in any case.
Automatically creating tables and guessing field types is not part of the DBMS's job. That is a task best left to an external tool or application (which then creates the necessary CREATE statements).
If you're willing to type the data types in the first row, why not type a proper CREATE TABLE statement?
Then you can export the Excel data as a .txt file and use:
LOAD DATA INFILE 'path/file.txt' INTO TABLE your_table;
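A slightly fuller sketch for a tab-separated file, with assumed column names and types:

CREATE TABLE your_table (
    id INT,
    name VARCHAR(100),
    price DECIMAL(10,2)
);

LOAD DATA INFILE 'path/file.txt'
INTO TABLE your_table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- skip the first row if it holds the type hints mentioned above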