MySQL: reorder rows from file association

A MySQL photo gallery script requires that I provide the display order of my gallery by pairing each image title to a number representing the desired order.
I have a list of correctly ordered data called pairs_list.txt that looks like this:
#   title     (correct data in the list)
-- -------
1 kmmal
2 bub14
3 ili2
4 sver2
5 ell5
6 ello1
...
So, the kmmal image will be displayed first, then the bub14 image, and so on.
My MySQL table called title_order has the same titles above, but they are not paired with the right numbers:
#   title     (bad data in MySQL)
-- -------
14 kmmal
100 bub14
31 ili2
47 sver2
32 ell5
1 ello1
...
How can I write a MySQL script that reads the correct number-title pairings from pairs_list.txt and goes through each row of title_order, replacing each row's number with the correct one? In other words, how can I make the order in the MySQL table match that of the text file?
In pseudo-code, it might look like something like this:
Get MySQL row title
Search pairs_list.txt for this title
Get the correct number-title pair in list
Replace the MySQL number with the correct number
Repeat for all rows
Thank you for any help!
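One pure-SQL way to do what the pseudo-code describes (a sketch; the order column name order_number is an assumption, and the file must be in a location the server can read, e.g. per its secure_file_priv setting): stage the file in a temporary table, then update title_order with a join.

```sql
-- Stage the correct number-title pairs from the file.
CREATE TEMPORARY TABLE pairs_tmp (
    order_number INT,
    title VARCHAR(64)
);

LOAD DATA INFILE 'pairs_list.txt'
INTO TABLE pairs_tmp
FIELDS TERMINATED BY ' '
IGNORE 2 LINES              -- skip the two header lines
(order_number, title);

-- Copy the correct number onto each matching title.
UPDATE title_order t
INNER JOIN pairs_tmp p ON t.title = p.title
SET t.order_number = p.order_number;

DROP TEMPORARY TABLE pairs_tmp;
```

This touches every matching row once and leaves titles that aren't in the file unchanged.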

If this is not a one-time task but something that will run regularly, then maybe you can use the following scenario:
Create a temp table and insert all the values from pairs_list.txt into it using MySQL's LOAD DATA INFILE.
Create a procedure (or maybe an INSERT trigger?) on that temp table which updates your main table according to whatever was inserted.
In that procedure (or INSERT trigger), I would have a cursor fetch all values from the temp table and, for each value from that cursor, update the matching title in your main table.
Finally, delete everything from that temp table.
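As a sketch of the trigger variant above (column names order_number and title are assumptions; note that MySQL does not allow triggers on TEMPORARY tables, so a regular staging table is used here): each row loaded into the staging table updates the matching title in the main table, so no cursor is needed.

```sql
CREATE TABLE pairs_staging (
    order_number INT,
    title VARCHAR(64)
);

DELIMITER //
CREATE TRIGGER pairs_staging_ai
AFTER INSERT ON pairs_staging
FOR EACH ROW
BEGIN
    -- Apply each staged pair to the main table as it arrives.
    UPDATE title_order
    SET order_number = NEW.order_number
    WHERE title = NEW.title;
END//
DELIMITER ;
```

Loading pairs_list.txt into pairs_staging with LOAD DATA INFILE then fires the trigger once per row; afterwards, empty the staging table with DELETE FROM pairs_staging.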

I'd suggest doing it this simple way -
1. Remove all primary and unique keys from the title_order table, and create a unique index (or primary key) on the title field -
ALTER TABLE title_order
ADD UNIQUE INDEX UK_title_order_title (title);
2. Use LOAD DATA INFILE with the REPLACE option to load data from the file and replace matching rows -
LOAD DATA INFILE 'pairs_list.txt'
REPLACE
INTO TABLE title_order
FIELDS TERMINATED BY ' '
LINES TERMINATED BY '\r\n'
IGNORE 2 LINES
(@col1, @col2)
SET order_number_field = @col1, title = TRIM(@col2);
...specify properties you need in LOAD DATA INFILE command.

Related

Trying to copy or somehow move the contents (values) of a TEXT column (3k+ large rows) to another table in the same database, without success

I have created a new column in the "destination" table with the same name, datatype and other attributes as the "source" column in a different table. I have tried many suggested solutions found on Stack Overflow. This one (found on Quora) appeared to work, but when I went to the destination table, the previously empty column remained empty, with nothing but NULL values. This is the Quora suggestion:
you can fill a column with data from another existing one by using an INSERT INTO statement and a SELECT statement together, like this:
INSERT INTO `table1`(column_name)
SELECT column_name FROM `table2`
here you fill a single column in table 1 with data located in a single column in table 2
so if you want to fill the whole of table 1 (all columns) with data located in table 2, you can make table 1 a copy of table 2 by using the same code but without column names
INSERT INTO `table1`
SELECT * FROM `table2`
but note: to do this (copy table content to another one), ensure that both tables have the same column count and data types.
I'm not sure what is meant by column count (do the two tables have to have the same number of columns?).
When I run it I get error #1138.
Any help greatly appreciated. -JG
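For what it's worth, INSERT INTO ... SELECT appends new rows; it cannot fill a column in rows that already exist, which would explain the destination column staying NULL. Filling an existing column generally needs an UPDATE with a join on some shared key; a sketch, assuming both tables share an id column (hypothetical name):

```sql
-- Copy the TEXT column into already-existing rows of table1,
-- matching rows by the assumed shared key `id`.
UPDATE `table1` t1
INNER JOIN `table2` t2 ON t1.id = t2.id
SET t1.column_name = t2.column_name;
```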

How to get row ids when using LOAD LOCAL DATA INFILE?

I have a MySQL database with a table into which I insert from multiple files using the
LOAD DATA LOCAL INFILE ... statement. I have a PRIMARY KEY ID set to auto_increment. The problem arises when I want to update only part of the table.
Say I've inserted file_1, file_2, file_3 in the past and now I want to update only file_2. I imagine the process as this pseudo-workflow:
delete old data related to file_2
insert new data from file_2
However, it is hard to determine which data originally came from file_2. In order to find out, I've come up with this idea:
When I insert the data, I will note the ids of the rows which I've inserted; since I am using auto_increment, I can note something like from_id, to_id for each of the files. Then, when I want to update only file_x, I will delete only the data with from_id <= id <= to_id (where from_id, to_id relate to file_x).
After a little bit of searching, I found out about @@identity and last_insert_id(); however, when I use select last_insert_id() after LOAD DATA LOCAL INFILE, I get only one id - not the maximal id corresponding to the data, but the last added (as it is defined). I am connecting to the database from Python using mysql.connector:
cur.execute("select last_insert_id();")
print(cur.fetchall())
# gives
# [(<some_number>,)]
So, is there a way to retrieve all (or at least the minimal and maximal) ids that were assigned to the data imported using the LOAD DATA LOCAL INFILE ... statement as mentioned above?
If you need to remember the source of each record in the table, then you'd better store that information in a field.
I would add a new field (src) of type TINYINT to the table and store the ID of the source (1 for file_1, 2 for file_2, and so on). I assume there won't be more than 255 sources; otherwise use SMALLINT for its type.
Then, when you need to update the records imported from file_2 you have two options:
delete all the records having src = 2 then load the new records from file into the table; this is not quite an update, it is a replacement;
load the new records from file into a new table then copy from it the values you need to update the existing records.
Option #1
Deletion is an easy job:
DELETE FROM table_1 WHERE src = 2
Loading the new data and setting the value of src to 2 is also easy (it is explained in the documentation):
LOAD DATA INFILE 'file.txt'
INTO TABLE table_1
(column1, column2, column42) # Put all the columns names here
# in the same order the values appear in the file
SET src = 2 # Set values for other columns too
If there are columns in the file that you don't need then load their values into variables and simply ignore the variables. For example, if the third column from the file doesn't contain useful information you can use:
INTO TABLE table_1 (column1, column2, @unused, column42, ...)
A single variable (I called it @unused but it can have any name) can be used to load data from all the columns you want to ignore.
Option #2
The second option requires the creation of a working table but it's more flexible. It allows updating only some of the rows, based on usual WHERE conditions. However, it can be used only if the records can be identified using the values loaded from the file (with or without the src column).
The working table (let's name it table_w) has the columns you want to load from the file and is created in advance.
When it's the time to update the rows imported from file_2 you do something like this:
truncate the working table (just to be sure it doesn't contain any leftovers from a previous import);
load the data from file into the working table;
join the working table and table_1 and update the records of table_1 as needed;
truncate the working table (cleanup of the current import).
The code:
# 1
TRUNCATE table_w;
# 2
LOAD DATA INFILE 'file.txt'
INTO TABLE table_w
(column_1, column_2, column_42); # etc.
# 3
UPDATE table_1 t
INNER JOIN table_w w
ON t.column_1 = w.column_1
# AND t.src = 2 # only if column_1 is not enough
SET t.column_2 = w.column_2,
t.column_42 = w.column_42
# WHERE ... you can add extra conditions here, if needed
# 4
TRUNCATE TABLE table_w;

loading 1 field from CSV into multiple columns in Oracle

I am trying to insert 1 column from a CSV into 2 different Oracle columns, but it looks like SQL*Loader expects at least n fields in the CSV to load n columns in Oracle, and my CTL script does not work for loading n fields from the CSV into n+1 columns in Oracle, where I am trying to load one of the fields into 2 different Oracle columns. Please advise.
Sample data file is:
id,name,imei,flag
1,aaa,123456,Y
my oracle table has below column
create table samp (
id number,
name varchar2(10),
imei varchar2(10),
tac varchar2(3),
flag varchar2(1) )
I need to load the imei from the CSV into imei in the Oracle table, and substr(imei,1,3) into the tac Oracle column.
my Control file is:
OPTIONS (SKIP=1)
load data
infile 'xxx.csv'
badfile 'xxx.bad'
into table yyyy
fields terminated by ","
TRAILING NULLCOLS
( id,name,imei,tac "substr(:imei,1,3)", flag)
Error from the log file:
Record 1: Rejected - Error on table yyyy, column flag
Column not found before end of logical record (use TRAILING NULLCOLS)
OK, keep in mind the control file matches the input data by field in the order listed; THEN the name as defined is used to match to the table column.
The trick is to call the FIELD you need to use twice by something other than an actual column name, like imei_tmp, and define it as BOUNDFILLER, which means use it like a FILLER (don't load it) but remember it for future use. After the flag field there are no more data fields, so SQL*Loader will try to match the remaining columns using their names.
This is untested, but should get you started (the call to TRIM() may not be needed):
...
( id,
name,
imei_tmp BOUNDFILLER,
flag,
imei "trim(:imei_tmp)",
tac "substr(:imei_tmp,1,3)"
)
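Equally untested, but putting that together with the control file from the question, the complete file might look like this:

```sql
OPTIONS (SKIP=1)
load data
infile 'xxx.csv'
badfile 'xxx.bad'
into table yyyy
fields terminated by ","
TRAILING NULLCOLS
( id,
  name,
  imei_tmp BOUNDFILLER,
  flag,
  imei "trim(:imei_tmp)",
  tac  "substr(:imei_tmp,1,3)"
)
```

The first four entries match the four CSV fields in order; imei and tac come last so SQL*Loader fills them from the expressions rather than from the data.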

MySQL read parameter from file for select statement

I have a select query as follows:
select * from abc where id IN ($list);
The problem is the length of the variable $list: it may be 4000-5000 characters long, as a result of which the length of the actually executed query increases and it gets pretty slow.
Is there a method to store the values of $list in a file and ask MySQL to read them from that file, similar to LOAD DATA INFILE 'file_name' for insertion into a table?
Yes, you can (TM)!
First step: Use CREATE TEMPORARY TABLE temp (id ... PRIMARY KEY) and LOAD DATA INFILE ... to create and fill a temporary table holding your value list
Second step: Run SELECT abc.* FROM abc INNER JOIN temp ON abc.id = temp.id
I have the strong impression this only checks out as a win if you use the same value list quite a few times.
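A sketch of those two steps (the id type and one-value-per-line file format are assumptions):

```sql
-- Step 1: stage the value list from the file.
CREATE TEMPORARY TABLE temp (
    id INT PRIMARY KEY
);

LOAD DATA INFILE 'file_name'
INTO TABLE temp
LINES TERMINATED BY '\n'
(id);

-- Step 2: the join replaces the long IN ($list) clause.
SELECT abc.*
FROM abc
INNER JOIN temp ON abc.id = temp.id;
```

The PRIMARY KEY on temp.id lets the optimizer use an index lookup instead of scanning a huge literal list.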

MySQL LOAD DATA INFILE issue with updating + inserting

I am given a rather poorly structured table that has a primary key set to auto-increment and a UNIQUE key that is just unique. Conceptually, the UNIQUE key was supposed to be the primary key, but whoever made the table didn't have the UNIQUE key's column information at the time of the table's construction.
Now, we need to start doing regular updates to this table, where a provided text file contains updated rows and new rows. The challenge is to replace a row if there's a matching value in the UNIQUE key; we actually don't care about the primary key itself as long as it auto-increments.
However, the way LOAD DATA INFILE is structured, it'd reset the PKs we already have, which is bad - the reason we kept the PK is that it is a foreign key in another legacy table (sigh...).
So... is there a way I can make an elegant SQL-only update script that reads the updated table in text form and just updates based on the UNIQUE key column without screwing up the PK?
I guess a solution would be to export the table to tab form and do VLOOKUP to assign rows with the matching PK value (or NULL if it is a new row).
Any input?
Edit: Someone suggested that I do LOAD DATA INFILE into a temporary table and then do INSERT/UPDATE from there. Based on what this post and that post say, here's the script I propose:
# Create temporary table
CREATE TABLE tmp (
# my specifications
);
# Load into temporary table
LOAD DATA LOCAL INFILE '[my tab file]'
INTO TABLE tmp
FIELDS TERMINATED BY '\t' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
# Copy all the columns over except the PK column. This is for rows with existing UNIQUE values
UPDATE mytable
INNER JOIN tmp ON mytable.unique = tmp.unique
SET mytable.col1 = tmp.col1, ..., mytable.coln = tmp.coln;
# Now insert the rows with new UNIQUE values
INSERT IGNORE INTO mytable (mytable.col1, mytable.col2, ...)
SELECT tmp.col1, tmp.col2, ... FROM tmp;
# Delete the temporary table now.
DROP TABLE tmp;
Edit2: I updated the above query and tested it. It should work. Any opinions?
You can load the data into a new table using LOAD DATA INFILE, then use INSERT and UPDATE statements to change your table with data from the new table. In this case you can link the tables however you want - by primary/unique key or by any field(s).
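A sketch of that approach (pk, unique_col, col1, col2 are placeholder names): stage the file in a scratch table without the auto-increment PK, update existing rows through the UNIQUE key, then insert only the genuinely new ones.

```sql
-- Scratch table shaped like the target, minus the auto-increment PK.
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new DROP COLUMN pk;

LOAD DATA LOCAL INFILE 'updated.txt'
INTO TABLE mytable_new
FIELDS TERMINATED BY '\t';

-- Existing rows: match on the UNIQUE key; the PK is left untouched.
UPDATE mytable m
INNER JOIN mytable_new n ON m.unique_col = n.unique_col
SET m.col1 = n.col1, m.col2 = n.col2;

-- New rows: the PK auto-increments as usual.
INSERT INTO mytable (unique_col, col1, col2)
SELECT n.unique_col, n.col1, n.col2
FROM mytable_new n
LEFT JOIN mytable m ON m.unique_col = n.unique_col
WHERE m.unique_col IS NULL;

DROP TABLE mytable_new;
```

The LEFT JOIN ... IS NULL filter is what keeps existing PKs safe: only UNIQUE values absent from mytable are inserted.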