I have two databases named drupal and wordpress, and I am trying to migrate post image paths from Drupal 6 to WordPress.
The tables drupal.content_field_image and drupal.files contain the necessary data:
drupal.content_field_image holds field_image_fid / nid pairs, and drupal.files holds fid / filepath pairs (field_image_fid = fid).
I need a result set that contains both nid and filepath, so I join these tables:
SELECT *
FROM `content_field_image`
JOIN `files` ON content_field_image.field_image_fid = files.fid;
Now I need to insert data into the wordpress database so that:
meta_id = 34 + n (n is increment)
post_id = nid from joined table
meta_key = fifu_image_url
meta_value = filepath from joined table
So I have some questions:
How to make insert from joined table?
How to make while-like loop to insert every entry from joined table?
How to make n increment by 1 after every insert?
How to make insert from joined table?
Use an INSERT INTO ... SELECT construct.
How to make n increment by 1 after every insert?
Declare column n as an AUTO_INCREMENT column. Otherwise, you will have to generate it yourself, for instance if the table concerned already has an AUTO_INCREMENT column in place.
How to make while-like loop to insert every entry from joined table?
You don't need that at all. The INSERT ... SELECT construct will insert all the fetched rows into the table you refer to.
Your insert with select could be something like this:
insert into wordpress.table (meta_id, post_id, meta_key, meta_value)
SELECT (@i := @i + 1), nid, 'fifu_image_url', filepath
FROM drupal.content_field_image
JOIN drupal.files ON content_field_image.field_image_fid = files.fid
JOIN (SELECT @i := 34) inc ON TRUE;
To make it work, the SELECT must return the same number of columns, in the same order, as the column list of the table you are inserting into.
The incrementing integer is produced by the user variable @i, initialized to 34 as you wanted and incremented by 1 for each row as the results are selected.
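If the target is the standard WordPress postmeta table (wp_postmeta is an assumption here; the statement above uses a placeholder name), its meta_id column is already AUTO_INCREMENT, so you could also skip the variable and let MySQL number the rows itself (the IDs then continue from the table's current AUTO_INCREMENT value rather than from 35):
insert into wordpress.wp_postmeta (post_id, meta_key, meta_value)
SELECT nid, 'fifu_image_url', filepath
FROM drupal.content_field_image
JOIN drupal.files ON content_field_image.field_image_fid = files.fid;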
I wrote a query to display certain records, but it is displaying extra data. For instance, I have only 239 records in my database, but the query displays 356 records. Can anyone advise me on what I did wrong? I would really appreciate it. Here is the query:
SELECT DISTINCT
t.branchid,
t.occupancyid,
t.wardnumber,
t.bednumber,
t.admissiondate,
ti.patientname
FROM
bedoccupancydetail t
JOIN
consultationheader ti ON t.occupancyid = ti.occupancyid
WHERE
t.checkedout = '0'
There might not be any problem with your query; this is simply how MySQL (or any RDBMS) behaves. In your case the two tables bedoccupancydetail and consultationheader are joined on occupancyid, and it seems this column is not unique and contains duplicate values, so for each matching (duplicate) record the join adds an extra row to the result.
Let's look at the example below, which I ran at https://www.tutorialspoint.com/execute_sql_online.php:
BEGIN TRANSACTION;
CREATE TABLE NAMES(Id integer PRIMARY KEY, Name text);
INSERT INTO NAMES VALUES(1,'Tom');
INSERT INTO NAMES VALUES(2,'Lucy');
INSERT INTO NAMES VALUES(3,'TOM');
INSERT INTO NAMES VALUES(4,'TOM');
CREATE TABLE ABC(Id integer PRIMARY KEY, Name text, Another text);
INSERT INTO ABC VALUES(1,'Tom', 'A');
INSERT INTO ABC VALUES(2,'Lucy', 'B');
INSERT INTO ABC VALUES(3,'TOM', 'C');
INSERT INTO ABC VALUES(4,'TOM', 'D');
COMMIT;
/* Display all the records from the table */
SELECT ABC.Name, NAMES.Name, ABC.Another
FROM NAMES
JOIN ABC on ABC.Name = NAMES.Name;
As you can see, each table has 4 rows, but the result has 6 rows:
$sqlite3 database.sdb < main.sql
Tom|Tom|A
Lucy|Lucy|B
TOM|TOM|C
TOM|TOM|D
TOM|TOM|C
TOM|TOM|D
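If you only need one row per occupancy, one possible fix (a sketch; it assumes a single patientname per occupancyid is enough and keeps the column names from your query) is to collapse consultationheader before joining:
SELECT t.branchid,
       t.occupancyid,
       t.wardnumber,
       t.bednumber,
       t.admissiondate,
       ti.patientname
FROM bedoccupancydetail t
JOIN (
    -- one row per occupancyid; MIN() just picks a representative name
    SELECT occupancyid, MIN(patientname) AS patientname
    FROM consultationheader
    GROUP BY occupancyid
) ti ON t.occupancyid = ti.occupancyid
WHERE t.checkedout = '0';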
I'd like to
UPDATE table SET column = 1 where column = 0;
INSERT (rows I just updated) INTO history_table;
Can I somehow store the ids from a select query, and then use those to UPDATE and subsequently INSERT rows matching those ids into the history table?
INSERT INTO history_table(id)
(SELECT id from table WHERE column = 0);
UPDATE table SET column = 1 where column = 0;
This way you insert into history_table only the IDs that are about to be updated, and then you can update them to the correct values.
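If other sessions can modify the table between the two statements, you may also want to wrap them in a transaction so the history snapshot and the update cover the same rows. A minimal sketch, assuming an InnoDB table and keeping the question's placeholder names table and column (backticked because they are reserved words):
START TRANSACTION;
-- snapshot the rows that are about to change
INSERT INTO history_table (id)
SELECT id FROM `table` WHERE `column` = 0;
-- then flip them
UPDATE `table` SET `column` = 1 WHERE `column` = 0;
COMMIT;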
(I can't comment yet.) Is there a specific reason to do it in one query?
If not, you might use a temporary table to store the ids and then reference them from your UPDATE and INSERT via a subquery.
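A sketch of that temporary-table variant, again with the question's placeholder names (ids_to_update is just an illustrative name):
-- collect the ids once
CREATE TEMPORARY TABLE ids_to_update AS
SELECT id FROM `table` WHERE `column` = 0;
-- copy them into the history table
INSERT INTO history_table (id)
SELECT id FROM ids_to_update;
-- update exactly those rows
UPDATE `table`
SET `column` = 1
WHERE id IN (SELECT id FROM ids_to_update);
DROP TEMPORARY TABLE ids_to_update;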
Suppose I have a table with one column, called 'person', that contains a list of names. I want to find a specific person based on his index.
I tried using a SQL variable to track each row index, but the issue is that if I have a table of 5 records this will always output the 5th record.
SET @row_num = 0; SELECT @row_num := @row_num + 1 as row1, person FROM table;
SELECT row1 from table WHERE person = 'name'
I would recommend changing your database to add a second column for row_id. This is a fairly common practice. Then you can just use
SELECT * from table WHERE row_id = 3;
This will return the third row.
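If changing the schema isn't an option, MySQL can also pick the n-th row directly with ORDER BY plus LIMIT (a sketch; the table and column names are the question's placeholders, the ordering column is an assumption, and `table` is backticked because it is a reserved word):
SELECT person
FROM `table`
ORDER BY person      -- rows have no inherent order, so pick an explicit one
LIMIT 2, 1;          -- skip 2 rows, return the 3rd
On MySQL 8.0+, ROW_NUMBER() OVER (ORDER BY ...) is another way to attach a stable index to each row.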
Another possible way would be by means of a TEMPORARY TABLE, as explained below.
Create the temporary table (the AUTO_INCREMENT column must be a key, so it is declared as the primary key):
create temporary table temptab (ID INT NOT NULL AUTO_INCREMENT PRIMARY KEY, Person VARCHAR(30));
Then insert the data into the temporary table:
insert into temptab (Person) select Person from mytable;
Then select the person at a specific index from the temporary table, for example:
select Person from temptab where ID = 5;
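One caveat for the temporary-table approach: without an ORDER BY, the AUTO_INCREMENT IDs follow whatever order MySQL happens to return the rows in, so if the index needs to be deterministic, order the insert explicitly:
insert into temptab (Person) select Person from mytable order by Person;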
I am trying to insert records into a MySQL database from MS SQL Server using OPENQUERY, but I want to ignore the duplicate-key errors: when the query runs into a duplicate, it should skip it and keep going.
What can I do to ignore the duplicates?
Here is what I am doing:
Pulling records from MySQL using OPENQUERY; this gives me the MySQL identifier A.record_id.
Joining those records to records in MS SQL Server (on specific criteria, not a direct id); from this join I find a related record identifier B.new_id in SQL Server.
I want to insert the found results into a new table in MySQL as A.record_id, B.new_id. In the new table, A.record_id is the primary key.
The problem is that when joining table A to table B, I sometimes find 2+ records in table B matching my criteria, which makes A.record_id appear 2+ times in my data set before the insert, and that causes the problem. Note: I can use an aggregate function to eliminate the records.
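For context, the overall shape of such an insert through a linked server usually looks something like the sketch below; MYSQL_LINK, new_table, remote_source, LocalTable and the join condition are all hypothetical placeholders, since the real names and criteria are not shown in the question:
-- all identifiers below are placeholders, not the real schema
INSERT INTO OPENQUERY(MYSQL_LINK, 'SELECT record_id, new_id FROM new_table')
SELECT a.record_id, b.new_id
FROM OPENQUERY(MYSQL_LINK, 'SELECT record_id, match_col FROM remote_source') AS a
JOIN dbo.LocalTable AS b
    ON a.match_col = b.match_col;   -- "specific criteria", not a direct id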
I don't think there is a specific option. But it is easy enough to do:
insert into oldtable(. . .)
select . . .
from newtable
where not exists (select 1 from oldtable where oldtable.id = newtable.id)
If there is more than one set of unique keys, you can add additional not exists conditions.
EDIT:
For the revised problem:
insert into oldtable(. . .)
select . . .
from (select nt.*, row_number() over (partition by id order by (select null)) as seqnum
from newtable nt
) nt
where seqnum = 1 and
not exists (select 1 from oldtable where oldtable.id = nt.id);
The row_number() function assigns a sequential number to each row within a group of rows. The group is defined by the partition by clause. The numbers start at 1 and increment from there. The order by clause says that you don't care about the order. Exactly one row with each id will have a value of 1; duplicate rows will have a value larger than 1. The condition seqnum = 1 then chooses exactly one row per id.
If you are on SQL Server 2008+, you can use MERGE to do an INSERT if row does not exist, or an UPDATE.
Example:
MERGE
INTO dataValue dv
USING tmp_holding_DataValue t
ON t.dateStamp = dv.dateStamp
AND t.itemId = dv.itemId
WHEN NOT MATCHED THEN
INSERT (dateStamp, itemId, value)
VALUES (t.dateStamp, t.itemId, t.value);
First we start with an empty table:
rows = 0
Second, we insert a random number of rows, let's say 3400:
rows = 3400
The third time, I count how many rows are in the table, then insert the new rows, and after that delete the rows whose id is <= that count.
This logic only works the first time. If it is repeated, the count will still be 3400, but the ids keep increasing, so the delete will not remove the rows.
I can't use the last inserted ID since the rows are random and I don't know how many it will load.
Update:
"SELECT count(*) from table" - the total count so far
"INSERT INTO tab_videos_watched (id , name) values (id , name)" - this is random can be 3400 or 5060 or 1200
"DELETE FROM table WHERE idtable <= $table_count"
If id is auto-incremented, then you can do something like:
select max(id) from my_table;
Read this maxId value into a variable and then use it when issuing the delete query:
delete from my_table where id <= ?;
Replace the query parameter with the maxId value you just read.
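Putting that together as a sketch (keeping this answer's placeholder names my_table and id):
-- remember the highest id that existed before the new batch
SET @max_id = (SELECT COALESCE(MAX(id), 0) FROM my_table);
-- ... run the random-sized batch of inserts here ...
-- remove everything that was present before the batch
DELETE FROM my_table WHERE id <= @max_id;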
Alternatively, you can define a column last_inserted of DATETIME type.
Before the next insertions, select its maximum into a local variable:
select max(last_inserted) as 'last_inserted' from my_table;
And after the insertions are made, use that last_inserted value to delete the old records:
delete from my_table where last_inserted <= ?;
Replace the query parameter with the last_inserted value you read before the insert.
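A matching sketch for the datetime variant, assuming last_inserted is filled on every insert (for example via DEFAULT CURRENT_TIMESTAMP, which is an assumption about the schema):
SET @cutoff = (SELECT COALESCE(MAX(last_inserted), '1970-01-01') FROM my_table);
-- ... run the inserts here; each new row gets a fresh last_inserted ...
DELETE FROM my_table WHERE last_inserted <= @cutoff;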