MySQL Insert from 2 source tables to one destination table

I am having issues inserting Id fields from two tables into a single record in a third table.
mysql> describe ing_titles;
+----------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------------+------+-----+---------+----------------+
| ID_Title | int(10) unsigned | NO | PRI | NULL | auto_increment |
| title | varchar(64) | NO | UNI | NULL | |
+----------+------------------+------+-----+---------+----------------+
2 rows in set (0.22 sec)
mysql> describe ing_categories;
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| ID_Category | int(10) unsigned | NO | PRI | NULL | auto_increment |
| category | varchar(64) | NO | UNI | NULL | |
+-------------+------------------+------+-----+---------+----------------+
2 rows in set (0.02 sec)
mysql> describe ing_title_categories;
+-------------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+------------------+------+-----+---------+----------------+
| ID_Title_Category | int(10) unsigned | NO | PRI | NULL | auto_increment |
| ID_Title | int(10) unsigned | NO | MUL | NULL | |
| ID_Category | int(10) unsigned | NO | MUL | NULL | |
+-------------------+------------------+------+-----+---------+----------------+
3 rows in set (0.04 sec)
Let's say the data from the tables is:
mysql> select * from ing_titles;
+----------+-------------------+
| ID_Title | title |
+----------+-------------------+
| 3 | Chicken |
| 2 | corn |
| 1 | Fettucini Alfredo |
+----------+-------------------+
3 rows in set (0.00 sec)
mysql> select * from ing_categories;
+-------------+----------+
| ID_Category | category |
+-------------+----------+
| 1 | Dinner |
| 3 | Meat |
| 2 | Veggie |
+-------------+----------+
3 rows in set (0.00 sec)
I want to insert into ing_title_categories the record for "corn, Veggie", i.e. the row where ID_Title = 2 and ID_Category = 2.
Here's what I tried:
INSERT INTO ing_title_categories (ID_Title, ID_Category)
SELECT ing_titles.ID_Title, ing_categories.ID_Category
FROM ing_title_categories
LEFT JOIN ing_titles ON ing_title_categories.ID_Title=ing_titles.ID_Title
LEFT JOIN ing_categories ON ing_title_categories.ID_Category=ing_categories.ID_Category
WHERE (ing_titles.ID_Title=2) AND (ing_categories.ID_Category = 2);
There is no data inserted into the table ing_title_categories, and here is the reply from MySQL:
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0
What is the correct syntax for inserting the ing_titles.ID_Title and ing_categories.ID_Category into the table ing_title_categories?
Please, no PHP or Python examples. Use SQL that I can copy and paste into the MySQL prompt. I will be adding this to a C++ program, not PHP, JavaScript or Python.
Edit 1:
The ing_title_categories.ID_Title and ing_title_categories.ID_Category are foreign keys into the other tables.

INSERT INTO
ing_title_categories (ID_Title, ID_Category)
SELECT
ing_titles.ID_Title, ing_categories.ID_Category
FROM
ing_titles, ing_categories
WHERE
ing_titles.ID_Title = ing_categories.ID_Category AND
ing_titles.ID_Title = 2 AND ing_categories.ID_Category = 2;
SQL Fiddle demo

After taking advice from @DrewPierce and @KaiserM11, here is the MySQL sequence:
mysql> INSERT INTO ing_title_categories (ID_Title, ID_Category)
-> SELECT
-> ing_titles.ID_Title,
-> ing_categories.ID_Category
-> FROM ing_titles, ing_categories
-> where (ing_titles.ID_Title = 2) AND (ing_categories.ID_Category = 2)
-> ;
Query OK, 1 row affected (0.07 sec)
Records: 1 Duplicates: 0 Warnings: 0
mysql> select * from ing_title_categories;
+-------------------+----------+-------------+
| ID_Title_Category | ID_Title | ID_Category |
+-------------------+----------+-------------+
| 17 | 2 | 2 |
+-------------------+----------+-------------+
1 row in set (0.00 sec)
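For reference, here is a variation (just a sketch) that looks the ids up by name and skips the insert when the pair already exists; the aliases t, c, and tc are not part of the schema, only shorthand:
INSERT INTO ing_title_categories (ID_Title, ID_Category)
SELECT t.ID_Title, c.ID_Category
FROM ing_titles t
CROSS JOIN ing_categories c
WHERE t.title = 'corn'
  AND c.category = 'Veggie'
  AND NOT EXISTS (SELECT 1
                  FROM ing_title_categories tc
                  WHERE tc.ID_Title = t.ID_Title
                    AND tc.ID_Category = c.ID_Category);
The NOT EXISTS guard keeps the linking table free of duplicate title/category pairs without requiring a unique key on (ID_Title, ID_Category).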

In this case, the only possible way I see is using a UNION query like
INSERT INTO ing_title_categories (ID_Title, ID_Category)
SELECT ID_Title, NULL
FROM ing_titles WHERE ID_Title = 2
UNION
SELECT NULL, ID_Category
FROM ing_categories
WHERE ID_Category = 2;
Or you can change your table design and use an AFTER INSERT trigger to perform the same in one go.
EDIT:
If you can change your table design to something like the one below (no need for that extra linking table):
CREATE TABLE ing_titles (
  ID_Title INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(64) NOT NULL
);
CREATE TABLE ing_categories (
  ID_Category INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  category VARCHAR(64) NOT NULL,
  ing_titles_ID_Title INT NOT NULL,
  FOREIGN KEY (ing_titles_ID_Title) REFERENCES ing_titles (ID_Title)
);
Then you can use an AFTER INSERT trigger and do the insertion like
DELIMITER //
CREATE TRIGGER ing_titles_after_insert
AFTER INSERT
ON ing_titles FOR EACH ROW
BEGIN
-- Insert record into ing_categories table
INSERT INTO ing_categories
( category,
ing_titles_ID_Title)
VALUES
('Meat', NEW.ID_Title);
END; //
DELIMITER ;
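Hypothetical usage of that trigger with the redesigned tables (note the category value is hard-coded to 'Meat' in the trigger body above):
INSERT INTO ing_titles (title) VALUES ('Pork');
-- the trigger should now have created a matching 'Meat' category row:
SELECT c.* FROM ing_categories c
JOIN ing_titles t ON t.ID_Title = c.ing_titles_ID_Title
WHERE t.title = 'Pork';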

Why does MySQL keep changing my VARCHAR to a TEXT?

I'm experimenting with improving the performance of a certain table in my company's database. This table has 6.9 million rows with the format:
mysql> show fields from BroadcastLog;
+---------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| broadcast_id | int(10) unsigned | YES | MUL | NULL | |
| author_id | int(10) unsigned | YES | MUL | NULL | |
| type | int(11) | NO | MUL | NULL | |
| origin | int(11) | YES | MUL | NULL | |
| date_created | datetime | NO | MUL | NULL | |
| date_modified | datetime | NO | | NULL | |
| old_status | int(10) unsigned | YES | MUL | NULL | |
| new_status | int(10) unsigned | YES | MUL | NULL | |
| json_data | text | YES | | NULL | |
| log_text | text | YES | | NULL | |
+---------------+------------------+------+-----+---------+----------------+
11 rows in set (0.01 sec)
One of the first places I wanted to look was changing the two TEXT fields to VARCHAR fields, which I understand to generally be more efficient. So I gave it a try:
mysql> alter table BroadcastLog modify log_text varchar(2048);
Query OK, 0 rows affected, 1 warning (1 min 13.08 sec)
Records: 0 Duplicates: 0 Warnings: 1
mysql> show warnings;
+-------+------+---------------------------------------------------+
| Level | Code | Message |
+-------+------+---------------------------------------------------+
| Note | 1246 | Converting column 'log_text' from VARCHAR to TEXT |
+-------+------+---------------------------------------------------+
1 row in set (0.01 sec)
It didn't convert!
I tried to get clever. Let's create a new (temporary) column, copy the data, drop the old column, then rename the new one:
mysql> alter table BroadcastLog add column log_text_vc varchar(2048);
Query OK, 0 rows affected, 1 warning (1 min 13.08 sec)
Records: 0 Duplicates: 0 Warnings: 1
mysql> show warnings;
+-------+------+---------------------------------------------------+
| Level | Code | Message |
+-------+------+---------------------------------------------------+
| Note | 1246 | Converting column 'log_text' from VARCHAR to TEXT |
+-------+------+---------------------------------------------------+
1 row in set (0.01 sec)
Couldn't even create a new column!
I tried to get clever-er. Create a new table, copy the data, drop the old columns, copy the data back:
mysql> create table tmp (id INT UNSIGNED PRIMARY KEY, json_data VARCHAR(1024), log_text VARCHAR(2048));
Query OK, 0 rows affected (0.04 sec)
mysql> insert into tmp (id, json_data, log_text) select id, json_data, log_text from BroadcastLog;
Query OK, 6939076 rows affected (5 min 28.12 sec)
Records: 6939076 Duplicates: 0 Warnings: 0
mysql> alter table BroadcastLog drop column json_data;
Query OK, 0 rows affected (1 min 12.36 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> alter table BroadcastLog drop column log_text;
Query OK, 0 rows affected (1 min 9.10 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> alter table BroadcastLog add column json_data varchar(1024);
Query OK, 0 rows affected (1 min 11.52 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> alter table BroadcastLog add column log_text varchar(2048);
Query OK, 0 rows affected (1 min 15.41 sec)
Records: 0 Duplicates: 0 Warnings: 1
mysql> show warnings;
+-------+------+---------------------------------------------------+
| Level | Code | Message |
+-------+------+---------------------------------------------------+
| Note | 1246 | Converting column 'log_text' from VARCHAR to TEXT |
+-------+------+---------------------------------------------------+
1 row in set (0.01 sec)
mysql> show fields from BroadcastLog;
+---------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+------------------+------+-----+---------+----------------+
| id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| broadcast_id | int(10) unsigned | YES | MUL | NULL | |
| author_id | int(10) unsigned | YES | MUL | NULL | |
| type | int(11) | NO | MUL | NULL | |
| origin | int(11) | YES | MUL | NULL | |
| date_created | datetime | NO | MUL | NULL | |
| date_modified | datetime | NO | | NULL | |
| old_status | int(10) unsigned | YES | MUL | NULL | |
| new_status | int(10) unsigned | YES | MUL | NULL | |
| json_data | varchar(1024) | YES | | NULL | |
| log_text | mediumtext | YES | | NULL | |
+---------------+------------------+------+-----+---------+----------------+
11 rows in set (0.01 sec)
So one field was created properly, but the other was still converted to TEXT, despite the column being completely empty, with no data in it.
I've been Googling around to try and find an answer, but thus far I've turned up nothing.
Create Table Statement
Per the comments, here's the create table statement (after my above changes, so the log_text column and json_data column on my local database may not match the original data I pulled from our production database this morning):
mysql> show create table BroadcastLog\G
*************************** 1. row ***************************
Table: BroadcastLog
Create Table: CREATE TABLE `BroadcastLog` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`broadcast_id` int(10) unsigned DEFAULT NULL,
`author_id` int(10) unsigned DEFAULT NULL,
`type` int(11) NOT NULL,
`origin` int(11) DEFAULT NULL,
`date_created` datetime NOT NULL,
`date_modified` datetime NOT NULL,
`old_status` int(10) unsigned DEFAULT NULL,
`new_status` int(10) unsigned DEFAULT NULL,
`log_text` mediumtext,
PRIMARY KEY (`id`),
KEY `old_status` (`old_status`),
KEY `new_status` (`new_status`),
KEY `broadcast_id` (`broadcast_id`),
KEY `author_id` (`author_id`),
KEY `log_type_and_origin` (`type`,`origin`),
KEY `log_origin` (`origin`),
KEY `bl_date_created` (`date_created`),
CONSTRAINT `fk_BroadcastLog_author_id` FOREIGN KEY (`author_id`) REFERENCES `User` (`id`) ON DELETE SET NULL ON UPDATE CASCADE,
CONSTRAINT `fk_BroadcastLog_broadcast_id` FOREIGN KEY (`broadcast_id`) REFERENCES `Broadcast` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=6941898 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
MySQL Version
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.31 |
+-----------+
1 row in set (0.01 sec)
Updated
I updated MySQL and got the same results:
mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.23 |
+-----------+
1 row in set (0.00 sec)
mysql> show fields from BroadcastLog;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int unsigned | NO | PRI | NULL | auto_increment |
| broadcast_id | int unsigned | YES | MUL | NULL | |
| author_id | int unsigned | YES | MUL | NULL | |
| type | int | NO | MUL | NULL | |
| origin | int | YES | MUL | NULL | |
| date_created | datetime | NO | | NULL | |
| date_modified | datetime | NO | | NULL | |
| log_text | text | NO | | NULL | |
| json_data | text | YES | | NULL | |
| old_status | int unsigned | YES | MUL | NULL | |
| new_status | int unsigned | YES | MUL | NULL | |
+---------------+--------------+------+-----+---------+----------------+
11 rows in set (0.00 sec)
mysql> alter table BroadcastLog modify log_text varchar(2048);
Query OK, 6939076 rows affected, 1 warning (3 min 22.64 sec)
Records: 6939076 Duplicates: 0 Warnings: 1
mysql> show warnings;
+-------+------+---------------------------------------------------+
| Level | Code | Message |
+-------+------+---------------------------------------------------+
| Note | 1246 | Converting column 'log_text' from VARCHAR to TEXT |
+-------+------+---------------------------------------------------+
1 row in set (0.01 sec)
mysql> show fields from BroadcastLog;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int unsigned | NO | PRI | NULL | auto_increment |
| broadcast_id | int unsigned | YES | MUL | NULL | |
| author_id | int unsigned | YES | MUL | NULL | |
| type | int | NO | MUL | NULL | |
| origin | int | YES | MUL | NULL | |
| date_created | datetime | NO | | NULL | |
| date_modified | datetime | NO | | NULL | |
| log_text | mediumtext | YES | | NULL | |
| json_data | text | YES | | NULL | |
| old_status | int unsigned | YES | MUL | NULL | |
| new_status | int unsigned | YES | MUL | NULL | |
+---------------+--------------+------+-----+---------+----------------+
11 rows in set (0.02 sec)
I will note one difference I saw in the output, which could have an alternate explanation: it now says "6939076 rows affected" instead of "0 rows affected". However, I had spent a couple of hours trying to make sense of this behavior, so I had already run multiple ALTER TABLE statements before I even started this SO thread. It's possible you only get rows affected the first time you try to change the column and I just missed it. It's also possible MySQL 8 just uses a different metric for "affected rows" and produces different output.
Either way, it is still not converting to VARCHAR for some reason.
After some further experimentation and research, I figured out why MySQL wasn't allowing me to convert these field types. The issue for my specific case of only trying to make a VARCHAR(2048) is caused by a poorly configured MySQL instance, but the issue in general could apply to anyone trying to make a VARCHAR(16378) or larger with default configuration.
The main difference between VARCHAR and TEXT is that VARCHAR is stored in the same file on disk as all of the other data in the table, whereas TEXT is stored in a separate file elsewhere on disk. This means that when you read a page from disk you are implicitly reading the data for VARCHAR fields, but the data for TEXT fields is only read explicitly if you request those fields (e.g. via SELECT * or naming the field in your SELECT statement). This is the source of the performance improvement of VARCHAR over TEXT.
Because VARCHAR is stored in the table file on disk, a field can only be set to a VARCHAR if the field would fit in the page. While MySQL's documentation claims that VARCHAR fields have a maximum length of 65535, it does not require pages to be 65535 bytes long. In fact, that's not even the default:
https://dev.mysql.com/doc/refman/8.0/en/innodb-init-startup-configuration.html
A minimum file size is enforced for the first system tablespace data file to ensure that there is enough space for doublewrite buffer pages. The following table shows minimum file sizes for each InnoDB page size. The default InnoDB page size is 16384 (16KB). [emphasis mine]
Without modifying your page size, you can't make a VARCHAR field larger than 16384 bytes (actually, you can't make a field larger than 16377, because of how VARCHAR is stored on disk: 4 bytes for a pointer, 2 bytes for a length, and a 1-byte boolean to declare whether or not it's null, with the actual data stored in the "variable length" portion of the page).
My problem came from the fact that we have much smaller page sizes configured.
Conclusion
If you ever try to make a VARCHAR field and MySQL automatically converts it to a TEXT field, check your InnoDB page size:
SHOW VARIABLES LIKE "%page_size%";
You're probably trying to make a field that won't fit on the page.
Unfortunately, changing your page size isn't straightforward. You have to create a whole new database and copy the data over.
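To confirm why this can't be fixed in place (a sketch): innodb_page_size is read-only at runtime and can only be chosen when the data directory is initialized.
SHOW VARIABLES LIKE 'innodb_page_size';
SET GLOBAL innodb_page_size = 16384;
-- expected: ERROR 1238 (HY000): Variable 'innodb_page_size' is a read-only variable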
Smaller VARCHARs have some obscure advantages over TEXT and especially TINYTEXT.
But, since you want 2K characters, there is no advantage of VARCHAR(2048) over TEXT except for one thing: complaining if you try to insert more than 2048 characters.
DESCRIBE seems to say that log_text is TEXT. But the SHOW CREATE TABLE (which I trust more) says it is MEDIUMTEXT (with a limit of 16MBytes).
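To illustrate that one advantage, here is a sketch on a throwaway table (it assumes strict SQL mode, which is the default in 5.7 and 8.0):
CREATE TABLE length_demo (log_text VARCHAR(2048));
INSERT INTO length_demo VALUES (REPEAT('x', 3000));
-- expected: ERROR 1406 (22001): Data too long for column 'log_text' at row 1
-- a TEXT or MEDIUMTEXT column would have accepted the 3000-character value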

Auto-number rows with a MySQL query

Is there any MySQL query to update a table and set numbers starting from 1?
For example, the table "item" has 100000 rows; the query would just update the first row and set id = 1, the next to 2, 3, 4, etc.
Try:
ALTER TABLE item MODIFY id INT NOT NULL AUTO_INCREMENT PRIMARY KEY;
if you have an id column in your item table.
But it will raise an error if the id column contains duplicate values.
Add an AUTO_INCREMENT column (INT) and it should do what you want.
Is there already a primary key in the table? If not, then create one with AUTO_INCREMENT.
This would get the job done.
ALTER TABLE `your_table`
ADD COLUMN `id_primary` int NOT NULL AUTO_INCREMENT FIRST ,
ADD PRIMARY KEY (`id_primary`);
Note: Changing an existing column to primary key AUTO_INCREMENT would raise error if that column contains duplicate values.
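A quick way to check for such duplicates up front (a sketch, using the item table from the question):
SELECT id, COUNT(*) AS cnt
FROM item
GROUP BY id
HAVING cnt > 1;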
Given this
MariaDB [sandbox]> select * from posts;
+------+--------+----------+
| id | userid | category |
+------+--------+----------+
| NULL | 1 | a |
| NULL | 1 | b |
| NULL | 1 | c |
| NULL | 2 | a |
| NULL | 1 | a |
+------+--------+----------+
5 rows in set (0.00 sec)
This code
use sandbox;
update posts p, (select @rn := 0) rn
set id = (@rn := @rn + 1)
where 1 = 1;
Results in
MariaDB [sandbox]> select * from posts;
+------+--------+----------+
| id | userid | category |
+------+--------+----------+
| 1 | 1 | a |
| 2 | 1 | b |
| 3 | 1 | c |
| 4 | 2 | a |
| 5 | 1 | a |
+------+--------+----------+
5 rows in set (0.00 sec)
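A variation of the same trick (only a sketch; the userid, category ordering is an assumption) that makes the numbering deterministic by adding an explicit ORDER BY. Assigning user variables inside expressions still works but is deprecated in MySQL 8.0:
SET @rn := 0;
UPDATE posts
SET id = (@rn := @rn + 1)
ORDER BY userid, category;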

MySQL INSERT shows more affected rows than affected

Consider the following three queries.
The first one can only return a single row as bids_buy.id is the primary key.
The second one shows an existing record in entities_has_documents for primary key 3099541982-2536988132, and the third one doesn't execute due to that existing record.
The fourth one does execute as expected, but shows two affected rows.
Why does it show two affected rows, and not just one associated with primary key 3099541982-2536988132?
mysql> SELECT bb.bids_sell_id, 2536988132,"pub_bids", NOW(),506836355 FROM bids_buy bb WHERE bb.id=2453409798;
+--------------+------------+----------+---------------------+-----------+
| bids_sell_id | 2536988132 | pub_bids | NOW() | 506836355 |
+--------------+------------+----------+---------------------+-----------+
| 3099541982 | 2536988132 | pub_bids | 2016-04-16 08:19:13 | 506836355 |
+--------------+------------+----------+---------------------+-----------+
1 row in set (0.00 sec)
mysql> SELECT * FROM entities_has_documents;
+-------------+--------------+----------+---------------------+--------------+-----------+------------+-----------+-------------+
| entities_id | documents_id | type | date_added | date_removed | added_by | removed_by | purged_by | date_purged |
+-------------+--------------+----------+---------------------+--------------+-----------+------------+-----------+-------------+
| 2453409798 | 2536988132 | pub_bids | 2016-04-16 08:07:13 | NULL | 506836355 | NULL | NULL | NULL |
| 3099541982 | 2536988132 | pub_bids | 2016-04-16 08:18:53 | NULL | 506836355 | NULL | NULL | NULL |
+-------------+--------------+----------+---------------------+--------------+-----------+------------+-----------+-------------+
2 rows in set (0.00 sec)
mysql> INSERT INTO entities_has_documents(entities_id,documents_id,type,date_added,added_by)
-> SELECT bb.bids_sell_id, 2536988132,"pub_bids", NOW(),506836355 FROM bids_buy bb WHERE bb.id=2453409798;
ERROR 1062 (23000): Duplicate entry '3099541982-2536988132' for key 'PRIMARY'
mysql> INSERT INTO entities_has_documents(entities_id,documents_id,type,date_added,added_by)
-> SELECT bb.bids_sell_id, 2536988132,"pub_bids", NOW(),506836355 FROM bids_buy bb WHERE bb.id=2453409798
-> ON DUPLICATE KEY UPDATE type="pub_bids", added_by=506836355, date_added=NOW(), removed_by=NULL, date_removed=NULL;
Query OK, 2 rows affected (0.00 sec)
Records: 1 Duplicates: 1 Warnings: 0
mysql> EXPLAIN bids_buy;
+--------------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+------------------+------+-----+---------+-------+
| id | int(10) unsigned | NO | PRI | NULL | |
| bids_sell_id | int(10) unsigned | NO | MUL | NULL | |
| stage_buy_id | int(10) unsigned | NO | MUL | NULL | |
+--------------+------------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
mysql> EXPLAIN entities_has_documents;
+--------------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+------------------+------+-----+---------+-------+
| entities_id | int(10) unsigned | NO | PRI | NULL | |
| documents_id | int(10) unsigned | NO | PRI | NULL | |
| type | varchar(16) | NO | MUL | NULL | |
| date_added | datetime | NO | | NULL | |
| date_removed | datetime | YES | | NULL | |
| added_by | int(10) unsigned | NO | MUL | NULL | |
| removed_by | int(10) unsigned | YES | MUL | NULL | |
| purged_by | int(10) unsigned | YES | MUL | NULL | |
| date_purged | datetime | YES | | NULL | |
+--------------+------------------+------+-----+---------+-------+
9 rows in set (0.01 sec)
EDIT
Per http://php.net/manual/en/pdostatement.rowcount.php
If the last SQL statement executed by the associated PDOStatement was
a SELECT statement, some databases may return the number of rows
returned by that statement. However, this behaviour is not guaranteed
for all databases and should not be relied on for portable
applications.
So, am I just seeing the number of rows returned by my SELECT statement, and not the number of rows affected by my INSERT? Why would MySQL do such a thing?
EDIT DONE
I think it is due to the ON DUPLICATE KEY UPDATE modifier, as both the MySQL 5.5 Reference Manual and the MySQL 5.7 Reference Manual say:
If you specify ON DUPLICATE KEY UPDATE, and a row is inserted that would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed. The affected-rows value per row is 1 if the row is inserted as a new row, 2 if an existing row is updated, and 0 if an existing row is set to its current values. If you specify the CLIENT_FOUND_ROWS flag to mysql_real_connect() when connecting to mysqld, the affected-rows value is 1 (not 0) if an existing row is set to its current values.
In your case you already had a row with primary key value 3099541982-2536988132, so MySQL lets you know that you tried to insert a row with a duplicate primary (or unique) key by reporting 2 affected rows. As the manual says, with ON DUPLICATE KEY UPDATE a duplicate key leads to an UPDATE of the existing row instead of a plain INSERT, whereas only the INSERT is executed when the key is not present.
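A hypothetical demonstration of those affected-rows values on a throwaway table (the counts in the comments are the values the manual describes):
CREATE TABLE odku_demo (k INT PRIMARY KEY, v INT);
INSERT INTO odku_demo VALUES (1, 10) ON DUPLICATE KEY UPDATE v = 10;
-- expected: Query OK, 1 row affected (new row inserted)
INSERT INTO odku_demo VALUES (1, 20) ON DUPLICATE KEY UPDATE v = 20;
-- expected: Query OK, 2 rows affected (existing row updated)
INSERT INTO odku_demo VALUES (1, 20) ON DUPLICATE KEY UPDATE v = 20;
-- expected: Query OK, 0 rows affected (existing row already had these values)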
I hope this helps.
UPDATE
Also see this link.

How do I avoid same id being used in a many-2-many relationship

foo_bars is a many-2-many table with both columns pointing to foo.id
I want foo_bars.[id1, id] to form a unique key.
How do I avoid the same id being used twice in a single foo_bars entry?
i.e. insert into foo_bars (2,2) - how do I avoid this?
mysql> create table foo (id int(11), name varchar(255));
Query OK, 0 rows affected (0.49 sec)
mysql> desc foo;
+-------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| name | varchar(255) | YES | | NULL | |
+-------+--------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
mysql> create table foo_bars (id1 int(11), id int(11));
Query OK, 0 rows affected (0.34 sec)
mysql> desc foo_bars;
+-------+---------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id1 | int(11) | YES | | NULL | |
| id | int(11) | YES | | NULL | |
+-------+---------+------+-----+---------+-------+
2 rows in set (0.00 sec)
You can add a check constraint:
CHECK (id1 <> id)
create table foo_bars (id1 int(11), id int(11), CHECK (id1 <> id));
Note that MySQL only enforces CHECK constraints from 8.0.16 on; older versions parse but silently ignore them (MariaDB enforces them from 10.2.1).
You can create a trigger, as follows:
DELIMITER $$
CREATE TRIGGER `test_id_uniqueness` BEFORE INSERT ON `foo_bars`
FOR EACH ROW
BEGIN
IF NEW.id = NEW.id1 THEN
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'foo_bars.ID and foo_bars.ID1 cannot be the same';
END IF;
END$$
DELIMITER ;
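For the unique-pair part of the question, a composite unique key can be added separately (a sketch, using the foo_bars table above; the key name is arbitrary):
ALTER TABLE foo_bars ADD UNIQUE KEY uq_foo_bars (id1, id);
-- with the CHECK constraint (where enforced) or the trigger in place,
-- a self-referencing pair such as this one is rejected:
INSERT INTO foo_bars (id1, id) VALUES (2, 2);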

MySQL insert data into an existing table: how to automatically give the data a new id if it already exists

I had two tables running in two different databases, but the structure is identical. I want to import the data of one table into the other, but the id of the rows was auto-increment. This causes ids in both tables to have the same value even though their content is different.
How do I insert the content of table 1 into table 2 and automatically update the id to a value that doesn't exist yet?
Because the table contains around 1000 rows, I can't manually change the numbers or declare each individual row.
Something like ON DUPLICATE 'id' AUTO INCREMENT 'id'?
This could be the way:
Hitesh> desc test;
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| name | varchar(200) | YES | | NULL | |
| id | int(11) | NO | PRI | NULL | auto_increment |
+-------+--------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
Hitesh> desc test_new;
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| name | varchar(200) | YES | | NULL | |
| id | int(11) | NO | PRI | NULL | auto_increment |
+-------+--------------+------+-----+---------+----------------+
2 rows in set (0.00 sec)
Hitesh> insert into test_new(name) select name from test;
Query OK, 9 rows affected (0.03 sec)
Records: 9 Duplicates: 0 Warnings: 0
Hitesh> select * from test_new;
+-------------------------+----+
| name | id |
+-------------------------+----+
| i am the boss | 1 |
| You will get soon | 2 |
| Happy birthday bro | 3 |
| the beautiful girl | 4 |
| oyee its sunday | 5 |
| cat and dog in a park | 6 |
| dog and cat are playing | 7 |
| cat | 8 |
| dog | 9 |
+-------------------------+----+
9 rows in set (0.00 sec)
INSERT INTO new_db.new_tbl SELECT * FROM old_db.old_tbl;
The above will not generate new ids for new_tbl.
Let me explain it a little further; assume you have both tables with id as an auto-increment column.
Override the auto increments
insert into B select * from A;
If you insert a value into new_tbl's (B) id column, i.e. if you select all columns, this will override the auto increment for the new table.
Activate the auto increment
insert into B (col1, col2) select col1, col2 from A;
insert into B select 0, col1, col2 from A;
If you want to activate the auto increment on new_tbl (B), you cannot pass ids to the insert statement, so you will need to skip the id (choose the columns you want to migrate without the id column) or send DEFAULT/NULL/0 for the id.
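Applied to the cross-database case from the question (a sketch; new_db.new_tbl and old_db.old_tbl come from the statement above, and the column list assumes only a name column needs to be copied, as in the demo tables):
INSERT INTO new_db.new_tbl (name)
SELECT name FROM old_db.old_tbl;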