I am developing an app that needs to match people together. Each person can match with only one other person. So, in the table below, I am trying to make the values from user1 and user2 unique across both fields:
CREATE TABLE `match_table` (
`user1` int(11) NOT NULL,
`user2` int(11) NOT NULL,
UNIQUE KEY `user2` (`user2`),
UNIQUE KEY `user1` (`user1`))
So, for example, the following INSERT statement should ignore rows 2 and 4, i.e. (2,3) and (6,4), or at a minimum I need to be able to flag those rows to be ignored. Note that row 5, (3,6), is OK precisely because rows 2 and 4 have been ignored.
INSERT IGNORE INTO match_table (user1, user2)
VALUES
(1,2),
(2,3),
(4,5),
(6,4),
(3,6)
Is there any index that can accomplish this? Otherwise, is there some UPDATE I could run after insertion to flag the rows I want ignored?
Assuming that the matching has no direction, you can use your existing table design but store the match both ways in the table:
INSERT IGNORE INTO match_table (user1, user2)
VALUES
(1,2),
(2,1),
(2,3),
(3,2),
(4,5),
(5,4),
(6,4),
(4,6),
(3,6),
(6,3)
If you just fetch all rows you will get each match twice. You can avoid this as follows:
SELECT * FROM match_table WHERE user1 < user2
1, 2
3, 6
4, 5
One potential issue with this design is that it is possible to insert a row in which a user matches themselves.
If you are just asking for a database constraint, you can get that with UNIQUE INDEX (`user1`, `user2`)
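As a quick sanity check of the both-directions scheme above, here is a runnable sketch using Python's built-in sqlite3, where SQLite's INSERT OR IGNORE plays the role of MySQL's INSERT IGNORE; the table and column names follow the question.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE match_table (user1 INTEGER UNIQUE, user2 INTEGER UNIQUE)")

# Each proposed match is inserted in both directions; the per-column
# UNIQUE constraints silently reject any pair that reuses a user.
pairs = [(1, 2), (2, 3), (4, 5), (6, 4), (3, 6)]
for a, b in pairs:
    conn.execute("INSERT OR IGNORE INTO match_table VALUES (?, ?)", (a, b))
    conn.execute("INSERT OR IGNORE INTO match_table VALUES (?, ?)", (b, a))

# Each surviving match is stored twice; keep one direction only.
matches = conn.execute(
    "SELECT user1, user2 FROM match_table WHERE user1 < user2 ORDER BY user1"
).fetchall()
print(matches)  # → [(1, 2), (3, 6), (4, 5)]
```

This reproduces the result shown above: (2,3) and (6,4) are dropped because one of their users is already matched, and the `user1 < user2` filter removes the mirrored duplicates.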
I have tables Alpha and Beta. Beta belongs to Alpha.
create table Alpha
(
id int auto_increment primary key
);
create table Beta
(
id int auto_increment primary key,
alphaId int null,
orderValue int,
constraint Alpha_ibfk_1 foreign key (alphaId) references Alpha (id)
);
Here are a few test records:
insert into Alpha (id) values (1);
insert into Alpha (id) values (2);
insert into Beta (id, alphaId, orderValue) values (1, 1, 23);
insert into Beta (id, alphaId, orderValue) values (2, 1, 43);
insert into Beta (id, alphaId, orderValue) values (3, 2, 73);
I want to paginate them in a way that makes sense in terms of my application logic. So when I set LIMIT 2, for example, I expect to get two Alpha records and their related records, but in fact when I set LIMIT 2:
select *
from Alpha
inner join Beta on Alpha.id = Beta.alphaId
order by Beta.orderValue
limit 2;
I get only one Alpha record and its related data:
I want the LIMIT clause to count only unique Alpha records and return something like this:
Is it possible to do this in MySQL in one query? Maybe in a different RDBMS? Or is going with multiple queries the only option?
=== EDIT
The reason for such requirements is that I want to create an API with paging that returns records of Alpha, and their related Beta records. The problem is that the way limit works does not make sense from the user's standpoint: "Hey, I said I want 2 records of Alpha with its related data, not 1. What is that?"
There are a couple of issues with your example:
Your foreign key seems to be incorrectly defined.
Limiting almost always requires an explicit ordering of the rows; otherwise the result is unstable and non-reproducible.
Anyway, having said that, you can place a limit on rows for table Alpha and then perform the join against table Beta.
For example:
select *
from (
select *
from Alpha
order by id
limit 2 -- this limit only affects table Alpha
) x
join Beta b on b.alphaId = x.id
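The limit-then-join pattern above can be checked with a runnable sketch using Python's built-in sqlite3 (table and column names as in the question); the inner query limits Alpha to two rows before the join, so the page size counts Alpha records rather than joined rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Alpha (id INTEGER PRIMARY KEY);
    CREATE TABLE Beta (
        id INTEGER PRIMARY KEY,
        alphaId INTEGER REFERENCES Alpha (id),
        orderValue INTEGER
    );
    INSERT INTO Alpha (id) VALUES (1), (2);
    INSERT INTO Beta (id, alphaId, orderValue) VALUES
        (1, 1, 23), (2, 1, 43), (3, 2, 73);
""")

# Limit Alpha first, then join: the LIMIT applies to Alpha rows only.
rows = conn.execute("""
    SELECT x.id, b.id, b.orderValue
    FROM (SELECT * FROM Alpha ORDER BY id LIMIT 2) x
    JOIN Beta b ON b.alphaId = x.id
    ORDER BY b.orderValue
""").fetchall()

alpha_ids = {r[0] for r in rows}
print(rows)       # three joined rows in total...
print(alpha_ids)  # ...covering both Alpha records: {1, 2}
```

With the naive query from the question, LIMIT 2 would return only the first two joined rows, both belonging to Alpha id 1; here both Alpha records survive.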
I have a table which consists of columns for users, categories and amount.
A user can buy an amount of products from each category. I want to store only the most recent purchase.
User Category Amount
1 100 15
1 103 25
Imagine that this user has just bought 30 pieces from category 100 or from category 110, that is, either an existing category or a new one. This can be handled using the following pseudocode:
SELECT amount FROM table WHERE user=1 AND category=100
if row exists
UPDATE table SET amount=30 WHERE user=1 AND category=100
else
INSERT INTO table (user, category, amount) VALUES(1, 100, 30)
The other way to do it is to always delete the old value (ignoring the error when the row does not exist) and always insert a new one.
DELETE FROM table WHERE user=1 AND category=100
INSERT INTO table VALUES(1, 100, 30)
Which of these patterns is preferred from a performance point of view?
Does it matter which PK and FK exist?
MySQL supports REPLACE, so there is no need for DELETE + INSERT or UPDATE. Note that REPLACE requires a unique key or primary key on your table as a reference:
REPLACE
INTO yourtable (user, category, amount)
VALUES (1, 100, 30);
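SQLite supports REPLACE as well, so the answer above can be sketched with Python's built-in sqlite3. The composite primary key on (user, category) is an assumption here, but REPLACE needs some such unique reference to key off:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE purchases (
        user INTEGER,
        category INTEGER,
        amount INTEGER,
        PRIMARY KEY (user, category)  -- REPLACE keys off this
    )
""")
conn.execute("INSERT INTO purchases VALUES (1, 100, 15), (1, 103, 25)")

# REPLACE deletes the conflicting row (if any) and inserts the new one.
conn.execute("REPLACE INTO purchases (user, category, amount) VALUES (1, 100, 30)")

rows = conn.execute("SELECT * FROM purchases ORDER BY category").fetchall()
print(rows)  # → [(1, 100, 30), (1, 103, 25)]
```

One design note: REPLACE is delete-then-insert, so it fires delete triggers and, in MySQL, resets any columns you do not list to their defaults; when that matters, INSERT ... ON DUPLICATE KEY UPDATE is the gentler tool.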
My columns are like this; column "a" is the primary key and auto-incremented.
a | b | x | y
When inserting new data, I need to ensure that the same combination of columns b and x does not already exist.
To clarify, imagine this row is in the database with these values:
(2, "example.com" , "admin", "123456")
I should be able to insert both of these rows:
(3, "example.com" , "user", "123456")
(4, "example2.com" , "admin", "123456")
But I shouldn't be able to insert this row:
(5, "example.com" , "admin", "5555555")
Because the values "example.com" and "admin" are already in the database in one row. It doesn't matter whether column "y" is the same or not.
How can I do this?
Create a composite unique index. This will allow any number of duplicates in the individual fields, but the combination needs to be unique.
CREATE UNIQUE INDEX ix_uq ON tablename (b, x);
...and use INSERT IGNORE to insert if the unique index is not violated. If it is, just ignore the insert.
INSERT IGNORE INTO test (a,b,x,y) VALUES (5, "example.com" , "admin", "5555555");
If you want to insert unless there's a duplicate, and update if there is, you can also use INSERT INTO ... ON DUPLICATE KEY UPDATE;
Ref: MySQL only insert new row if combination of columns (which allow duplicates) is unique
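The two steps above can be sketched with Python's built-in sqlite3, where INSERT OR IGNORE is the analogue of MySQL's INSERT IGNORE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (a INTEGER PRIMARY KEY, b TEXT, x TEXT, y TEXT)")
conn.execute("CREATE UNIQUE INDEX ix_uq ON test (b, x)")

conn.execute("INSERT INTO test (a, b, x, y) VALUES (2, 'example.com', 'admin', '123456')")

# These two rows differ from the existing one in b or x, so they go in.
conn.execute("INSERT OR IGNORE INTO test (a, b, x, y) VALUES (3, 'example.com', 'user', '123456')")
conn.execute("INSERT OR IGNORE INTO test (a, b, x, y) VALUES (4, 'example2.com', 'admin', '123456')")

# This one repeats the (b, x) pair ('example.com', 'admin') and is ignored.
conn.execute("INSERT OR IGNORE INTO test (a, b, x, y) VALUES (5, 'example.com', 'admin', '5555555')")

rows = conn.execute("SELECT a FROM test ORDER BY a").fetchall()
print(rows)  # → [(2,), (3,), (4,)]
```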
You want to let the database do the work. Although you can set up a condition within a query, that condition may not be universally enforced, or someone might use another query.
The database can check this with a unique constraint or index. In fact, a unique constraint is implemented using a unique index:
create unique index unq_t_b_x on t(b, x);
(The columns can be in either order.)
The insert would then look like:
insert into t(b, x, y)
values ('example.com', 'admin', '5555555')
on duplicate key update b = values(b);
Note that the auto-incremented value is not included in the update.
The on duplicate key update just prevents the insert from generating an error. It is better than insert ignore because the latter will ignore all errors, and you just want to ignore the one caused by the duplicate key.
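SQLite (3.24 and later) offers a close analogue via its ON CONFLICT clause, which, like MySQL's ON DUPLICATE KEY UPDATE, reacts only to the conflict on the named index rather than swallowing every error. A sketch with Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT, x TEXT, y TEXT)")
conn.execute("CREATE UNIQUE INDEX unq_t_b_x ON t (b, x)")

conn.execute("INSERT INTO t (b, x, y) VALUES ('example.com', 'admin', '123456')")

# Only a conflict on the (b, x) index is absorbed; any other error
# (another constraint, a syntax problem, ...) would still be raised.
conn.execute("""
    INSERT INTO t (b, x, y) VALUES ('example.com', 'admin', '5555555')
    ON CONFLICT (b, x) DO NOTHING
""")

y = conn.execute(
    "SELECT y FROM t WHERE b = 'example.com' AND x = 'admin'"
).fetchone()[0]
print(y)  # → '123456': the original row survives untouched
```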
I am getting values of a business by the API and inserting it into a database.
SNO ID category
1 aaa Machine learning
2 aaa AI
3 bbb mobile
Where SNO is the primary key which is set to auto increment.
Now, after 2 days, to keep my database up to date, I need to get the new data from the API.
So suppose that I come to know that ID ‘aaa’ now has one more category as “data structures”
Question:
How can I update my table to reflect this new category?
I am expecting something like
SNO ID category
1 aaa Machine learning
2 aaa AI
3 bbb mobile
4 aaa data structures
I don’t know the SNO values as they are auto-incremented.
Deleting all the rows that have ID = “aaa” and then inserting them again is one option, but I am trying to avoid that as it adds overhead.
I am getting the new values from the API, so I receive Machine learning, AI and Data structures (all 3).
If I use , then in my code, when I iterate over the category variable, 3 SQL statements will be generated:
INSERT INTO tablename (ID, category) VALUES ('aaa', ‘Machine learning’);
INSERT INTO tablename (ID, category) VALUES ('aaa', ‘AI’);
INSERT INTO tablename (ID, category) VALUES ('aaa', 'data structures');
So in this case the 1st and 2nd insert statements will be duplicates, and my table will end up with duplicate rows.
I basically need to check whether the ID and category already exist in the DB, and INSERT only if they do not. (The primary key SNO is set to auto-increment.)
Field         Type         Null  Key  Default  Extra
uuid          varchar(50)  NO         NULL
categoryName  varchar(50)  NO         NULL
sno           int(5)       NO    PRI  NULL
If you just insert the new "data structures" row, it will have the next auto-incremented number, which in your case would likely be 4. The only time it wouldn't be 4 is if you inserted some rows and deleted them, which would cause the auto increment counter to be higher. You could just insert the row like this:
INSERT INTO tablename (ID, category) VALUES ('aaa', 'data structures');
Or maybe I'm not understanding the question correctly. If not reply back and I'll try to help.
You can simply INSERT the new row after checking for it in the background, or use the INSERT ... ON DUPLICATE KEY UPDATE syntax, or UPDATE if the row exists.
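Putting the pieces together for this question: with a UNIQUE index on (ID, category), re-sending all three categories from the API is harmless, because only the genuinely new row lands. A sketch with Python's built-in sqlite3, with INSERT OR IGNORE standing in for MySQL's INSERT IGNORE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tablename (
        SNO INTEGER PRIMARY KEY AUTOINCREMENT,
        ID TEXT,
        category TEXT,
        UNIQUE (ID, category)  -- blocks duplicate (ID, category) pairs
    )
""")

# Initial load.
for id_, cat in [('aaa', 'Machine learning'), ('aaa', 'AI'), ('bbb', 'mobile')]:
    conn.execute("INSERT OR IGNORE INTO tablename (ID, category) VALUES (?, ?)", (id_, cat))

# Two days later the API returns all three categories for 'aaa' again;
# only the new one is actually inserted.
for cat in ['Machine learning', 'AI', 'data structures']:
    conn.execute("INSERT OR IGNORE INTO tablename (ID, category) VALUES ('aaa', ?)", (cat,))

rows = conn.execute("SELECT SNO, ID, category FROM tablename ORDER BY SNO").fetchall()
print(rows)  # four rows total; the last is ('aaa', 'data structures')
```

One MySQL-specific caveat to be aware of: InnoDB can burn auto-increment values on ignored inserts, so SNO may end up with gaps; that is harmless as long as nothing depends on SNO being contiguous.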
I have a lookup table that has 3 rows:
id | name
=== =======
1 Pendig
2 Sent
3 Failed
When I insert this data into the another table on another server/database, how can I ensure the same values (names) are created with the same auto-incrementing ids?
Since it is a lookup table, can I just insert into the table and specify the id?
You can always overwrite the insert ID in an auto_increment table by specifying it:
INSERT INTO yourtable (id, name) VALUES (2, 'Sent');
and 2 will be stuffed into the table as the ID value. One caveat: what happens next depends on how the database manages its counter. In MySQL, inserting an explicit value that is greater than the current auto_increment counter also advances the counter, so later auto-assigned IDs skip past it. For example, assuming a brand new, freshly created table:
INSERT INTO yourtable (id, name) VALUES (2, 'Sent'); // force an ID; the counter advances to 3
INSERT INTO yourtable (id, name) VALUES (NULL, 'foo'); // DB assigns 3
INSERT INTO yourtable (name) VALUES ('foo'); // DB assigns 4
All three queries succeed in MySQL. In a database whose sequences are independent of the table data (PostgreSQL, for example), the second insert would instead get ID 1 and the third would fail with a primary key violation, because the sequence-generated 2 conflicts with the ID you inserted manually.
If you want to ensure ID and auto_increment compatibility between different DB instances, then use mysqldump to export the table for you. It'll ensure that all the IDs in the table are preserved, and that the internal auto_increment counter is also carried across properly.
Since you have the lookup table to begin with, it should be irrelevant what the id actually is. That is to say, you should never look at the id itself and only use it for joins and the like. If you need to look up a row, use the name rather than this surrogate id.
To answer your specific question, yes you can specify auto increment IDs during inserts:
INSERT INTO t1 (id, name) VALUES (1, 'Pendig')
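The explicit-ID insert is easy to check with Python's built-in sqlite3; like MySQL, SQLite picks the next auto ID above the largest one already present, so a later auto-assigned insert does not collide with the seeded values. (The 'Archived' row is a made-up fourth status for illustration.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, name TEXT)")

# Seed the lookup table with explicit IDs so both servers agree.
conn.executemany(
    "INSERT INTO t1 (id, name) VALUES (?, ?)",
    [(1, 'Pendig'), (2, 'Sent'), (3, 'Failed')],
)

# A later insert without an ID continues above the explicit values.
cur = conn.execute("INSERT INTO t1 (name) VALUES ('Archived')")
print(cur.lastrowid)  # → 4
```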