SQL Multiple Tables Insertion - mysql

So I'm new to the use of multiple tables. Prior to today, 1 table suited my needs (and I could probably get away with using 1 here as well).
I'm creating a plugin for a game I play, and I'm using a MySQL database to store all the information. I have 3 tables: Players, Warners and Warns. Warns has 2 foreign keys in it (one referencing Players and the other Warners).
At the moment I need to do 3 queries: add the information to Players and Warners, and then to Warns. Is there a way I can cut down the number of queries, and what would happen if I were to just omit the first 2 queries?
Query Examples:
INSERT INTO slimewarnsplayers VALUES ('123e4567-e89b-12d3-a456-426655440000', 'Spedwards');
INSERT INTO slimewarnswarners VALUES ('f47ac10b-58cc-4372-a567-0e02b2c3d479', '_Sped');
INSERT INTO slimewarnswarns VALUES ('', '123e4567-e89b-12d3-a456-426655440000', 'f47ac10b-58cc-4372-a567-0e02b2c3d479', 'spamming', 'medium');
Tables:
CREATE TABLE IF NOT EXISTS SlimeWarnsPlayers (
uuid VARCHAR(36) NOT NULL,
name VARCHAR(26) NOT NULL,
PRIMARY KEY (uuid)
);
CREATE TABLE IF NOT EXISTS SlimeWarnsWarners (
uuid VARCHAR(36) NOT NULL,
name VARCHAR(26) NOT NULL,
PRIMARY KEY (uuid)
);
CREATE TABLE IF NOT EXISTS SlimeWarnsWarns (
id INT NOT NULL AUTO_INCREMENT,
pUUID VARCHAR(36) NOT NULL,
wUUID VARCHAR(36) NOT NULL,
warning VARCHAR(60) NOT NULL,
level VARCHAR(60) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (pUUID) REFERENCES SlimeWarnsPlayers(uuid),
FOREIGN KEY (wUUID) REFERENCES SlimeWarnsWarners(uuid)
);

Is there a way I can cut down the number of queries?
No, I don't see a way. From your posted INSERT statements (shown below) it's clear that these are 3 different tables and you are inserting different data into each of them, so you will have to perform the INSERT operation separately for each.
INSERT INTO slimewarnsplayers
INSERT INTO slimewarnswarners
INSERT INTO slimewarnswarns
Another option (which may not be considered good practice) would be to create a procedure that accepts the data and the table name and builds a prepared statement/dynamic query to do the insert. Something like this (a rough sketch):
DELIMITER $$
CREATE PROCEDURE sp_insert(tablename VARCHAR(64), data1 VARCHAR(36), data2 VARCHAR(36))
BEGIN
  -- build the INSERT dynamically and run it as a prepared statement
  SET @sql = CONCAT('INSERT INTO ', tablename, ' VALUES (?, ?)');
  SET @d1 = data1;
  SET @d2 = data2;
  PREPARE stmt FROM @sql;
  EXECUTE stmt USING @d1, @d2;
  DEALLOCATE PREPARE stmt;
END $$
DELIMITER ;
To explain further: you can then call this procedure from your application, passing the required data. Do note that if you have a foreign key relationship with another table, you will have to capture the last inserted key from your master table and then pass it to the procedure as well.
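For instance, a call for the first of the posted inserts might look like this (purely illustrative; the values are taken from the question):
CALL sp_insert('slimewarnsplayers', '123e4567-e89b-12d3-a456-426655440000', 'Spedwards');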

No, you can't insert into multiple tables in one MySQL command. You can however use transactions.
BEGIN;
INSERT INTO slimewarnsplayers VALUES (.....);
SET @last_id = LAST_INSERT_ID();
INSERT INTO SlimeWarnsWarners VALUES (@last_id, ....);
INSERT INTO SlimeWarnsWarns VALUES (@last_id, ....);
COMMIT;
I would also take a look at http://dev.mysql.com/doc/refman/5.0/en/getting-unique-id.html
and this post MySQL Insert into multiple tables? (Database normalization?)
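Note that in the posted schema the player and warner UUIDs are generated by the application rather than by AUTO_INCREMENT, so one possible shape of the three statements inside a single transaction is the following sketch (INSERT IGNORE is used so that re-inserting an already-known player or warner is a no-op; adapt as needed):
START TRANSACTION;
INSERT IGNORE INTO SlimeWarnsPlayers (uuid, name)
VALUES ('123e4567-e89b-12d3-a456-426655440000', 'Spedwards');
INSERT IGNORE INTO SlimeWarnsWarners (uuid, name)
VALUES ('f47ac10b-58cc-4372-a567-0e02b2c3d479', '_Sped');
INSERT INTO SlimeWarnsWarns (pUUID, wUUID, warning, level)
VALUES ('123e4567-e89b-12d3-a456-426655440000',
        'f47ac10b-58cc-4372-a567-0e02b2c3d479', 'spamming', 'medium');
COMMIT;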

Related

How can auto-incrementing be maintained when concurrent transactions occur on a compound key in MySQL?

I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing for the compound key was implemented in the application itself, in PHP. However, as I mentioned, the id got duplicated, and all sorts of issues happened which I painstakingly fixed manually afterward.
Now I have read about related issues and found suggestions to use a trigger.
So I am planning on implementing a trigger somewhat like this.
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
SET NEW.id = (SELECT COALESCE(MAX(id), 0) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
But my doubt regarding concurrency still remains. Like what if this trigger was executed concurrently and both got the same MAX(id) when querying?
Is this the correct way to handle my issue or is there any better way?
An example of how to solve auto-incrementing in a compound index.
CREATE TABLE test ( id INT,
type VARCHAR(192),
value INT,
PRIMARY KEY (id, type) );
-- create an additional service table which will help
CREATE TABLE test_service ( type VARCHAR(192),
id INT AUTO_INCREMENT,
PRIMARY KEY (type, id) ) ENGINE = MyISAM;
-- create a trigger which will generate the id value for a new row
DELIMITER $$
CREATE TRIGGER tr_bi_test_autoincrement
BEFORE INSERT
ON test
FOR EACH ROW
BEGIN
INSERT INTO test_service (type) VALUES (NEW.type);
SET NEW.id = LAST_INSERT_ID();
END $$
DELIMITER ;
db<>fiddle here
creating a service table just to auto increment a value seems less than ideal for me. – Mohamed Mufeed
This table is extremely tiny - you may delete all records except the one with the largest auto-incremented value per group at any time. – Akina
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=61f0dc36db25dd5f0cf4647d8970cdee
You may schedule removal of the excess rows (for example, daily) in a service EVENT procedure.
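A minimal sketch of such a scheduled cleanup, assuming the test_service table above and that the event scheduler is enabled (this is not part of the original answer):
CREATE EVENT ev_cleanup_test_service
ON SCHEDULE EVERY 1 DAY
DO
  DELETE ts FROM test_service AS ts
  JOIN (SELECT type, MAX(id) AS max_id FROM test_service GROUP BY type) AS m
    ON m.type = ts.type
  WHERE ts.id < m.max_id;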
I have managed to solve this issue.
The answer was somewhat in the direction of Akina's answer, but not exactly the same.
The way I solved it did indeed involve an additional table, but not the way he suggested.
I created an additional table to store metadata about the transactions.
E.g. I had a journals table keyed like this:
CREATE TABLE `journals` (
`id` bigint NOT NULL AUTO_INCREMENT,
`type` smallint NOT NULL DEFAULT '0',
`trans_no` bigint NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `transaction` (`type`,`trans_no`)
)
So I created a meta_journals table like this
CREATE TABLE `meta_journals` (
`type` smallint NOT NULL,
`next_trans_no` bigint NOT NULL,
PRIMARY KEY (`type`)
)
and seeded it with all the different types of journals and the next sequence number.
Whenever I insert a new transaction into journals, I make sure to increment next_trans_no for the corresponding type in the meta_journals table. This increment is issued inside the same database TRANSACTION, i.e. between the same BEGIN and COMMIT.
This allows me to rely on the exclusive row lock acquired by the UPDATE statement on the corresponding row of the meta_journals table. So when two insert statements for the same journal type are issued concurrently, one has to wait until the lock acquired by the other transaction is released by its COMMIT.
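A sketch of what that sequence could look like for one insert, assuming the two tables above (the exact statements are not given in the answer):
START TRANSACTION;
-- the UPDATE locks the counter row for this journal type
UPDATE meta_journals SET next_trans_no = next_trans_no + 1 WHERE type = 1;
SELECT next_trans_no INTO @trans_no FROM meta_journals WHERE type = 1;
INSERT INTO journals (type, trans_no) VALUES (1, @trans_no);
COMMIT;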

MySQL procedure to insert data into a table that uses auto_increment, get the generated auto_increment PK and insert it into a bridge table?

I'm creating a REST API with PHP over the courses I have studied. When it comes to the database I'm not sure what the best practice is for this problem.
I have data on the languages each course used; to normalize the data I keep the languages in a separate table with a bridge table to connect them.
So one table for Courses, one for Languages and one bridge table to connect them.
CREATE TABLE `Courses`
(`Course_ID` INT(11),
`Education_ID` INT(11),
`CourseName` VARCHAR(100) NOT NULL,
`Points` VARCHAR (5),
`Grade` VARCHAR(3),
PRIMARY KEY (`Course_ID`)
);
CREATE TABLE `Language` (
`Language_ID` INT(11),
`Language` VARCHAR(100) NOT NULL,
`Img_url` VARCHAR (200),
PRIMARY KEY (`Language_ID`)
);
CREATE TABLE `Bridge_language` (
`Course_ID` INT(11) NOT NULL,
`Language_ID` INT(11) NOT NULL,
KEY `PKFK` (`Course_ID`, `Language_ID`)
);
ALTER TABLE Courses MODIFY Course_ID INTEGER AUTO_INCREMENT;
ALTER TABLE Language MODIFY Language_ID INTEGER AUTO_INCREMENT;
When adding a new course, I already know the ids of the languages in the SQL (I will have a function in the admin page where you add new languages), so when you create a new course you just click to add languages and the id for each language is supplied.
But what I don't have is the ID for the course, which is created with auto_increment. Is there a smart way, with a function/procedure in SQL, to grab the id that auto_increment has generated and use it to insert into the bridge table?
Or do I need to query the database for the latest ID, add one, and send that into the bridge table?
In MySQL, you can use last_insert_id() to retrieve the auto-generated id of the last insert query that you executed. You don't give many details about your code, but the logic is like:
insert into course (education_id, coursename, points, grade)
values (?, ?, ?, ?);
insert into bridge_language (course_id, language_id)
values (last_insert_id(), ?);
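If you want to wrap this in a stored procedure, as the question suggests, a sketch could look like the following (the procedure and parameter names are made up for illustration):
DELIMITER $$
CREATE PROCEDURE add_course_with_language(
    IN p_education_id INT, IN p_coursename VARCHAR(100),
    IN p_points VARCHAR(5), IN p_grade VARCHAR(3), IN p_language_id INT)
BEGIN
  INSERT INTO Courses (Education_ID, CourseName, Points, Grade)
  VALUES (p_education_id, p_coursename, p_points, p_grade);
  -- LAST_INSERT_ID() still holds the Course_ID generated above
  INSERT INTO Bridge_language (Course_ID, Language_ID)
  VALUES (LAST_INSERT_ID(), p_language_id);
END $$
DELIMITER ;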

How to enforce insert with specific field?

Given the following table:
DROP TABLE IF EXISTS my_table;
CREATE TABLE IF NOT EXISTS my_table(
id INT NOT NULL,
timestamp TIMESTAMP(3) DEFAULT CURRENT_TIMESTAMP(3) NOT NULL,
data BLOB NULL,
PRIMARY KEY (id)
);
I can insert into it with:
INSERT INTO my_table (timestamp, data) VALUES
('2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
In the above insert I was not forced to provide the id field.
How may I create the table (my_table) so that it prevents inserts without id?
I would like every insert to be made providing the id, i.e.:
INSERT INTO my_table (id, timestamp, data) VALUES
(7, '2014-07-11 11:25:48.185', LOAD_FILE('sql/file.bin'));
I was thinking NOT NULL was there for that.
To prevent inserts with an empty value for ID (or no value passed), simply define the column as NOT NULL, as you already did.
I can't see how your example worked (i.e. inserting only into (timestamp, data)).
Now, the fact that there is another table with a trigger that inserts in this one does not have any effect on the ID column of this table. If you define it as AUTO_INCREMENT, whenever you insert a new row, the ID will automatically get a new value which will be fully independent from any data of the first table.
You can have as many tables as you wish with auto-incremented fields, each running a different sequence (and hence their numbering will be fully independent).
To summarize:
CREATE TABLE IF NOT EXISTS my_table(
id INT NOT NULL AUTO_INCREMENT ,
timestamp TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3) ,
data BLOB NULL ,
PRIMARY KEY (id)
);
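As a side note, whether the original table definition (without AUTO_INCREMENT) actually rejects an insert that omits id depends on the SQL mode; with strict mode, which is the default in recent MySQL versions, the insert fails. A quick check under that assumption:
SET sql_mode = 'STRICT_ALL_TABLES';
INSERT INTO my_table (timestamp, data) VALUES (NOW(3), NULL);
-- ERROR 1364 (HY000): Field 'id' doesn't have a default value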

Prevent auto increment on duplicated entry?

I have a table called url_info and the structure of the table is:
url_info:
url_id ( auto_increment, primary key )
url ( unique,varchar(500) )
When I insert into the table like this:
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
The output is:
1 Tom
2 Jerry
When I insert like this
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
The output is
1 Tom
3 Jerry
The auto-increment id is incremented even when I try to insert a duplicate entry. I have also tried INSERT IGNORE.
How to prevent it from incrementing when I try to insert a duplicate entry?
It's probably worth creating a stored procedure to insert what you want into the table. But in the stored procedure, check which items you already have in the table. If these match what you're trying to insert, then the query should not even attempt the insert.
I.e. the procedure needs to contain something like this:
IF NOT EXISTS (SELECT url_id FROM url_info WHERE url = 'Tom' LIMIT 1) THEN
    INSERT INTO url_info (url) VALUES ('Tom');
END IF;
So, in your stored procedure, it would look like this (assuming the argument newUrl has been declared):
IF NOT EXISTS (SELECT url_id FROM url_info WHERE url = newUrl LIMIT 1) THEN
    INSERT INTO url_info (url) VALUES (newUrl);
END IF;
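Outside of a stored procedure, a single-statement alternative (not from the original answer, but a common MySQL pattern) is an INSERT ... SELECT guarded by NOT EXISTS:
INSERT INTO url_info (url)
SELECT 'Tom' FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM url_info WHERE url = 'Tom');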
This is expected behaviour in InnoDB. The reason is that they want to let go of the auto_increment lock as fast as possible to improve concurrency. Unfortunately this means they increment the AUTO_INCREMENT value before resolving any constraints, such as UNIQUE.
You can read more about the idea in the manual under AUTO_INCREMENT Handling in InnoDB, but the manual is also, unfortunately, buggy and doesn't explain why your simple insert will give non-consecutive values.
If this is a real problem for you and you really need consecutive numbers, consider setting the innodb_autoinc_lock_mode option to 0 in your server, but this is not recommended as it will have severe effects on your database (you cannot do any inserts concurrently).
The auto_increment update is performed by the engine. It happens before the check on whether a value is unique or not, and the operation can't be rolled back to get back to the former auto_increment value.
Hence, no: there is no way to continue from where the counter last was.
And losing some intermediate values of an auto_increment field is not really an issue.
The MAX value you can store in a SIGNED INT field is 2^31 - 1, equal to 2,147,483,647. Read out loud, that sounds like 2 billion+.
I don't think that is small, so it should suit your requirement.
CREATE TABLE `url_info` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`url` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=4 ;
When I execute:
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
I get:
Make sure your ID column is UNIQUE too.
As the manual says:
A UNIQUE index creates a constraint such that all values in the index
must be distinct. An error occurs if you try to add a new row with a
key value that matches an existing row. This constraint does not apply
to NULL values except for the BDB storage engine. For other engines, a
UNIQUE index permits multiple NULL values for columns that can contain
NULL. If you specify a prefix value for a column in a UNIQUE index,
the column values must be unique within the prefix.

where is the duplicate in ON DUPLICATE KEY query?

Description:
I am trying to insert a user's preferences into a database. If the user hasn't yet saved any, I want an insert; otherwise, I want an update. I know I can insert default values when creating the user and then exclusively use update, but that adds another query (I think).
Problem:
I have read up on ON DUPLICATE KEY UPDATE but I don't understand it. This is almost the exact question I have but without the answer. The answer says:
It does sound like it will work for what you want to do as long as you have the proper column(s) defined as UNIQUE KEY or PRIMARY KEY.
If I do a simple insert like so:
INSERT INTO table (color, size) VALUES ('blue', '18') ...
How will that ever produce a DUPLICATE KEY? As far as MySQL knows it's just another insert and the id is auto-incremented. I have the primary key in the table set to unique, but the insert isn't going to check against that, right?
Table:
CREATE TABLE `firm_pref` (
`id` int(9) NOT NULL AUTO_INCREMENT,
`firm_id` int(9) NOT NULL, -- user_id in this case
`header_title` varchar(99) NOT NULL,
`statement` varchar(99) NOT NULL,
`footer_content` varchar(99) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1
Well, unless you want your application to be used by a single person only, you would have to specify someone's user_id in that INSERT - when this 'someone' guy or girl updates his/her preferences, right?
This field (user_id) is exactly what would be checked by ON DUPLICATE KEY UPDATE clause.
And if you want to insert a new record, just send NULL instead:
INSERT INTO table (user_id, color, size) VALUES (NULL, 'blue', 18);
... so auto-increment will have a chance to move on and save the day. )
UPDATE: Note that for MySQL to treat some field as a unique identifier, you have to mark it as such. Usually that happens naturally, because the field is used as a PRIMARY KEY. But sometimes that's not enough, and that is where a UNIQUE constraint comes in. For example, in your table it can be used like this:
CREATE TABLE `prefs` (
`id` int(9) NOT NULL AUTO_INCREMENT,
`firm_id` int(9) NOT NULL,
...
PRIMARY KEY (`id`),
UNIQUE KEY (`firm_id`)
);
(or you can add this constraint to the existing table with ALTER TABLE prefs ADD UNIQUE (firm_id) command)
Then insert/update query will look like...
INSERT INTO prefs(firm_id, header_title, statement, footer_content)
VALUES(17, 'blue', '18', 'some_footer')
ON DUPLICATE KEY UPDATE
header_title = 'blue',
statement = '18',
footer_content = 'some_footer';
I've built a sort of demo in SQL Fiddle. You can play with it some more to better understand that concept. )
For options, you would normally have an options table that has a list of available options (like color, size etc), and then a table that spans both your options table and users table with the users' values.
For example, your options table:
id | name
=========
1 | color
2 | size
Your users table:
id | name
================
1 | Martin Bean
And an options_users join table:
option_id | user_id | value
===========================
1 | 1 | Blue
2 | 1 | Large
With the correct foreign keys set up in your options_users table, you can have redundant values removed when an option or user is removed from your system. Also, when saving a user's preferences, you can first delete their previous answers and insert the new ones.
DELETE FROM `options_users`
WHERE `user_id` = #user_id;
INSERT INTO `options_users` (`option_id`, `user_id`, `value`)
VALUES (1, #user_id, 'Blue'), (2, #user_id, 'Large');
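For completeness, the options_users join table with the cascading foreign keys mentioned above could be defined roughly like this (a sketch; the table names, column names and types are assumed from the examples above):
CREATE TABLE options_users (
  option_id INT NOT NULL,
  user_id INT NOT NULL,
  value VARCHAR(100) NOT NULL,
  PRIMARY KEY (option_id, user_id),
  FOREIGN KEY (option_id) REFERENCES options (id) ON DELETE CASCADE,
  FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE CASCADE
) ENGINE=InnoDB;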
Hope that helps.