I'm about to set up a large server that assigns ports to specific objects. Every object is granted a port on creation and the ports can theoretically be assigned for years. If someone destroys their object, the port is released and is up for grabs. If I had a MySQL table containing all objects AND their ports, how would I retrieve a random free port within the range specified?
So:
The table has id and port. The query should select a number between <min> and <max> that is not already in the table. Is anything like this possible in MySQL, or would I have to bring another table into play?
Edit
I currently have the following two tables:
CREATE TABLE `scjs_relays` (
`id` INT(100) UNSIGNED NOT NULL AUTO_INCREMENT,
`displayname` VARCHAR(50) NOT NULL COMMENT 'Name the relay shows up with at the overview',
`origin` VARCHAR(15) NOT NULL COMMENT 'IP the packets originate from',
`auth` BIT(1) NOT NULL DEFAULT b'0',
`authurl` VARCHAR(255) NULL DEFAULT '0' COMMENT 'URL to retrieve auth token from',
`authtimeout` INT(10) UNSIGNED NULL DEFAULT '0' COMMENT 'Time before person has to be reauthed',
`steamid` BIT(1) NOT NULL DEFAULT b'0',
`teamchat` BIT(1) NOT NULL DEFAULT b'0',
`timestamp` BIT(1) NOT NULL DEFAULT b'0',
`disconnect` BIT(1) NOT NULL DEFAULT b'0',
`playerip` BIT(1) NOT NULL DEFAULT b'0',
`node` INT(10) UNSIGNED NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
)
COMMENT='Chat relays'
COLLATE='utf8_general_ci'
ENGINE=InnoDB
;
CREATE TABLE `scjs_ports` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`port` INT(10) UNSIGNED NOT NULL DEFAULT '0',
`node` INT(10) UNSIGNED NOT NULL DEFAULT '0',
`relayid` INT(100) UNSIGNED NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
COMMENT='Ports per server'
COLLATE='utf8_general_ci'
ENGINE=InnoDB
;
However, if I try to add a foreign key I get errors. The id in the scjs_relays table is AUTO_INCREMENT because it is an incremental ID; every server will have its own ID that will never be reused. The foreign key I currently have is the following:
ALTER TABLE `scjs_ports`
ADD CONSTRAINT `FK_relayid` FOREIGN KEY (`relayid`) REFERENCES `scjs_relays` (`id`) ON UPDATE CASCADE ON DELETE SET NULL;
Should that work fine? If I update a row in scjs_relays, the id will never change, only the other columns, so is CASCADE the right option here? And if a row is deleted, its port is obviously freed up, so relayid should be set to NULL.
For your application you would probably want a single table with all your available ports in it and a reference to the object that holds the port (or NULL if no object is currently assigned to the port).
So that might look like this:
id - autoincrement primary key
port - port number, has unique index
object_id - foreign key reference to object that has port, NULL-able, unique index if needed for reverse object to port lookups
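That might be sketched as follows (a minimal sketch with hypothetical names; InnoDB assumed so the row locking below works):
CREATE TABLE `port_pool` (
	`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
	`port` SMALLINT UNSIGNED NOT NULL, -- 0..65535 covers the whole port range
	`object_id` INT UNSIGNED NULL DEFAULT NULL, -- NULL means the port is free
	PRIMARY KEY (`id`),
	UNIQUE INDEX `port` (`port`),
	UNIQUE INDEX `object_id` (`object_id`) -- optional, for reverse lookups
)
ENGINE=InnoDB;
You would pre-seed the table with one row per port in your allowed range.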
You would need to think about allowing for consistent reads: when you search for an open row and assign it to an object, do it within a transaction, taking a lock with the first SELECT so no other connection can touch that row while you are trying to update it.
That sequence might look like this:
START TRANSACTION;
SELECT @update_id := id
FROM `table`
WHERE object_id IS NULL
ORDER BY port ASC
/*
Note you could order by RAND here if you truly want random
but query would not be optimized
*/
LIMIT 1
FOR UPDATE;
UPDATE `table` SET object_id = ?
WHERE id = @update_id;
COMMIT;
Here ? is the object reference for the object you want to assign to the port. As you can see, this might lend itself well to being a stored procedure, where you pass in the object id as a parameter.
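A stored-procedure version might look like this (a sketch only; `port_pool` and the parameter name are hypothetical):
DELIMITER //
CREATE PROCEDURE assign_port(IN p_object_id INT UNSIGNED)
BEGIN
	DECLARE v_id INT UNSIGNED;
	START TRANSACTION;
	-- Lock one free row so no concurrent caller can grab the same port
	SELECT id INTO v_id
	FROM port_pool
	WHERE object_id IS NULL
	ORDER BY port ASC
	LIMIT 1
	FOR UPDATE;
	UPDATE port_pool SET object_id = p_object_id
	WHERE id = v_id;
	COMMIT;
	-- Hand the assigned port back to the caller
	SELECT port FROM port_pool WHERE id = v_id;
END//
DELIMITER ;
Calling it is then a single statement: CALL assign_port(42);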
I'd implement this by having a table with all the ports available for use.
Then, when issuing a new port, I'd check availability by LEFT JOINing this table to the issued-ports table, like this:
SELECT table1.port FROM table1 LEFT JOIN table2 ON table1.port = table2.port
WHERE table2.port IS NULL
LIMIT 1;
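Since the original question asked for a random free port within a range, the same LEFT JOIN could be combined with a range filter and a random pick (table names hypothetical; as noted above, ORDER BY RAND() is convenient but not optimizer-friendly):
SELECT t1.port
FROM all_ports t1
LEFT JOIN issued_ports t2 ON t1.port = t2.port
WHERE t2.port IS NULL
AND t1.port BETWEEN 42000 AND 42999 -- example range
ORDER BY RAND()
LIMIT 1;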
Related
I'm trying to denormalize a few MySQL tables I have into a new table that I can use to speed up some complex queries with lots of business logic. The problem that I'm having is that there are 2.3 million records I need to add to the new table and to do that I need to pull data from several tables and do a few conversions too. Here's my query (with names changed)
INSERT INTO database_name.log_set_logs
(offload_date, vehicle, jurisdiction, baselog_path, path,
baselog_index_guid, new_location, log_set_name, index_guid)
(
select STR_TO_DATE(logset_logs.offload_date, '%Y.%m.%d') as offload_date,
logset_logs.vehicle, jurisdiction, baselog_path, path,
baselog_trees.baselog_index_guid, new_location, logset_logs.log_set_name,
logset_logs.index_guid
from
(
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 7), '/', -1) as offload_date,
SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 8), '/', -1) as vehicle,
SUBSTRING_INDEX(path, '/', 9) as baselog_path, index_guid,
path, log_set_name
FROM database_name.baselog_and_amendment_guid_to_path_mappings
) logset_logs
left join database_name.log_trees baselog_trees
ON baselog_trees.original_location = logset_logs.baselog_path
left join database_name.baselog_offload_location location
ON location.baselog_index_guid = baselog_trees.baselog_index_guid);
The query itself works: I was able to run it using a filter on log_set_name. However, that filter's condition only covers less than 1% of the total records, because a single log_set_name value accounts for 2.2 million of the records, and from what I can see there is nothing else I can use to break this query up into smaller chunks. The problem is that the query takes too long to run on those 2.2 million records: it times out after a few hours, the transaction is rolled back, and nothing is added to the new table. Only the 0.1 million records could be processed, and that was because I could add a filter saying log_set_name != 'value with the 2.2 million records'.
Is there a way to make this query more performant? Am I trying to do too many joins at once, and should I perhaps populate the row's columns in their own individual queries? Or is there some way I can page this type of query so that MySQL executes it in batches? I already got rid of all my indexes on the log_set_logs table because I read that they slow down inserts. I also scaled my RDS instance up to a db.r4.4xlarge write node, and since I'm using MySQL Workbench I increased all of its timeout values to their maximums (all nines). All three of these steps helped and were necessary to get the 1% of records into the new table, but they still weren't enough to load the 2.2 million records without timing out. I'd appreciate any insights, as I'm not adept at this type of bulk insert from a select.
CREATE TABLE `log_set_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`purged` tinyint(1) NOT NULL DEFAULT '0',
`baselog_path` text,
`baselog_index_guid` varchar(36) DEFAULT NULL,
`new_location` text,
`offload_date` date NOT NULL,
`jurisdiction` varchar(20) DEFAULT NULL,
`vehicle` varchar(20) DEFAULT NULL,
`index_guid` varchar(36) NOT NULL,
`path` text NOT NULL,
`log_set_name` varchar(60) NOT NULL,
`protected_by_retention_condition_1` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_2` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_3` tinyint(1) NOT NULL DEFAULT '1',
`protected_by_retention_condition_4` tinyint(1) NOT NULL DEFAULT '1',
`general_comments_about_this_log` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1736707 DEFAULT CHARSET=latin1
CREATE TABLE `baselog_and_amendment_guid_to_path_mappings` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`path` text NOT NULL,
`index_guid` varchar(36) NOT NULL,
`log_set_name` varchar(60) NOT NULL,
PRIMARY KEY (`id`),
KEY `log_set_name_index` (`log_set_name`),
KEY `path_index` (`path`(42))
) ENGINE=InnoDB AUTO_INCREMENT=2387821 DEFAULT CHARSET=latin1
...
CREATE TABLE `baselog_offload_location` (
`baselog_index_guid` varchar(36) NOT NULL,
`jurisdiction` varchar(20) NOT NULL,
KEY `baselog_index` (`baselog_index_guid`),
KEY `jurisdiction` (`jurisdiction`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
CREATE TABLE `log_trees` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`baselog_index_guid` varchar(36) DEFAULT NULL,
`original_location` text NOT NULL, -- This is what I have to join everything on, and since it's TEXT I cannot index it; the largest value is over 255 characters, so I cannot change it to a VARCHAR and index it either.
`new_location` text,
`distcp_returncode` int(11) DEFAULT NULL,
`distcp_job_id` text,
`distcp_stdout` text,
`distcp_stderr` text,
`validation_attempt` int(11) NOT NULL DEFAULT '0',
`validation_result` tinyint(1) NOT NULL DEFAULT '0',
`archived` tinyint(1) NOT NULL DEFAULT '0',
`archived_at` timestamp NULL DEFAULT NULL,
`created_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
`updated_at` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`dir_exists` tinyint(1) NOT NULL DEFAULT '0',
`random_guid` tinyint(1) NOT NULL DEFAULT '0',
`offload_date` date NOT NULL,
`vehicle` varchar(20) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `baselog_index_guid` (`baselog_index_guid`)
) ENGINE=InnoDB AUTO_INCREMENT=1028617 DEFAULT CHARSET=latin1
baselog_offload_location has no PRIMARY KEY; what's up?
GUIDs/UUIDs can be terribly inefficient. A partial solution is to convert them to BINARY(16) to shrink them. More details here: http://mysql.rjweb.org/doc.php/uuid ; (MySQL 8.0 has similar functions built in.)
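With MySQL 8.0 that conversion could look like this (a sketch; the _bin column name is made up):
-- BINARY(16) instead of VARCHAR(36): less than half the size per value
ALTER TABLE log_trees ADD COLUMN baselog_index_guid_bin BINARY(16) NULL;
UPDATE log_trees SET baselog_index_guid_bin = UUID_TO_BIN(baselog_index_guid);
-- Reading it back:
SELECT BIN_TO_UUID(baselog_index_guid_bin) FROM log_trees LIMIT 1;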
It would probably be more efficient if you have a separate (optionally redundant) column for vehicle rather than needing to do
SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 8), '/', -1) as vehicle
Why JOIN baselog_offload_location? There seems to be no reference to columns in that table. If there are, be sure to qualify them so we know what is where. Preferably use short aliases.
The lack of an index on baselog_index_guid may be critical to performance.
Please provide EXPLAIN SELECT ... for the SELECT in your INSERT and for the original (slow) query.
SELECT MAX(LENGTH(original_location)) FROM .. -- to see if it really is too big to index. What version of MySQL are you using? The limit increased recently.
For the above item, we can talk about having a 'hash'.
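For example, on MySQL 5.7+ the "hash" idea could be a stored generated column (a sketch; on older versions you would populate a plain CHAR(32) column yourself):
-- Index a fixed-length hash of the long TEXT column, then join on the hash
-- first and keep the full comparison to guard against collisions.
ALTER TABLE log_trees
	ADD COLUMN original_location_md5 CHAR(32) AS (MD5(original_location)) STORED,
	ADD INDEX (original_location_md5);
SELECT m.index_guid, t.baselog_index_guid
FROM baselog_and_amendment_guid_to_path_mappings m
LEFT JOIN log_trees t
	ON t.original_location_md5 = MD5(SUBSTRING_INDEX(m.path, '/', 9))
	AND t.original_location = SUBSTRING_INDEX(m.path, '/', 9);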
"paging the query". I call it "chunking". See http://mysql.rjweb.org/doc.php/deletebig#deleting_in_chunks . That talks about deleting, but it can be adapted to INSERT .. SELECT since you want to "chunk" the select. If you go with chunking, Javier's comment becomes moot. Your code would be chunking the selects, hence batching the inserts:
Loop:
INSERT .. SELECT .. -- of up to 1000 rows (see link)
End loop
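One pass of that loop might look like this (a sketch of the chunk predicate; the application advances @lo by 1000 after each pass until it passes MAX(id) in the source table):
SET @lo := 0; -- advanced between passes
INSERT INTO database_name.log_set_logs
	(offload_date, vehicle, jurisdiction, baselog_path, path,
	baselog_index_guid, new_location, log_set_name, index_guid)
SELECT STR_TO_DATE(logset_logs.offload_date, '%Y.%m.%d'),
	logset_logs.vehicle, location.jurisdiction, logset_logs.baselog_path,
	logset_logs.path, baselog_trees.baselog_index_guid,
	baselog_trees.new_location, logset_logs.log_set_name, logset_logs.index_guid
FROM (
	SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 7), '/', -1) AS offload_date,
		SUBSTRING_INDEX(SUBSTRING_INDEX(path, '/', 8), '/', -1) AS vehicle,
		SUBSTRING_INDEX(path, '/', 9) AS baselog_path,
		index_guid, path, log_set_name
	FROM database_name.baselog_and_amendment_guid_to_path_mappings
	WHERE id > @lo AND id <= @lo + 1000 -- the chunk: 1000 source rows per pass
) logset_logs
LEFT JOIN database_name.log_trees baselog_trees
	ON baselog_trees.original_location = logset_logs.baselog_path
LEFT JOIN database_name.baselog_offload_location location
	ON location.baselog_index_guid = baselog_trees.baselog_index_guid;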
I'm encountering an unexpected situation with inserting/querying particular records in the following table during periods of high contention. I believe there is a race condition in the database.
CREATE TABLE `business_objects` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`obj_id` varchar(255) DEFAULT NULL,
`obj_type` varchar(255) DEFAULT NULL,
`created_at` datetime(6) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_business_objects_on_obj_type_and_obj_id`
(`obj_type`,`obj_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The steps to reproduce are:
Check if record exists in table using this query
SELECT *
FROM business_objects
WHERE obj_type = 'Tip'
AND obj_id = '7616904'
If this query returns nothing, attempt to create the record:
INSERT INTO business_objects (obj_type, obj_id)
VALUES ('Tip', '7616904')
If another thread has already created a record with the same attributes, this insert will fail and raise a MySQL uniqueness error. In this scenario, I catch the error and run the same query as in step 1 to get the record.
SELECT *
FROM business_objects
WHERE obj_type = 'Tip'
AND obj_id = '7616904'
The query returns an empty result.
My expectation is that if the index uniqueness constraint is violated, then a record with those attributes must already be committed to the table. What am I missing?
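As an aside, the check-then-insert sequence can be collapsed into one atomic statement, which sidesteps the race entirely (a sketch, not necessarily an explanation of the behaviour above):
-- Insert the row if absent; otherwise make LAST_INSERT_ID() return the
-- existing row's id. No separate existence check is needed.
INSERT INTO business_objects (obj_type, obj_id, created_at)
VALUES ('Tip', '7616904', NOW(6))
ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);
SELECT LAST_INSERT_ID(); -- id of the new or pre-existing row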
I have a first table containing my IPs stored as integers (500k rows), and a second one containing ranges of blacklisted IPs and the reason for the blacklisting (10M rows).
Here is the table structure:
CREATE TABLE `black_lists` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`ip_start` INT(11) UNSIGNED NOT NULL,
`ip_end` INT(11) UNSIGNED NULL DEFAULT NULL,
`reason` VARCHAR(3) NOT NULL,
`excluded` TINYINT(1) NULL DEFAULT NULL,
PRIMARY KEY (`id`),
INDEX `ip_range` (`ip_end`, `ip_start`),
INDEX `ip_start` ( `ip_start`),
INDEX `ip_end` (`ip_end`)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
AUTO_INCREMENT=10747741
;
CREATE TABLE `ips` (
`id` INT(11) NOT NULL AUTO_INCREMENT COMMENT 'Id ips',
`idhost` INT(11) NOT NULL COMMENT 'Id Host',
`ip` VARCHAR(45) NULL DEFAULT NULL COMMENT 'Ip',
`ipint` INT(11) UNSIGNED NULL DEFAULT NULL COMMENT 'Int ip',
`type` VARCHAR(45) NULL DEFAULT NULL COMMENT 'Type',
PRIMARY KEY (`id`),
INDEX `host` (`idhost`),
INDEX `index3` (`ip`),
INDEX `index4` (`idhost`, `ip`),
INDEX `ipsin` (`ipint`)
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
AUTO_INCREMENT=675651;
My problem is that when I try to run this query, no index is used and it takes an eternity to finish:
select i.ip,s1.reason
from ips i
left join black_lists s1 on i.ipint BETWEEN s1.ip_start and s1.ip_end;
I'm using MariaDB 10.0.16
True.
The optimizer has no knowledge that start..end values are non overlapping, nor anything else obvious about them. So, the best it can do is decide between
s1.ip_start <= i.ipint -- and use INDEX(ip_start), or
s1.ip_end >= i.ipint -- and use INDEX(ip_end)
Either of those could result in upwards of half the table being scanned.
In 2 steps you could achieve the desired goal for one IP; let's say @ip:
SELECT ip_start, reason
FROM black_lists
WHERE ip_start <= @ip
ORDER BY ip_start DESC
LIMIT 1
But after that, you need to see if the ip_end corresponding to that ip_start is >= @ip before deciding whether you have a blacklisted item.
SELECT reason
FROM ( ... ) a -- fill in the above query
JOIN black_lists b USING(ip_start)
WHERE b.ip_end >= @ip
That will either return the reason or no rows.
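Put together for a single IP, that might look like this (a sketch; INET_ATON converts the dotted form to the integer form stored in ipint):
SET @ip := INET_ATON('198.51.100.7'); -- example address
SELECT b.reason
FROM ( SELECT ip_start
	FROM black_lists
	WHERE ip_start <= @ip
	ORDER BY ip_start DESC
	LIMIT 1
) a
JOIN black_lists b USING(ip_start)
WHERE b.ip_end >= @ip;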
In spite of the complexity, it will be very fast. But, you seem to have a set of IPs to check. That makes it more complex.
For black_lists, there seems to be no need for id. Suggest you replace the 4 indexes with only 2:
PRIMARY KEY(ip_start, ip_end),
INDEX(ip_end)
In ips, isn't ip unique? If so, get rid of id and change the 5 indexes to 3:
PRIMARY KEY(ipint),
INDEX(idhost, ip),
INDEX(ip)
You have allowed more than enough room in the VARCHAR for IPv6, but an INT UNSIGNED can only hold IPv4.
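If IPv6 needs to work too, one option (a sketch; INET6_ATON exists in MariaDB 10.0.12+ and MySQL 5.6.3+) is to store the binary form:
-- VARBINARY(16) holds IPv4 (4 bytes) and IPv6 (16 bytes) alike
ALTER TABLE ips ADD COLUMN ipbin VARBINARY(16) NULL;
UPDATE ips SET ipbin = INET6_ATON(ip);
-- Range checks then compare the binary values, e.g.:
-- WHERE ipbin BETWEEN INET6_ATON('2001:db8::') AND INET6_ATON('2001:db8::ffff')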
More discussion.
I have a simple categories table. A category can have a parent category (the par_cat column), or NULL if it is a main category, and under the same parent category there shouldn't be two or more categories with the same name or url.
Code for this table:
CREATE TABLE IF NOT EXISTS `categories` (
`id` int(10) unsigned NOT NULL,
`par_cat` int(10) unsigned DEFAULT NULL,
`lang` varchar(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'pl',
`name` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
`url` varchar(120) COLLATE utf8_unicode_ci NOT NULL,
`active` tinyint(3) unsigned NOT NULL DEFAULT '1',
`accepted` tinyint(3) unsigned NOT NULL DEFAULT '1',
`priority` int(10) unsigned NOT NULL DEFAULT '1000',
`entries` int(10) unsigned NOT NULL DEFAULT '0',
`created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00'
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;
ALTER TABLE `categories`
ADD PRIMARY KEY (`id`),
ADD UNIQUE KEY `categories_name_par_cat_unique` (`name`,`par_cat`),
ADD UNIQUE KEY `categories_url_par_cat_unique` (`url`,`par_cat`),
ADD KEY `categories_par_cat_foreign` (`par_cat`);
ALTER TABLE `categories`
MODIFY `id` int(10) unsigned NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=3;
ALTER TABLE `categories` ADD CONSTRAINT `categories_par_cat_foreign`
FOREIGN KEY (`par_cat`) REFERENCES `categories` (`id`);
The problem is that even though I have unique keys, it doesn't work. If I insert 2 categories that have par_cat set to NULL and the same name and url, both are inserted into the database without a problem (and they shouldn't be). However, if I give those categories some other par_cat (for example 1, assuming a category with id 1 exists), only the first record is inserted (and that's the desired behaviour).
Question - how to handle this case? I read that:
A UNIQUE index creates a constraint such that all values in the index must be distinct. An error occurs if you try to add a new row with a key value that matches an existing row. This constraint does not apply to NULL values except for the BDB storage engine. For other engines, a UNIQUE index permits multiple NULL values for columns that can contain NULL. If you specify a prefix value for a column in a UNIQUE index, the column values must be unique within the prefix.
However, since my unique keys span multiple columns (only par_cat can be NULL; name and url cannot), I expected this not to apply here. Because par_cat references the id of the same table and some categories don't have a parent category, it has to allow NULL values.
This works as defined by the SQL standard. NULL means unknown. If you have two records with par_cat = NULL and name = 'X', the two NULLs are not regarded as holding the same value, so they don't violate the unique key constraint. (Well, one could argue that the NULLs still might mean the same value, but applying this rule would make working with unique indexes and nullable fields almost impossible, since NULL could just as well mean 1, 2 or any other value. So in my opinion they did well to define it the way they did.)
As MySQL does not support functional indexes where you could have an index on ISNULL(par_cat,-1), name, your only option is to make par_cat a NOT NULL column with 0 or -1 or whatever for "no parent", if you want your constraints to work.
I see that this was asked in 2014.
However it is often requested from MySQL: https://bugs.mysql.com/bug.php?id=8173 and https://bugs.mysql.com/bug.php?id=17825 for example.
People can click on affects me to try and get attention from MySQL.
Since MySQL 5.7 we can now use the following workaround:
ALTER TABLE categories
ADD generated_par_cat INT UNSIGNED AS (ifNull(par_cat, 0)) NOT NULL,
ADD UNIQUE INDEX categories_name_generated_par_cat (name, generated_par_cat),
ADD UNIQUE INDEX categories_url_generated_par_cat (url, generated_par_cat);
The generated_par_cat is a virtual generated column, so it takes no storage in the table itself (the unique indexes do store the generated values). When a user inserts (or updates), the value of generated_par_cat is computed on the fly, which is a very quick operation.
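A quick way to verify the workaround (example values assumed):
-- Both rows get generated_par_cat = 0, so the second INSERT now fails
-- with a duplicate-key error, as desired:
INSERT INTO categories (par_cat, name, url) VALUES (NULL, 'Books', 'books');
INSERT INTO categories (par_cat, name, url) VALUES (NULL, 'Books', 'books');
-- ERROR 1062 (23000): Duplicate entry 'Books-0' for key 'categories_name_generated_par_cat'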
Just in case you come from Laravel...
This is Laravel's migration version of the virtual-column workaround for the UNIQUE issue when one of the columns is NULL:
$table->integer('generated_par_cat')->virtualAs('ifNull(par_cat, 0)');
$table->unique(['name', 'generated_par_cat'], 'name_par_cat_unique');
Why do I get an error of the form:
Error in query: Duplicate entry '10' for key 1
...when doing an INSERT statement like:
INSERT INTO wp_abk_period (pricing_id, apartment_id) VALUES (13, 27)
...with 13 and 27 being valid ids for existing pricing and apartment rows, and the table is defined as:
CREATE TABLE `wp_abk_period` (
`id` int(11) NOT NULL auto_increment,
`apartment_id` int(11) NOT NULL,
`pricing_id` int(11) NOT NULL,
`type` enum('available','booked','unavailable') collate utf8_unicode_ci default NULL,
`starts` datetime default NULL,
`ends` datetime default NULL,
`recur_type` enum('daily','weekly','monthly','yearly') collate utf8_unicode_ci default NULL,
`recur_every` char(3) collate utf8_unicode_ci default NULL,
`timedate_significance` char(4) collate utf8_unicode_ci default NULL,
`check_in_times` varchar(255) collate utf8_unicode_ci default NULL,
`check_out_times` varchar(255) collate utf8_unicode_ci default NULL,
PRIMARY KEY (`id`),
KEY `fk_period_apartment1_idx` (`apartment_id`),
KEY `fk_period_pricing1_idx` (`pricing_id`),
CONSTRAINT `fk_period_apartment1` FOREIGN KEY (`apartment_id`) REFERENCES `wp_abk_apartment` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
CONSTRAINT `fk_period_pricing1` FOREIGN KEY (`pricing_id`) REFERENCES `wp_abk_pricing` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
Isn't key 1 the id in this case, and isn't having it on auto_increment sufficient for being able to not specify it?
Note: If I just provide an unused value for id, like INSERT INTO wp_abk_period (id, pricing_id, apartment_id) VALUES (3333333, 13, 27), it works fine, but then again, it is set as auto_increment so I shouldn't need to do this!
Note 2: OK, this is a complete "twilight zone" moment: after running the query above with the huge number for id, things started working normally, with no more duplicate entry errors. Can someone explain to me WTF MySQL was doing to produce this weird behavior?
It could be that your AUTO_INCREMENT value for the table and the actual values in id column have got out of whack.
This might help:
Step 1 - Get Max id from table
select max(id) from wp_abk_period
Step 2 - Align the AUTO_INCREMENT counter on table
ALTER TABLE wp_abk_period AUTO_INCREMENT = <value from step 1 + 100>;
Step 3 - Retry the insert
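Steps 1 and 2 can also be combined with a prepared statement, since ALTER TABLE won't accept a variable directly (a sketch):
SELECT MAX(id) + 100 INTO @next FROM wp_abk_period;
SET @sql := CONCAT('ALTER TABLE wp_abk_period AUTO_INCREMENT = ', @next);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;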
As for why the AUTO_INCREMENT got out of whack, I don't know. Was auto_increment added after data was already in the table? Was the auto_increment value altered after data was inserted into the table?
Hope it helps.
I had the same problem and here is my solution: my id column had a bad type. It was TINYINT, and MySQL wanted to write a 128th row (127 is TINYINT's signed maximum).
Sometimes the problem you think is huge turns out to be only a tiny parameter...
Late to the party, but I just ran into this tonight - duplicate key '472817' and the provided answers didn't help.
On a whim I ran:
repair table wp_abk_period
which output
Number of rows changed from 472816 to 472817
Seems like MySQL had the row count wrong, and the issue went away.
My environment:
mysql Ver 14.14 Distrib 5.1.73, for Win64 (unknown)
Create table syntax:
CREATE TABLE `env_events` (
`tableId` int(11) NOT NULL AUTO_INCREMENT,
`deviceId` varchar(50) DEFAULT NULL,
`timestamp` int(11) DEFAULT NULL,
`temperature` float DEFAULT NULL,
`humidity` float DEFAULT NULL,
`pressure` float DEFAULT NULL,
`motion` int(11) DEFAULT NULL,
PRIMARY KEY (`tableId`)
) ENGINE=MyISAM AUTO_INCREMENT=528521 DEFAULT CHARSET=latin1
You can check the current value of the auto_increment with the following command:
show table status
Then check the max value of the id and see if it looks right. If not, change the auto_increment value of your table.
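For example, with the table above (the ALTER value assumed):
SHOW TABLE STATUS LIKE 'env_events'; -- look at the Auto_increment column
SELECT MAX(tableId) FROM env_events; -- should be below Auto_increment
ALTER TABLE env_events AUTO_INCREMENT = 528521; -- realign if it is not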
When debugging this problem, check the table name case sensitivity (especially if you run MySQL on something other than Windows).
E.g. one script uses upper case in 'CREATE TABLE my_table' and another script tries to 'INSERT INTO MY_TABLE'. These 2 tables might have different contents and different file-system locations, which might lead to the described problem.