I want to generate a unique random integer identity (from 10000 to 99999) using pure MySQL; any ideas?
I don't want to generate the number in PHP by cycling (generate a number -> check it in the database), because I want an intelligent solution inside a MySQL query.
While it seems somewhat awkward, this is what can be done to achieve the goal:
SELECT random_number
FROM (
    SELECT FLOOR(10000 + RAND() * 90000) AS random_number
    FROM `table`
) AS candidates
WHERE random_number NOT IN (SELECT unique_id FROM `table`)
LIMIT 1
Simply put, it generates N random numbers, where N is the count of table rows, filters out those already present in the table, and limits the remaining set to one.
It could be somewhat slow on large tables. To speed things up, you could create a view from these unique ids, and use it instead of nested select statement.
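A minimal sketch of that view idea (table and column names are illustrative):

CREATE VIEW used_ids AS
    SELECT unique_id FROM `table`;

SELECT random_number
FROM (SELECT FLOOR(10000 + RAND() * 90000) AS random_number FROM `table`) AS t
WHERE random_number NOT IN (SELECT unique_id FROM used_ids)
LIMIT 1;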
Build a look-up table from sequential numbers to randomised id values in the range 100,000 to 1,000,010 (six digits):
create table seed ( i int not null auto_increment primary key );

-- 10 seed rows, then a 6-way cross join inserts 10^6 more (ids 11 .. 1000010)
insert into seed values (NULL),(NULL),(NULL),(NULL),(NULL),
                        (NULL),(NULL),(NULL),(NULL),(NULL);
insert into seed select NULL from seed s1, seed s2, seed s3, seed s4, seed s5, seed s6;

-- keep only the six-digit ids
delete from seed where i < 100000;

-- map sequential n -> shuffled id
create table idmap ( n int not null auto_increment primary key, id int not null );
insert into idmap select NULL, i from seed order by rand();
drop table seed;
select * from idmap limit 10;
+----+--------+
| n | id |
+----+--------+
| 1 | 678744 |
| 2 | 338234 |
| 3 | 469412 |
| 4 | 825481 |
| 5 | 769641 |
| 6 | 680909 |
| 7 | 470672 |
| 8 | 574313 |
| 9 | 483113 |
| 10 | 824655 |
+----+--------+
10 rows in set (0.00 sec)
(This all takes about 30 seconds to run on my laptop. You would only need to do this once for each sequence.)
Now you have the mapping, just keep track of how many have been used (a counter or auto_increment key field in another table).
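A hedged sketch of that bookkeeping (the counter table name is illustrative):

create table idmap_counter ( used int not null );
insert into idmap_counter values (0);

-- hand out the next unused random id
-- (wrap in a transaction if several clients draw ids concurrently)
update idmap_counter set used = used + 1;
select id from idmap where n = (select used from idmap_counter);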
I struggled with the solution here for a while and then realised it fails if the column has NULL entries. I reworked this with the following code:
SELECT my_tracker
FROM (SELECT FLOOR(10000 + RAND() * 90000) AS my_tracker FROM Table1) AS t
WHERE my_tracker NOT IN (SELECT tracker FROM Table1 WHERE tracker IS NOT NULL)
LIMIT 1
Fiddle here:
http://sqlfiddle.com/#!2/620de1/1
Hope it's helpful :)
The only half-way reasonable idea I can come up with is to create a table with a finite pool of IDs and remove them from that table as they are used. Those keys can be unique, and a script could be created to generate that table. You could then pull one of those keys with a random select from the available keys. I said 'half-way' reasonable, and honestly that was being way too generous, but it beats randomly generating keys until you create a unique one, I suppose.
My solution, implemented in CakePHP 2.4.7, is to create a table with one AUTO_INCREMENT field:
CREATE TABLE `unique_counters` (
`counter` int(11) NOT NULL AUTO_INCREMENT,
`field` int(11) NOT NULL,
PRIMARY KEY (`counter`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I then created a PHP function so that every time I insert a new record, it reads the generated id and deletes the row immediately.
MySQL keeps the counter status in memory, so all the numbers generated are unique until you reset the MySQL counter or run a TRUNCATE TABLE operation.
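The same pattern in plain SQL, as a minimal sketch outside CakePHP:

INSERT INTO unique_counters (field) VALUES (1);
SET @new_id = LAST_INSERT_ID(); -- the unique number handed out
DELETE FROM unique_counters WHERE counter = @new_id;
SELECT @new_id;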
Find below the Model created in CakePHP to implement all of this:
App::uses('AppModel', 'Model');

/**
 * UniqueCounter Model
 *
 */
class UniqueCounter extends AppModel {

    /**
     * Primary key field
     *
     * @var string
     */
    public $primaryKey = 'counter';

    /**
     * Validation rules
     *
     * @var array
     */
    public $validate = array(
        'counter' => array(
            'numeric' => array(
                'rule' => array('numeric'),
                //'message' => 'Your custom message here',
                //'allowEmpty' => false,
                //'required' => false,
                //'last' => false, // Stop validation after this rule
                //'on' => 'create', // Limit validation to 'create' or 'update' operations
            ),
        ),
    );

    public function get_unique_counter() {
        $data = array();
        $data['UniqueCounter']['counter'] = 0;
        $data['UniqueCounter']['field'] = 1;
        if ($this->save($data)) {
            $new_id = $this->getLastInsertID();
            $this->delete($new_id);
            return $new_id;
        }
        return -1;
    }
}
Any checks on the range of the result can be implemented in the same function by manipulating the value before returning it.
The RAND() function will generate a random number, but will not guarantee uniqueness. The proper way to handle unique identifiers in MySQL is to declare them using AUTO_INCREMENT.
For example, the id field in the following table will not need to be supplied on inserts, and it will always increment by 1:
CREATE TABLE animal (
id INT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (id)
);
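For instance, a minimal usage sketch:

INSERT INTO animal (name) VALUES ('dog'), ('cat');
SELECT * FROM animal; -- ids 1 and 2 were assigned automatically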
I tried to use this answer, but it didn't work for me, so I had to change the original query a little.
SELECT random_number
FROM (SELECT FLOOR(10000 + RAND() * 90000) AS random_number FROM `Table`) AS t
WHERE NOT EXISTS (SELECT ID FROM `Table` WHERE `Table`.ID = t.random_number)
LIMIT 1
Create a unique index on the field you want the unique random number in.
Then run:
UPDATE IGNORE `Table`
SET RandomUniqueIntegerField = FLOOR(RAND() * 1000000);
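Put together, a hedged sketch (the table name my_table is illustrative; it assumes the column starts out NULL, since a unique index permits multiple NULLs):

ALTER TABLE my_table ADD UNIQUE INDEX ux_rand (RandomUniqueIntegerField);

UPDATE IGNORE my_table
SET RandomUniqueIntegerField = FLOOR(RAND() * 1000000);

-- rows that collided were skipped and keep NULL; repeat for the stragglers
UPDATE IGNORE my_table
SET RandomUniqueIntegerField = FLOOR(RAND() * 1000000)
WHERE RandomUniqueIntegerField IS NULL;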
I worry about why you want to do this, but you could simply use:
SELECT FLOOR(RAND() * 1000000);
See the full MySQL RAND documentation for more information.
However, I hope you're not using this as a unique identifier (use an auto_increment attribute on a suitably large unsigned integer field if this is what you're after) and I have to wonder why you'd use MySQL for this and not a scripting language. What are you trying to achieve?
Related
Trying to
create a trigger that is called on INSERT and sets originId = id (AUTO_INCREMENT),
I've used the SQL suggested here in the first block:
CREATE TRIGGER insert_example
BEFORE INSERT ON notes
FOR EACH ROW
SET NEW.originId = (
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
AND TABLE_NAME = 'notes'
);
Due to information_schema caching I have also set
information_schema_stats_expiry = 0
in my.cnf file. Now the information gets updated almost instantly on every INSERT, as I've noticed.
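For what it's worth, the same setting can also be applied at runtime in MySQL 8.0, if I'm not mistaken:

SET GLOBAL information_schema_stats_expiry = 0;
SET SESSION information_schema_stats_expiry = 0;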
But, performing "direct" INSERTs via the console at ~2 min intervals, I keep getting stale AUTO_INCREMENT values in originId.
(They should be equal to the id fields.)
Meanwhile, explicit queries fetching AUTO_INCREMENT return the updated, correct values.
Thus I suspect that the result of SELECT AUTO_INCREMENT... subquery gets somehow.. what? cached?
How can one get around this?
Thank you.
Edit 1
I intended to implement sort of VCS this way:
User creates a new Note, the app marks it as 'new' and performs an INSERT into the MySQL table. This is the "origin" Note.
Then the user might edit this Note (completely) in the UI; the app will mark it as 'update' and INSERT it into the MySQL table as a new row, again. But this time originId should be filled with the id of the "origin" Note (by app logic). And so on.
This allows PARTITIONing by originId on SELECT, fetching only the latest versions to the UI.
initial Problem:
If originId of "origin" Note is NULL, MySQL 8 window function(s) in default (and only?) RESPECT_NULL mode perform(s) framing not as expected ("well, duh, it's all about your NULLs in grouping-by column").
supposed Solution:
Set originId of "origin" Notes to id on their initial and only INSERT, expecting 2 benefits:
Easily fetch "origin" Notes via originId = id,
perform correct PARTITION by originId.
resulting Problem:
id is AUTO_INCREMENT, so there's no way (known to me) of getting its new value (for the new row) on INSERT via backend (namely, PHP).
supposed Solution:
So, I was hoping to find some MySQL mechanism to solve this (avoiding manipulations with id field) and TRIGGERs seemed a right way...
Edit 2
I believed that automatically duplicating the id AUTO_INCREMENT field (or any field) within MySQL would be extra fast and super easy, but it totally doesn't appear so now.
So, possibly, better way is to have vcsGroupId UNSIGNED INT field, responsible for "relating" Note's versions:
On create and "origin" INSERT - fill it with MAX(vcsGroupId) + 1,
On edit and "version" INSERT - fill it with "sibling"/"origin" vcsGroupId value (fetched with CTE),
On view and "normal" SELECT - perform framing with Window Function by PARTITION BY vcsGroupId, ORDER BY id or timestamp DESC, then just using 1st (or ascending order by & using last) row,
On view and "origin" SELECT - almost the same, but reversed..
It seems easier, doesn't it?
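A minimal sketch of the "origin" INSERT under that scheme (vcsGroupId is from this question; the body column is an assumption):

INSERT INTO notes (vcsGroupId, body)
SELECT COALESCE(MAX(vcsGroupId), 0) + 1, 'first version'
FROM notes;
-- note: MAX()+1 is racy under concurrent inserts; a lock or counter table would be safer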
What you are doing is playing with fire. I don't know exactly what can go wrong with your trigger (besides that it already doesn't work for you), but I have a strong feeling that many things can and will go wrong. For example: what if you insert multiple rows in a single statement? I don't think the engine will update the information_schema for each row. And it's going to be even worse if you run an INSERT ... SELECT statement. So using the information_schema for this task is a very bad idea.
However - the first question is: why do you need it at all? If you need to save the "origin ID", then you probably plan to update the id column. That is already a bad idea. And assuming you find a way to solve your problem - what guarantees that the originId will not be changed outside the trigger?
However - the alternative is to keep the originId column blank on insert, and update it in an UPDATE trigger instead.
Assuming this is your table:
create table vcs_test(
id int auto_increment,
origin_id int null default null,
primary key (id)
);
Use the UPDATE trigger to save the origin ID, when it is changed for the first time:
delimiter //
create trigger vcs_test_before_update before update on vcs_test for each row
begin
    if new.id <> old.id then
        set new.origin_id = coalesce(old.origin_id, old.id);
    end if;
end//
delimiter ;
Your SELECT query would then be something like this:
select *, coalesce(origin_id, id) as origin_id from vcs_test;
See demo on db-fiddle
You can even save the full id history with the following schema:
create table vcs_test(
id int auto_increment,
id_history text null default null,
primary key (id)
);
delimiter //
create trigger vcs_test_before_update before update on vcs_test for each row
begin
    if new.id <> old.id then
        set new.id_history = concat_ws(',', old.id_history, old.id);
    end if;
end//
delimiter ;
The following test
insert into vcs_test (id) values (null), (null), (null);
update vcs_test set id = 5 where id = 2;
update vcs_test set id = 4 where id = 5;
select *, concat_ws(',', id_history, id) as full_id_history
from vcs_test;
will return
| id | id_history | full_id_history |
| --- | ---------- | --------------- |
| 1 | | 1 |
| 3 | | 3 |
| 4 | 2,5 | 2,5,4 |
View on DB Fiddle
The primary key of my table is a char of fixed length 11.
I have precomputed the key in my script, just like this:
SET pes = LPAD(person_number, 11, '0');
Now my variable meets my criteria; it only needs to be added to the database.
INSERT INTO db VALUES(pes, ...);
This is where my problem arises - since the primary key is supposed to be a char, I need to put the value of my variable in single quotes.
However, I don't know how to do that.
I tried to escape them, but
'\''pes'\''
doesn't work.
Is there a simple way to accomplish that?
This should not be a problem!
The MySQL VARCHAR data type should accept numbers!
CREATE TABLE USERS (USER_ID VARCHAR(255) NOT NULL UNIQUE PRIMARY KEY,
USER_NAME VARCHAR(255));
SET @U_ID = LPAD(2018, 11, '0');
INSERT INTO USERS VALUES (@U_ID, 'USER_1'),(2, 'USER_2');
SELECT * FROM USERS;
Will output:
USER_ID | USER_NAME
-------------------------
00000002018 | USER_1
2 | USER_2
This is a working fiddle
You can try to use doubled single quotes with the concat function.
INSERT INTO db VALUES(concat('''', @pes, ''''), ...);
Here is a sample
Schema (MySQL v5.7)
CREATE TABLE T(col varchar(50));
SET @pes = '1234';
INSERT INTO T VALUES (concat('''', @pes, ''''));
Query #1
select col
FROM T;
| col |
| ------ |
| '1234' |
View on DB Fiddle
I am trying to copy a table from one database to another database.
There are already several solutions to this problem; I used this method:
select *
into DbName.dbo.NewTable
from LinkedServer.DbName.dbo.OldTable
I have two databases: one is blog, containing a table named engineers; the other is msdata, containing a table named ms. I am trying to copy the table engineers to the database msdata. My query is:
select * into msdata.dbo.ms from linkedserver.blog.dbo.engineers;
but the output is
Undeclared variable: msdata
I don't know what the problem is here. Any help will be appreciated.
Just an illustration:
create table so_gibberish.fred1
(
id int auto_increment primary key,
what varchar(40) not null
);
insert so_gibberish.fred1 (what) values ('this'),('that');
insert into newdb789.fred1 select * from so_gibberish.fred1;
-- failed, error code 1146: Table 'newdb789.fred1' doesn't exist
create table newdb789.fred1
(
id int auto_increment primary key,
what varchar(40) not null
);
insert into newdb789.fred1(id,what) select id,what from so_gibberish.fred1;
insert into newdb789.fred1 (what) values ('a new thing');
select * from newdb789.fred1;
+----+-------------+
| id | what |
+----+-------------+
| 1 | this |
| 2 | that |
| 3 | a new thing |
+----+-------------+
Good: auto_increment was preserved and resumes at 3 for new things.
Try this alternative query below; make sure you have already created the table in the destination database:
INSERT INTO DestinationDatabase.DestinationTable
SELECT * FROM SourceDatabase.SourceTable;
Let's say I want to store users and groups in a MySQL database. They have an n:m relation. To keep track of all changes, each table has an audit table: user_journal, group_journal and user_group_journal. MySQL triggers copy the current record to the journal table on each INSERT or UPDATE. (DELETEs are not supported, because I would need to know which application user deleted the record, so there is a flag active that is set to 0 instead of deleting the row.)
My question/problem is: assume I am adding 10 users to a group at once. When I later click through the history of that group in the user interface of the application, I want to see the addition of those 10 users as one step, not as 10 independent steps. Is there a good solution to group such changes together? Maybe it is possible to have a counter that is incremented each time the trigger is ... triggered? I have never worked with triggers.
The best solution would be to put together all changes made within a transaction. So when the user updates the name of the group and adds 10 users in one step (one form controller call) this would be one step in the history. Maybe it is possible to define a random hash or increment a global counter each time a transaction is started and access this value in the trigger?
I don't want to make the table design more complex than having one journal table for each "real" table. I don't want to add a transaction hash into each database table (meaning the "real" tables, not the audit tables--there it would be okay of course). Also I would like to have a solution in the database--not in the application.
I played around a bit and then found a very good solution:
The Database setup
# First of all I create the database and the basic table:
DROP DATABASE `mytest`;
CREATE DATABASE `mytest`;
USE `mytest`;
CREATE TABLE `test` (
`id` INT PRIMARY KEY AUTO_INCREMENT,
`something` VARCHAR(255) NOT NULL
);
# Then I add an audit table to the database:
CREATE TABLE `audit_trail_test` (
`_id` INT PRIMARY KEY AUTO_INCREMENT,
`_revision_id` VARCHAR(255) NOT NULL,
`id` INT NOT NULL,
`something` VARCHAR(255) NOT NULL
);
# I added a field _revision_id to it. This is
# the ID that groups together all changes a
# user made within a request of that web
# application (written in PHP). So we need a
# third table to store the time and the user
# that made the changes of that revision:
CREATE TABLE `audit_trail_revisions` (
`id` INT PRIMARY KEY AUTO_INCREMENT,
`user_id` INT NOT NULL,
`time` DATETIME NOT NULL
);
# Now we need a procedure that creates a
# record in the revisions table each time an
# insert or update trigger will be called.
DELIMITER $$
CREATE PROCEDURE create_revision_record()
BEGIN
    IF @revision_id IS NULL THEN
        INSERT INTO `audit_trail_revisions`
            (user_id, `time`)
        VALUES
            (@user_id, @time);
        SET @revision_id = LAST_INSERT_ID();
    END IF;
END$$
# It checks if a user-defined variable
# @revision_id is set and if not it creates
# the row and stores the generated ID (auto
# increment) into that variable.
#
# Next I wrote the two triggers:
CREATE TRIGGER `test_insert` AFTER INSERT ON `test`
FOR EACH ROW BEGIN
    CALL create_revision_record();
    INSERT INTO `audit_trail_test`
        (id, something, _revision_id)
    VALUES
        (NEW.id, NEW.something, @revision_id);
END;
$$
CREATE TRIGGER `test_update` AFTER UPDATE ON `test`
FOR EACH ROW BEGIN
    CALL create_revision_record();
    INSERT INTO `audit_trail_test`
        (id, something, _revision_id)
    VALUES
        (NEW.id, NEW.something, @revision_id);
END;
$$
DELIMITER ;
The application code (PHP)
$iUserId = 42;
$Database = new \mysqli('localhost', 'root', 'root', 'mytest');
if (!$Database->query('SET @user_id = ' . $iUserId . ', @time = NOW()'))
die($Database->error);
if (!$Database->query('INSERT INTO `test` VALUES (NULL, "foo")'))
die($Database->error);
if (!$Database->query('UPDATE `test` SET `something` = "bar"'))
die($Database->error);
// To simulate a second request we close the connection,
// sleep 2 seconds and create a second connection.
$Database->close();
sleep(2);
$Database = new \mysqli('localhost', 'root', 'root', 'mytest');
if (!$Database->query('SET @user_id = ' . $iUserId . ', @time = NOW()'))
die($Database->error);
if (!$Database->query('UPDATE `test` SET `something` = "baz"'))
die($Database->error);
And … the result
mysql> select * from test;
+----+-----------+
| id | something |
+----+-----------+
| 1 | baz |
+----+-----------+
1 row in set (0.00 sec)
mysql> select * from audit_trail_test;
+-----+--------------+----+-----------+
| _id | _revision_id | id | something |
+-----+--------------+----+-----------+
| 1 | 1 | 1 | foo |
| 2 | 1 | 1 | bar |
| 3 | 2 | 1 | baz |
+-----+--------------+----+-----------+
3 rows in set (0.00 sec)
mysql> select * from audit_trail_revisions;
+----+---------+---------------------+
| id | user_id | time |
+----+---------+---------------------+
| 1 | 42 | 2013-02-03 17:13:20 |
| 2 | 42 | 2013-02-03 17:13:22 |
+----+---------+---------------------+
2 rows in set (0.00 sec)
Please let me know if there is a point I missed. I will have to add an action column to the audit tables to be able to record deletions.
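A minimal sketch of that action column, as an assumption of how it might look:

ALTER TABLE `audit_trail_test`
ADD COLUMN `_action` ENUM('insert', 'update', 'delete') NOT NULL DEFAULT 'insert';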
Assuming your rate of adding a batch of users to a group is less than once a second....
I would suggest simply adding a column of type timestamp named something like added_timestamp to user_group and user_group_journal. DO NOT MAKE THIS AN AUTO-UPDATE TIMESTAMP OR DEFAULT IT TO CURRENT_TIMESTAMP; instead, in your code, when you insert a batch into user_group, calculate the current date and time, then manually set it on all the new user_group records.
You may need to tweak your setup so the field is copied, along with the rest of the new user_group record, into the user_group_journal table.
Then you could create a query/view that groups on group_id and the new added_timestamp column, as sketched below.
If more fidelity is needed than 1 second, you could use a string column and populate it with a string representation of a more granular time (which you'd need to generate however the libraries of your language of use allow).
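A hedged sketch of that grouping view (table and column names assumed from the question):

CREATE VIEW user_group_batches AS
SELECT group_id,
       added_timestamp,
       COUNT(*) AS users_added -- one row per batch
FROM user_group_journal
GROUP BY group_id, added_timestamp;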
Is there a way to create a table in MySQL that has an automatic ID field, but where the ID is not sequential? For example, a random or pseudo-random ID.
I have found solutions that suggest generating an ID and trying to insert it until an unused ID is found (generating a sequential five digit alphanumerical ID),
but nothing that can be done directly in the table definition, or a simpler trick.
MySQL has a native function UUID() which will generate a globally unique identifier:
mysql> SELECT UUID();
-> '6ccd780c-baba-1026-9564-0040f4311e29'
You can store its output in a CHAR(36) column.
INSERT INTO table (`uuid`, `col1`, `col2`) VALUES (UUID(), 'someval', 'someval');
According to the documentation though,
Although UUID() values are intended to be unique, they are not necessarily unguessable or unpredictable. If unpredictability is required, UUID values should be generated some other way.
Addendum Another option is UUID_SHORT() for a 64-bit unsigned INT rather than a character field.
mysql> SELECT UUID_SHORT();
-> 92395783831158784
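A minimal storage sketch for UUID_SHORT(), assuming a BIGINT UNSIGNED column (the table name is illustrative):

CREATE TABLE gadgets (
    id BIGINT UNSIGNED NOT NULL PRIMARY KEY, -- holds UUID_SHORT() values
    name VARCHAR(30) NOT NULL
);
INSERT INTO gadgets VALUES (UUID_SHORT(), 'widget');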
Since you asked for a trick, you could use a common auto_increment id and "fake" it by multiplying it with a big prime (and then taking the result modulo 2^32):
CREATE TABLE AutoIncPrime
(id int unsigned auto_increment primary key
) ;
Insert values, from 1 to 10:
INSERT INTO AutoIncPrime
VALUES (),(),(),(),(),(),(),(),(),() ;
SELECT * FROM AutoIncPrime ;
Output:
id
---
1
2
3
4
5
6
7
8
9
10
Fake the id, with a View:
CREATE VIEW AutoIncPrime_v AS
SELECT
((id*1798672429 ) & 0xFFFFFFFF)
AS FakeUUID
FROM AutoIncPrime ;
Let's see our "UUIDs":
SELECT * FROM AutoIncPrime_v ;
Output:
FakeUUID
----------
1798672429
3597344858
1101049991
2899722420
403427553
2202099982
4000772411
1504477544
3303149973
806855106
You could even make it look more random with more complicated bit mixing:
CREATE VIEW AutoIncPrime_v2 AS
SELECT
( (((id*1798672429 ) & 0x55555555) << 1)
| (((id*1798672429 ) & 0xAAAAAAAA) >> 1)
)
AS FakeUUID
FROM AutoIncPrime ;
SELECT * FROM AutoIncPrime_v2 ;
FakeUUID
----------
2537185310
3918991525
2186309707
1558806648
604496082
1132630541
3719950903
2791064212
3369149034
808145601
The trick is that you still have a sequential id in the table - which you can use to join to other tables. You just don't show it to the users - but only show the fake one.
If the table gets big and the calculations slow, you can add another column to the table and store the FakeUUID value there with an INSERT trigger.
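A hedged sketch of persisting the fake value. (Caveat: NEW.id is not yet assigned in a BEFORE INSERT trigger on an auto_increment column, so this sketch backfills with an UPDATE after the insert instead of using a trigger.)

ALTER TABLE AutoIncPrime ADD COLUMN fake_uuid INT UNSIGNED NULL;

-- after inserting new rows, backfill their fake ids from the real ones
UPDATE AutoIncPrime
SET fake_uuid = (id * 1798672429) & 0xFFFFFFFF
WHERE fake_uuid IS NULL;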
Would a composite key work? A regular, standard auto_increment field. You insert your new record, retrieve its new ID, then hash that ID with a salt, and update the record with that hash value.
If you do this all within a transaction, the in-progress version of the record without the hash will never be visible until the hash is generated. And assuming you've done proper salting, the resulting hash value will be, for all intents and purposes, 'random'.
Note that you can't do this in a single step, as the value of last_insert_id() in MySQL is not updated with the new id until the record is actually written. The value retrieved during the actual insert parsing stage would be whatever id was inserted BEFORE this one.
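A minimal sketch of that flow (the public_id column and the salt are assumptions):

START TRANSACTION;
INSERT INTO my_table (name) VALUES ('example');
-- hash the fresh auto_increment id with a salt to get a 'random'-looking public id
UPDATE my_table
SET public_id = SHA2(CONCAT('my-secret-salt', id), 256)
WHERE id = LAST_INSERT_ID();
COMMIT;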
The only automatically generated default allowed in the table definition is AUTO_INCREMENT (MySQL Guide).
You should be able to write a trigger to automate this process though, maybe through the UUID() function as Michael suggested.