MySQL Auto Increment For Group Entries - mysql

I need to set up a table that will have two auto-increment fields. One field will be a standard primary key for each record added. The other field will be used to link multiple records together.
Here is an example.
field 1 | field 2
1       | 1
2       | 1
3       | 1
4       | 2
5       | 2
6       | 3
Notice that field 1 has the standard auto increment. Field 2 increments in a slightly different way: records 1, 2 and 3 were made at the same time, records 4 and 5 were made at the same time, and record 6 was made individually.
Would it be best to read the last entry for field 2 and then increment it by one in my PHP program? Just looking for the best solution.

You should have two separate tables.
ItemsToBeInserted
id, batch_id, field, field, field
BatchesOfInserts
id, created_time, field, field, field
You would then create a batch record, and add the insert id for that batch to all of the items that are going to be part of the batch.
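In SQL, a minimal sketch of that flow (the item columns field_a and field_b are hypothetical placeholders):
INSERT INTO BatchesOfInserts (created_time) VALUES (NOW());
SET @batch_id = LAST_INSERT_ID();
-- every item in the batch carries the same batch id
INSERT INTO ItemsToBeInserted (batch_id, field_a, field_b) VALUES
  (@batch_id, 'first item', 'x'),
  (@batch_id, 'second item', 'y'),
  (@batch_id, 'third item', 'z');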
You get bonus points if you add a batch_hash field to the batches table and then check that each batch is unique so that you don't accidentally submit the same batch twice.
If you are looking for a more awful way to do it that only uses one table, you could do something like:
$batch = $db->query('SELECT MAX(batch_id) + 1 AS new_batch_id FROM myTable')->fetchColumn(); // e.g. with PDO
and add that id to all of the inserted records. I wouldn't recommend that though. You will run into trouble down the line.

MySQL only offers one auto-increment column per table. You can't define two, nor does it make sense to do that.
Your question doesn't say what logic you want to use to control the incrementing of the second field you've called auto-increment. Presumably your PHP program will drive that logic.
Don't use PHP to query the largest ID number, then increment it and use it. If you do your system is vulnerable to race conditions. That is, if more than one instance of your PHP program tries that simultaneously, they will occasionally get the same number by mistake.
The Oracle DBMS has an object called a sequence which gives back guaranteed-unique numbers. But you're using MySQL. You can obtain unique numbers with a programming pattern like the following.
First create a table for the sequence. It has an auto-increment field and nothing else.
CREATE TABLE sequence (
  sequence_id INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (sequence_id)
);
Then when you need a unique number in your program, issue these three queries one after the other:
INSERT INTO sequence () VALUES ();
DELETE FROM sequence WHERE sequence_id < LAST_INSERT_ID();
SELECT LAST_INSERT_ID() AS sequence;
The third query is guaranteed to return a unique sequence number. This guarantee holds even if you have dozens of different client programs connected to your database. That's the beauty of AUTO_INCREMENT.
The second query (DELETE) keeps the table from getting big and wasting space. We don't care about any rows in the table except for the most recent one.
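To tie this back to the grouping question, here is a sketch of one batch insert that uses the sequence value as the shared group id (myTable and its columns are hypothetical):
INSERT INTO sequence () VALUES ();
DELETE FROM sequence WHERE sequence_id < LAST_INSERT_ID();
SET @group_id = LAST_INSERT_ID(); -- the unique sequence number for this batch
INSERT INTO myTable (field2, data) VALUES
  (@group_id, 'first row of the batch'),
  (@group_id, 'second row of the batch');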

Related

How to block auto_increment to set the next number always as maxnumber + 1

I have a table with an auto_increment field.
Normally, auto increment always starts with the max value + 1.
For example, if I insert two records, the auto increment field takes 1 and 2 as its values initially.
Now suppose I add a third row, explicitly setting the id field value to 100.
After this, if I add a fourth record, the auto increment value will be 101, not 3.
My question:
Is there any way in MySQL to force auto increment to follow its own series, skipping a value if it encounters a duplicate?
I doubt this can be done. Imagine this scenario: you insert 5 rows with ids 1 through 5, then delete the rows with ids 2 through 4, ending up with two rows, the ones with ids 1 and 5. Now you insert another row. What id would you expect the DB to use, 2 or 6?
I for one wouldn't want the database to do the former, because my program could rely on some primary keys not being there (think of a deleted blog post with a unique id: would you want someone to see a different blog post when they hit the URL corresponding to that id, rather than just getting a 404?).
Coming back to your question, the DB doesn't really know the difference between the following two situations:
The row with id = 100 was inserted like you mention, manually.
There existed 100 rows with ids 1 through 100 and rows 3 through 99 were deleted.
Of course, you might have a use-case for recycling ids, but you'll have to do it yourself. But before doing that, make sure you really want it :)

Maintaining order of elements in MySQL database tables OR inserting new rows in specific positions for MySQL

I have a database table that maintains some information and is required to preserve order. Essentially, if I have elements 1 through 5 listed and I want to add a new element, it could be inserted anywhere among the existing rows: at the end, after 5; at the beginning, before 1; or somewhere in the middle, such as after 3. Is there a way to do this using MySQL INSERT statements, specifying after which row the new one should be inserted?
I presume not. So my strategy is to create another column, 'order_number', that records the order of the elements.
For instance, if the record table has primary key (record_id) and the order_number listed side by side, it would look like this:
record_id | order_number
1         | 1
2         | 2
3         | 3
4         | 4
5         | 5
To add a new element after row 3, the resulting table will look like this:
record_id | order_number
1         | 1
2         | 2
3         | 3
**6**     | **4** <------ added row
4         | **5** <-- changed order_number
5         | **6** <-- changed order_number
In such a situation, I can clearly achieve the order that I want by simply selecting the data and providing an ORDER BY order_number ASC clause.
However, as you can see, a simple insert requires me to update the order_number of every row that comes after it. The table is expected to have an extensive number of rows (100,000 at minimum), and updating every other row (hence locking the table) on every single insert operation is not at all feasible.
What is a better recommended strategy in this case ?
If the order_number is not to be shown but only used for ordering, I suggest you use a decimal datatype instead of an integer. This way, when you have to insert a row "between" two existing rows, you can set its order_number to the average of the two existing order numbers.
In your example:
record_id | order_number
1         | 1.0
2         | 2.0
3         | 3.0
**6**     | 3.5 <---- added row
4         | 4.0 <-- no change
5         | 5.0 <-- no change
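A sketch of such an insert, assuming a hypothetical records table with the columns from the example plus a data column (the SELECT computes the midpoint of the two neighbours' order numbers):
-- insert a new element between record 3 and record 4
INSERT INTO records (order_number, data)
SELECT (a.order_number + b.order_number) / 2, 'new element'
FROM records AS a
JOIN records AS b ON b.record_id = 4 -- the row the new element goes before
WHERE a.record_id = 3;               -- the row the new element goes after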
There is a problem though: if you keep inserting numbers in the same area, some order numbers may end up too close for the precision of the datatype you have chosen, close enough that they can no longer be distinguished from one another.
To avoid this, your insert procedure will have to examine whether the two existing order numbers are too close. In that case, it could reassign the order numbers of other nearby rows, "stretching" the order numbers above and below to "make space" for a new value.
You could also have a "cleanup" procedure that runs periodically and does this "stretching" in the whole or large parts of the table.
I found this answer for a similar question: https://stackoverflow.com/a/6333717/1010050
In summary, it increments all of the record IDs below the one you will be adding, to maintain consistency. That still requires you to update all of the record IDs, so it isn't the most efficient. It does have the benefit, compared to your method, of maintaining the physical order in the database, rather than just a virtual order like you have.
Another way I can think of would be to record the child and parent record IDs for each record, rather than an order number, similar to a Doubly Linked List. Inserting an element in the middle would then only require updating two other records regardless of table size. This has the same disadvantage as your solution where the physical ordering would be wrong, so reading from the table in an ordered fashion would be more costly.
For example:
record_id | parent_id | child_id
0         | NULL      | 1
1         | 0         | 2
2         | 1         | NULL
When we insert a record after record_id = 1, the table becomes:
record_id | parent_id | child_id
0         | NULL      | 1
1         | 0         | 3
2         | 3         | NULL
3         | 1         | 2
Note how only the parent_id and child_id for IDs 1 and 2 had to change.
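A sketch of that insert using the numbers from the example (the records table is hypothetical; wrap the three statements in a transaction so readers never see a half-linked list):
START TRANSACTION;
INSERT INTO records (record_id, parent_id, child_id) VALUES (3, 1, 2);
UPDATE records SET child_id  = 3 WHERE record_id = 1; -- old predecessor now points forward to 3
UPDATE records SET parent_id = 3 WHERE record_id = 2; -- old successor now points back to 3
COMMIT;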
I think between these two solutions, the biggest thing to consider is what is your most common operation: reading out the values in order, or writing a new value in the middle somewhere. If it's reading, then updating the record IDs would be your best option in order to maintain the physical order of the database. If writing, then you can optimize for that by using the method I suggested similar to a doubly linked list, or your own order method.
Summary after question update:
Seeing that updating most of the records is not feasible, the other answer I found is definitely not valid. The solution of treating it like a doubly linked list is still plausible, however.

Primary Key Index Automatic

I'm currently doing a project using MySQL and am a complete beginner with it.
I made a table with the following columns:
ID // an integer column which is the primary key
Date // a Date column
Day // a String column
Now I just want to know whether there exists any method by which the ID column's value is automatically generated on insert.
For example, if I insert Date - 4/10/1992 and Day - WED as values, the MySQL server should automatically assign an integer value, starting from 1 and checking which values already exist.
That is, in a table containing the values
ID | Date       | Day
1  | 01/02/1987 | Sun
3  | 04/08/1990 | Sun
if I insert the Date and Day values specified in the example, the row should be inserted as
2  | 04/10/1992 | WED
I tried methods like using the auto incrementer, but I'm afraid it only ever increments the ID value.
There's a way to do this, but it's going to affect performance. Go ahead and keep auto_increment on the column, just for the first insert, or for when you want to insert more quickly.
Even with auto_increment on a column, you can specify the value, so long as it doesn't collide with an existing value.
To get the next value or first gap:
SELECT a.ID + 1 AS NextID FROM tbl a
LEFT JOIN tbl b ON b.ID = a.ID + 1
WHERE b.ID IS NULL
ORDER BY a.ID
LIMIT 1
If you get an empty set, just use 1, or let auto_increment do its thing.
For concurrency's sake, you will need to lock the table to keep other sessions from using the next ID which you just found.
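A sketch of that locking session (note that MySQL requires a separate lock entry for each alias used in the query):
LOCK TABLES tbl WRITE, tbl AS a READ, tbl AS b READ;
SELECT a.ID + 1 AS NextID FROM tbl a
LEFT JOIN tbl b ON b.ID = a.ID + 1
WHERE b.ID IS NULL
ORDER BY a.ID
LIMIT 1;
-- insert the new row using the NextID value found above, e.g. 2
INSERT INTO tbl (ID, Date, Day) VALUES (2, '1992-10-04', 'WED');
UNLOCK TABLES;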
Well, I understood your problem: you want to generate the entries in such a way that the values stay within a limit.
I've got a solution which is quite whacky; you may accept it if you feel like it.
Create your table with your primary key in auto increment mode using UNSIGNED INT (as everyone suggested here).
Now consider two situations:
If your table needs to be cleared every year or after some other fixed duration (if such a situation exists), perform an ALTER TABLE operation to disable auto increment mode, delete all your contents, and then enable it again.
If what you are doing is some sort of data warehousing, so the database keeps years of data, then run an SQL query to find the largest primary key value (using the predefined MAX() function) before you insert; if it is close to the UNSIGNED INT limit of 2^32, create a new table with the same structure, and maintain a separate table to track the number of tables of this type.
The trick is a bit complicated, and I'm afraid there is no simple way to do what you expected.
You really don't need to cover the gaps created by deleting values from integer primary key columns. They were especially designed to ignore those gaps.
The auto increment mechanism could have been designed to take into consideration either the gaps at the top (after you delete some products with the biggest id values) or all gaps. But it wasn't because it was designed not to save space but to save time and to ensure that different transactions don't accidentally generate the same id.
In fact PostgreSQL implements its SEQUENCE data type / SERIAL column (their equivalent of MySQL's auto_increment) in such a way that if a transaction requests the sequence to increment a few times but ends up not using those ids, they never get used. That's also designed to avoid the possibility of transactions ever accidentally generating and using the same id.
You can't even save space, because if you decide your table is going to use SMALLINT, that's a fixed-length 2-byte integer; it doesn't matter whether the values are all 0 or maxed out. A normal INTEGER is a fixed-length 4-byte integer.
If you use an UNSIGNED BIGINT, that's an 8-byte integer, which means it uses 8*8 bits = 64 bits. With an 8-byte integer you can count up to 2^64; even if your application works continuously for years and years it shouldn't reach a 20-digit number like 18446744070000000000 (if it does, what the hell are you counting, the molecules in the known universe?).
But, assuming you really have a concern that the ids might run out in a couple of years, perhaps you should be using UUIDs instead of integers.
Wikipedia states that "Only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%".
UUIDs can be stored as BINARY(16) if you convert them into raw binary, as CHAR(32) if you strip the dashes or as CHAR(36) if you leave the dashes.
Out of the 16 bytes = 128 bits of data, version-4 UUIDs use 122 random bits and 6 version/variant bits, while version-1 UUIDs are constructed from information about when and where they were created. Either way it is safe to create billions of UUIDs on different computers, and the likelihood of a collision is overwhelmingly minuscule (as opposed to generating auto-incremented integers on different machines).
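A sketch of storing UUIDs compactly as BINARY(16) using built-in MySQL functions (the items table is hypothetical; MySQL 8.0 also ships UUID_TO_BIN()/BIN_TO_UUID() helpers):
CREATE TABLE items (
  id BINARY(16) NOT NULL PRIMARY KEY,
  name VARCHAR(100)
);
-- strip the dashes, then pack the 32 hex digits into 16 raw bytes
INSERT INTO items (id, name) VALUES (UNHEX(REPLACE(UUID(), '-', '')), 'example row');
-- read it back in the familiar hex form
SELECT HEX(id) AS id_hex, name FROM items;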

Index counter shared by multiple tables in mysql

I have two tables, each one has a primary ID column as key. I want the two tables to share one increasing key counter.
For example, when the two tables are empty, and counter = 1. When record A is about to be inserted to table 1, its ID will be 1 and the counter will be increased to 2. When record B is about to be inserted to table 2, its ID will be 2 and the counter will be increased to 3. When record C is about to be inserted to table 1 again, its ID will be 3 and so on.
I am using PHP as the outside language. Now I have two options:
Keep the counter in the database as a single-row-single-column table. But every time I add things to table A or B, I need to update this counter table.
I could keep the counter as a global variable in PHP. But then I need to initialize the counter from the maximum key of the two tables when Apache starts, which I have no idea how to do.
Any suggestion for this?
The background is, I want to display a mix of records from the two tables in either ASC or DESC order of the creation time of the records. Furthermore, the records will be displayed in page-style, say, 50 records per page. Records are only added to the database rather than being removed. Following my above implementation, I can just perform a "select ... where key between 1 and 50" from two tables and merge the select datasets together, sort the 50 records according to IDs and display them.
Is there any other idea of implementing this requirement?
Thank you very much
Well, you will gain next to nothing with this setup; if you just keep the datetime of the insert you can easily do
SELECT * FROM
(
  SELECT columnA, columnB, inserttime
  FROM table1
  UNION ALL
  SELECT columnA, columnB, inserttime
  FROM table2
) AS merged -- MySQL requires an alias for the derived table
ORDER BY inserttime
LIMIT 0, 50 -- first page of 50 rows (offset 0)
And it will perform decently.
Alternatively (if chasing the last drop of performance), if you are merging the results it can be an indicator that you should merge the tables (why have two tables anyway if you are merging the results?).
Or do it as an SQL subclass pattern: one table maintains the IDs and other common attributes, and the other two reference the common ID sequence as a foreign key.
If you need creation time, won't it be easier to add a timestamp field to your db and sort according to that field?
I believe using ids as a reference for creation order is bad practice.
If you really must do this, there is a way. Create a one-row, one-column table to hold the last-used row number, and set it to zero. On each of your two data tables, create a BEFORE INSERT trigger to read that table, increment it, and set the newly-inserted row's number to that value (it must be BEFORE INSERT, since the row can no longer be changed in an AFTER INSERT trigger). I can't remember the exact syntax because I haven't created a trigger for years; see here http://dev.mysql.com/doc/refman/5.0/en/triggers.html
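For the record, a rough sketch of such a trigger, following the pattern from the linked manual page (table and column names are assumptions; LAST_INSERT_ID(expr) makes the increment-and-read atomic per connection):
CREATE TABLE shared_counter (last_id BIGINT UNSIGNED NOT NULL);
INSERT INTO shared_counter VALUES (0);

DELIMITER $$
CREATE TRIGGER table1_shared_id BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
  -- bump the counter and remember the new value in one atomic step
  UPDATE shared_counter SET last_id = LAST_INSERT_ID(last_id + 1);
  SET NEW.id = LAST_INSERT_ID();
END$$
DELIMITER ;
-- create an identical trigger on table2 so both tables share the counter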

MySql unique id for several records

I use one table with some usual columns such as id, name, email, etc. I'm also inserting a variable number of records in each transaction. To be more efficient I need one unique id, let's call it transaction id, that is the same for each group of records inserted in one transaction, and it should increment. What is the best approach for doing that?
I thought of using
select max(transaction_id) from users
and incrementing that value on the server side, but that seems like an old-fashioned solution.
You could have another table usergroups with an auto-incrementing primary key, you first insert a record there (maybe including some other useful information about the group). Then get the group's unique id generated during this last insert using mysql_insert_id(), and use that as the groupid for your inserts into the first table.
This way you're still using MySQL's auto-numbering, which guarantees you a unique groupid. Doing select max(transaction_id) from users and incrementing it isn't safe, since it's non-atomic (another thread may have read the same max(transaction_id) before you've had a chance to increment it, and will start inserting records with a conflicting groupid).
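A minimal sketch of that pattern in SQL (the non-id columns are hypothetical; LAST_INSERT_ID() is what PHP's mysql_insert_id() returns):
INSERT INTO usergroups (created_at) VALUES (NOW());
SET @groupid = LAST_INSERT_ID();
INSERT INTO users (groupid, name, email) VALUES
  (@groupid, 'Alice', 'alice@example.com'),
  (@groupid, 'Bob', 'bob@example.com');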
Add new table with auto_increment column
You can create a new table with an auto_increment column, which lets you generate unique integers in a thread-safe way. It'll work like this:
DB::insert_into_transaction_table()
transaction_id = DB::mysql_last_insert_id()  ## this is an integer value
for each record:
    DB::insert_into_table(transaction_id, ...other parameters...)
And you don't need MySQL transactions for this.
Generate unique string on server side before inserting
You can generate a unique id (for example a GUID) on the server side and use it when inserting all the records. But your transaction_id field must be long enough to store values generated this way (some CHAR(...) type). It'll work like this:
transaction_id = new_GUID()  ## this is usually a string value
for each record:
    DB::insert_into_table(transaction_id, ...other parameters...)
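If you'd rather let MySQL generate the value, its built-in UUID() function works as the GUID here (a sketch; the users columns besides transaction_id are hypothetical):
SET @tid = UUID(); -- e.g. '6ccd780c-baba-1026-9564-5b8c656024db'
INSERT INTO users (transaction_id, name, email) VALUES
  (@tid, 'Alice', 'alice@example.com'),
  (@tid, 'Bob', 'bob@example.com');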