MySQL - Insert Row Without Disrupting AUTO_INCREMENT

Quick question:
I have a sports league database with a list of games (let's say 40 or so). Each game is auto-assigned an ID number as the primary key when importing the entire schedule from a spreadsheet. The games are then displayed on the web page in descending order thanks to this invisible (to the user) primary key. Here's an example: League Schedule
Works great. The only problem is that sometimes the games are rescheduled and moved to a later date, or a new game is added and has to be inserted into an already existing schedule. Up to this point, I've had to manually edit each affected row's ID (using PhpMyAdmin) to account for the changes, and this can be quite tedious and time-consuming.
What I'd really like to do is set the table to readjust primary key values on the fly. Meaning, if I inserted a brand new game into the fifth row of the table, all games thereafter would automatically be readjusted (ID 5 would become 6, ID 6 would become 7, and so on).
Is there a way to set up the table to do this, or a particular SQL command I can use to accomplish the same thing? Apologies if this has already been asked many times in different ways. Any and all feedback is appreciated.

You should not use your PRIMARY KEY for that. Add a special column such as sort with a regular INDEX, not a UNIQUE one. It does not have to be an INT either; you can use real numbers. This way you will always be able to insert a new row between any two rows of your schedule.
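A minimal sketch of that idea, assuming a hypothetical games table (all names here are illustrative, not from the original post):

-- Display order comes from a separate, non-unique sort column;
-- the AUTO_INCREMENT primary key is left alone.
CREATE TABLE games (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    home_team VARCHAR(50) NOT NULL,
    away_team VARCHAR(50) NOT NULL,
    sort      DOUBLE NOT NULL,
    KEY idx_sort (sort)              -- regular index, not UNIQUE
);

-- To slot a new game between the rows sorted at 5 and 6,
-- pick any value in between, e.g. 5.5:
INSERT INTO games (home_team, away_team, sort)
VALUES ('Lions', 'Tigers', 5.5);

-- Schedule in display order:
SELECT * FROM games ORDER BY sort;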

No, auto-increment is required to be unique, but it is not required to be in any particular order or even contiguous. The fact that auto-increment is monotonically increasing is only by coincidence of its implementation. Don't rely on the values being in chronological order.
Trying to adjust the values is not only manual and awkward, but it risks race conditions, or else would require locking a lot of rows. What if you insert a row with id 5, but your table has 1 billion rows greater than id 5?
There's also a risk in renumbering primary key values: any user who got an email telling them to go to game 42 may end up going to the wrong game.
If you need to view the rows in a particular order (e.g. chronological), then use a DATE column for that, not an auto-increment column.
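For example, a hedged sketch with an assumed game_date column: sort on the date (with the id only as a tie-breaker), never on the auto-increment alone.

SELECT id, home_team, away_team, game_date
FROM games
ORDER BY game_date DESC, id DESC;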

Related

How to implement temporal data in MySQL

I currently have a non-temporal MySQL DB and need to change it to a temporal MySQL DB. In other words, I need to be able to retain a history of changes that have been made to a record over time for reporting purposes.
My first thought for implementing this was to simply do inserts into the tables instead of updates, and when I need to select the data, simply doing a GROUP BY on some column and ordering by the timestamp DESC.
However, after thinking about things a bit, I realized that that will really mess things up because the primary key for each insert (which would really just be simulating a number of updates on a single record) will be different and thus mess up any linkage that uses the primary key to link to other records in the DB.
As such, my next thought was to continue updating the main tables in the DB, but also create a new insert into an "audit table" that is simply a copy of the full record after the update, and then when I needed to report on temporal data, I could use the audit table for querying purposes.
Can someone please give me some guidance or links on how to properly do this?
Thank you.
Make the given table R temporal (i.e. maintain its history).
One design is to leave the table R as it is and create a new table R_Hist with valid_start_time and valid_end_time columns.
Valid time is the period during which the fact is true.
The CRUD operations can then be handled as follows (a SQL sketch of the UPDATE case follows the list):
INSERT
- Insert into R
- Insert into R_Hist with valid_end_time set to infinity
UPDATE
- Update the row in R
- Update valid_end_time with the current time for the previously "latest" tuple in R_Hist
- Insert into R_Hist with valid_end_time set to infinity
DELETE
- Delete from R
- Update valid_end_time with the current time for the "latest" tuple in R_Hist
SELECT
- Select from R for 'snapshot' queries (implicitly the 'latest' state)
- Select from R_Hist for temporal queries
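A rough SQL sketch of the UPDATE case, assuming columns R(id, val) and R_Hist(id, val, valid_start_time, valid_end_time); '9999-12-31 23:59:59' stands in for infinity, and every name here is an assumption:

START TRANSACTION;

-- 1) Change the current table
UPDATE R SET val = 'new value' WHERE id = 42;

-- 2) Close the previously "latest" history tuple
UPDATE R_Hist
SET valid_end_time = NOW()
WHERE id = 42
  AND valid_end_time = '9999-12-31 23:59:59';

-- 3) Open a new history tuple, valid from now until "infinity"
INSERT INTO R_Hist (id, val, valid_start_time, valid_end_time)
SELECT id, val, NOW(), '9999-12-31 23:59:59'
FROM R
WHERE id = 42;

COMMIT;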
Alternatively, you can design a new history table for every attribute of table R. With this design you capture attribute-level temporal data, as opposed to the entity-level history of the previous design. The CRUD operations are much the same.
I added a Deleted column and a DeletedDate column. Deleted defaults to false and DeletedDate to null.
The primary key is a composite key on IDColumn, Deleted, and DeletedDate.
You can index on Deleted so queries for current rows stay fast.
You get no duplicate-key errors on IDColumn because the primary key also includes Deleted and DeletedDate.
Assumption: you won't write to the same record more than once per millisecond; otherwise DeletedDate may not be unique and you could hit a duplicate-key error.
Updates are then done as a small transaction: select the row, take the results, change the relevant values, and insert a new row. Really it is an update that sets Deleted to true and DeletedDate to NOW(), after which you have it spit out the updated row and use that to get the primary key and/or any values not available to whatever API you built.
Not as good as a true temporal table, and it takes some discipline, but it builds the history into one table that is easy to report on.
I may start updating the DeletedDate column and change it to an Added/Deleted date, in addition to the added date, so I can sort records by one column (the Added/Deleted column), while always updating the AddedBy column and setting it to the same value as the Added/Deleted column for logging's sake.
Either way, you can just do a CASE expression (when the Added/Deleted date is not null use it, else use AddedDate) and ORDER BY that value DESC. So, yeah, this works.
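A rough sketch of this single-table history idea; all names are illustrative, and note that MySQL silently forces primary-key columns to NOT NULL, so a sentinel date stands in for "not deleted yet":

CREATE TABLE records (
    IDColumn    INT NOT NULL,
    Payload     VARCHAR(255),
    Deleted     TINYINT(1)  NOT NULL DEFAULT 0,
    DeletedDate DATETIME(3) NOT NULL DEFAULT '1970-01-01 00:00:00.000',
    PRIMARY KEY (IDColumn, Deleted, DeletedDate),
    KEY idx_deleted (Deleted)        -- fast filtering on current rows
);

-- An "update" closes the current version and inserts the new one:
START TRANSACTION;
UPDATE records
SET Deleted = 1, DeletedDate = NOW(3)
WHERE IDColumn = 7 AND Deleted = 0;

INSERT INTO records (IDColumn, Payload, Deleted, DeletedDate)
VALUES (7, 'new payload', 0, '1970-01-01 00:00:00.000');
COMMIT;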

Database design for time dependent fields

I am making a MySQL database and am fairly confident I know how to normalize it. However, there is an issue I am not sure how to deal with.
Say I have a table
users
----------
user_id primary key
some_field
some_field2
start_date
user_level
Now, user_level gives the user's level, which can be 1, 2, 3, 4, or 5, say. But as time passes the user may change levels. Obviously if they change levels I can simply do an UPDATE to the users table, but I want to keep a historical record of the users' past levels.
For this reason, I am considering a new table called user_level_history
user_level_history
--------------
id autoincrement primary key
user_id
level_start_date
and then modify the users table:
users
----------
user_id primary key
some_field
some_field2
start_date
user_level_history_id
Then to get the user's current level I look up the row where user_level_history_id = user_level_history.id.
And to get the user's history I can SELECT from user_level_history all rows with the user_id and order chronologically.
Is this the standard way to do this? I can't imagine I'm the first person to come across this problem.
One more point: I am imagining less than 5000 users. Would having many, many more users require a different solution?
Thanks in advance.
I think that could be designed like this:
Have a table for level information, with columns such as value (1, 2, 3, 4, 5) and description.
Have an association table user_level_history containing user_id, level_id, level_start_date, and so on.
Have a foreign key on the users table referencing the level table, playing the role of the user's currently active level.
You then need a mechanism so that whenever a user's level changes, a row is inserted into the history table; a sketch using a trigger follows.
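A hedged sketch of this design, with assumed table and column names and a trigger as one possible mechanism:

CREATE TABLE levels (
    level_id    TINYINT UNSIGNED PRIMARY KEY,   -- 1..5
    description VARCHAR(100) NOT NULL
);

CREATE TABLE users (
    user_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    start_date DATE NOT NULL,
    user_level TINYINT UNSIGNED NOT NULL,        -- the user's active level
    FOREIGN KEY (user_level) REFERENCES levels (level_id)
);

CREATE TABLE user_level_history (
    user_id          INT UNSIGNED NOT NULL,
    level_id         TINYINT UNSIGNED NOT NULL,
    level_start_date DATETIME NOT NULL,
    PRIMARY KEY (user_id, level_start_date),
    FOREIGN KEY (user_id)  REFERENCES users (user_id),
    FOREIGN KEY (level_id) REFERENCES levels (level_id)
);

-- Record every level change automatically:
DELIMITER //
CREATE TRIGGER trg_user_level_change
AFTER UPDATE ON users
FOR EACH ROW
BEGIN
    IF NEW.user_level <> OLD.user_level THEN
        INSERT INTO user_level_history (user_id, level_id, level_start_date)
        VALUES (NEW.user_id, NEW.user_level, NOW());
    END IF;
END//
DELIMITER ;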
No, you aren't the first. Querying temporal data is a common requirement, especially in data warehouse/data mining.
The relational data model doesn't have any native, built in support for storing or querying "temporal data".
A lot of work has been done on this; I have a book by C. J. Date et al. that covers the topic decently: "Temporal Data and the Relational Model". I've also come across several white papers.
One typical, reasonably simple approach to storing a "history" is to keep a "current" table (like the one you already have) and then add a "history" table. Whenever a row is changed (inserted, updated, deleted) in the "current" table, you add a row to the "history" table, along with the date the row was changed. (You can store a copy of the pre-change row, a copy of the post-change row, or both.)
With this approach, there's no need to add any columns to the "current" table.
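A minimal sketch of this "current table plus history table" pattern, with assumed names, storing a copy of the post-change row via a trigger:

CREATE TABLE account (
    account_id INT UNSIGNED PRIMARY KEY,
    balance    DECIMAL(12,2) NOT NULL
);

CREATE TABLE account_history (
    history_id  BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    account_id  INT UNSIGNED NOT NULL,
    balance     DECIMAL(12,2) NOT NULL,              -- post-change copy
    change_type ENUM('INSERT','UPDATE','DELETE') NOT NULL,
    changed_at  DATETIME NOT NULL
);

DELIMITER //
CREATE TRIGGER trg_account_after_update
AFTER UPDATE ON account
FOR EACH ROW
BEGIN
    INSERT INTO account_history (account_id, balance, change_type, changed_at)
    VALUES (NEW.account_id, NEW.balance, 'UPDATE', NOW());
END//
DELIMITER ;

Similar AFTER INSERT and AFTER DELETE triggers would complete the picture.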

MySQL Many to One relationship - is a primary key required?

Can someone help me understand this as I'm not sure if I should include a primary key in this one, because it doesn't look like I need it.
I have two table structures as follow:
Table 1: programs
program_id
cycle_unit
Table 2: program_has_days
day
week
program_id
A program can take many days to complete, so the program has a schedule, which is in table 2. The schedule lists the day of the week (for example, day 1 of week 1, then day 3 of week 2) on which the program can be completed. So here it has a one-to-many relationship. I'm wondering whether I should put a primary key (id) on table 2?
I don't think I'll need the primary key, as I won't be referring to the schedule directly. I always refer to the program_id to get the schedule. In this case, program_id can't be the primary key because it is not unique.
Yes, good practice; No, not required. It's good practice for every table to have a primary key. This will help you later if you decide you do actually need to reference the table's data - even if it's just to delete a few rows without having to specify some other unique set of its fields.
If you're not going to have many programs running over the same days/weeks, there's no need for a PK on table 2.
But if you're going to have many programs running on the same days/weeks, then you may wish to add a PK on Table 2 and have a third joining table between them. This way, you won't end up with multiple rows in Table 2 for the same day/week combination (i.e. there is no point keeping a row for every program that exists on that particular day/week), although this adds the complexity of checking whether an appropriate day/week row already exists.
This approach would be particularly relevant if you're interested in efficiently searching on which programs run on a particular day/week (but you've said you're not...).
Another case might be if you need a program to run on multiple schedules (e.g. the program runs for a few weeks, with different records for each day as you've outlined, but is then re-run in 6 months on a different schedule of days/weeks; or perhaps two different but concurrent schedules). This would require a PK on Table 2 with a joining table to keep track of the separate schedules, OR another key on Table 2 to distinguish which instance of a schedule a particular day/week/program combination belongs to.
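For what it's worth, a sketch of the "third joining table" variant described above; every name here is assumed:

CREATE TABLE programs (
    program_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    cycle_unit VARCHAR(20) NOT NULL
);

-- Table 2 gets its own PK and one row per distinct day/week combination.
CREATE TABLE schedule_days (
    schedule_day_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    week            TINYINT UNSIGNED NOT NULL,
    day             TINYINT UNSIGNED NOT NULL,
    UNIQUE KEY uk_week_day (week, day)
);

-- The joining table links programs to the days they run on.
CREATE TABLE program_schedule (
    program_id      INT UNSIGNED NOT NULL,
    schedule_day_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (program_id, schedule_day_id),
    FOREIGN KEY (program_id)      REFERENCES programs (program_id),
    FOREIGN KEY (schedule_day_id) REFERENCES schedule_days (schedule_day_id)
);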

MySQL PhpMyAdmin: Alter AUTO_INCREMENT and/or INSERT_ID

I have an invoices table which stores a single record for each invoice, with the id column (int AUTO_INCREMENT) being the primary key, but also the invoice reference number.
Now, unfortunately I've had to manual migrate some invoices generated on an old system which have a five digit id, instead of a four digit one which the current system uses.
However, even when I reset the AUTO_INCREMENT through PhpMyAdmin (Table Operations) back to the next four-digit id, it still inserts a five-digit one, namely the highest id currently in the table plus one.
From searching around, it would seem that I actually need to change the insert_id as well as the AUTO_INCREMENT? I've tried to execute ALTER TABLE invoices SET insert_id=8125 as well as ALTER TABLE invoices insert_id=8125, but neither of these commands seems to be valid.
Can anyone explain the correct way that I can reset the AUTO_INCREMENT so that it will insert records with id's 8125 onwards, and then when it gets to 10962 it will skip over the four records I've manually added and continue sequential id's from 10966 onwards. If it won't skip over 10962 - 10966 then this doesn't really matter, as the company doesn't generate that many invoices each year so this will occur in a subsequent year hence not causing a problem hopefully.
I would really appreciate any help with this sticky situation I've found myself in! Many Thanks
First thing I'll suggest is to ditch PHPMyAdmin because it's one of the worst "applications" ever made to be used to work with MySQL. Get a proper GUI. My favourite is SQLYog.
Now on to the problem. Never, ever tamper with the primary key, don't try to "reset" it as you said or to update columns that have an integer generated by the database. As for why, the topic is broad and can be discussed in another question, just never, ever touch the primary key once you've set it up.
Second thing: someone has been deleting invoice records, hence the auto-increment is now at 10k+ rather than at 8k+. It's not a bad thing in itself, but if you need sequential values for your invoices (such that there can't be a gap between invoices 1 and 5) then use an extra column called sequence_id or invoice_ref and use triggers to calculate that number. Don't rely on the auto_increment feature to reuse numbers that have been lost through DELETE operations.
Alternatively, what you can do is export the database you've been using, find the CREATE TABLE definition for the invoices table, find the line where it says AUTO_INCREMENT = [some number], and delete that clause. Import into your new database and the auto_increment will continue from the latest invoice. You could do the same by using ALTER TABLE, however it's safer to re-import.
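For the record, the ALTER TABLE syntax the question was reaching for looks like the sketch below; note that (at least for InnoDB) MySQL will not set the counter lower than the largest id already in the table, which is why the PhpMyAdmin reset appeared to have no effect. Column names other than id are assumptions.

-- There is no "SET insert_id" form of ALTER TABLE; the counter is set with:
ALTER TABLE invoices AUTO_INCREMENT = 8125;

-- Explicit ids always work, regardless of the counter; AUTO_INCREMENT only
-- fills in a value when the id column is omitted or NULL:
INSERT INTO invoices (id, customer, amount) VALUES (8125, 'ACME Ltd', 99.00);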

Different database tables joining on single table

So imagine you have multiple tables in your database, each with its own structure and each with a PRIMARY KEY of its own.
Now you want to have a Favorites table so that users can add items as favorites. Since there are multiple tables the first thing that comes in mind is to create one Favorites table per table:
Say you have a table called Posts with PRIMARY KEY (post_id) and you create a Post_Favorites with PRIMARY KEY (user_id, post_id)
This would probably be the simplest solution, but could it be possible to have one Favorites table joining across multiple tables?
I've though of the following as a possible solution:
Create a new table called Master with primary key (master_id). Add insert triggers on all tables in your database to generate a new master_id and write it alongside the row in your table. Also assume that we record in the Master table where each master_id has been used (i.e. in which table).
Now you can have one Favorites table with PRIMARY KEY (user_id, master_id)
You can select from the Favorites table and join with each individual table on the master_id to get the favorites per table. But would it be possible to get all the favorites with one query (maybe not a single query, but a stored procedure)?
Do you think that this is a stupid approach? Since you will perform one query per table what are you gaining by having a single table?
What are your thoughts on the matter?
One way would be to sub-type all possible tables to a generic super-type (Entity) and then link user preferences to that super-type.
I think you're on the right track, but a table-based inheritance approach would be great here:
Create a table master_ids, with just one column: an int-identity primary key field called master_id.
On your other tables, (users as an example), change the user_id column from being an int-identity primary key to being just an int primary key. Next, make user_id a foreign key to master_ids.master_id.
This largely preserves data integrity. The only place you can trip up is if you have a master_id = 1 together with a user_id = 1 and a post_id = 1. For a given master_id you should have only one entry across all tables; in that scenario you have no way of knowing whether master_id 1 refers to the user or to the post. A way to make sure this doesn't happen is to add a second column to the master_ids table, a type_id column: type_id 1 can refer to users, type_id 2 to posts, and so on. Then you are pretty much good.
Code "gymnastics" may be a bit necessary for inserts. If you're using a good ORM, it shouldn't be a problem. If not, stored procs for inserts are the way to go. But you're having your cake and eating it too.
I'm not sure I really understand the alternative you propose.
But in general, when given the choice of 1) "more tables" or 2) "a mega-table supported by a bunch of fancy code work", your interests are best served by more tables without the code gymnastics.
A red flag was "Add triggers on all tables in your database": each trigger fire is a performance hit of its own.
The database designers have built in all kinds of technology to optimize tables/indexes, much of it behind the scenes without you knowing it. Just sit back and enjoy the ride.
Try Database Answers for inspiration (no affiliation to me).
An alternative to your approach might be to have the favorites table as user_id, object_id, object_type. When inserting into the favorites table, just record the type of the favorite. However, I don't see a single simple query working with either your approach or mine. One way to go about it might be to use UNION to get one combined result set and then identify what type of record each row is based on the type column. You could also turn the UNION query into a MySQL VIEW and simply query that VIEW.
The benefit of a single favorites table is simplicity, even though some might consider it to go against database normalization rules. On the upside, you don't have to create so many favorites tables, and you can add anything to favorites easily just by coming up with a new object_type identifier.
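A hedged sketch of that favorites table and a UNION view over it; the posts and photos tables and their columns are assumptions:

CREATE TABLE favorites (
    user_id     INT UNSIGNED NOT NULL,
    object_id   INT UNSIGNED NOT NULL,
    object_type VARCHAR(20)  NOT NULL,          -- 'post', 'photo', ...
    PRIMARY KEY (user_id, object_type, object_id)
);

CREATE OR REPLACE VIEW favorite_items AS
SELECT f.user_id, f.object_type, p.post_id AS object_id, p.title AS label
FROM favorites f
JOIN posts p ON p.post_id = f.object_id AND f.object_type = 'post'
UNION ALL
SELECT f.user_id, f.object_type, ph.photo_id, ph.caption
FROM favorites f
JOIN photos ph ON ph.photo_id = f.object_id AND f.object_type = 'photo';

-- All of one user's favourites, whatever their type:
SELECT * FROM favorite_items WHERE user_id = 42;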
It sounds like you have an is-a type relationship that needs to be modeled. All of the items that can be favourited are a type of "item". It sounds like you are on the right track, but I wouldn't use triggers. What could be the right answer, if I have understood correctly, is to pull all the common fields into a single table called items (master is a poor name; master of what?). This should include all the common data needed when you fetch a user's favourite items; I'd expect fields like item_id (primary key), item_type and human_readable_name, and maybe some metadata about when the item was created, modified, etc. Each of your specific item types would then have its own table containing data specific to that item type, with an item_id field that has a foreign key relationship to the items table. You'd wrap each item type in its own insertion, update and selection SPs (e.g. InsertItemCheese, UpdateItemMonkey, SelectItemCarKeys). The favourites table would then work as you describe, but you only need to select from the items table. If your app needs the specific data for each item type, it has to be queried per item (caching is your friend here).
If MySQL supports SPs with multiple result sets you could write one that outputs all the items as a result set, then a result set for each item type if you need all the specific item data in one go. For most cases I would not expect you to need all the data all the time.
Keep in mind that not EVERY use of a PK column needs a constraint. For example, a logging table: even though it holds a copy of the PK column from the table being logged, you can't build a constraint on it.
What would be the worst possible case? You insert a record for Oprah's TV show into the favorites table, and then next year you delete the Oprah Show from the list of TV shows but don't delete that ID from the Favorites table. Will that break anything? Probably not. When you join favorites to TV shows, that record will simply fall out of the result set.
There are a couple of ways to share values for PKs. Oracle has the advantage of sequences; if you don't have those, you can add a "step" to your auto-number fields. There's always a risk, though.
Say you think you'll never have more than 10 tables of "things which could be favored". Then start your PKs at 0 for the first table and increment by 10, at 1 for the second table and increment by 10, at 2 for the third, and so on. That guarantees that all the values will be unique across those 10 tables. The risk is that a future requirement will add table 11; you can always 'pad' your guesstimate.
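MySQL has no per-table "step" setting; the closest built-in knobs are the session/server variables auto_increment_increment and auto_increment_offset (designed for multi-source replication), so the idea has to be applied per session rather than in the table definition. A hedged sketch, with the table names assumed:

-- Every client inserting into these tables must set the same variables,
-- which is the fragile part of this approach.
SET SESSION auto_increment_increment = 10;   -- the "step"

SET SESSION auto_increment_offset = 1;       -- first table gets 1, 11, 21, ...
INSERT INTO tv_shows (name) VALUES ('Oprah');

SET SESSION auto_increment_offset = 2;       -- second table gets 2, 12, 22, ...
INSERT INTO movies (name) VALUES ('Casablanca');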