MySQL Database Structure with revisions/history

I've been looking into various DB structures for the task I'm trying to achieve, but it seems like my ideas are flawed. I first looked into a wiki's DB, but it seemed a bit complicated for what I want to do, and then I saw this, which looks closer to what I am trying to do.
I was thinking of having a table that keeps the final form and an extra table that keeps all the revisions/history. I am not sure, though, whether that would be too much. I am also not sure whether the above example uses this method.

I've done something similar - a database table with the ability to fork into multiple revisions and unlimited undo capability, without slowing down the database. I used an additional table to keep track of the "change vectors". Each change can be undone.
There are several types of changes, so your change table has to keep track of the type. For example, the simplest one would be a value change: you record the position (unique ID and column name) and the value before and after the change. During undo, the previous value is restored.
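As a rough illustration, a change table for simple value changes might look something like the sketch below. Table and column names here are assumptions for the example, not the poster's actual schema.

    -- Hypothetical sketch of a change table for simple value changes.
    CREATE TABLE change_log (
        change_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        row_id      BIGINT UNSIGNED NOT NULL,   -- unique ID of the changed row
        column_name VARCHAR(64)     NOT NULL,   -- which column was touched
        old_value   TEXT            NULL,       -- value before the change
        new_value   TEXT            NULL,       -- value after the change
        changed_at  TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

Undoing the latest change to a cell is then a matter of looking up old_value for that (row_id, column_name) pair and writing it back to the target table.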
The most expensive change is the addition or removal of a column. This is where you use external storage if you don't want an oversized longtext/longblob column. A NoSQL database such as MongoDB is suitable for this.
Hope this helps get you started.

Related

MySQL database activity log: fields vs table

So basically I am in the process of creating a personal finance tracking system. It occurred to me that keeping tabs on when each instance and transaction was last edited or updated might be relevant information some day.
Now as far as I can see there are two approaches to implement something like this:
Create "updated" fields to all the tables I want to keep track of and then let mysql update those fields for me (ON UPDATE clause)
Create a completely separate table for holding the log data and then update it with triggers and transactions
Now it seems that the 1st approach would have the benefit of keeping things simple and easy to maintain. However, how will this impact performance if I suddenly decide to pull every log entry in the database for review? Also, this kind of goes against normalization (not by much, though), with the same data stored in multiple tables.
The second approach would allow more flexibility in the logging system and might actually shorten the SQL queries needed to retrieve certain data. However, it would make the schema more complex, as two additional tables would have to be created (the actual log table and a many-to-many relation table for holding the keys) and maintained. On the other hand, if I ever want to implement an activity history, this approach would probably be the only one capable of doing it.
As such I would like to know some more pros and cons of each method. Since the 2nd option allows more flexibility I am considering implementing it, but I am not sure about performance issues. In the end it comes down to two questions:
Are there any real-life examples where both approaches are implemented?
And:
Are there any studies, comparisons or other resources that might shed some light on which is considered the more performance-friendly and "best practices" approach?
It depends on what kind of reporting you need and your current architecture.
If you just want to know the last update date, then having 2 fields (creation date and last update) should be enough. That's because having a separate table won't give any performance boost, but will make your code harder to maintain.
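For that simple case, MySQL can maintain both timestamps itself. A minimal sketch, assuming a hypothetical transactions table (note that older MySQL versions, before 5.6, allow only one TIMESTAMP column per table to use CURRENT_TIMESTAMP defaults, in which case created_at would have to be set explicitly on insert):

    -- Hypothetical table: MySQL keeps both timestamps up to date by itself.
    CREATE TABLE transactions (
        id         INT UNSIGNED  NOT NULL AUTO_INCREMENT PRIMARY KEY,
        amount     DECIMAL(10,2) NOT NULL,
        created_at TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP     NOT NULL DEFAULT CURRENT_TIMESTAMP
                                 ON UPDATE CURRENT_TIMESTAMP
    );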
It's another story if you want something more elaborate, like reporting differences (what was changed) and/or having a full change log for each transaction (there might be a few updates to one transaction, right?). In this case you actually must have a separate table, because otherwise it will bloat your table and reduce performance.
Based on my experience, I'd go with a separate table. It will be easier to maintain - your logging logic will be practically separated from everything else - and I think one day you'll need that additional info on your transactions and a full transaction history.
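For the more elaborate case, a separate log table filled by a trigger might look roughly like this (again hypothetical, building on the transactions table sketched above):

    -- Hypothetical change-log table, filled automatically on every update.
    CREATE TABLE transaction_log (
        log_id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        txn_id     INT UNSIGNED    NOT NULL,   -- which transaction was touched
        old_amount DECIMAL(10,2)   NULL,
        new_amount DECIMAL(10,2)   NULL,
        changed_at TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    DELIMITER //
    CREATE TRIGGER trg_transactions_audit
    AFTER UPDATE ON transactions
    FOR EACH ROW
    BEGIN
        INSERT INTO transaction_log (txn_id, old_amount, new_amount)
        VALUES (OLD.id, OLD.amount, NEW.amount);
    END//
    DELIMITER ;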
As far as performance goes, you won't notice any formidable difference unless your system is under serious load. But since your system is personal, either choice would suffice; just don't forget about proper indexing.
Note that I'm making a lot of assumptions here, so if you want something more specific, please provide your actual architecture and reporting needs. I'd suggest some books on high availability/performance, but they address general availability/performance rather than your specific needs.

Memcached with row data that change constantly

I have a question that I haven't found an answer to. Yet ;-)
I have a Django/MySQL application that runs memcached in the background. One of my tables changes on every access: when the user accesses the page, a "count" field is incremented, and this same table contains all the data that is going to be displayed.
Is it recommended to use memcached in this scenario? Or should I create a new relation table that will contain only "id" and "count" fields?
Thanks!
Sure, that's a valid use for memcached. The basic rule is that any time you update or delete, in the MySQL sense of the words, you need to do something to keep the memcached record consistent. Usually that is done by either adjusting the cached value right there, or deleting it so the next access rebuilds and saves it.
In your case, I would just get the value, increment it, and then set it. Depending on how important accuracy is to you, and how much concurrent traffic you get, you should consider the atomicity of the transactions outlined in this post.
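If you instead go the route mentioned in the question - a separate table holding just the id and count - a single atomic statement in MySQL sidesteps the read-increment-write race entirely. Table and column names below are made up for the sketch:

    -- Hypothetical counter table: one row per page being counted.
    CREATE TABLE page_counter (
        page_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
        hits    BIGINT UNSIGNED NOT NULL DEFAULT 0
    );

    -- Atomic increment: safe under concurrent requests, with no
    -- read-modify-write cycle needed in application code.
    INSERT INTO page_counter (page_id, hits)
    VALUES (42, 1)
    ON DUPLICATE KEY UPDATE hits = hits + 1;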

How to synchronize two MySQL databases which have different schemes?

I have two completely different MySQL databases, and each has its own user table schema. I want to synchronize the user tables (in real time) so that when a user is added to either database, the other one is updated accordingly. My question is: is this kind of synchronization possible? If yes, what is an effective way to do it?
If you want this to happen in the database whenever a row is inserted/updated, then it sounds like a candidate for a TRIGGER. This will allow you to write code so that whenever a change happens to table A, you can automatically make the 'mirror' change to table B. Be careful: since you're going to have triggers going in both directions, make sure you have a way of telling that an insert/update is coming from the other table's trigger, so you don't get caught in an infinite loop.
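A minimal sketch of one direction of this, assuming both databases live on the same MySQL server and using made-up database, table and column names (the reverse trigger would need the same sort of "does the row already exist?" guard):

    -- Hypothetical: mirror new users from db_a.users into db_b.members.
    DELIMITER //
    CREATE TRIGGER trg_sync_users_a_to_b
    AFTER INSERT ON db_a.users
    FOR EACH ROW
    BEGIN
        -- Skip the insert if the other side already has this user
        -- (e.g. the row was created there first).
        INSERT INTO db_b.members (email, display_name)
        SELECT NEW.email, NEW.full_name
        FROM DUAL
        WHERE NOT EXISTS (
            SELECT 1 FROM db_b.members m WHERE m.email = NEW.email
        );
    END//
    DELIMITER ;

One caveat: MySQL may refuse a chained trigger that tries to write back into the table the original statement is already modifying, so the two-way setup needs careful testing.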
Yes, it's possible, but since your schemas are different, the best way is just to do it manually: in the code that updates one of the databases, simply have it appropriately update the other database as well. This is pretty much necessary because your schemas are different; you're effectively implementing a manual mapper which is invoked at update time.
There ARE other ways to do it, but this is the simplest to put in place, and is very effective.
Edit: Okay, other ways to do this: have a regular job (cron job or similar) which queries one table for updates since the last run and propagates those updates to the second table; this method suffers from potential lag, however. Alternatively, you could do something based on triggers for each user table, but I'd recommend avoiding this approach, since it can introduce some serious execution-time increases, depending on how the triggers are implemented. But I'd still say the far simplest way is to modify your user table update code to modify both tables instead of just one.
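For the cron-based approach, the recurring job essentially boils down to one query like this (hypothetical names; it assumes the source table has an updated_at timestamp, the target has a unique key on email, and the job remembers when it last ran):

    -- Pull users changed since the last sync and upsert them into the
    -- other database's table. @last_sync_time is whatever the job recorded
    -- at the end of its previous run.
    INSERT INTO db_b.members (email, display_name)
    SELECT u.email, u.full_name
    FROM db_a.users AS u
    WHERE u.updated_at > @last_sync_time
    ON DUPLICATE KEY UPDATE display_name = VALUES(display_name);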

Migrating and comparing a SQL Server database

Today we downloaded RedGate's Toolbelt, in order to automate some database tasks that take a long time in our company.
The first task involves a 15 GB database we have, with a lot of indexes, constraints and also several triggers. We want this database to be migrated exactly - the schema, all the data, triggers, etc. - to a new DB, with the idea of reducing the size and also getting better performance by hiding all the mistakes committed in the past. Unfortunately, this was the DB from the first customer release of one of our products, and we used it to test a lot of things that did not always work very well. We are sure that if we do something like this, we will get more than 50% of the size back on our disk.
Can one of the Toolbelt tools, or some combination of them, be useful for this? If not, is there another tool available that would be useful for this task?
One common way this can happen is if you are not selecting all of your tables to be included in the compare. For example, you may have selected a child table and not the parent table. This could lead to an FK error like the one you describe.

Never delete entries? Good idea? Usual?

I am designing a system and I don't think it's a good idea to give the end user the ability to delete entries in the database. I think that way because the end user, once given admin rights, often ends up making a mess in the database and then turns to me to fix it.
Of course, they will need to be able to remove entries, or at least think that they did, if they are set as admin.
So, I was thinking that all the entries in the database should have an "active" field. If they try to remove an entry, it will just set the flag to "false" or something similar. Then there will be some kind of super admin - my company's team - who could change this field.
I already saw this in another company I worked for, but I was wondering if it is a good idea. I could just make regular database backups and then roll back if they make an error, and adding this field would add some complexity to all the queries.
What do you think? Should I do it that way? Do you use this kind of trick in your applications?
In one of our databases, we distinguished between transactional and dictionary records.
In a couple of words, transactional records are things that you cannot roll back in real life, like a call from a customer. You can change the caller's name, status etc., but you cannot dismiss the call itself.
Dictionary records are things that you can change, like assigning a city to a customer.
Transactional records and things that lead to them were never deleted, while dictionary ones could be deleted all right.
By "things that lead to them" I mean that as soon as the record appears in the business rules which can lead to a transactional record, this record also becomes transactional.
Like, a city can be deleted from the database. But when a rule appeared that said "send an SMS to all customers in Moscow", the cities became transactional records as well, or we would not be able to answer the question "why did this SMS get sent".
A rule of thumb for distinguishing was this: is it only my company's business?
If one of my employees made a decision based on data from the database (like, he made a report based on which some management decision was made, and then the data the report was based on disappeared), it was considered OK to delete this data.
But if the decision affected some immediate actions with customers (like calling, messing with the customer's balance etc.), everything that led to these decisions was kept forever.
It may vary from one business model to another: sometimes, it may be required to record even internal data, sometimes it's OK to delete data that affects outside world.
But for our business model, the rule from above worked fine.
A couple of reasons people do things like this are auditing and automated rollback. If a row is completely deleted then there's no way to automatically roll back that deletion if it was made in error. Also, keeping a row around and its previous state is important for auditing - a super user should be able to see who deleted what and when, as well as who changed what, etc.
Of course, that's all dependent on your current application's business logic. Some applications have no need for auditing and it may be proper to fully delete a row.
The downside to just setting a flag such as IsActive or DeletedDate is that all of your queries must take that flag into account when pulling data. This makes it more likely that another programmer will accidentally forget this flag when writing reports...
A slightly better alternative is to archive that record into a different database. This way it's been physically moved to a location that is not normally searched. You might add a couple fields to capture who deleted it and when; but the point is it won't be polluting your main database.
Further, you could provide an undo feature to bring it back fairly quickly, and do a permanent delete after 30 days or something like that.
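A rough sketch of that archive-and-delete step, with hypothetical table names, wrapped in a transaction so a row can't be lost halfway through:

    START TRANSACTION;

    -- Copy the row into the archive (same or separate database), together
    -- with who deleted it and when.
    INSERT INTO orders_archive (id, customer_id, total, deleted_by, deleted_at)
    SELECT id, customer_id, total, 42 /* current user's id */, NOW()
    FROM orders
    WHERE id = 1001;

    DELETE FROM orders WHERE id = 1001;

    COMMIT;

    -- A scheduled job could later make the delete permanent:
    -- DELETE FROM orders_archive WHERE deleted_at < NOW() - INTERVAL 30 DAY;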
UPDATE concerning views:
With views, the data still participates in your indexing scheme. If the amount of potentially deleted data is small, views may be just fine as they are simpler from a coding perspective.
I prefer the method you are describing. It's nice to be able to undo a mistake. More often than not, there is no easy way of going back on a DELETE query. I've never had a problem with this method, and unless you are filling your database with 'deleted' entries, there shouldn't be an issue.
I use a combination of techniques to work around this issue. For some things, adding the extra "active" field makes sense. Then the user has the impression that an item was deleted because it no longer shows up on the application screen. The scenarios where I would implement this include items for which a history must be kept... let's say invoices and payments. I wouldn't want such things being deleted for any reason.
However, there are some items in the database that are not so sensitive, let's say a list of categories that I want to be dynamic... I may then allow users with admin privileges to add and delete a category, and the delete could be permanent. However, as part of the application logic I will check whether the category is used anywhere before allowing the delete.
I suggest having a second database, like DB_Archives, where you add every row deleted from the main DB. The is_active field negates the very purpose of foreign key constraints, and YOU have to make sure that a row is not marked as deleted while it's still referenced elsewhere. This becomes overly complicated when your DB structure is massive.
This is an accepted practice that exists in many applications (Drupal's versioning system, et al.). Since MySQL scales very quickly and easily, you should be okay.
I've been working on a project lately where all the data was kept in the DB as well. The status of each individual row was kept in an integer field (data could be active, deleted, in_need_for_manual_correction, historic).
You should consider using views to access only the active/historic/... data in each table. That way your queries won't get more complicated.
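For example (hypothetical names, using the kind of integer status field described above), one view per status keeps the flag check out of everyday queries:

    -- Expose only "active" rows so everyday queries never repeat the check.
    CREATE VIEW active_invoices AS
    SELECT id, customer_id, total, status
    FROM invoices
    WHERE status = 1;   -- 1 = active in the integer status field

    -- Application code then selects from active_invoices instead of invoices.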
Another thing that made things easy was the use of UPDATE/INSERT/DELETE triggers that handled all the flag changing inside the DB and thus kept the complex stuff out of the application (for the most part).
I should mention that the DB was an MSSQL 2005 server, but I guess the same approach should work with MySQL, too.
Yes and no.
It will complicate your application much more than you expect, since every table that does not allow deletion will need an extra check (IsDeleted = false), etc. It does not sound like much, but when you build a larger application and, in a query of 11 tables, 9 require a check for non-deletion... it's tedious and error prone. (Well yeah, then there are deleted/non-deleted views... when you remember to create/use them.)
Some schema upgrades will become a PITA, since you'll have to relax FKs and invent "suitable" values for very, very old data.
I've not tried it, but I have thought a moderate amount about a solution where you'd zip the row data to XML and store that in some "Historical" table. Then, in case of "must have that restored now OMG the world is dying!1eleven", it's possible to dig it out.
I agree with all respondents that if you can afford to keep old data around forever, it's a good idea. For performance and simplicity, I agree with the suggestion of moving "logically deleted" records to "old stuff" tables rather than adding "is_deleted" flags (moving to a totally different database seems a bit like overkill, but you can easily switch to that more drastic approach later if the amount of accumulated data eventually turns out to be a problem for a single DB with normal and "old stuff" tables).