How to scale a MySQL table to keep historic data without increasing table size

I have a questionnaire in my app. When a user submits it, I create data recording who submitted it and at what time (I have to apply further processing to the last object/questionnaire per user). This data is saved in my server's MySQL DB. Since this questionnaire is open to all my users and will be submitted multiple times, I do not want a new entry created for the same user on every submission, because that would increase the size of the table (the user count could be anywhere around 10M). But I also want to keep the old data as history for later processing.
Now I have this option in mind:
Create two tables: one main table to keep new objects and one history table to keep history objects. Whenever a questionnaire is submitted, it creates a new entry in the history table but updates the existing entry in the main table.
So, is there any better approach to this and how do other companies tackle such situations?

I think you should go through the SCD (Slowly Changing Dimension) concepts and decide which type is the better approach for you.
Please read this and I think you will find the best way for yourself:
Here
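For concreteness, an SCD Type 2 design keeps every version in a single table and marks the current one, so the "last questionnaire per user" lookup stays cheap. A minimal sketch, with illustrative table and column names (not from the question):
CREATE TABLE questionnaire_response (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id    BIGINT UNSIGNED NOT NULL,
  answers    JSON NOT NULL,
  valid_from DATETIME NOT NULL,
  valid_to   DATETIME NULL,  -- NULL marks the current version
  KEY idx_user_current (user_id, valid_to)
);
-- On a new submission: close the current version, then insert the new one.
START TRANSACTION;
UPDATE questionnaire_response
  SET valid_to = NOW()
  WHERE user_id = 42 AND valid_to IS NULL;
INSERT INTO questionnaire_response (user_id, answers, valid_from, valid_to)
VALUES (42, '{"q1": "yes"}', NOW(), NULL);
COMMIT;
The current row per user is the one with valid_to IS NULL; every row with valid_to set is history, so nothing is ever lost.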

Related

Database Design: Difference between using boolean fields and duplicate tables

I have to design a database schema for an application I'm building. I will be using MySQL. In this application, users enter data and it gets saved in the database. However, this data is not accessible to the public until the user publishes it. Currently, I have one table for storing all the data. I was wondering if a boolean field in this table that indicates whether the data has been published is a good idea. Or is it much better design to create one table for saved data and one table for published data, and move the saved data to the published table when the user presses Publish?
What are the advantages and disadvantages of using each one and is one of them considered better design than the other?
Case: Binary
They are about equal. Use this as a learning exercise -- implement it one way, watch it for a while, then switch to the other way.
(same) Space: Since a row exists exactly once, neither option is 'better'.
(favor 1 table) With two tables, "publishing" takes a transaction to atomically delete from one table and insert into the other (see the sketch after this list).
(favor 2 tables) Certain SELECTs will spend time filtering out records with the other value for published. (This applies to deleted, embargoed, approved, and a host of other possible boolean flags.)
Case: Revision history
If there are many revisions of a record, then two tables, Current data and History, are better. That is because the 'important' queries involve fetching only the Current data.
(PARTITIONs are unlikely to help in either case.)
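To make the two-table trade-off concrete, here is roughly what the "publish" move looks like with two tables (hypothetical draft_data and published_data tables with identical columns):
START TRANSACTION;
INSERT INTO published_data (id, user_id, content)
  SELECT id, user_id, content FROM draft_data WHERE id = 123;
DELETE FROM draft_data WHERE id = 123;
COMMIT;
With one table and a flag, the same action is a single statement:
UPDATE data SET published = 1 WHERE id = 123;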

I came up with this SQL structure to allow rolling back and auditing user information. Will this be adequate?

So, I came up with an idea to store my user information and the updates users make to their own profiles in a way that always makes it possible to roll back (as an option to give to the user, for auditing and support purposes, etc.), while at the same time improving (?) security and preventing malicious activity.
My idea is to store the user's info in rows but never allow the API backend to delete or update those rows, only to insert new ones that should be marked as the "current" data row. I created a graphical explanation:
Schema image
The potential issue I see with this model is that users may update their information too frequently, bloating up the database (1 million users with an average of 5 updates per user is 5 million entries). However, for this I came up with the idea of separating out the rows with "false" in the "current" column through partitioning, where they should not harm performance and can be cleaned up periodically.
Am I right to choose this model? Is there any other way to do such a thing?
I'd also use a second table user_settings_history.
When a setting is created, INSERT it in the user_settings_history table, along with a timestamp of when it was created. Then also UPDATE the same settings in the user_settings table. There will be one row per user in user_settings, and it will always be the current settings.
So the user_settings would always have the current settings, and the history table would have all prior sets of settings, associated with the date they were created.
This simplifies your queries against the user_settings table. You don't have to modify your queries to filter for the current flag column you described. You just know that the way your app works, the values in user_settings are defined as current.
If you're concerned about the user_settings_history table getting too large, the timestamp column makes it fairly easy to periodically DELETE rows over 180 days old, or whatever number of days seems appropriate to you.
By the way, 5 million rows isn't so large for a MySQL database. You'd want your queries to use an index where appropriate, but the size alone isn't a disadvantage.
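A minimal sketch of that write path, assuming user_id is the primary key of user_settings and the settings live in one payload column (names are illustrative):
START TRANSACTION;
-- Keep every version, stamped with when it was written.
INSERT INTO user_settings_history (user_id, settings, created_at)
VALUES (42, '{"theme": "dark"}', NOW());
-- Keep exactly one current row per user.
INSERT INTO user_settings (user_id, settings, updated_at)
VALUES (42, '{"theme": "dark"}', NOW())
ON DUPLICATE KEY UPDATE settings = VALUES(settings), updated_at = VALUES(updated_at);
COMMIT;
-- Periodic cleanup, per the 180-day suggestion above:
DELETE FROM user_settings_history
WHERE created_at < NOW() - INTERVAL 180 DAY;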

Indefinite number of tables vs. indefinite number of rows with multiple columns

Which one would be better (performance-wise and for maintenance): a database which creates tables dynamically, or one which just adds rows dynamically?
Suppose I am building a project in which I let users register. Say I have a table which stores only basic personal info, like name, DOB, date of joining, address, phone, etc. Say 10 columns.
Now is the tricky part.
Scene 1: Creating multiple tables
When a user completes registration, a message table is created, so a separate table is created for each user. The number of rows in each message table varies per user.
In the same way, there is a cart table for each user, like the message table.
So in Scene 1, two tables are created with every registration.
Scene 2: Adding Rows
The scenario is the same here as well, but in this case I have just 2 tables, message and cart. Rows are added only when there is activity.
Note:
You must assume that the number of users is more than 2000, with 50+ users active at all times. This means the message and cart tables are always busy in both cases: there are always update, add, delete, insert, select, etc. queries running simultaneously.
Also, which scene will consume more disk space?
While writing this, it makes me wonder what technique Facebook and others use. Do they use the Scene 2 style (all users (billions) sharing the same big, long message table)? Just wondering.
Databases have some basic rules defined for database design, called "database normalization". These basic rules allow us to eliminate redundant data.
1st Normal Form
Store one piece of information in only one column; a column should store only one piece of information.
2nd Normal Form
A table should have only the columns that are related to each other; all the related columns should be in one table.
Now if you look at your proposed design, a separate table for each user will split the same information/columns about all the users across thousands of tables, which violates the 2nd Normal Form.
You need to create one table and put all the related columns in that one table for all the users; then you can use normal SQL to query your data. But if you have a table for each user, my guess is that every query you execute from your application will be built dynamically, so for every query you will be using dynamic SQL, which is one of the SQL devils, and you want to avoid it whenever possible.
My suggestion would be to read more about database design. Once you have some basic understanding of it, draw your design on a piece of paper and see if it provides everything that your business requires/expects from this application. Spend some time on it now; it will save you a lot of pain later.
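To make Scene 2 concrete, here is a sketch of a single shared message table (column names are made up); each user is identified by a column value, not by a table name:
CREATE TABLE messages (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id    INT UNSIGNED NOT NULL,    -- which user the row belongs to
  body       TEXT NOT NULL,
  created_at DATETIME NOT NULL,
  KEY idx_user_time (user_id, created_at)  -- keeps per-user lookups cheap
);
-- One static query serves every user; no dynamic SQL needed.
SELECT body, created_at FROM messages WHERE user_id = 42 ORDER BY created_at DESC;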

Efficient way to keep a clone of a table row

I have a PHP application which uses a MySQL database. It has a table called profile which stores users' details. Now there is a need to keep a snapshot of a profile when the user performs a task, which means the whole table row related to that user must be cloned.
I found two ways of doing that.
1) Add another column to the table to mark whether a row was cloned, so the original profile can be distinguished (original/cloned). Profile data will be maintained in one table.
The other method is:
2) Add another table similar to profile (with the same fields) and store cloned profiles in that. Profile data will be maintained in two tables.
Which is more efficient in terms of performance and usability?
If you have it in one table with just a column/field to distinguish whether a row is a clone or the original, then you will always have double the number of records to handle, whereas if the clones are in another table you have only one table to worry about at a time, unless you need both at once. Another thing is that with a separate table you have a virtual backup of your main table, so in case one is in trouble you have something to fall back on, and vice versa. Additionally, you don't put yourself in danger of mixing up the records if it is a separate table. In other words, I would prefer your second approach over the first one.
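For the second approach, taking the snapshot is a single INSERT ... SELECT into the copy table (assuming profile_history mirrors profile's columns plus a timestamp; the column names here are made up):
INSERT INTO profile_history (user_id, name, email, snapshot_at)
SELECT user_id, name, email, NOW()
FROM profile
WHERE user_id = 42;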

Using Redis as a Key/Value store for activity stream

I am in the process of creating a simple activity stream for my app.
The current technology layer and logic is as follows:
All data relating to an activity is stored in MySQL, and an array of all activity IDs is kept in Redis for every user.
The user performs an action; the activity is stored directly in an 'activities' table in MySQL and a unique 'activity_id' is returned.
An array of this user's 'followers' is retrieved from the database, and for each follower I push this new activity_id into their list in Redis.
When a user views their stream, I retrieve the array of activity IDs from Redis based on their userid. I then perform a simple MySQL WHERE IN($ids) query to get the actual activity data for all these activity IDs.
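As a side note on that read path: IN() does not preserve the order of the Redis list, but MySQL's ORDER BY FIELD() can restore it using the same ID list fetched from Redis (the IDs here are made up):
SELECT * FROM activities
WHERE id IN (101, 99, 95)
ORDER BY FIELD(id, 101, 99, 95);  -- keep the Redis list order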
This kind of setup should, I believe, be quite scalable, as the queries will always be very simple IN queries. However, it presents several problems.
Removing a follower - If a user stops following someone, we need to remove all activity_ids that correspond to that user from their Redis list. This requires looping through all IDs in the Redis list and removing the ones that correspond to the removed user. This strikes me as quite inelegant; is there a better way of managing this?
'Archiving' - I would like to cap the Redis lists at a maximum length of, say, 1000 activity_ids, as well as frequently prune old data from the MySQL activities table to prevent it from growing to an unmanageable size. Obviously the cap can be achieved by removing old IDs from a user's stream list when we add a new one. However, I am unsure how to go about archiving this data so that users can view very old activity data should they choose to. What would be the best way to do this? Or am I simply better off enforcing this limit completely and preventing users from viewing very old activity data?
To summarise: what I would really like to know is whether my current setup/logic is a good or bad idea. Do I need a total rethink? If so, what are your recommended models? If you feel all is okay, how should I go about addressing the two issues above? I realise this question is quite broad and all answers will be opinion based, but that is exactly what I am looking for: well-formed opinions.
Many thanks in advance.
1 doesn't seem so difficult to perform (no looping), assuming the per-user stream lists are mirrored in a table (called Redis here):
DELETE Redis
FROM Redis
JOIN activities ON Redis.activity_id = activities.id
WHERE activities.user_id = 2
  AND Redis.user_id = 1;
2 I'm not really sure about archiving. You could create archive tables and periodically move old activities from the main table into them. A single, properly normalized activity table ought to be able to get pretty big, though. (Make sure any "large" activity stores its payload in a separate table; the main activity table should stay "narrow", since it's expected to have a lot of entries.)
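A sketch of that periodic move, assuming the activities table has a created_at column (the archive table shares its structure; on a live system you would do this in batches):
CREATE TABLE activities_archive LIKE activities;
START TRANSACTION;
INSERT INTO activities_archive
  SELECT * FROM activities WHERE created_at < NOW() - INTERVAL 90 DAY;
DELETE FROM activities WHERE created_at < NOW() - INTERVAL 90 DAY;
COMMIT;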