Delete data from MySQL InnoDB tables after one month has passed - mysql

Currently I am using cron for this. I thought it might be possible to implement some procedure that removes all data from the database that is older than one month, but I am not sure this is the best way.
The problem is that we have many servers with many cron processes, managed by a very small staff, and we need to keep things clear and easy to manage; that's why I don't want to add yet another cron process.
The data in the table I want to delete is statistics. A huge amount of this data is inserted every day, and if it is not deleted, the database will become unbelievably huge (about ~500 MB every day, which for us is quite a lot: 500 MB * 365 days is 182.5 GB per year).
Is it possible to delete data using some procedure in MySQL (perhaps after a new row is added), and is that a good idea?

If you're intending to move away from cron jobs, you could always create an event that runs at a scheduled frequency.
Whatever you do, it's a very bad idea to delete data every time a new row is added, as it will slow down your inserts and is more likely to fragment your tables.
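For reference, here is a minimal sketch of such a scheduled event, assuming a hypothetical statistics table with a created_at column (both names are assumptions, since the real schema isn't shown); the event scheduler also has to be enabled on the server.

    -- Requires the event scheduler: SET GLOBAL event_scheduler = ON;
    -- Table and column names are assumptions.
    CREATE EVENT purge_old_statistics
      ON SCHEDULE EVERY 1 DAY
      DO
        DELETE FROM statistics
        WHERE created_at < NOW() - INTERVAL 1 MONTH;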

Related

Enable dirty reads in MySQL

I have a script that runs daily on a table that deletes the last 30 days worth of data and inserts new data in its place.
Sometimes, someone is querying the table when that script begins, which causes the script to fail because I cannot delete data while someone is querying - so we get lock timeouts.
I don't really care if we have dirty reads, I just need to make sure that the daily process runs, irrespective of what other users are doing.
Is there a way in MySQL to ensure that my DELETE + INSERT happens, irrespective of whatever queries are running on that table?
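For reference, "dirty reads" correspond to MySQL's READ UNCOMMITTED isolation level. A minimal sketch of enabling it for the session doing the reading is below; whether this actually prevents the lock timeouts depends on which statements are holding locks, since plain InnoDB SELECTs are non-locking consistent reads anyway.

    -- Allow dirty reads for the current (reading) session only.
    SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;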

Database query efficiency

My boss is having me create a database table that keeps track of some of our inventory with various parameters. It's meant to be implemented as a cron job that runs every half hour or so, but the scheduling part isn't important since we've already discussed that we're handling it later.
What I want to know is whether it's more efficient to just delete everything in the table each time the script is called and repopulate it, or to go through each record, determine if any changes were made, and update each entry accordingly. It's easier to do the former, but given that we have over 700 separate records to keep track of, I don't know if the time it takes to do this would put a huge load on the server. The script is written in PHP.
700 records is an extremely small number of records to have performance concerns. Don't even think about it, do whichever is easier for you.
But if it is performance that you are after: updating rows is slower than inserting rows (especially if you are not expecting any generated keys, since an insertion is then a one-way operation to the database instead of a round trip), and TRUNCATE TABLE tends to be faster than DELETE FROM.
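A minimal sketch of the truncate-and-repopulate approach, using a hypothetical inventory_snapshot table (all names and values are assumptions):

    -- Empty the table quickly (TRUNCATE is faster than an unfiltered DELETE
    -- and also resets the AUTO_INCREMENT counter), then repopulate it.
    TRUNCATE TABLE inventory_snapshot;

    INSERT INTO inventory_snapshot (item_id, quantity, updated_at)
    VALUES
      (1, 42, NOW()),
      (2, 17, NOW());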
If your inventory rows have stable IDs (speaking of a SQL DB), it would be better practice to update them in place, since repeatedly deleting and re-inserting means that in theory your auto-increment IDs will eventually be exhausted (overflow); see the sketch after this answer.
Another approach would be to use a NoSQL DB like MongoDB and simply upsert the given JSON documents under their existing IDs; the DB itself will figure out whether to insert or update.
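To make the update-in-place route concrete in MySQL terms, a minimal sketch is an upsert keyed on the inventory ID (table and column names are assumptions; item_id is assumed to be the primary key or a unique key):

    -- Insert the row if it is new, otherwise update it in place; this avoids
    -- delete-and-reinsert churn of the AUTO_INCREMENT counter.
    INSERT INTO inventory (item_id, quantity, updated_at)
    VALUES (1, 42, NOW())
    ON DUPLICATE KEY UPDATE
      quantity   = VALUES(quantity),
      updated_at = VALUES(updated_at);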

How to create summary tables efficiently

Over the course of a month, a process inserts a large number of rows (~1M) into some database tables.
This happens daily, and the whole process lasts ~40 minutes. That is fine.
I created some "summary tables" from these inserts so as to query the data fast. This works fine.
Problem: I keep inserting data into the summary tables, so the time to build the summary table keeps pace with the process that inserts the actual data, and that is good. But if data inserted on previous days has changed (due to updates), I would need to "recalculate" those previous days. To handle this, instead of creating just today's summary data each day, I would have to change my process to recreate the summary data from the beginning of each month, which would increase my running time substantially.
Is there a standard way to deal with this problem?
We had a similar problem in our system, which we solved by generating a summary table holding each day's summary.
Whenever an UPDATE/INSERT changes the base tables, the summary table is updated. This will of course slow down these operations, but it keeps the summary table completely up to date.
This can be done using TRIGGERs, but as the operations are in one place, we just do it manually in a TRANSACTION.
One advantage of this approach is that there is no need to run a cron job to refresh/create the summary table.
I understand that this may not be applicable/feasible for your situation.
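To make the trigger variant concrete, here is a minimal sketch assuming a hypothetical sales base table and a daily_sales_summary table with a unique key on the day (all names are assumptions; the poster's actual schema isn't shown). UPDATE and DELETE on the base table would need corresponding triggers as well.

    DELIMITER $$

    -- Keep the per-day summary row in step with every insert into the base table.
    -- Assumes daily_sales_summary has a UNIQUE key on sale_date.
    CREATE TRIGGER sales_after_insert
    AFTER INSERT ON sales
    FOR EACH ROW
    BEGIN
      INSERT INTO daily_sales_summary (sale_date, total_amount, row_count)
      VALUES (DATE(NEW.created_at), NEW.amount, 1)
      ON DUPLICATE KEY UPDATE
        total_amount = total_amount + NEW.amount,
        row_count    = row_count + 1;
    END$$

    DELIMITER ;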

Ruby/Bash script to back up my table and delete records

I'm saving all the transactions in the DB instead of in logs, but I don't want the table to get huge and slow, so I was thinking of creating a cron job to do something every few months like:
1- Back up the table to the hard drive
2- Move all the records to a new table, something like table_backup
3- Delete the records from the original table
This way, since inserts could take a lot of time on a huge table, the table would be freed up every few months.
Please note that I'm using Ruby with ActiveRecord models to access the DB tables. What do you think is the best way to do such a thing, and are there any alternatives to what I suggested?
I would suggest the following:
Redundancy - depending on how critical your data is, you may want redundant storage (e.g. a master-slave database setup, or the database on a RAID device)
Backups - have hourly/daily/weekly backups (again, depending on how critical it is to maintain these backups, how much space you can afford for them, how much traffic you're getting, and what the impact is on the database) of the entire database.
Truncation - have a cron task (check out the whenever gem, which makes this easy) that deletes all entries older than some threshold (2 weeks?). There's no need to populate a new table just to delete old entries; a sketch of such a purge is shown after this list.
I believe these approaches are orthogonal, so you can pick whichever ones suit you, or implement the important one(s) first.
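A minimal sketch of the SQL such a truncation task would run, assuming a hypothetical transactions table with a created_at column and a two-week retention threshold:

    -- Purge rows older than the retention threshold (here, 14 days).
    DELETE FROM transactions
    WHERE created_at < NOW() - INTERVAL 14 DAY;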

Database design for heavy timed data logging

I have an application where I receive about 40,000 rows of data each day. I have 5 million rows to handle (a 500 MB MySQL 5.0 database).
Currently, those rows are all stored in the same table => slow to update, hard to back up, etc.
What kind of schema is used in such applications to keep the data accessible long term without the tables getting too big, while allowing easy backups and fast reads/writes?
Is PostgreSQL better than MySQL for this purpose?
1 - 40,000 rows/day is not that big.
2 - Partition your data by insert date: you can easily delete old data this way.
3 - Don't hesitate to add a data mart step (compute frequently requested metrics in intermediate tables).
FYI, I have used PostgreSQL with tables containing several GB of data without any problem (and without partitioning). INSERT/UPDATE time was constant.
We have log tables of 100-200 million rows now, and it is quite painful.
Backups are impossible; they would require several days of downtime.
Purging old data is becoming too painful - it usually ties up the database for several hours.
So far we've only seen these solutions:
Backups: set up a MySQL slave. Backing up the slave doesn't impact the main DB. (We haven't done this yet, as the logs we load and transform come from flat files; we back up those files and can regenerate the DB in case of failure.)
Purging old data: the only painless way we've found is to introduce a new integer column that identifies the current date, and partition the tables (requires MySQL 5.1) on that key, per day. Dropping old data is then a matter of dropping a partition, which is fast; a sketch is shown after this answer.
If, in addition, you need to run transactions on these tables continuously (as opposed to just loading data every now and then and mostly querying it), you probably need to look into InnoDB rather than the default MyISAM tables.
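A minimal sketch of that per-day partitioning trick, using hypothetical table, column, and partition names (requires MySQL 5.1 or later):

    -- The integer day key is set at load time, e.g. 20240115 for 2024-01-15.
    CREATE TABLE log_entries (
      id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      day_key INT NOT NULL,
      message TEXT,
      PRIMARY KEY (id, day_key)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (day_key) (
      PARTITION p20240115 VALUES LESS THAN (20240116),
      PARTITION p20240116 VALUES LESS THAN (20240117),
      PARTITION pfuture   VALUES LESS THAN MAXVALUE
    );

    -- Dropping a day's data is a fast metadata operation instead of a long DELETE.
    ALTER TABLE log_entries DROP PARTITION p20240115;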
The general answer is: you probably don't need all that detail around all the time.
For example, instead of keeping every sale in a giant Sales table, you create records in a DailySales table (one record per day), or even a group of tables (DailySalesByLocation = one record per location per day, DailySalesByProduct = one record per product per day, etc.)
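As an illustration, a day's rollup could be a single aggregate insert along these lines (a sketch only; the sales and daily_sales_by_product tables and their columns are assumptions):

    -- Roll yesterday's detail rows up into one summary row per product.
    INSERT INTO daily_sales_by_product (sale_date, product_id, total_amount, sale_count)
    SELECT DATE(sold_at), product_id, SUM(amount), COUNT(*)
    FROM sales
    WHERE sold_at >= CURRENT_DATE - INTERVAL 1 DAY
      AND sold_at <  CURRENT_DATE
    GROUP BY DATE(sold_at), product_id;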
First, huge data volumes are not always handled well in a relational database.
What some folks do is to put huge datasets in files. Plain old files. Fast to update, easy to back up.
The files are formatted so that the database bulk loader will work quickly.
Second, no one analyzes huge data volumes. They rarely summarize 5,000,000 rows. Usually, they want a subset.
So, you write simple file filters to cut out their subset, load that into a "data mart" and let them query that. You can build all the indexes they need. Views, everything.
This is one way to handle "Data Warehousing", which is what your problem sounds like.
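To make the bulk-loader step concrete in MySQL terms, a minimal sketch of loading a pre-filtered flat file into a data-mart table (the file path, table, and column names are all assumptions):

    -- Bulk-load a filtered CSV extract into the data-mart table.
    LOAD DATA INFILE '/var/data/sales_subset.csv'
    INTO TABLE sales_mart
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (sold_at, product_id, amount);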
First, make sure that your logging table is not over-indexed. By that I mean that every time you insert into, update, or delete from a table, any indexes you have also need to be updated, which slows down the process. If you have a lot of indexes specified on your log table, you should take a critical look at them and decide if they are indeed necessary. If not, drop them.
You should also consider an archiving procedure such that "old" log information is moved to a separate database at some arbitrary interval, say once a month or once a year. It all depends on how your logs are used.
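A minimal sketch of such an archiving step, assuming the archive is simply another schema on the same server and the log table has a created_at column (all names are assumptions):

    START TRANSACTION;

    -- Copy "old" rows into the archive schema, then remove them from the live table.
    INSERT INTO archive_db.log_entries
    SELECT * FROM live_db.log_entries
    WHERE created_at < NOW() - INTERVAL 1 MONTH;

    DELETE FROM live_db.log_entries
    WHERE created_at < NOW() - INTERVAL 1 MONTH;

    COMMIT;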
This is the sort of thing that NoSQL DBs might be useful for, if you're not doing the sort of reporting that requires complicated joins.
CouchDB, MongoDB, and Riak are document-oriented databases; they don't have the heavyweight reporting features of SQL, but if you're storing a large log they might be the ticket, as they're simpler and can scale more readily than SQL DBs.
They're a little easier to get started with than Cassandra or HBase (different type of NoSQL), which you might also look into.
From this SO post:
http://carsonified.com/blog/dev/should-you-go-beyond-relational-databases/