MySQL Backup/Change Monitoring

I am new to MySQL from an admin point of view.
I have spent the last few hours googling with no luck and was wondering if anyone could point me in the right direction of either what to google for or a suggestion.
Basically, I am looking for ideas on how best to monitor the data changes within a MySQL database so that at the end of the day I can look at the activity and either choose to roll back a few transactions or go back to the last daily backup.
I think this could be done programmatically with triggers, but I am not sure if that is a good route to head down; it is just one that seemed possible to me.
The other option is simply to roll back to a previous state: I think I will be able to do a daily dump of the database that could be rolled back to.
Cheers,
Rob

I would recommend triggers. I've used them to provide a replicated copy of a database and it works quite well. From within the trigger, insert a record into another table that indicates the operation performed and any data you need to associate with it.
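A minimal sketch of that kind of audit trigger, assuming a hypothetical orders table with id and total columns and a separate orders_audit table (all of these names are illustrative, not from the question):

```sql
-- Illustrative only: table and column names are assumptions.
CREATE TABLE orders_audit (
    audit_id   INT AUTO_INCREMENT PRIMARY KEY,
    operation  VARCHAR(10) NOT NULL,            -- 'INSERT', 'UPDATE' or 'DELETE'
    order_id   INT NOT NULL,                    -- key of the affected row
    old_total  DECIMAL(10,2) NULL,              -- value before the change, where applicable
    new_total  DECIMAL(10,2) NULL,              -- value after the change, where applicable
    changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER orders_after_update
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
    -- Record what changed so it can be reviewed (or manually reversed) later.
    INSERT INTO orders_audit (operation, order_id, old_total, new_total)
    VALUES ('UPDATE', OLD.id, OLD.total, NEW.total);
END//
DELIMITER ;
```

Similar AFTER INSERT and AFTER DELETE triggers would cover the other operations.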

Related

MySQL - What happens when multiple queries hit the database

I am working on a project which will be used by around 500 employees in my organization. Currently it's still in the development phase, and very few people (around 10) are using it. I'm using MySQL. I just want to know what happens if many users are doing front-end edits and then save at the same point in time. Some SELECT queries that I've written take as long as 6 seconds to execute. As only one query can be executed at any point in time, if a query is already in progress and another hits the database, will it create a problem? If this is a common situation in large-scale projects, please let me know how I can handle it. I'm not sure if I've made myself clear :). Any advice or links will be very helpful.
From a technical aspect, no - nothing bad will happen; the database won't go ballistic and die on you. Databases are made for exactly this kind of concurrent access.
From a logical point of view - something bad will happen. If two people edit the same thing at the same time and then post it at the same time, the edits get saved to disk one after another. The last one to save is the one whose updates end up on disk, effectively causing the first person to lose their changes.
You can approach this problem from several angles. Some projects introduce the concept of locking (not table locking but in-app locking). It revolves around marking a record as locked using a boolean column, and if anyone tries to access that record for updating, the software says that someone else is editing it. It's really difficult to implement and most of the time it doesn't work as expected (I vaguely remember Joomla! using something like that; it was one of the most annoying features ever).
The other option you have is to save each update as a revision. That way you can keep track of who updated what and when, and you never lose any records that would otherwise get overwritten. I believe that SO and Wikipedia use that approach, and it works really well because you can inspect what two or more people have done and merge their contributions.
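A sketch of that revision idea, using a hypothetical articles table (the names here are assumptions): instead of updating a row in place, each save inserts a new revision, and the "current" version is simply the latest one.

```sql
-- Illustrative only: every save is an INSERT, so nothing is ever overwritten.
CREATE TABLE article_revisions (
    revision_id INT AUTO_INCREMENT PRIMARY KEY,
    article_id  INT NOT NULL,
    body        TEXT NOT NULL,
    edited_by   INT NOT NULL,                               -- user who made this revision
    edited_at   TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- The current version of an article is just its most recent revision.
SELECT r.*
FROM article_revisions r
JOIN (SELECT article_id, MAX(revision_id) AS latest
      FROM article_revisions
      GROUP BY article_id) m
  ON m.article_id = r.article_id AND m.latest = r.revision_id
WHERE r.article_id = 42;
```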
Optimistic Concurrency Control
http://en.wikipedia.org/wiki/Optimistic_concurrency_control
Make sure that each record carries metadata on its last changed/modified time, and load that as part of your data object. Then, when attempting to commit the row to the database, check the last_modified time in the table to ensure that it is the SAME as the one stored in memory for your object. If it matches, commit; otherwise, throw an exception.
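A minimal sketch of that check in SQL, assuming a hypothetical products table with a last_modified column (both names are illustrative):

```sql
-- Illustrative only: the update succeeds only if the row has not changed
-- since we read it; @old_last_modified is the timestamp loaded earlier.
UPDATE products
SET    price = 9.99,
       last_modified = NOW()
WHERE  id = 42
  AND  last_modified = @old_last_modified;

-- If the affected row count is 0, someone else changed the row in the
-- meantime: reload it, re-apply or report a conflict, then retry.
```

In practice the application checks the affected-row count rather than the database throwing; zero rows updated means the optimistic check failed.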

Decentralized synchronized secure data storage

Introduction
Hi, I am going to ask a question which seems utopian to me, but I need to know if there is a way to achieve what I need. And if not, I need to know why not.
The idea
Suppose I have a database structure in MySQL.
I want to create some solution that allows anyone (no matter who, no matter where) to have a synchronized copy (an updated clone) of this database, with its content.
And it is not going to be just one synchronized copy; it could (and should) be replicated multiple times (say, as a baseline, ten copies all over the world).
And, most importantly: it must be secure. By secure I mean that only genuine, accepted transactions will be synchronized to all the other database copies/clones (no matter how many there are).
Note: since it would be quite difficult to make the synchronization real-time, I will design everything so that this feature is dispensable. So it is not required.
My auto-suggestion
This is how I am thinking to manage it:
Time identifiers and update checking: every action (insert, update, delete, ...) will be stored as the action instruction itself, associated with a time identifier. (I think that, rather than a DATETIME field, it will be an INT one holding the number of milliseconds elapsed since 1 January 2013, for example.) So each copy is going to ask its "neighbour copy" for new actions done since the last update, and execute them after checking that they are allowed.
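A sketch of such an action log under this scheme (the table and column names are only illustrations):

```sql
-- Illustrative only: each clone appends every action it accepts,
-- stamped with milliseconds elapsed since 2013-01-01 00:00:00 UTC.
CREATE TABLE action_log (
    action_id  BIGINT AUTO_INCREMENT PRIMARY KEY,
    ts_ms      BIGINT NOT NULL,        -- milliseconds since 1 January 2013
    origin     VARCHAR(64) NOT NULL,   -- which clone produced the action
    statement  TEXT NOT NULL           -- the action instruction itself
);

-- A clone asking a neighbour for everything newer than what it has seen:
SELECT action_id, ts_ms, origin, statement
FROM   action_log
WHERE  ts_ms > @last_seen_ts_ms
ORDER BY ts_ms;
```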
Problem 1: the "neighbour copy" could be outdated too.
Solution 1: do not ask just one neighbour; build a random list with some of the copies/clones and ask them for news (I could skip the list and ask ALL the clones for updates, but this would be inefficient if the number of clones grows too large).
Problem 2: Real-time global synchronization is not active. What if...
Someone at CLONE_ENTERPRISING inserts a row into TABLE.
... this row goes to every clone ...
Someone at CLONE_FIXEMALL deletes this row.
... and at the same time, somewhere in an outdated clone ...
Someone at CLONE_DROPOUT edits this row (which no longer exists at the other clones).
Solution 2: easy stuff - force a GLOBAL synchronization before doing any action that depends on existing data (an edit, for example). This global sync would be unnecessary for an INSERT, for instance.
Note: well, someone could have some fun and make the same insert in two clones... since they're not getting updated in real time, this row would exist twice. But it's the same as with a single database: where needed, we check whether an identical row already exists before doing the final action. Not a problem.
Problem 3: it is possible to edit the code and not filter actions, so someone could spread instructions to delete everything, or just do some trolling. This is not a problem, since good clones will always exist somewhere. Clones that have gone bad simply won't be of interest anymore.
I really appreciate it if you have read this far. I know this is not the perfect solution - it possibly has hundreds of holes - but it is my basic starting point. I will appreciate anything you can teach me. Thanks a lot.
PS: It could be that what I am trying to do already exists and has its own name. Sorry for asking, in that case (I would still be thankful to learn that name, if it exists).
I would suggest taking a look at the Sync Framework from Microsoft. It might be better suited to SQL Server, but it should work with MySQL too. The problem you are tackling is quite a complex one.

MySQL push changes

I'd like to be able to replicate a bunch of MySQL tables to a custom service.
Right now, my best idea is creating an AFTER INSERT trigger on each table and having these push to a 'cache' table that will get polled by my custom service for updated rows.
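A minimal sketch of that cache-table idea (the customers table and all column names are assumptions, just to make it concrete):

```sql
-- Illustrative only: the service polls change_queue and deletes rows it has processed.
CREATE TABLE change_queue (
    queue_id   BIGINT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    row_id     INT NOT NULL,
    queued_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER //
CREATE TRIGGER customers_after_insert
AFTER INSERT ON customers
FOR EACH ROW
BEGIN
    -- Note the new row's key so the external service can pick it up.
    INSERT INTO change_queue (table_name, row_id)
    VALUES ('customers', NEW.id);
END//
DELIMITER ;
```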
The problem with the above is that it means I have to poll at regular intervals. I'm wondering if there is a way to do it where mysql pushes updates to my service. The best way for this that I can think of is if triggers could support actions other than updating other tables, like doing a POST (which as far as I can tell is not possible).
I'm pretty sure there's a way to have MySQL push binary logs to me somehow, but I don't know how to do that.
You can extend the engine to run system code from your function. Here's an overview.
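For concreteness, one approach along those lines uses a UDF such as sys_exec from the lib_mysqludf_sys project; this is an assumption on my part, not something the answer specifies, and the table and script names below are placeholders:

```sql
-- Illustrative only: requires the lib_mysqludf_sys UDF to be installed, and
-- running shell commands from triggers has real security and performance costs.
DELIMITER //
CREATE TRIGGER customers_after_insert_notify
AFTER INSERT ON customers
FOR EACH ROW
BEGIN
    DECLARE result INT;
    -- Call an external script that, for example, performs the HTTP POST.
    SET result = sys_exec(CONCAT('/usr/local/bin/notify.sh ', NEW.id));
END//
DELIMITER ;
```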
Given this effort (setup and maintenance), a polling script doesn't look too bad.

Get all database actions (insert, update, delete, alter, ...) information

In SQL Server 2008, is there a way to get the user that inserted some rows, or updated, deleted, dropped, or altered some tables?
Can we also get the date on which that occurred?
And is there a way to know whether the data was inserted from the same machine or from another machine?
Edit: if that is really hard, then maybe a way to achieve this is to use triggers.
But is there a way to catch every action that happens on the DB so I can log them all?
Something like "on insert on any table".
I want everything to be done in the DB, so that no matter what business app I use, it will be logged.
Unless you already had something set up in advance - a CDC mechanism of some kind - it is going to be incredibly difficult to extract that information from the logs. It is possible given enough time, but it is a highly skilled forensic activity that is extremely time-consuming to perform (and it relies on full logs being available). There are third-party log readers that can help with this, but it will still be a huge effort.
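For reference, setting up SQL Server's built-in Change Data Capture ahead of time looks roughly like this (the database and dbo.Orders table names are placeholders; CDC is only available in editions that support it, such as Enterprise in SQL Server 2008):

```sql
-- Illustrative only: CDC must be enabled before the changes you want to track happen.
USE MyDatabase;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;   -- NULL means no gating role restricting who may read the change data
GO
```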

SQL Server 2008 Mirrored DB Update Rollback - Crisis

I am a programmer who has done a very bad thing: somehow I didn't select the WHERE clause before hitting F5 on an update query in SQL Server 2008.
I know this isn't a programming question, but it is a question from a desperate programmer...
Is there any way to get the one column's data back from the transaction log, or from a log kept by the mirroring system?
Oh, and yes, it gets better: the nightly maintenance plan for backups seems to have been turned off.
Any ideas please?
-Mike
stunned at reading "(197875 row(s) affected)"
Call off the dogs. I regenerated the database from an old backup and the source log files used to populate it.
In a more lucid moment I came to understand my question as:
Is the original value of a row stored in the transaction log of an update operation?
I'm almost sure the answer is no.
Thanks for listening.
-Mike
Mike, glad to hear you were able to recover the data. Now is the time to implement some sort of backup strategy :)
To your question, the transaction log can be backed up (every 10 minutes, etc.), but no... the original value is not persisted anywhere unless you explicitly build that functionality in. A good place to start is Ola Hallengren's excellent set of free maintenance scripts.
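For a rough idea of the core piece, a transaction log backup is a single statement (the database name and path here are placeholders):

```sql
-- Illustrative only: requires the database to be in the FULL recovery model,
-- and should run on a schedule (e.g. every 10 minutes) via SQL Server Agent
-- or Ola Hallengren's maintenance scripts.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn'
WITH INIT, CHECKSUM;
```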