I have been looking for a way to prevent MySQL DELETE statements from being processed by the slave. I'm working on a data warehousing project, and I would like to delete data from the production server after it has been replicated to the slave.
What is the best way to get this done?
Thank you
There are several ways to do this.
Run SET SQL_LOG_BIN=0; for the relevant session on the master before executing your delete. That way the statement is not written to the binary log.
Implement a BEFORE DELETE trigger on the slave to ignore the deletes.
I tend to use approach #1 for statements that I don't want to replicate (see the sketch below). It requires the SUPER privilege.
I have not tried #2, but it should be possible.
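A minimal sketch of approach #1; the schema and retention window here are hypothetical:

SET SESSION sql_log_bin = 0;  -- suppress binary logging for this session only (needs SUPER)
DELETE FROM warehouse.staging_rows WHERE created < NOW() - INTERVAL 2 DAY;  -- runs on the master, never reaches the slave
SET SESSION sql_log_bin = 1;  -- re-enable binary logging for later statements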
You'll only be able to achieve this with a hack, and it will likely cause problems. MySQL replication isn't designed for this.
Imagine you insert a record on your master and it replicates to the slave. You then delete it from the master, but it isn't deleted from the slave. If someone later inserts a record with the same unique key, there will be a conflict on the slave.
Some alternatives:
If you are looking to make a backup, I would do it by other means: a periodic backup from a cron job that runs mysqldump. This assumes you don't want to save EVERY record, only to create periodic restore points.
Triggers to update a second, mirror database. This can't cross servers though, you'd have to recreate each table with a different name. Also, the computational cost would be high and restoring from this backup would be difficult.
Don't actually delete anything; instead create a Status field which is Active or Disabled, then hide Disabled rows from the users (sketched below). This has issues as well: for example, ON DELETE CASCADE couldn't be used, so cascading cleanup would have to be done manually in code.
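A minimal sketch of that soft-delete idea; the table and column names are hypothetical:

ALTER TABLE orders ADD COLUMN status ENUM('active','disabled') NOT NULL DEFAULT 'active';
UPDATE orders SET status = 'disabled' WHERE id = 42;   -- instead of DELETE
SELECT * FROM orders WHERE status = 'active';          -- application queries filter out disabled rows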
Perhaps if you provide the reason you want this mirror database without deletes, I could give you a more targeted solution.
I'm currently thinking about the following problem:
A customer has set up a simple master/slave replication between two MariaDB systems. For unknown reasons they set the "Replicate_Wild_Ignore_Table" option to skip "logdb.%". They have now reversed that decision and want logdb to be included in the replication again.
I'm curious: is it possible to simply remove that option and have the database in question replicate like the rest, or is there no way around the "stop slave, dump master, import dump, recreate replication from the current log position, start slave" procedure?
You can't assume that the master still has all relevant binlogs that once contained updates to the logdb.% tables. That is, even if you could re-apply those updates, do you have enough history to account for all changes to the tables?
Another risk applies if you use statement-based replication: if there were ever statements that referenced both a table in logdb.% and a table in another database, the replication filter skipped that statement entirely. So for example:
INSERT INTO mydb.mytable SELECT * FROM logdb.othertable;
Therefore even the tables that are not in logdb.% might be compromised. The point is you don't know for sure.
The bottom line is that you should definitely reinitialize the replica now by taking a current backup of the master, and avoid using replication filters in the future.
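Once the filter line (replicate-wild-ignore-table = logdb.%) has been removed from the slave's my.cnf and the slave restarted, repointing the rebuilt slave looks roughly like this; the host name and binlog coordinates are placeholders that must come from the fresh backup:

STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_LOG_FILE = 'mysql-bin.000123',  -- binlog file recorded when the backup was taken
    MASTER_LOG_POS = 4;                    -- matching position from the backup
START SLAVE;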
If you use InnoDB tables, you might consider using Percona XtraBackup to make the process easier. See https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/setting_up_replication.html
I have a system built on MySQL and another system built on PostgreSQL. I want to create a trigger in Postgres that inserts rows into MySQL, but I don't know how to do this. Is it possible?
The reason is that I need to synchronize the users of both databases without knowing when a user is created.
You'd have to use mysql_fdw for that.
But I think it would be a seriously bad idea to do that: if the MySQL database goes down, the trigger will throw an error, and the transaction is rolled back. Basically, you cannot modify the table any more. Moreover, the latency of the PostgreSQL-to-MySQL round trip would be added to every transaction.
I think you would be better off with some sort of log table in PostgreSQL where you store the changes. An asynchronous worker can then read the changes and apply them to MySQL.
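A minimal sketch of that log-table approach, assuming a users table with a username column (all names here are hypothetical):

CREATE TABLE user_changes (
    id          bigserial PRIMARY KEY,
    username    text NOT NULL,
    change_type text NOT NULL,                      -- 'INSERT' or 'UPDATE', taken from TG_OP
    changed_at  timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_user_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO user_changes (username, change_type) VALUES (NEW.username, TG_OP);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_change_log
AFTER INSERT OR UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE log_user_change();

The asynchronous worker then polls user_changes, replays each row against MySQL, and deletes (or marks) the rows it has processed.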
One more thought: You are not considering replicating database users, right? Because you cannot have triggers on system tables.
So my situation is as follows:
There is a single master/slave replication setup based on MySQL 5.5.
The master uses a small SSD as its data partition.
Therefore I want to clean up a certain InnoDB table (let's call it MasterA) and move old rows (datediff < -2) to another database on the slave (SlaveA), which has more space on a SATA HDD.
The problem gets interesting because in some cases I still need to access the data in SlaveA.
So I think it would be best if an event triggered a transaction like this:
INSERT INTO SlaveA SELECT * FROM MasterA WHERE datediff(created, now()) < -2;
DELETE FROM MasterA WHERE datediff(created, now()) < -2;
But how can I access SlaveA from the master? I already tried the FEDERATED engine, but it gets stuck because of the read_only option activated on the slave and the SUPER privilege required for the user accessing the federated table.
Maybe the event should only run the copy query on the slave, but how would I delete the rows on the master afterwards?
There should be other options besides installing MySQL 5.6 and using another partition for the SlaveA table on the master.
Thanks in advance!
An external daemon process (with handles to both databases) could accomplish what you are looking for, but it is not a very clean solution.
If you did have a single handle with access to both databases, a trigger would be a viable solution. I would change your code to use a MySQL user-defined variable, setting it in the first statement and using it in the second (see the sketch below).
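A sketch of that idea, using the table names from the question; capturing the cutoff in a variable guarantees the DELETE removes exactly the rows the INSERT copied:

SET @cutoff := NOW() - INTERVAL 2 DAY;  -- evaluate the boundary once (roughly equivalent to datediff(created, now()) < -2)
INSERT INTO SlaveA SELECT * FROM MasterA WHERE created < @cutoff;
DELETE FROM MasterA WHERE created < @cutoff;  -- same boundary, so no row slips between the two statements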
On the other hand, I would question why you think you need the write master on an SSD. INSERT queries are normally a lot cheaper than DELETE queries. If you make sure all the reads go against the slaves, the master should have very minimal latency. I would recommend putting it on a SATA HDD and not running DELETE queries against it. Then you don't have to create a custom trigger; MySQL's built-in replication should work just fine.
I wonder if there is an easy way to keep the schema consistent across two different MySQL clusters. Apart from classic replication, I would like a special "replication" that would reproduce all DDL queries (CREATE, ALTER, DROP, ...) on another cluster (namely on the master of that cluster).
I don't need the actual data to be replicated.
Has anyone ever done or tried anything like this?
You can filter replication in MySQL based upon the database in which a query was executed. That doesn't prevent you from making changes in other databases, however! So you can do:
USE ddl_repl_db;
ALTER TABLE other_db.foo ADD COLUMN <etc>
This relies on you configuring your servers correctly. I haven't set up MySQL replication for a while, but IIRC you can both filter what you send out from the master for replication and what you accept on the slave.
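For illustration, hedged my.cnf fragments; the option names are real, but whether you filter on the master, the slave, or both is up to you:

# on the master: only log statements issued against ddl_repl_db to the binary log
[mysqld]
binlog-do-db = ddl_repl_db

# or on the slave: only apply replicated statements issued against ddl_repl_db
[mysqld]
replicate-do-db = ddl_repl_db

Both options filter on the default (USE'd) database for statement-based replication, which is exactly why the USE ddl_repl_db; trick above works.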
Old but still high in search.
So, on your DDL replica, set every table's engine to BLACKHOLE.
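A hedged sketch (the table name is hypothetical): the BLACKHOLE engine discards all row data but keeps the table definition, so replicated DDL still applies while replicated writes cost nothing:

ALTER TABLE mydb.mytable ENGINE = BLACKHOLE;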
If I'm working on a development server and have updates to the database structure for some of our releases, what is the best way to update the structure on the production server?
Currently we create a new production database containing the structure only, take a SQL dump of the data from the 'old' production database, then run a SQL query to insert the data into the new database.
I know there is an easier way to do these updates, right?
Thanks in advance.
We don't run anything on prod without a script, and that script must be in source control. Additionally, we have to write a rollback script in case the initial script goes bad and we have to back it out. When we move to prod, configuration management does a differential compare between prod and dev to see if we have missed anything in the production script (any differences must either be traceable to development work not yet ready to move to prod, or be documented). A product like Red Gate's SQL Compare can do this. Our process is very formalized so that we can maintain a certification required by our larger clients.
If you have large tables, even ALTER TABLE can be slow, but it's still generally more efficient in total time than making a copy of the table with a new name and structure, copying the data to that table, renaming the old table, naming the new table the name of the original table, and then deleting the old table.
However, there are times when that is the preferable process, as the total downtime apparent to the user is just the time it takes to rename two tables (sketched below). This is good for tables where the data is only filled from the backend, not the application (if the application can update the tables, this is a dangerous practice, as you may lose changes made while the tables were in transition). Which process to use depends a lot on the nature of the change you are making. Some changes should be done in a maintenance window where the users are not allowed to access the database: for instance, if you are adding a new field with a default value to a table with 100,000,000 records, you are liable to lock users out of the table while the update happens. It is better to do this in single-user mode during off hours (and when the users have been told in advance the database will not be available). Other changes take only milliseconds and can happen easily while users are logged in.
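A sketch of that copy-and-rename approach; table and column names are hypothetical:

CREATE TABLE orders_new LIKE orders;                                  -- copy the structure
ALTER TABLE orders_new ADD COLUMN priority INT NOT NULL DEFAULT 0;    -- apply the schema change
INSERT INTO orders_new (id, created) SELECT id, created FROM orders;  -- copy the data
RENAME TABLE orders TO orders_old, orders_new TO orders;              -- atomic swap: the only user-visible downtime
DROP TABLE orders_old;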
Look at ALTER TABLE to change the schema.
It might not be easier than your method, but it means less copying of the database.
This is actually quite a deep question. If the only changes you've made are to add some columns then ALTER TABLE is probably sufficient. But if you're renaming or deleting columns then ALTER statements may break various foreign key constraints. In addition, sometimes you need to make changes both to the database and the data, which is pretty much unscriptable.
Most likely the best way to automate this would be to write a simple script for each deployment (along with a script to roll back!), which is basically what systems like Rails will do for you, I believe. Some scripts might be simple ALTER statements; some might temporarily disable foreign-key checking and triggers (see the sketch below); some might run UPDATE statements as well. And some might dump the db and rebuild it. I don't think there's a one-size-fits-all solution here, sorry :)
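For instance, a hedged sketch of temporarily disabling foreign-key checking around a change to a referenced column (names are hypothetical):

SET FOREIGN_KEY_CHECKS = 0;  -- session-scoped; allows altering columns that participate in FK constraints
ALTER TABLE orders MODIFY COLUMN customer_id BIGINT UNSIGNED NOT NULL;
SET FOREIGN_KEY_CHECKS = 1;  -- re-enable checks for the rest of the session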
Use the ALTER TABLE command: http://dev.mysql.com/doc/refman/5.0/en/alter-table.html