MySQL Local Database Replication - mysql

Is it possible to replicate a database to a second database on the same server?
I want to replicate a database that is used by an application and create a copy that will be used for web service testing purposes, like creating fake orders, fake data, etc., and it would be very nice to get updates from the main DB, like product data updates.
I think I could use binlog-do-db (or something similar) in the MySQL config and use the server as both master and slave, but I played with that config before and had problems. In my current replications I replicate the entire MySQL server, so that works.
Also, I don't want to replicate table1 to table1; instead, I want to replicate table1 to table2. I don't know if that's allowed.
Is this the best approach, or am I trying to do something wrong/not possible? What would you recommend?

You might be tempted to try to set up a single mysqld instance as both master and slave with replicate-same-server-id; this won't work, as the server-id must be unique among all IDs in use by any other replication master or slave.
See this Percona article on binlog-do-db et al. You could achieve this by running another mysqld instance on the same node and configuring replicate-rewrite-db on the slave to apply statements to a different database name. I do not see anything in the replication options about rewriting table names, though.
Alternatively, depending on the size of the database you are looking to duplicate, you could simply mysqldump it and import the dump into another database.
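For illustration, a minimal sketch of what the second (slave) mysqld instance's my.cnf could contain, assuming the application database is called appdb and the test copy should be called testdb (both names are made up here), and that the two instances use different ports and data directories:

server-id            = 2
replicate-rewrite-db = appdb->testdb
replicate-do-db      = testdb    # optional; the filter is checked against the rewritten name

The one-off dump alternative from the last paragraph could look like this (again with placeholder names):

mysqladmin create testdb
mysqldump --single-transaction appdb | mysql testdb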

How to re-replicate ignored tables

I'm currently thinking about the following problem:
A customer has set up a simple master/slave replication between two MariaDB systems. For unknown reasons they set the flag "Replicate_Wild_Ignore_Table" to skip "logdb.%". Now they have decided to stop skipping that database and want logdb to be included in the replication again.
I'm curious: is it possible to somehow remove that flag and have the database in question replicated like the rest, or is there no way around the "stop slave, dump master, import dump, recreate replication based on the current log position, start slave" procedure?
You can't assume that the master still has all relevant binlogs that once contained updates to the logdb.% tables. That is, even if you could re-apply those updates, do you have enough history to account for all changes to the tables?
Another risk applies if you use statement-based replication: if there were ever statements that referenced both a table in logdb.% and a table in another database, the replication filter skipped those statements entirely. For example:
INSERT INTO mydb.mytable SELECT * FROM logdb.othertable;
Therefore even the tables that are not in logdb.% might be compromised. The point is you don't know for sure.
The bottom line is that you should definitely reinitialize the replica now by taking a current backup of the master, and avoid using replication filters in the future.
If you use InnoDB tables, you might consider using Percona XtraBackup to make the process easier. See https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/setting_up_replication.html
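For reference, a rough sketch of a plain mysqldump-based rebuild as an alternative to XtraBackup (host names, file names, and coordinates are placeholders, and this assumes InnoDB tables so that --single-transaction gives a consistent snapshot):

On the master:

mysqldump --all-databases --single-transaction --master-data=2 --routines --events > full_dump.sql

On the replica, after removing replicate-wild-ignore-table from its my.cnf and restarting mysqld:

STOP SLAVE;
SOURCE full_dump.sql;
CHANGE MASTER TO MASTER_LOG_FILE='...', MASTER_LOG_POS=...;   -- take the values from the CHANGE MASTER comment near the top of full_dump.sql
START SLAVE;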

MySQL master-slave replication without altering the server-id property

I have a couple of servers, and I want to set up master-slave MySQL replication for one database. Both servers have many databases, and I don't want to alter the sequence of ID generation. For example, after I have prepared the configuration, I don't want the tables on one of the servers to end up with only even numbers for all their IDs.
The replicated database (on the slave server) will not be accessed for writes.
Is it possible to configure that scenario?
Many thanks in advance.
Update: The server_id has nothing to do with id generation. It just needs to be a unique integer greater than 0 on each server in your replica-set.
Below is my original answer, which was my guess about what you were asking about, because it's the only feature I could think of that has to do with both replication and auto-increment id generation.
You don't need to change id generation for simple replication.
The scenario where you might use auto_increment_increment=2 is master-master replication, where two servers replicate from each other and you want to minimize the risk of split-brain if an insert occurs on both servers. But this is not the scenario you describe.
If you have one master, and it's the only server you write changes on directly, and the replica(s) that replicate from that master are all read-only, then you don't need to change the auto_increment_increment.
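For reference, a minimal sketch of the settings actually involved, with example values (the only requirement is that the two server-id values differ and that the master writes a binary log); auto_increment_increment and auto_increment_offset are simply left at their default of 1 on both machines:

# master my.cnf
server-id = 1
log-bin   = mysql-bin

# slave my.cnf
server-id = 2
read_only = 1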

Filter MySQL replication (ignore-db)

MySQL's ignore-db options work according to the server's my.cnf, AFAIK,
i.e.
binlog-ignore-db = mysql
replicate-ignore-db = mysql
I am not sure if this also works from the client (slave) side. Can anyone explain the mechanism: how can I send everything from the master but not accept it on the client side?
Why do I want to do this? I have multiple slaves: 2 slaves must replicate the mysql database, whereas on the other 2 it should not be overwritten. Every other table will be replicated everywhere.
Reading this didn't make it clear enough for me: http://dev.mysql.com/doc/refman/5.6/en/replication-rules-db-options.html
binlog-ignore-db is a master-side setting; it tells the Master not to log changes taking place in the listed DB.
replicate-ignore-db is a slave-side setting; it tells the Slave to ignore incoming log information related to the listed DB.
The typical use case is when you want to replicate different databases from one single Master to different Slaves. The Master must log all changes occurring in all databases (minus those possibly excluded by binlog-ignore-db, i.e. databases that will not be replicated anywhere).
Each Slave will receive the full binary log, but will only replicate changes related to the selected databases (i.e. databases not excluded by replicate-ignore-db -- this list would be different on each Slave).
(The mysql database being a system database, it should be ignored on both ends, unless you really, really, really know what you are doing.)
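A hedged sketch of how this could map onto the asker's four-slave setup, with made-up server IDs and assuming it is the mysql system database that two of the slaves must not overwrite:

# master my.cnf -- log everything, no binlog-ignore-db
server-id = 1
log-bin   = mysql-bin

# slaves 1 and 2 -- replicate everything, including the mysql database
server-id = 2              # each slave needs its own unique value

# slaves 3 and 4 -- replicate everything except the mysql database
server-id = 4              # each slave needs its own unique value
replicate-ignore-db = mysql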

Copying table data between different MySQL servers

Imagine a setup of 5 MySQL servers where 1 of them has the correct data for some tables that are being updated all the time, and I would like to copy this data over to the other MySQL servers.
Now, I do remember working on a MySQL replication task once where, through the same website, I wrote to the master DB and read from the slave DB, but is this possible to do in this case? Also, is it feasible?
An example of a table would be "Translations". Whatever new translations are entered in one DB should be copied to the other servers.
You have answered your own question.
You need to set up replication using master and slave servers, where you only do updates on the master and let the slaves feed from the master.
See:
http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html
http://crazytoon.com/2008/01/29/mysql-how-do-you-set-up-masterslave-replication-in-mysql-centos-rhel-fedora/
If you want a book, I'd recommend High Performance MySQL.
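As a condensed, hedged sketch of the steps from the linked howto (user name, password, host, and log coordinates below are placeholders):

On the master, which has log-bin enabled and a unique server-id:

CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;   -- note the File and Position values

On each slave, each with its own unique server-id:

CHANGE MASTER TO
  MASTER_HOST='master_host',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G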

Reducing priority of MySQL commands/jobs (add an index/other commands)?

We have a moderately large production MySQL database. Periodically, we will run commands, usually via a rails migration, that while running, bog down the database. As a specific example, we might add an index to a large table.
Is there any method that can reduce the priority MySQL gives to a specific task? A sort of "nice" within MySQL itself? I found this, which is what inspired the question:
PostgreSQL tips and tricks
Since adding an index causes the work to be done within the DB and MySQL process, lowering the priority of the Rails migration process seems like it won't help. Are there other ways we can lower the priority?
We use multiple, replicated database servers to make changes like this.
In our case, db1 is the master, replicated to db2. (db1->db2).
Start by making the change to db2. If things lock, replication will stall, but that's OK.
Move your traffic to db2. Any remnant traffic going to db1 will replicate over, so you won't lose anything.
Once there's no traffic on db1, rebuild it as a slave of db2 (db2->db1).
That's the general idea: you get very little downtime and you don't have to pull an all-nighter! We actually have three servers, so it's a little more complicated, but not much.
Good luck.
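To make the sequence concrete, a rough sketch with placeholder names (db1 is the current master, db2 its replica, and the table and index are just examples):

-- on db2, while it is still replicating from db1; replication may stall here, which is fine
ALTER TABLE orders ADD INDEX idx_created_at (created_at);

-- point the application at db2; once db1 is idle, rebuild it from a fresh copy of db2 and run on db1:
CHANGE MASTER TO MASTER_HOST='db2', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='...', MASTER_LOG_POS=...;
START SLAVE;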
Unfortunately, there is no simple way to do this: commands that alter the database structure don't have a priority option.
If your tables are MyISAM, you can try this:
use mysqlhotcopy to make a backup of the table
import that backup into a different database server (one that's not under load)
make the changes there
make a mysqlhotcopy backup of the altered table
import it into the live server
Note that this may or may not be faster than adding the index on the live server, depending on the time it takes you to transfer the table back and forth.
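A hedged sketch of that sequence for a single MyISAM table, with made-up database, table, column, and path names:

# on the live server: copy the table's files to a backup directory
mysqlhotcopy mydb./^bigtable$/ /backups

# move the resulting .frm/.MYD/.MYI files into the idle server's data
# directory (e.g. /var/lib/mysql/mydb/), then on the idle server:
mysql -e "FLUSH TABLES; ALTER TABLE mydb.bigtable ADD INDEX idx_col (col);"

# repeat the hotcopy in the opposite direction to bring the altered
# table files back to the live server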