Import mysql database remotely without using files - mysql

I am trying to set up a replication slave.
I was hoping that if I set the binlog position to 0 on the slave, it would start reading from the beginning and replicate everything from scratch until it matched the master, but the slave isn't doing anything and isn't giving errors either.
So I first need a current database snapshot. Can I do this without dumping the database into a file if both servers can talk to each other on the local network? I tried this command but it just spits out the usage help:
slave ~$ sudo mysqlimport --host='[master-server-ip]' --user='repl' -pC
To reiterate, I want to transfer all databases (except the mysql built in databases) over the network without having to manually transfer files.

You can use 0 as the binlog position, but you also have to specify the log file.
You can see the name of the first binlog file in your MySQL data directory (usually /var/lib/mysql on Unix).
Then try something like this on the slave server:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='...',
MASTER_PORT=3306,
MASTER_USER='...',
MASTER_PASSWORD='...',
MASTER_LOG_FILE='<your first binlog file>',
MASTER_LOG_POS=0;
START SLAVE;
Of course, that will only work if all the binlogs created since the master was set up are still present, which might not be your case, since binlogs are automatically purged after some time.
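As a side note, mysqlimport is a command-line interface to LOAD DATA INFILE and expects tab-delimited data files, which is why it only printed the usage help. To seed the slave over the network without an intermediate file, you can instead pipe mysqldump straight into the local mysql client. A rough sketch, run on the slave; the host, inline passwords, and database names are placeholders:
# Set the connection details first; specifying MASTER_HOST later would
# reset the coordinates that the dump is about to install.
mysql -u root -p'rootpass' -e "CHANGE MASTER TO MASTER_HOST='[master-server-ip]', MASTER_USER='repl', MASTER_PASSWORD='replpass';"
# Stream a consistent snapshot over the network; --master-data=1 embeds a
# live CHANGE MASTER TO with the matching log file and position, and
# --single-transaction avoids locking (only reliable for InnoDB tables).
mysqldump --host=[master-server-ip] --user=repl -p'replpass' \
  --single-transaction --master-data=1 --databases db1 db2 | mysql -u root -p'rootpass'
mysql -u root -p'rootpass' -e 'START SLAVE;'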

Related

GTID-based replication is trying to re-execute whatever has been dumped on MySQL

I am trying to replicate a Cloud SQL MySQL database to a GCE VM and I am following this guide.
https://cloud.google.com/sql/docs/mysql/replication/configure-external-replica
The error I face is that once I restore the dump and start my slave, the slave tries to execute the DDL commands that have already been dumped. In other words, the GTID-based replication starts from 0.
What I expect is that it starts from the point where the dump was taken.
What am I doing wrong here?
I can see that I am getting the latest GTID set from the master. (left side is slave and right side is master).
So I have found the issue.
The issue was that my dump file did not contain the GTID information from the source server. Because of this, the destination had no idea which GTIDs had already been executed at the source.
So I must not disable gtid_purged when creating the mysql dump (mysqldump's --set-gtid-purged option needs to be AUTO or ON, not OFF).
This will set gtid_executed at the destination when restoring and ensure that there is no re-execution.
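For reference, a sketch of a dump that carries the GTID state (the host, credentials, and GTID value are placeholders):
# Take the dump so it records the source's executed GTID set:
mysqldump --host=<source-host> --user=root -p --all-databases \
  --single-transaction --set-gtid-purged=ON > dump.sql
# The dump will then contain a line of this form near the top:
# SET @@GLOBAL.GTID_PURGED='<source-uuid>:1-12345';
Restoring it sets gtid_executed on the replica, so GTID auto-positioning resumes after the dump point instead of re-executing everything from the start.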

Making new MySQL replication

I need to set up working MySQL replication from master to slave (I tried it once already).
The database is quite large (over 100GB) and it will take some hours to make it ready for the new slave.
The database has both MyISAM and InnoDB tables, and both are being written to.
I think my only choice is to copy the data files from the master to the new slave (or make a database dump, which I refer to later under ROUND 2)?
Before that, do I have to shut down all the services which use the database and set a write lock on the tables, or should I shut down the whole database?
After syncing the data directory to the new replication server, I started it up and the database with its tables was there. The first error I got rid of by changing the binlog file to bin-log.007324 and the position to 0.
Error 1:
140213 4:52:07 [ERROR] Got fatal error 1236: 'Could not find first log file name in binary log index file' from master when reading data from binary log
140213 4:52:07 [Note] Slave I/O thread exiting, read up to log 'bin-log.007323', position 46774422
After that I got new problems from the database, and this error came up for every table.
Error 2:
Error 'Incorrect information in file: './database/table.frm'' on query. Default database: 'database'.
Seems that something went wrong.
ROUND 2!
After this I started to wonder whether this can be done without a long service break.
The master database has already been configured and it replicates fine to another slave.
So I did some googling and this is what I came up with.
Taking a read lock on the tables:
FLUSH TABLES WITH READ LOCK;
Taking the dump:
mysqldump --skip-lock-tables --single-transaction --flush-logs --master-data=2 -A > dbdump.sql
Packaging and moving:
gzip (pigz) the dbdump and move it to the slave server; after that, find the MASTER_LOG_FILE and MASTER_LOG_POS from the dump.
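Since --master-data=2 writes the coordinates as a commented-out CHANGE MASTER statement near the top of the dump, you can pull them out without reading through the whole 100GB file. A sketch, assuming the compressed dump is named dbdump.sql.gz:
zcat dbdump.sql.gz | head -n 50 | grep 'CHANGE MASTER TO'
# -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.00xxxx', MASTER_LOG_POS=NNNN;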
After that I don't think I want to import dbdump.sql from the command line because it's over 100GB and will take time, so I think SOURCE would be an OK option for it.
On SLAVE server:
CREATE DATABASE dbdump;
USE dbdump;
SOURCE dbdump.sql;
CHANGE MASTER TO MASTER_HOST='x.x.x.x',MASTER_USER='replication',MASTER_PASSWORD='slavepass',
MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=X;
START SLAVE;
SHOW SLAVE STATUS \G
I haven't tested this yet, am I on to something?
--bp
Realize that issuing a SOURCE command is the same as running an import of the dumped SQL from shell. Either way, it is going to take a long time. Outside of that, you have the steps correct - flush table with read lock on master, make a database dump of master, make sure you note master binlog coordinates, import dump on slave, set binlog coordinates, start replication. Do not work with the raw binaries unless you REALLY know what you are doing (especially for INNODB tables).
If you have a number of large tables (i.e. not just one big one), you could consider parallelizing your dumps/imports by table (or groups of tables) to speed things along. There are actually tools out there to help you do this.
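One such tool is mydumper, which dumps tables in parallel, with myloader as its restore counterpart. A sketch only; check the flags against your installed version:
# Dump with several worker threads:
mydumper --host=master.example.com --user=repl --password='...' \
  --threads=8 --outputdir=/backup/dump
# Restore on the slave with the same parallelism:
myloader --threads=8 --directory=/backup/dump --overwrite-tables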
You CAN work with the raw binaries, but it is not for the faint of heart. In the past, I have used rsync to differentially update the raw binaries between master and slave (you still must use FLUSH TABLES WITH READ LOCK and gather the master binlog coordinates before doing this). For MyISAM tables this works pretty well, actually. For InnoDB, it can be more tricky. I prefer to use the option (innodb_file_per_table) to set InnoDB to write index and data files per table. You would need to rsync the ibdata* files. You would delete the ib_logfile* files from the slave.
This whole thing is a bit of a high wire act, so I would not resort to doing this unless you have no other viable options. Absolutely take a traditional SQL dump before even thinking about attempting a binary file sync, and each time until you are VERY comfortable that you actually know what you are doing.
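For completeness, a skeleton of that binary-sync approach under the caveats above (hosts and paths are placeholders; mysql and shell steps are distinguished by their comment style):
-- In an interactive mysql session on the master; the lock only holds
-- while this session stays open:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note File and Position for the slave's CHANGE MASTER TO
# From a second shell, while the lock is still held:
rsync -av /var/lib/mysql/ slave.example.com:/var/lib/mysql/
# On the slave, before starting mysqld, delete the InnoDB redo logs:
rm /var/lib/mysql/ib_logfile*
-- Back in the master's mysql session, once the rsync has finished:
UNLOCK TABLES;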

Moving a MySQL slave to new hard drives - do I need mysqld-relay-bin logs?

I am moving a MySQL slave from one set of HDs to another. The configuration of the machine denies me the ability to have both old and new hard drives on it at the same time. So I rsync'ed the data directory to another machine.
When the new hard drives came online, I rsync'ed the data dir back. This worked fine.
However, I cannot start replication. This is the error I get.
120314 4:23:07 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
120314 4:23:07 [ERROR] Failed to open the relay log '/var/lib/mysqllogs/mysqld-relay-bin.000273' (relay_log_pos 677043943)
120314 4:23:07 [ERROR] Could not find target log during relay log initialization
120314 4:23:07 [ERROR] Failed to initialize the master info structure
I found this comment:
https://serverfault.com/questions/61471/moving-a-mysql-slave-to-a-new-host-failed-to-open-the-relay-log
If it is just complaining about the relay logs, in most cases, they are disposable if the master still has the binary logs around. You can just run CHANGE MASTER TO on the slave and it will flush the existing relay logs and start anew. You don't need to make a new fresh copy.
This seems to suggest that I do not need these log files.
The host name is not changing.
My Questions:
Do I need these log files?
If not, what do I need to do to get replication started? Will it remember where it left off?
If I do need these log files, is there anything else I'm forgetting?
I don't think you need the relay bin log files to get it to work. It might remember where it left off; did you try RESET SLAVE;? You should get the position from the slave with SHOW SLAVE STATUS; to see if it still remembers, then check whether the log file on the master server still exists, because the master only keeps binlogs around for as long as you've configured it to. But try RESET SLAVE; if you haven't; it does magic. Otherwise you are probably going to have to start the whole process over by dumping the existing server data right after you lock the tables and run SHOW MASTER STATUS. I wouldn't recommend trying to save this process if you have the option to start replication from scratch.
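Concretely, the recovery usually looks something like this (Relay_Master_Log_File and Exec_Master_Log_Pos show how far the SQL thread actually got; the values below are placeholders):
STOP SLAVE;
SHOW SLAVE STATUS\G   -- note Relay_Master_Log_File and Exec_Master_Log_Pos
RESET SLAVE;          -- throws away the broken relay logs
CHANGE MASTER TO
  MASTER_LOG_FILE='<Relay_Master_Log_File value>',
  MASTER_LOG_POS=<Exec_Master_Log_Pos value>;
START SLAVE;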

Restoring purged mysql binlog files

I've got a replication set up on pair of servers. One is a master and second is a slave.
Recently, the binlog files on the master were purged too early (they were removed by filename, so MySQL didn't prevent the premature removal).
Now the SLAVE has status:
Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
I want to restore the missing binlog files so the slave will resume reading from the point where it stopped.
The files are already back in place, but how can I force the master to 'unpurge' its log list (so they are visible in SHOW BINARY LOGS)?
OK, I made it work. However, this solution isn't perfect/100% safe.
I entered all the filenames into my mysql-bin.index:
find /var/log/mysql/ -wholename '/var/log/mysql/mysql-bin.0*' | sort > mysql-bin.index
(if you use this, check the filename format in your mysql-bin.index file first and adjust to your needs)
Then restart MySQL; it reloads that file on startup.
The MASTER is ready.
Now it's enough to run
STOP SLAVE;
and
START SLAVE;
on the SLAVE and it will continue its job.
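Put together, and assuming the binlogs live in /var/log/mysql/ as above (the service name varies by distro):
# On the MASTER: rebuild the index, restart, and verify:
find /var/log/mysql/ -wholename '/var/log/mysql/mysql-bin.0*' | sort > /var/log/mysql/mysql-bin.index
service mysql restart
mysql -e 'SHOW BINARY LOGS'    # the restored files should be listed again
# On the SLAVE: restart the replication threads and verify:
mysql -e 'STOP SLAVE; START SLAVE'
mysql -e 'SHOW SLAVE STATUS\G' # Slave_IO_Running / Slave_SQL_Running: Yes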

In MySQL, how can I delete/flush/clear all the logs that are not necessary?

I have tried several commands (FLUSH LOGS, PURGE MASTER) but none deletes the log files (when previously activated) or the log tables (mysql/slow_log.CSV and mysql/general_log.CSV and their .frm and .CSM counterparts).
SHOW BINARY LOGS returns "You are not using binary logging".
Edit: I found this simple solution to clear the table logs (though not yet a MySQL command to clear the file logs):
TRUNCATE mysql.general_log;
TRUNCATE mysql.slow_log;
FLUSH LOGS just closes and reopens log files. If the log files are large, it won't reduce them. If you're on Linux, you can use mv to rename log files while they're in use, and then after FLUSH LOGS, you know that MySQL is writing to a new, small file, and you can remove the old big files.
Binary logs are different. To eliminate old binlogs, use PURGE BINARY LOGS. Make sure your slaves (if any) aren't still using the binary logs. That is, run SHOW SLAVE STATUS to see what binlog file they're working on, and don't purge that file or later files.
Also keep in mind that binlogs are useful for point-in-time recovery in case you need to restore from backups and then reapply binlogs to bring the database up to date. If you need to use binlogs in this manner, don't purge the binlogs that have been written since your last backup.
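For example (the file name, retention window, and log paths below are illustrative):
-- Remove binlogs up to, but not including, a given file:
PURGE BINARY LOGS TO 'mysql-bin.000123';
-- Or remove everything older than a cutoff:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY;
And the rename trick for the plain log files looks roughly like this:
mv /var/log/mysql/slow.log /var/log/mysql/slow.log.old
mysql -e 'FLUSH LOGS'   # MySQL reopens a new, empty slow.log
rm /var/log/mysql/slow.log.old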
If you are on Amazon RDS, executing this twice will do the trick:
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
Source: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html
It seems binary logging is not enabled on your server, and I guess you want to delete the old log files which were created back when binary logging was enabled. You can delete those manually using the 'rm' command if you want. If you want to enable binary logging, you can do so by updating the configuration file (but that requires a restart of the server if it is already running). You can refer to the links below.
http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#option_mysqld_log-bin
http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#sysvar_log_bin