MySQL: unlock a transaction that has been running for many days

in "show engine innodb status" i get the following row:
MySQL thread id 1, OS thread handle 0x2b0a8fef1700, query id 860436 localhost 127.0.0.1 rdsadmin cleaning up
---TRANSACTION 334275772, ACTIVE 1403158714 sec recovered trx
ROLLING BACK 2 lock struct(s), heap size 376, 1 row lock(s), undo log entries 399300
It locks a row I can't modify or delete, and I don't know what to do.
Since I'm using AWS RDS, I can't even restart the server.
What can be done?

Probably a bit late, but here was my solution. I had to identify the actual table affected; this was done by keeping MySQL operational, making calls from our application that uses MySQL, and watching 'show full processlist' to see where queries were queuing up: our ncs.keywords table.
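If the server is still usable, information_schema can also help pin down the blocking transaction. A minimal sketch (MySQL 5.7 table names; this is my addition, not something the original answer used — note innodb_locks is only populated while there is lock contention):
-- long-running transactions, oldest first; a recovered trx like the one above would show here
SELECT trx_id, trx_state, trx_started, trx_rows_locked, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;
-- locks under contention, including the table involved
SELECT lock_trx_id, lock_table, lock_mode
FROM information_schema.innodb_locks;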
The solution was to drop the offending table, recreate the table structure, and then repopulate the table.
OS: CentOS 7. MySQL version: 5.7
Symptom: CPU running at 100% for mysqld upon MySQL start; nothing in the process list.
Stop MySQL
#] systemctl stop mysqld
It wouldn't stop, so find the process and kill it manually:
#] kill -9 12345
Edit the MySQL config /etc/my.cnf and add the line innodb_force_recovery = 3, as per the documentation.
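For reference, a minimal sketch of that edit (the [mysqld] section header is standard; level 3 is simply the value we used, per the innodb_force_recovery documentation):
[mysqld]
innodb_force_recovery = 3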
Start MySQL [starts ok], CPU now at 0%. No rollback occurring.
#] systemctl start mysqld
Drop the affected table ncs.keywords [we did not have to save our data; we repopulate it via our code]
#] mysql -u root -p
> show create table ncs.keywords; [keep this output]
> drop table ncs.keywords;
> exit
Check MySQL; restart again
#] systemctl restart mysqld
#] mysql -u root -p
> [paste the 'show create' syntax to create the fresh table again]
> exit
Remove innodb_force_recovery = 3 from /etc/my.cnf, and restart MySQL again. CPU now at 0%, whilst not in recovery mode.
We were OK because a recovery of the data wasn't required. But you may need to dump data out of the table first.
Hope that helps [next time!].

Related

mysqldump hangs when calling "FLUSH TABLES WITH READ LOCK"

I have a MySQL server (Serv-B) that acts as a Slave from a Master server (Serv-A).
This has been configured and is working perfectly fine (the replication has been checked to work).
Now, I'd like this "slave" server to also become the master of another slave server (Serv-C). For that, I run the same commands as for the initial setup:
I connect to the MySQL server Serv-B and call the following commands:
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
On another terminal, I run the mysqldump command :
mysqldump -u root mydb > /home/cx42/mydb.sql
But it hangs. It hangs until I either close the MySQL session or release the lock with UNLOCK TABLES;
As soon as I unlock the tables, the mysqldump commands finish in a few seconds.
It seems that the FLUSH command is locking my tables in a way that mysqldump can't access them, but I don't know what is causing this.
The Serv-B server (the one where I'm calling "FLUSH ...") has the following configuration:
server-id = 10
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 7
max_binlog_size = 100M
binlog_do_db = mydb
log_slave_updates = 1
relay-log = /var/log/mysql/mysql-relay-bin.log
What is wrong?
Yes, it's correct that FLUSH TABLES WITH READ LOCK acquires locks. That should be apparent from the syntax, and also from the documentation:
FLUSH TABLES tbl_name [, tbl_name] ... WITH READ LOCK
Flushes and acquires read locks for the named tables.
You could release the lock with UNLOCK TABLES, but that means you might not get the right reading from SHOW MASTER STATUS. As soon as you unlock the tables, more updates could be executed, advancing the log position.
The better option is to let mysqldump do the work. If you invoke mysqldump with the --source-data option (or --master-data before version 8.0.26), the output of the dump will include the binary log position, so you don't have to read it in another window.
See https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_source-data
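For illustration, a minimal sketch of that invocation (database name and path taken from the question; --source-data=2 records the coordinates as a comment in the dump, while =1 records them as an active statement):
mysqldump --source-data=2 -u root mydb > /home/cx42/mydb.sql
grep -m1 'CHANGE' /home/cx42/mydb.sql   # shows the recorded binary log file and position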

How to re-sync a MySQL slave without stopping the server

In the production environment I have a master and a slave, but for some reason
some of the slave's data was not synchronized,
which caused error code 1032.
I saw a solution and used the command:
set global sql_slave_skip_counter=1;
Given that the DB cannot be shut down now, what method can I use to repair my slave?
The master constantly runs insert, delete, and update operations; it can't stop.
The slave is only used for reads, and I can truncate the slave.
The problem I encountered consists of these constraints:
The server cannot be shut down while I dump.
If there is any operation that changes the database, the binlog file and position will change.
How do I solve this problem?
You can record the current binlog file and position at the moment of the dump:
mysqldump -u user -p mydb --set-gtid-purged=OFF --single-transaction --master-data=1 > mydump.sql
--master-data=1
indicates that the current file and position are recorded when the dump finishes.
grep "MASTER_LOG_FILE" mydump.sql
And you will get the file and position.
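From there, a sketch of pointing the slave at those coordinates (the file and position values below are placeholders; note that with --master-data=1 the CHANGE MASTER statement is already embedded in the dump, so importing the dump on the slave applies it for you):
STOP SLAVE;
-- import the dump first: mysql -u user -p mydb < mydump.sql
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',   -- placeholder: file from the grep above
  MASTER_LOG_POS = 45678;                 -- placeholder: position from the grep above
START SLAVE;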

Why can't I drop MySQL Database?

Problem
I'm running MySQL 5.5.23 on Mac OS 10.8.2 and am unable to drop a particular database, but I can drop others.
When I attempt to drop the specific database I get this error:
#1548 - Cannot load from mysql.proc. The table is probably corrupted
Attempted Fixes
I have restarted the system
I have tried to restart MySQL via the CLI:
$ sudo /usr/local/mysql/support-files/mysql.server stop
but received this error: ERROR! MySQL server PID file could not be found!
I have repaired the mysql.proc table.
REPAIR TABLE mysql.proc
REPAIR TABLE mysql.proc USE_FRM
I have repaired all mysql.* tables.
REPAIR TABLE mysql.*
When running mysqlcheck from the command line:
mysqlcheck --repair --all-databases
mysqlcheck --repair specific-db
I received this error: mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (2) when trying to connect
Current Status
I still cannot drop the original specific database, but can drop others.
Update[1] 2013-01-05 11:15 am [New York]
Logs and Feedback (per @Thomas in comments)
To find all logs, I ran (cli):
$(ps auxww|sed -n '/sed -n/d;/mysqld /{s/.* \([^ ]*mysqld\) .*/\1/;p;}') --verbose --help|grep '^log'
I received this feedback:
130105 11:35:21 [Warning] Can't create test file /usr/local/mysql-5.5.23-osx10.6-x86_64/data/wills-mbp.lower-test
130105 11:35:21 [Warning] Can't create test file /usr/local/mysql-5.5.23-osx10.6-x86_64/data/wills-mbp.lower-test
130105 11:35:21 [Note] Plugin 'FEDERATED' is disabled. /usr/local/mysql/bin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13)
130105 11:35:21 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
I'm looking into the mysql_upgrade.
Update[2] 2013-01-05 4:04 pm [New York]
I ran this :
sudo /usr/local/mysql/support-files/mysql.server stop
And received this error:
ERROR! MySQL server PID file could not be found!
Update[2.1] 2013-01-05 5:37 pm [New York]
I ran ps auxww | grep mysql, found the mysqld process, and killed it (sudo kill [process id]). I was then able to restart MySQL successfully. However, I'm still having no luck dropping the specific database mentioned above.
Resolved
After trying to manually repair the corruption, along with many of the suggestions and the other answer listed here, reinstalling MySQL was the only thing that solved my problem.
On a Mac (running 10.8.2) I also had to do some manual deletions for a clean install:
sudo rm /usr/local/mysql
sudo rm -rf /usr/local/mysql*
sudo rm -rf /Library/StartupItems/MySQLCOM
sudo rm -rf /Library/PreferencePanes/My*
sudo rm -rf /Library/Receipts/mysql*
sudo rm -rf /Library/Receipts/MySQL*
sudo rm /etc/my.cnf
Articles consulted
MySQL duplicates with CONCAT error 1548 - Cannot load from mysql.proc. The table is probably corrupted
SQL error: BIGINT UNSIGNED value is out of range in (…), but it doesn't make sense
How to repair corrupted table
MySQL manager or server PID file could not be found
PHP/MySQL issue after security update 2010-005
mysql problems after Mac OS X software update
How to remove MySQL completely Mac OS X Leopard
I ran into an issue where queries on my database (named caloriecalculator) were taking too long and it wouldn't drop at all. I followed the steps below and they fixed my issue:
See all MySQL processes: mysqladmin processlist -u root -p
Kill all processes relating to caloriecalculator, as they were blocking my next queries from being executed.
mysqladmin -u root -p kill 4
Now run: drop database caloriecalculator;
I would try:
Backup/save any databases that have important data.
Remove MySQL
Reinstall MySQL
Restore any backed up databases.
I had this happen to me on a Linux server, and the cause was a corrupted database directory.
UPDATE: one thing to do is to go into the MySQL data directory and perform an ls -la, to verify that the evil DB is the same as the others as regards permissions, ownership and so on. For example, here the 'original' database cannot be dropped (it was created by a stupid tool run as root):
drwx------ 2 mysql mysql 4096 Aug 27 2015 _db_graph
drwx------ 2 mysql mysql 4096 Jul 13 11:58 _db_xatex
drwxrw-rw- 2 root root 12288 May 18 14:27 _db_xatex_original
drwx------ 2 mysql mysql 12288 Jun 9 08:23 _db_xatex_contab
drwx------ 2 mysql mysql 12288 May 18 17:58 _db_xatex_copy
drwx------ 2 mysql mysql 4096 Nov 24 2016 _db_xatex_test
Running chown mysql:mysql _db_xatex_original; chmod 700 _db_xatex_original would fix the problem (but check inside the directory to verify that permissions and ownership are copacetic there too).
In the end, I employed the following ugly hack (after trying stopping, restarting and repairing whatever could be targeted by a REPAIR):
created a database "scapegoat"
stopped MySQL Server
copied the directory created by MySQL Server, /var/lib/mysql/scapegoat, to /tmp
restarted MySQL Server, dropped the database "scapegoat", stopped the server
Now I had a copy of a clean, empty DB dir that MySQL no longer knew anything about.
moved the "evildb" directory to /tmp (so that if thing went wrong I could put it back)
moved the "scapegoat" directory to /var/lib/mysql renaming it to "evildb"
started MySQL Server
I'm not sure if I ran any more repairs at this point
and the "evildb" database became droppable!
My explanation is that when asked to drop a database, MySQL Server first performs some checks on the files in the database directory. If these checks fail, the drop also fails. These checks must be subtly different from the ones performed by REPAIR. Maybe in the affected directory there is something unexpected.
I think this was on a MySQL 5.1 or 5.2 on a SuSE 11.2 Linux distribution. Hope it helps.
UPDATE
On thinking back, I don't remember getting errors about "proc". So I'm less sure that the problem lies in the directory. It might be connected with the proc table, without being a table corruption. Have you tried visually inspecting the proc table in the mysql database, to find something there that belongs to the evil DB?
USE mysql;
SELECT * FROM proc;
That, or any errors therefrom, could help in solving the problem. You might, who knows, have some rows with the wrong db column. In a pinch, you could export the proc table and reload it after cleaning it (either through SQL or via a disk file).
TEST
I have partial verification for the above update. By intentionally inserting rubbish into the proc table for a newly created database evil, I partially reproduced your symptoms (undroppable database, MySQL connection crashes on the attempt). The error number is not 1548, though; maybe it would be if I inserted the right rubbish into that table... Anyway, the useful bit is that by removing all references to the evil db, the latter became droppable again:
mysql> drop database evil;
ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql> use mysql;
No connection. Trying to reconnect...
Connection id: 1
Current database: *** NONE ***
Database changed
mysql> DELETE FROM proc WHERE db = 'evil';
Query OK, 2 rows affected (0.00 sec)
mysql> drop database evil;
Query OK, 0 rows affected (0.00 sec)
I had the same problem, and all I did was delete the database directory from the MySQL data directory.
If you are using XAMPP on Windows,
you can drop your database using phpMyAdmin:
go to home -> databases -> click on your [database name] -> drop
OR
you can also drop your database manually:
go to xampp -> mysql -> data -> [database name]
and delete your [database name].

mysqldump and flush tables with read locks

I'm running mysqldump from server1 to server2.
The mysqldump command I'm using is
mysqldump -q -u dump -p######## -h ###.###.###.### --add-drop-database --add-drop-table --set-charset --all-databases > dump.sql
and the user and privileges are correct.
When I run this I do get an output file (dump.sql) but it stops at 948,920 bytes and does not increase in size even if I leave it for 1 hour.
I have tried the mysqldump 8 times now, with the same message from the running process:
292186 root localhost RED Query 26 Waiting for release of readlock LOCK TABLES `OLD_RED_NOTES` READ /*!32311 LOCAL */,`RED_ADD` READ /*!32311 LOCAL */,`RED_COUNTRY` RE
If, however, I don't perform FLUSH TABLES WITH READ LOCK; I can get a 7GB file without issue.
I simply can't understand why I can't get this with the table lock!
I've been bashing Google for 48 hours with no joy! Please help.
Do "SHOW PROCESSLIST" as root to see what other processes are hitting your table and giving you the lock. If your tables are InnoDB, use SHOW ENGINE INNODB STATUS which will show the InnoDB locks and other information as well. If you are using MyISAM, "SHOW STATUS LIKE 'Table%'" will show you if you're hitting a lot of locks all the time or only with the dumps.

How to re-sync the MySQL DB if master and slave have different databases, in the case of MySQL replication?

MySQL Server1 is running as MASTER.
MySQL Server2 is running as SLAVE.
Now DB replication is happening from MASTER to SLAVE.
Server2 was removed from the network and reconnected after 1 day. After this there is a mismatch between the databases on the master and the slave.
How can I re-sync the DB again, given that restoring a dump taken from the master onto the slave doesn't solve the problem?
This is the full step-by-step procedure to resync a master-slave replication from scratch:
At the master:
RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
And copy the values from the result of the last command somewhere.
Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:
mysqldump -u root -p --all-databases > /a/path/mysqldump.sql
Now you can release the lock, even if the dump hasn't ended yet. To do it, perform the following command in the MySQL client:
UNLOCK TABLES;
Now copy the dump file to the slave using scp or your preferred tool.
At the slave:
Open a connection to mysql and type:
STOP SLAVE;
Load master's data dump with this console command:
mysql -uroot -p < mysqldump.sql
Sync slave and master logs:
RESET SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
Where the values of the above fields are the ones you copied before.
Finally, type:
START SLAVE;
To check that everything is working again, after typing:
SHOW SLAVE STATUS;
you should see:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
That's it!
The documentation for this at the MySQL site is woefully out of date and riddled with foot-guns (such as interactive_timeout). Issuing FLUSH TABLES WITH READ LOCK as part of your export of the master generally only makes sense when coordinated with a storage/filesystem snapshot such as LVM or zfs.
If you are going to use mysqldump, you should instead rely on the --master-data option to guard against human error and to release the locks on the master as quickly as possible.
Assume the master is 192.168.100.50 and the slave is 192.168.100.51. Each server has a distinct server-id configured, the master has binary logging on, and the slave has read-only=1 in my.cnf.
To stage the slave to be able to start replication just after importing the dump, issue a CHANGE MASTER command but omit the log file name and position:
slaveserver> CHANGE MASTER TO MASTER_HOST='192.168.100.50', MASTER_USER='replica', MASTER_PASSWORD='asdmk3qwdq1';
Issue the GRANT on the master for the slave to use:
masterserver> GRANT REPLICATION SLAVE ON *.* TO 'replica'@'192.168.100.51' IDENTIFIED BY 'asdmk3qwdq1';
Export the master (in screen) using compression and automatically capturing the correct binary log coordinates:
mysqldump --master-data --all-databases --flush-privileges | gzip -1 > replication.sql.gz
Copy the replication.sql.gz file to the slave and then import it with zcat to the instance of MySQL running on the slave:
zcat replication.sql.gz | mysql
Start replication by issuing the command to the slave:
slaveserver> START SLAVE;
Optionally update the /root/.my.cnf on the slave to store the same root password as the master.
If you are on 5.1+, it is best to first set the master's binlog_format to MIXED or ROW. Beware that row-logged events are slow for tables which lack a primary key. This is usually better than the alternative (and default) configuration of binlog_format=statement (on the master), since it is less likely to produce the wrong data on the slave.
If you must (but probably shouldn't) filter replication, do so with the slave options replicate-wild-do-table=dbname.% or replicate-wild-ignore-table=badDB.%, and use only binlog_format=row; see the sketch below.
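As a sketch of where those settings live (dbname and badDB are the placeholders from the sentence above; the filters go in the slave's my.cnf, binlog_format on the master):
# slave my.cnf
[mysqld]
replicate-wild-do-table = dbname.%
# or, to exclude instead: replicate-wild-ignore-table = badDB.%
# master my.cnf
[mysqld]
binlog_format = row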
This process will hold a global lock on the master for the duration of the mysqldump command but will not otherwise impact the master.
If you are tempted to use mysqldump --master-data --all-databases --single-transaction (because you are only using InnoDB tables), you are perhaps better served using MySQL Enterprise Backup or the open source implementation called xtrabackup (courtesy of Percona).
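For reference, a hedged sketch of a basic xtrabackup run (flag names as in recent Percona XtraBackup releases; user, password, and target directory are placeholders):
xtrabackup --backup --user=root --password=YourPass --target-dir=/data/backup/base
xtrabackup --prepare --target-dir=/data/backup/base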
Unless you are writing directly to the slave (Server2), the only problem should be that Server2 is missing any updates that have happened since it was disconnected. Simply restarting the slave with "START SLAVE;" should get everything back up to speed.
I am very late to this question; however, I did encounter this problem and, after much searching, I found this information from Bryan Kennedy: http://plusbryan.com/mysql-replication-without-downtime
On Master take a backup like this:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/dump.sql
Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:
head dump.sql -n80 | grep "MASTER_LOG"
Copy the "dump.sql" file over to Slave and restore it:
mysql -u mysql-user -p < ~/dump.sql
Connect to Slave mysql and run a command like this:
CHANGE MASTER TO MASTER_HOST='master-server-ip', MASTER_USER='replication-user', MASTER_PASSWORD='slave-server-password', MASTER_LOG_FILE='value from above', MASTER_LOG_POS=value from above; START SLAVE;
To check the progress of Slave:
SHOW SLAVE STATUS;
If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”.
Look for Seconds_Behind_Master which indicates how far behind it is.
YMMV. :)
I think the Maatkit utilities can help you! You can use mk-table-sync. Please see this link: http://www.maatkit.org/doc/mk-table-sync.html
Here is what I typically do when a MySQL slave gets out of sync. I have looked at mk-table-sync but thought the Risks section looked scary.
On Master:
SHOW MASTER STATUS
The outputted columns (File, Position) will be of use to us in a bit.
On Slave:
STOP SLAVE
Then dump the master db and import it to the slave db.
Then run the following:
CHANGE MASTER TO
MASTER_LOG_FILE='[File]',
MASTER_LOG_POS=[Position];
START SLAVE;
Where [File] and [Position] are the values outputted from the "SHOW MASTER STATUS" ran above.
Hope this helps!
Following up on David's answer...
Using SHOW SLAVE STATUS\G will give human-readable output.
Master:
mysqldump -u root -p --all-databases --master-data | gzip > /tmp/dump.sql.gz
scp master:/tmp/dump.sql.gz slave:/tmp/   # move the dump file to the slave server
Slave:
STOP SLAVE;
zcat /tmp/dump.sql.gz | mysql -u root -p
START SLAVE;
SHOW SLAVE STATUS;
NOTE:
On the master you can run SET GLOBAL expire_logs_days = 3 to keep binlogs for 3 days in case of slave issues.
Here is a complete answer that will hopefully help others...
I wanted to set up MySQL replication using a master and a slave, and since the only thing I knew was that it uses log file(s) to synchronize, I figured that if the slave goes offline and gets out of sync, it should in theory only need to connect back to its master and keep reading the log file from where it left off, as user malonso mentioned.
So here are the test results after configuring the master and slave as described by: http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html ...
Provided you use the recommended master/slave configuration and don't write to the slave, he and I were right (as far as mysql-server 5.x is concerned). I didn't even need to use "START SLAVE;"; it just caught up with its master. But there is a default of 88,000-something retries, one every 60 seconds, so I guess if you exhaust that you might have to start or restart the slave. Anyway, for those like me who wanted to know whether a slave going offline and coming back up requires manual intervention: no, it doesn't.
Maybe the original poster had corruption in the log file(s)? But most probably not from just a server going offline for a day.
Pulled from /usr/share/doc/mysql-server-5.1/README.Debian.gz, which probably makes sense for non-Debian servers as well:
* FURTHER NOTES ON REPLICATION
===============================
If the MySQL server is acting as a replication slave, you should not
set --tmpdir to point to a directory on a memory-based filesystem or to
a directory that is cleared when the server host restarts. A replication
slave needs some of its temporary files to survive a machine restart so
that it can replicate temporary tables or LOAD DATA INFILE operations. If
files in the temporary file directory are lost when the server restarts,
replication fails.
You can use SQL like show variables like 'tmpdir'; to find out.
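A minimal sketch of pinning it to a persistent location in my.cnf (the path is a placeholder; any directory that survives a reboot will do):
[mysqld]
tmpdir = /var/lib/mysql-tmp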
Adding to the popular answer to include this error:
"ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO",
Setting up replication from the slave in one shot:
In one terminal window:
mysql -h <Master_IP_Address> -uroot -p
After connecting,
RESET MASTER;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
The status appears as below. Note that the position number varies!
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 | 98 | your_DB | |
+------------------+----------+--------------+------------------+
Export the dump as described above, "using another terminal"!
Exit and connect to your own DB (which is the slave):
mysql -u root -p
Then type the commands below:
STOP SLAVE;
Import the dump as mentioned (in another terminal, of course!) and type the commands below:
RESET SLAVE;
CHANGE MASTER TO
MASTER_HOST = 'Master_IP_Address',
MASTER_USER = 'your_Master_user',   -- usually the "root" user
MASTER_PASSWORD = 'Your_MasterDB_Password',
MASTER_PORT = 3306,
MASTER_LOG_FILE = 'mysql-bin.000001',
MASTER_LOG_POS = 98;   -- in this case
Once logged in, set the server_id parameter (usually, for new / non-replicated DBs, this is not set by default):
set global server_id=4000;
Now, start the slave.
START SLAVE;
SHOW SLAVE STATUS\G
The output should be the same as described above.
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Note: Once replicated, the master and slave share the same password!
Rebuilding the slave using LVM
Here is the method we use to rebuild MySQL slaves using Linux LVM. This guarantees a consistent snapshot while requiring very minimal downtime on your master.
Set innodb_max_dirty_pages_pct to zero on the master MySQL server. This will force MySQL to write all the pages to disk, which will significantly speed up the restart.
set global innodb_max_dirty_pages_pct = 0;
To monitor the number of dirty pages run the command
mysqladmin ext -i10 | grep dirty
Once the number stops decreasing, you have reached the point where you can continue. Next, reset the master to clear the old binlogs / relay logs:
RESET MASTER;
Execute lvdisplay to get LV Path
lvdisplay
Output will look like this
--- Logical volume ---
LV Path /dev/vg_mysql/lv_data
LV Name lv_data
VG Name vg_mysql
Shutdown the master database with command
service mysql stop
Next take a snapshot; mysql_snapshot will be the new logical volume name. If binlogs are placed on the OS drive, those need to be snapshotted as well.
lvcreate --size 10G --snapshot --name mysql_snapshot /dev/vg_mysql/lv_data
Start master again with command
service mysql start
Restore dirty pages setting to the default
set global innodb_max_dirty_pages_pct = 75;
Run lvdisplay again to make sure the snapshot is there and visible
lvdisplay
Output:
--- Logical volume ---
LV Path /dev/vg_mysql/mysql_snapshot
LV Name mysql_snapshot
VG Name vg_mysql
Mount the snapshot
mkdir /mnt/mysql_snapshot
mount /dev/vg_mysql/mysql_snapshot /mnt/mysql_snapshot
If you have an existing MySQL slave running you need to stop it
service mysql stop
Next you need to clear MySQL data folder
cd /var/lib/mysql
rm -fr *
Back on the master. Now rsync the snapshot to the MySQL slave:
rsync --progress -harz /mnt/mysql_snapshot/ targethostname:/var/lib/mysql/
Once rsync has completed you may unmount and remove the snapshot
umount /mnt/mysql_snapshot
lvremove -f /dev/vg_mysql/mysql_snapshot
Create a replication user on the master if the old replication user doesn't exist or its password is unknown:
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'[SLAVE IP]' IDENTIFIED BY 'YourPass';
Verify that the /var/lib/mysql data files are owned by the mysql user; if so, you can omit the following command:
chown -R mysql:mysql /var/lib/mysql
Next record the binlog position
ls -laF | grep mysql-bin
You will see something like
..
-rw-rw---- 1 mysql mysql 1073750329 Aug 28 03:33 mysql-bin.000017
-rw-rw---- 1 mysql mysql 1073741932 Aug 28 08:32 mysql-bin.000018
-rw-rw---- 1 mysql mysql 963333441 Aug 28 15:37 mysql-bin.000019
-rw-rw---- 1 mysql mysql 65657162 Aug 28 16:44 mysql-bin.000020
Here the master log file is the highest file number in the sequence, and the bin log position is the file size. Record these values:
master_log_file=mysql-bin.000020
master_log_pos=65657162
Next start the slave MySQL
service mysql start
Execute the CHANGE MASTER command on the slave:
CHANGE MASTER TO
master_host="10.0.0.12",
master_user="replication",
master_password="YourPass",
master_log_file="mysql-bin.000020",
master_log_pos=65657162;
Finally start the slave
START SLAVE;
Check slave status:
SHOW SLAVE STATUS;
Make sure Slave IO is running and there are no connection errors. Good luck!
I recently wrote this up on my blog, which is found here... There are a few more details there, but the story is the same.
http://www.juhavehnia.com/2015/05/rebuilding-mysql-slave-using-linux-lvm.html
I created a GitHub repo with a script to solve this problem quickly. Just change a couple of variables and run it (first, the script creates a backup of your database).
I hope this helps you (and other people too).
How to Reset (Re-Sync) MySQL Master-Slave Replication
sometimes you just need to give the slave a kick too
try
stop slave;
reset slave;
start slave;
show slave status;
quite often, slaves, they just get stuck guys :)
We are using MySQL's master-master replication technique, and if one MySQL server, say server 1, is removed from the network, it reconnects itself after the connection is restored, and all the records that were committed on server 2 (which stayed on the network) are transferred to server 1, which had lost the connection, after restoration.
The slave thread in MySQL retries connecting to its master every 60 seconds by default. This property can be changed, as MySQL has a flag "master_connect_retry=5", where 5 is in seconds. This means that we want a retry every 5 seconds; see the sketch below.
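A sketch of setting that (standard CHANGE MASTER syntax; 5 seconds is just the example value from above, and the slave must be stopped first):
STOP SLAVE;
CHANGE MASTER TO MASTER_CONNECT_RETRY = 5;   -- retry every 5 seconds instead of 60
START SLAVE;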
But you need to make sure that the server which lost the connection does not make any commits to the database, or you will get a duplicate-key error (error code: 1062).
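If duplicate-key errors do slip through, one my.cnf option sometimes used is slave-skip-errors (my suggestion, not part of the answer above; note it masks the divergence rather than fixing it):
[mysqld]
slave-skip-errors = 1062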