Do locks propagate over replication?

I have a MySQL master-slave(s) replication setup with MyISAM tables. All updates are done on the master and selects are done on either the master or the slaves.
It appears that we might need to manually lock a few tables when we do certain updates. While this write lock is on the tables, no selects can happen on the locked table. But what about on the slaves? Does the lock propagate out?
Say I have table_A and table_B. I initiate a lock on table_A and table_B on the master and start performing the update. At this time no other connection can read table_A and table_B off the master? But what if at this time another connection tries to read the tables off of a slave, can they do so?

Everything that MySQL replicates can be found in the binary logs.
You can run the following command to see the details.
show global variables like 'log_bin%';
log_bin_basename will tell you the path to your binary logs with base file name.
and run
show binary logs
to find the binary files that are currently present on your server.
You can check the actual commands that are written to the file by using the mysqlbinlog command together with the file name, or by running show binlog events ... from the MySQL CLI.
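For example, assuming a file name taken from the show binary logs output (the name and path below are only placeholders):
show binlog events in 'mysql-bin.000042' limit 20;
-- or, from the shell: mysqlbinlog /var/lib/mysql/mysql-bin.000042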
Also, check which binlog_format you are using.
Basically, the table locks are not directly propagated to the slaves, but when the slaves execute the replicated updates they will take a lock on the updated table themselves if needed.
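To illustrate with the tables from the question (the column name is made up):
lock tables table_A write, table_B write;
update table_A set some_col = some_col + 1;  -- only the updates end up in the binary log
update table_B set some_col = some_col + 1;
unlock tables;
The lock tables / unlock tables statements themselves are not written to the binary log; each slave simply takes its own short write lock on the table while it applies the updates.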

As far as I know, write locks do not propagate into the binlog. You can verify that by doing a quick test and looking at the binlog. If you want to avoid issues on the master as well and for some reason cannot migrate to InnoDB, consider integrating something like GET_LOCK() into your application instead of completely locking a table. MyISAM is quite iffy when it comes to concurrency.
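A minimal sketch of the GET_LOCK() approach (the lock name and column are arbitrary; all writers just have to agree on the name, and the lock only coordinates clients on the master):
select get_lock('parcels_update', 10);   -- wait up to 10 seconds for the named lock
update table_A set some_col = some_col + 1;
update table_B set some_col = some_col + 1;
select release_lock('parcels_update');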

Related

How to re-replicate ignored tables

I'm currently thinking about the following problem:
A customer has set up a simple master/slave replication between two MariaDB systems. For unknown reasons they set the flag "Replicate_Wild_Ignore_Table" to skip "logdb.%". They have now decided to stop skipping that database and want logdb to be included in the replication again.
I'm curious now: is it possible to somehow remove that flag and have the database in question replicated like the rest, or is there no way to avoid the "stop slave, dump master, import dump, recreate replication based on current logpos, start slave" procedure?
You can't assume that the master still has all relevant binlogs that once contained updates to the logdb.% tables. That is, even if you could re-apply those updates, do you have enough history to account for all changes to the tables?
Another risk: if you use statement-based replication and there were ever statements that referenced both a table in logdb.% and a table in another database, the replication filter skipped that statement. For example:
INSERT INTO mydb.mytable SELECT * FROM logdb.othertable;
Therefore even the tables that are not in logdb.% might be compromised. The point is you don't know for sure.
The bottom line is that you should definitely reinitialize the replica now by taking a current backup of the master, and avoid using replication filters in the future.
If you use InnoDB tables, you might consider using Percona XtraBackup to make the process easier. See https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/setting_up_replication.html
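A rough outline of the reinitialization with mysqldump, assuming InnoDB tables (file names and coordinates below are placeholders, not real values):
-- shell, on the master:
--   mysqldump --all-databases --single-transaction --master-data=2 > full_dump.sql
-- SQL, on the slave, after removing replicate-wild-ignore-table from my.cnf and restarting:
stop slave;
-- import full_dump.sql, then point replication at the coordinates recorded in the dump header:
change master to master_log_file='mysql-bin.000123', master_log_pos=4567;
start slave;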

Replication issue

We have one master and two VIP slave database servers. We changed the data type of a column from VARCHAR(255) to TEXT on the master.
The application is currently configured to use the master only for write operations and the slaves for read operations.
After changing the data type on the master server using an ALTER TABLE command, the slave servers became unresponsive.
We are using MariaDB 10.0.
[PROCESSES INFORMATION]
Id      User  Host    Db  Command      Time(sec)   State                                                                   Info
------  ----  ------  --  -----------  ----------  ----------------------------------------------------------------------  ----
203739  repl  slave1      Binlog Dump  75,143,121  Master has sent all binlog to slave; waiting for binlog to be updated
203740  repl  slave2      Binlog Dump  75,143,121  Master has sent all binlog to slave; waiting for binlog to be updated
The slave instance becomes very slow due to slow queries.
number of sessions: 1590
thread_pool_max_thread=500
Current value =648
After performing the ALTER TABLE on the master server, it replicated to the slave servers, and at the same time the number of sessions increased rapidly on the slaves.
I think the slaves became unresponsive because of slow queries.
But I don't know why these queries became so slow and the slaves became unresponsive.
The DBAs say that after executing an ANALYZE TABLE command the issue was solved.
But I don't understand why, because ANALYZE TABLE only updates the statistics.
It would be helpful if anyone could comment on why this happened and how to avoid such issues in the future.
There is one minor case where TEXT is slower than VARCHAR. When a SELECT needs to build a temporary table (often for sorting due to GROUP BY or ORDER BY), it first tries to build a MEMORY table. But, TEXT and BLOB prevent it from using such, so it uses MyISAM instead. This is slower (but gets the job done).
I say this is a "minor case" because users rarely identify it with phrases like "very slow" and "becomes unresponsive". I would guess that a SELECT might run twice as slow.
Also, the ANALYZE TABLE discussion does not hold water. Again it may be coincidence, not causation.
So, the change to TEXT may be a 'red herring'. Instead, let's discover what is being slow by using the slowlog. See this for what I like to work from.
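A starting point for that investigation: turn on the slow log and check whether temporary tables are spilling to disk (the threshold below is just an example value):
set global slow_query_log = on;
set global long_query_time = 1;           -- log anything slower than 1 second, adjust to taste
show global status like 'Created_tmp%';  -- a fast-growing Created_tmp_disk_tables points at the TEXT / temp-table issue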

Transactions between two replicating master mysql servers

With master-to-master replicating MySQL databases using the InnoDB engine, if a transaction is initiated on database A, will that row be locked on database B until the transaction has been committed?
The master getting the first transaction is completely separate from the second master and they communicate through a binary log.
https://dev.mysql.com/doc/refman/5.7/en/replication-formats.html
In the case of something requiring a transaction, the actual statements are not written to the log until the transaction is complete.
https://dev.mysql.com/doc/refman/5.7/en/replication-features-transactions.html
So the second master should be completely unhindered, since it won't actually know anything about the request until the first master is done processing it.
(The standard caveats apply though: it may depend on the type of replication, SBR/RBR/mixed, and on the actual transactions.)
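To make that concrete, a small sketch (table and columns are hypothetical): nothing from the transaction below reaches the binary log, and therefore the other master, until the COMMIT.
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- the whole transaction is written to the binary log only at this point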

MySQL binlog: when does the DELETE get logged before the INSERT

Some relevant my.cnf settings:
binlog-format=ROW
init_connect='SET autocommit=1'
autocommit=1
innodb_flush_log_at_trx_commit=1
I also have replication running... Now, most of the time things run rather well.
But sometimes I do get this:
Could not execute Delete_rows/Update_rows event on table auto.parcels_to_cache; Can't find record in 'parcels_to_cache'.
This is because of this:
mysql-bin.000021.decoded-26373095-### DELETE FROM auto.parcels_to_cache
mysql-bin.000021.decoded-26373096-### WHERE
mysql-bin.000021.decoded-26373097-### #1='0101'
mysql-bin.000021.decoded-26373098-### #2='2013:01:05'
mysql-bin.000021.decoded:26373099:### #3='01014700669249'
--
mysql-bin.000022.decoded-4143326-### INSERT INTO auto.parcels_to_cache
mysql-bin.000022.decoded-4143327-### SET
mysql-bin.000022.decoded-4143328-### #1='0101'
mysql-bin.000022.decoded-4143329-### #2='2013:01:05'
mysql-bin.000022.decoded:4143330:### #3='01014700669249'
This is a decoded binary log from the master server. The replication server reflects this.
Also, this seems to happen only on InnoDB tables, but not always. Although I think the MyISAM replication problems I had were related to another issue.
I recently recoded all the sources to remove the few transactions I had in there, so there are no BEGINs, COMMITs, or ROLLBACKs anymore... Then I also changed the MySQL database class to always turn commits off.
This is because I read on the MySQL website that there are issues with mixing transactional and non-transactional tables.
For example, this auto.parcels_to_lifecycle table is heavily used; sometimes it is accessed by possibly 20 threads at once. Hence the InnoDB. Otherwise each thread would have to wait while only one thread updates...
Does anyone know how to fix this DELETE-before-INSERT problem? Or maybe suggest a way to approach the problem and fix it?
Thanks!

MySQL Replication: Preventing master server from replicating table inserts

I have a logging table on the master server that is inserted into very often. I don't need this table replicated to the slave servers, and in fact I already have replicate-ignore-table set on the slaves to ignore it.
However, that only happens after all of those inserts are fetched from the master. I'd like to prevent those inserts from getting sent to the slaves entirely for 2 reasons:
Cut down on network traffic between the servers
I've had cases of the relay log entries being corrupted (and having to skip corrupted entries). Given the quantity of inserts into the logging table, it's always on those inserts (which aren't necessary anyway).
Is it possible to somehow prevent the master from sending back the logs for a specific table? Or, prevent the inserts from showing up in the master's bin-log files? I'm only aware of ignoring databases in the master's bin-log files.
Thanks.
In your code, send "SET SESSION sql_log_bin=0" to MySQL before inserting a logging row. Then set it back to 1 afterward.
This approach gives you fine-grained control over when to binary-log and when not to. The only possible drawback is that the database user will need the SUPER privilege.
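A minimal sketch of that pattern (the logging table and columns are hypothetical):
SET SESSION sql_log_bin = 0;   -- requires SUPER
INSERT INTO app_log (created_at, message) VALUES (NOW(), 'something happened');
SET SESSION sql_log_bin = 1;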