MySQL 8 INSERT performance degradation with bin-log

I have a setup using PHP/MySQL running on MySQL 5.
I have moved this setup to a MySQL 8 based install and suffered severe degradation in INSERT performance.
A typical INSERT now takes 40ms.
Tables and queries are identical.
SELECT performance is as good or better.
INSERTs are consistently about 100 times slower on MySQL 8.
I have changed innodb_flush_log_at_trx_commit = 2.
This improved things somewhat, bringing a typical INSERT down to 10ms.
Then I disabled the bin-log. This gave a significant performance boost.
An INSERT is now about 0.5ms, which I believe is acceptable, as it is not a heavy-traffic db.
The question now is:
Is it normal to have this difference in performance due to bin-log?
If not, what should be expected, and what are the likely candidates to improve this?
Is it safe to disable bin-log, given that I do not have db replication?

The binary log does have a pretty high performance overhead. https://www.percona.com/blog/2018/05/04/how-binary-logs-affect-mysql-8-0-performance/ shows that the overhead is up to 30%.
The binary log is used for replication, as you know, and it can also be used for point-in-time recovery if you combine it with backups. It's up to you whether that's useful or whether you'd rather disable binary logs.
I'll comment that as a consultant, I have been called by a few clients who had a database server crash when their storage device died. I asked if they had working backups or binary logs, and they said no. All I could say was, "then I hope that data wasn't important."
There's a compromise solution: you can keep sync_binlog enabled but use a value other than 1. The value is not a boolean or ON/OFF; it's an integer N, meaning "sync the binlog file to disk every Nth commit." You can set it to a higher value, for example 100, so it syncs to disk only on every 100th commit. This obviously isn't as safe as syncing after every commit, but it's often better than letting the filesystem buffer writes until it feels like syncing.
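For illustration, the compromise might look like this in my.cnf (the values are examples, not recommendations; tune them to your own durability needs):

[mysqld]
sync_binlog = 100                    # sync the binlog on every 100th commit
innodb_flush_log_at_trx_commit = 2   # write the redo log at commit, fsync about once per second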
Another strategy is to use replication, with semi-synchronous replication so the replica is guaranteed to receive the binlog event, even if it isn't synced to disk on the primary or the replica. Replicating a binary log event over a local network is fast, and is often faster than syncing it to disk.
One more comment: if your disk performance is important, you should explore options for upgrading the hardware to support fast syncs. If you are still using rotational disks, consider upgrading to SSD or NVMe technology.

Related

MySQL/MariaDB read preference from slave with max staleness

I am using MySQL/MariaDB with the InnoDB storage engine, version 10.x.
I want to set up a cluster with a master-slave configuration. There is an option to read data from a slave using --innodb-read-only or --read-only.
However, in addition to the above, the client needs to read data from a slave if and only if the maximum slave lag is less than x seconds.
Slaves can lag behind the primary due to network congestion, low disk throughput, long-running operations, etc. A read preference with a max allowed staleness option would let the application specify a maximum replication lag, or "staleness", for reads from slaves. When a slave's estimated staleness exceeds that limit, the client stops using it for read operations and starts reading from the master.
I would like to know: is there such an option in MySQL/InnoDB?
There's no automatic option for switching the query to the master. This is handled by application logic.
You can run a query SHOW SLAVE STATUS and one of the fields returned is Seconds_Behind_Master. You would have to write application code to check this, and if the lag is greater than your threshold, query the master instead.
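A minimal sketch of that check (the 10-second threshold is only an example):

SHOW SLAVE STATUS;
-- Read the Seconds_Behind_Master field from the result in your application.
-- It is NULL when replication is stopped or broken, so treat NULL as
-- "unusable" rather than as zero lag. If the value is NULL or greater
-- than your threshold (say, 10), send the query to the master instead.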
You might find some type of proxy that can do this logic for you. See https://mydbops.wordpress.com/2018/02/19/proxysql-series-mysql-replication-read-write-split-up/
It's not always the best option to treat a replica with X seconds of lag as unusable. Some queries are perfectly okay regardless of the lag. I wrote a presentation about this some years ago, and it includes some example queries: Read/Write Splitting with MySQL and PHP (Percona webinar, 2013).
There are many proxy products that may have code for such logic.
If you automatically switch to the Master, it may get overwhelmed, leading to worse system problems.
If you try to switch to another Slave, it is too easy to get into a flapping situation.
Galera has a way to deal with "critical reads", if you want to go to a Cluster setup instead of Master + Slaves.
If part of the problem is the distance between Master and Slave, and you switch to the Master, where is the client? If it is near the Slave, won't the added round-trip time to the Master cancel out some of the benefit?
Avoid long-running queries, beef up the Slave to avoid slow disks, speed up queries that hit the disk a lot, and look into network improvements.
In summary, I don't like the idea of attempting to move a query to the Master; I would work on fixing the underlying problem.
MariaDB MaxScale has multiple ways of dealing with replication lag.
The simplest method is to limit the maximum allowed replication lag with the max_slave_replication_lag parameter. This works exactly the way you described: if a slave is too many seconds behind the master, the other slaves are used and, as a last resort, the master. This is the most common method of dealing with replication lag in MaxScale.
Another option is to use the causal_reads feature, which leverages MASTER_GTID_WAIT and other features found in MariaDB 10.2 and newer. This allows read consistency without adding load on the master. It does come at the cost of latency: if the server is lagging several seconds behind, the read can take longer. This option is useful when data consistency is critical but request latency is not as important.
The third option is to use the CCRFilter to force reads to the master after a write happens. This is a simpler approach than causal_reads, but it provides data consistency at the cost of increased load on the master.
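For illustration, a hypothetical readwritesplit service section using the first option (server names and credentials are placeholders, and a monitor is assumed to be configured so MaxScale can measure lag):

[Split-Service]
type=service
router=readwritesplit
servers=master1,slave1,slave2
user=maxscale
password=maxscale_pw
max_slave_replication_lag=10   # seconds; slaves lagging more than this are skipped
# causal_reads=true            # alternative: wait for the slave to catch up instead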

MySQL: Speed over reliability config

For my development machine I need no data consistency in case of a crash. Is there a config for a Debian-like system that optimizes MySQL for speed, even if it sacrifices reliability?
So something like: cache the last 1 GB in RAM, and don't touch the disk until that 1 GB is used.
What kind of queries are going on? One of my mantras: "You cannot configure your way out of a performance problem."
Here's one thing that speeds up InnoDB, wrt transactions:
innodb_flush_log_at_trx_commit = 2
There is a simple way to speed up single-row inserts by a factor of 10: batch them into multi-row INSERTs.
Some 'composite' indexes can speed up a SELECT by a factor of 100.
Reformulating a WHERE can sometimes speed up a query by a factor of 100.
You can disable many of the InnoDB durability features, at the cost of an increased risk of losing data. But sometimes you want to operate the database in "running with scissors" mode, because the original data is safely stored somewhere else and the copy in your test database is easily recreated.
This blog describes Reducing MySQL durability for testing. You aren't going to see any official MySQL recommendation to do this for any purpose other than testing!
Here's a summary of changes you can make in your /etc/my.cnf:
[mysqld]
# log_bin (comment this out to disable the binary log)
# sync_binlog=0 (irrelevant if you don't use the binary log)
sync_frm=0
innodb_flush_log_at_trx_commit=0
innodb_doublewrite=0
innodb_checksums=0
innodb_support_xa=0
innodb_log_file_size=2048M # or more
He also recommends increasing innodb_buffer_pool_size, but the right size depends on your available RAM.
For what it's worth, I recently tried to set innodb_flush_log_at_trx_commit=0 in the configuration in the default Vagrant box I built for developers on my team, but I had to back out that change because it was causing too much lost time for developers who were getting corrupted databases. Just food for thought. Sometimes it's not a good tradeoff.
This doesn't do exactly what you asked (keep the last 1GB of data in RAM), as it still operates InnoDB with transaction logging and the log flushes to disk once per second. There's no way to turn that off in MySQL.
You could try using MyISAM, which uses buffered writes for data and index, and relies on the filesystem buffer. Therefore it could cache some of your data (in practice I have found that the buffer flushes to disk pretty promptly, so you're unlikely to have a full 1GB in RAM at any time). MyISAM has other problems, like lack of support for transactions. Developing with MyISAM and then using InnoDB in production can set you up for some awkward surprises.
Here are a couple of other changes you could make in your MySQL sessions for the sake of performance, but I don't recommend them even for development, because they can change your application's behavior.
set session unique_checks=0;
set session foreign_key_checks=0;
Some people recommend using the MEMORY storage engine. That has its own problems, like size limits, table-locking, and lack of support for transactions.
I've also experimented with trying to put tables or tmpdir onto a tmpfs, but I found that didn't give nearly the performance boost you might expect. There's overhead in an RDBMS that is not directly related to disk I/O.
You might also like to experiment with MyRocks, a version of MySQL that includes the RocksDB storage engine. Facebook developed it and released it as open source. See Facebook rocks an open source storage engine for MySQL (InfoWorld). They promise reduced I/O, data compression, and other neat things.
But again, it's a good rule of thumb to make your development environment as close as possible to your production environment. Using a different storage engine creates a risk of not discovering some bugs until your code reaches production.
Bottom line: Tuning MySQL isn't a magic bullet. Maybe you should consider designing your application to make more use of microservices, caches, and message queues, and less reliance on direct SQL queries.
Also, I'd recommend always supplying your developers with the fastest SSD-based workstation you can afford. Go for the top of the line on CPU, RAM, and disk speed.
Bill Karwin's answer has useful MySQL settings to improve performance. I used them all and achieved a roughly 2x performance improvement.
However, what gave me the biggest performance boost (nearly 15x faster) for my use case -- which was reloading a MySQL dump -- was to mount the underlying filesystem (ext4) with the nobarrier option.
mount -o remount,nobarrier /
You should only consider this if you have a separate partition (or logical volume) mounted at /var/lib/mysql, so that you can make this tradeoff only for MySQL, not your entire system.
Although this answer may not hit exactly the questions you ask, consider creating your tables with the MEMORY engine, as documented here: http://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html
A typical use case for the MEMORY engine involves these characteristics:
Operations involving transient, non-critical data such as session management or caching. When the MySQL server halts or restarts, the data in MEMORY tables is lost.
In-memory storage for fast access and low latency. Data volume can fit entirely in memory without causing the operating system to swap out virtual memory pages.
A read-only or read-mostly data access pattern (limited updates).
Give that a shot.
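For illustration, a hypothetical session-cache table using that engine (note that MEMORY tables don't support BLOB/TEXT columns and use table-level locking):

CREATE TABLE session_cache (
  session_id CHAR(32) NOT NULL,
  user_id INT UNSIGNED NOT NULL,
  last_seen TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (session_id)
) ENGINE=MEMORY;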
My recommendation, even for a development machine, would be to use the default InnoDB. If you choose to do transactions, InnoDB will be helpful.
This blog can help you run MySQL off tmpfs: http://jotschi.de/2014/02/03/high-performance-mysql-testdatabase/. User Jotschi also covers this in SO answer #10692398

Tuning a write-only master MySQL database

I have a master database on which I only run write queries (inserts, deletes, updates).
I would like to know how to tune it, bearing in mind that SELECTs are not important here.
I'm using InnoDB, with replication: 1 master and 2 slaves, running on an Ubuntu 16.04 server with MySQL 5.6.
Disable the query cache. It's only beneficial for reads.
Disable the adaptive hash index. It's only beneficial for reads.
Increase the innodb_log_file_size. I recommend at least 2GB, unless disk space is short.
Drop indexes, except for those used by your UPDATE/DELETE statements. You can create more indexes on the slave to support SELECT queries (see the sketch at the end of this answer).
Consider fine-tuning the Buffer Pool Flushing. The optimal settings depend on your workload, so you'll have to experiment.
If you want to sacrifice durability, you can make some other changes. Warning: these increase the risk of data loss.
innodb_flush_log_at_trx_commit = 2 or 0 to relax synchronous log writes.
innodb_doublewrite = OFF to disable page write protections.
sync_binlog = 0 to disable synchronous writes to the binary log.
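Putting those settings together, a my.cnf sketch (values are illustrative; adjust for your hardware and risk tolerance):

[mysqld]
query_cache_type = 0               # the query cache only benefits reads
query_cache_size = 0
innodb_adaptive_hash_index = OFF   # also only benefits reads
innodb_log_file_size = 2G
# Durability trade-offs -- these increase the risk of data loss:
innodb_flush_log_at_trx_commit = 2
innodb_doublewrite = 0
sync_binlog = 0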
Make sure your data directory is on fast disks, like SSD or a caching RAID array.
Never use NFS.
You may experiment with putting innodb_log_group_home_dir and innodb_undo_directory and log_bin_basename and tmpdir on different physical volumes from your data directory. But this won't give a benefit unless performance is really disk-bound.
Further tuning depends on your workload. For example, changing the thread concurrency or the number of IO write threads or the IO capacity. If you want to go to this level of tuning, get some consulting from a professional.
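As a sketch of the index advice above (table and index names are hypothetical). Keep in mind that a DROP INDEX on the master replicates to the slaves, so drop there first and then re-add the index on each slave only:

-- On the master: drop an index that only served reads
-- (the drop replicates and removes the index on the slaves too).
ALTER TABLE orders DROP INDEX idx_customer_name;

-- On each slave: re-create the index to support SELECT queries.
-- An extra index that exists only on a slave does not break replication.
ALTER TABLE orders ADD INDEX idx_customer_name (customer_name);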
@spencer7593's comment brings up a good point: you might not be able to achieve the best optimization solely with database tuning options.
You haven't mentioned anything about the application or the type of writes, but eventually you'll have to consider changing the way you write to the database. Tuning changes alone are limited in how much they can improve database performance.
For example, applications could write to a queue, then create a consumer app to consume items from the queue and write data to the database in larger batches. That means more efficient database writes, but more importantly it allows applications to "write" with much lower latency because they are only writing to a queue.
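For example, a hypothetical consumer could drain the queue into one multi-row INSERT (table and columns are made up), amortizing commit, binlog, and network overhead across many rows:

INSERT INTO events (user_id, event_type, created_at) VALUES
  (101, 'click', NOW()),
  (102, 'view',  NOW()),
  (103, 'click', NOW());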
Eventually, you may find that no single database instance can keep up with the rate of writes. At that point, you'll have to scale out, by spreading writes over multiple database instances. This is called "sharding" the data. Of course this adds more complexity to database reads, because your data is not all together. So try all the tuning changes you can try before resorting to sharding.

Run MySQL in memory with slave for persistence

Let's presume that I need to maximize my write performance and am willing to take a risk of a few minutes of lost data. My use case is a "burst" of activity for a few hours which will subside. The workload is append-heavy.
Let's presume, for the sake of argument, that the data is not so urgent that a few minutes of lost data will cause as many problems as a slow server. For reasons beyond my control, the master must run on EC2, so disk speed could be an issue.
My potentially crazy idea is to have a master database that runs entirely in RAM (either as a MEMORY table or as InnoDB backed by a RAM disk) and then replicate to a slave for slightly delayed persistence. What will go wrong?
Just use InnoDB with a huge buffer pool and set innodb_flush_log_at_trx_commit=2 or 0. It's pretty much what you're describing.
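A minimal sketch of that setup (the buffer pool size is a placeholder; size it to hold your working set):

[mysqld]
innodb_buffer_pool_size = 12G        # keep the working set entirely in RAM
innodb_flush_log_at_trx_commit = 0   # write and fsync the redo log ~once per second
sync_binlog = 0                      # the binlog still feeds the slave; syncing is left to the OS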

mysql replication - table locking?

I am currently working for a company that has a website running MySQL/PHP (all tables use the MyISAM storage engine).
We would like to implement replication, but I have read in the MySQL docs and elsewhere on the internet that this will lock the tables when writing to the binary log (which the slave dbs will eventually read from).
Will these locks cause a problem on a live site that is fairly write-heavy? Also, is there a way to enable replication without having to lock the tables?
If you change your table types to InnoDB, row-level locking is used. Also, your replication will be more stable, as updates will be transactional. MyISAM replication is a long-term pain.
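Converting is a single statement per table (table name hypothetical), though it rebuilds the table, so expect it to take a while on large tables:

ALTER TABLE orders ENGINE=InnoDB;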
Be sure that your servers are version-matched, and ALWAYS be sure to shut down the master before shutting down the slaves. You can bring the master up again immediately after shutting down the slaves, but you do have to take it down.
Also, make sure you use appropriate autoextend options for InnoDB. And while you're at it, you'll probably want to migrate away from FLOAT and DOUBLE to DECIMAL (which means MySQL 5.1). That will save you some replication headaches.
That's probably a bit more than you asked for. Enjoy.
P.S.: yes, the MyISAM locks can cause problems. Also, InnoDB is slower than MyISAM, unless MyISAM is blocking on a huge SELECT.
In my experience as a DBA for a write-heavy site, writing the binary log adds no perceptible locking or performance problems on the master. If you want to benchmark it, simply turn binary logging on. I really don't think tables are locked while queries are written to the binary log.
Table locking on the slave is quite another thing, however. Replication is serial: each query runs to completion before the slave runs the next one. So long-running updates will cause replication to fall behind temporarily. If your application intends to use replication for scale-out, it needs to know how to accommodate this.
The solution with the MyISAM table type is not "better". However, you can get by with it.
The best you can do is make sure your slave and master run on the same hardware (FPU differences can create replication errors), and that you are running the same MySQL version on both servers.
The following link answers your questions. Specifically, locks on MyISAM tables have less of a chance of blocking writes if there are no deletes going on. So a table that doesn't have delete holes in it will perform faster in a replicated setup.
http://dev.mysql.com/doc/refman/5.1/en/internal-locking.html
You can mitigate the effect of "holes" by having a DBA export/import periodically during scheduled downtime (especially after mass deletes). Also, make sure your slave databases don't go down while the master is still running. That will save you many, many issues.
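For what it's worth, OPTIMIZE TABLE defragments a MyISAM table in place and removes those delete holes, which may be simpler than an export/import (table name hypothetical; it locks the table while it runs, so schedule it in the same downtime window):

OPTIMIZE TABLE comments;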