MySQL performance 5.7 vs 8.0

I've been testing the performance between 5.7 and 8.0.
We figured out that our application is around 30% slower with 8.0, so I tried to narrow down the issue.
I started by running simple sysbench tests, and they showed the same performance hit.
Then I found this nice article.
I tried to replicate the setup (4 cores, NVMe backend, only VM on the host, etc.) as best I could.
I used two separate VMs based on CentOS 7 with community editions.
However, after running the tests I still got the same ~30% performance hit with the same scripts found in the blog post.
Then I tried adding the my.cnf options mentioned in the blog - no change. Apart from those options, my.cnf was left at defaults (except innodb_buffer_pool_size, which was set to 8G from the start, and binary logging, which was disabled on both).
As a last resort I tried a plain sysbench run (removed all the extra options and ran it for 10 minutes) - same thing.
I even tried to force MySQL 8.0 to use latin1 instead of the default utf8mb4 (while creating the DB), which yielded these (best) results:
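For reference, forcing the 5.7-era character set at database creation time looks like this (sbtest is sysbench's default schema name; 8.0 defaults to utf8mb4 while 5.7 defaulted to latin1):

```sql
-- Create the sysbench schema with the 5.7-era default charset so both
-- servers compare like for like; tables created inside it inherit it.
CREATE DATABASE sbtest CHARACTER SET latin1 COLLATE latin1_swedish_ci;
```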
Results
Everything points to MySQL 8.0 having significantly worse performance than 5.7, which contradicts every benchmark I can find on Google.
Does anyone have any idea where the perf hit comes from?
Thanks in advance!

8.0 implements atomic DDL - statements such as CREATE/DROP/ALTER TABLE/DATABASE can now be rolled back, and the extra bookkeeping makes them slower.
For most applications this is not a performance problem, because DDL is a rare event. Are you running a lot of DDL statements?
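What 8.0 added is what the manual calls atomic DDL: each DDL statement is wrapped in a data-dictionary transaction, so a failed statement leaves nothing half-done. A small illustration:

```sql
CREATE TABLE t1 (id INT PRIMARY KEY);
-- t2 does not exist, so this statement fails. In 8.0 the whole DROP is
-- rolled back and t1 survives; in 5.7, t1 would already have been
-- dropped by the time the error was raised.
DROP TABLE t1, t2;
```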

From Severalnines, a post that shows the many improvements present in MySQL 8.0. The benchmark results reveal an impressive improvement, not only for read workloads but also for high read/write workloads, compared to MySQL 5.7.
Here are the benchmark results:
Overall, MySQL 8.0 comfortably outperformed MySQL 5.7. Especially if you run a high number of threads, I believe you should use MySQL 8.0.


MySQL 8.0 vs MySQL 5.7 performance comparison
At present in my production environment, I'm using MySQL 5.7.
I want to upgrade my database to MySQL 8.
I need a performance comparison between these versions.
After searching for MySQL 5.7 vs MySQL 8,
my findings are the following:
Role creation
Invisible indexes
...etc.
Insert operations are slower because of binary logging,
which we can disable.
Uses slightly more RAM than 5.7
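On the binary-log point: 8.0 enables binary logging by default (5.7 did not), so if you don't need replication or point-in-time recovery you can switch it off in my.cnf:

```ini
[mysqld]
# 8.0 turns the binary log on by default; disabling it removes the
# extra write per transaction if you do not need replication or
# point-in-time recovery.
disable-log-bin
```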
But what I really need is a comparison based on performance:
connection handling, threading, pooling, max users, max connections, processing, CPU usage, memory - things like that.
Thanks in advance.
To be honest, there's no clear-cut answer for you. 90% of performance is determined by database design and implementation, not by the MySQL version in use.
The ability to handle load and stress is mostly the same between versions. Under identical configurations, some things perform better on the newer version, others on the older one.
Most metrics in your list (connection handling, threads, pooling, max users, max connections, processing, CPU usage, memory, etc.) are tunable at both the my.cnf and OS sysctl level. Even kernel tweaks can have an effect.
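For example, the connection- and memory-related knobs from that list all live in my.cnf (the values below are purely illustrative, not recommendations):

```ini
[mysqld]
max_connections   = 500   # upper bound on concurrent client connections
thread_cache_size = 64    # threads kept around for reuse between connections
table_open_cache  = 4000  # cached open table handles
tmp_table_size    = 64M   # in-memory temp table limit before spilling to disk
```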
So, in general, 90% of people can't see the performance difference. On the other hand, 90% also fail at doing proper performance tests too.

MySQL Enterprise DB upgrade from 5.5.42 to 5.7.11

All,
Our product uses the MySQL Enterprise database, currently at 5.5.42, but we are planning to upgrade to the latest 5.7 release. What changes should I expect to see?
I'm interested in :
1) Performance impacts
2) Broken behavior
3) Changed behavior
4) Improvements -- functional and performance-wise
Is it advisable to first upgrade our product to 5.6.x and then to 5.7.x, or to go directly to 5.7.x and test/QA from there? Any input will be valuable.
Performance
Performance gains in 5.7 are achieved only on highly loaded systems. If you have fewer than 40-60 parallel executions, expect a small performance drop of around 10%. This is very system- and load-dependent, so always test yourself!
Here is good MySQL 5.7 performance test done by Percona: https://www.percona.com/blog/2016/05/17/mysql-5-7-read-write-benchmarks/
Changed and broken behaviour, Improvements
There were a lot of changes in 5.7. Check this link for the complete list: http://www.thecompletelistoffeatures.com/
I would mention the following changes that one should think about during an upgrade:
The default SQL_MODE changed. This is good for new installations, but on old systems you will most probably have to set it back to the old value.
The first GA release had password expiry enabled by default, but after complaints it was disabled again in 5.7.11. So there is nothing to worry about now.
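If you do land on one of the affected releases (5.7.4 through 5.7.10), the old behaviour can be restored explicitly:

```ini
[mysqld]
# 0 disables automatic password expiry; this became the default again
# in 5.7.11.
default_password_lifetime = 0
```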
The SQL optimizer changed a lot. Queries should run faster now, but I expect some could run slower. Most of your time should be spent testing queries on the new version!
Here is a good overview of all changes affecting an upgrade to 5.7:
http://dev.mysql.com/doc/refman/5.7/en/upgrading-from-previous-series.html
Upgrade
There are two ways to upgrade: mysql_upgrade and mysqldump.
With mysql_upgrade, go one major version at a time. There is no need to test the intermediate versions - test only the final one.
With mysqldump, go directly to 5.7. It is advisable to skip the mysql system database: users cannot be migrated with mysqldump, so recreate them with GRANT statements.
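A minimal sketch of that dump-and-reload path (old-host/new-host and the schema names app1/app2 are placeholders, not real names from this thread):

```shell
# Sketch only - dump the application schemas (not the mysql system
# schema) from the 5.5 server...
mysqldump -h old-host --single-transaction --routines --triggers \
    --databases app1 app2 > dump.sql
# ...and load them into the fresh 5.7 server. Recreate users with
# CREATE USER / GRANT statements rather than copying the mysql schema.
mysql -h new-host < dump.sql
```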
For further information read this: http://dev.mysql.com/doc/refman/5.7/en/upgrading.html
There is a tremendous amount of change between 5.5 and 5.7 - it's simply not practical to list it all here, so I'll try to highlight the most important points. Obviously a lot of new features were introduced as well (GTIDs, semi-sync replication, JSON support, virtual columns, etc.).
1) Performance
You should expect performance gains, since 5.6 and 5.7 improved mutex handling a lot, so less contention is expected.
Scalability improvements were also made, so you can scale your servers up further, and performance/query capacity keeps up for longer than before.
There were a lot of changes in the optimizer, so make sure to check your queries.
At the end of the day, you should always verify your own case with your own application.
2+3) Changed and broken behaviour
Password expiry: in 5.7 passwords can have an expiry time. Just something to be aware of.
Password management commands have changed, so check them before your automation breaks and surprises you.
Since the optimizer and the cost model changed a lot, some queries may be executed differently.
Two parameters caused particular headaches for us: look into optimizer_switch and decide what to turn on and off, and note that the eq_range_index_dive_limit default was raised from 10 to 200 in 5.7.4. Since I rarely see this parameter set in any my.cnf, the new default might affect you significantly.
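Both of those are plain server variables, so you can inspect them and pin the old behaviour back while you test (the statements below assume you want the pre-5.7.4 dive limit):

```sql
-- See which optimizer features are currently enabled.
SELECT @@optimizer_switch;
-- Pin the dive limit back to the pre-5.7.4 default while testing.
SET GLOBAL eq_range_index_dive_limit = 10;
```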
4) Improvements
A lot. As I wrote above, there are plenty of new features plus scalability and performance improvements that are meant to serve you better.
5) Upgrade path
Yes, it's best to do one major upgrade at a time; test, and when everything is working you can do the second major upgrade.
Compatibility is the issue of greatest concern when upgrading. Because your code and syntax were written for the lower version, expect some of it to behave differently: some statements will be invalid or not recognized by the newer version (the impact grows the further you jump).
Normally you get improved performance, security, etc. with newer versions, but you need to adapt your code and syntax to cope with the new functions and features.

Reduced performance in mysql after upgrade to 5.6.27

Our application used MySQL version 4.0.24 for a long time. We are trying to migrate it to version 5.6.27.
But when testing performance on 5.6.27, even simple selects and updates are 30-40% slower under load testing. The CPU and I/O speeds are much better than on the older server. The storage engine of the tables is MyISAM in both versions. There is only one connection to the database. We tried the following options:
Changing the storage engine to InnoDB - this reduced performance drastically (70% slower)
Changing the InnoDB log size and buffer size - didn't help much
Increasing the key buffer size for the MyISAM tables - made no difference
We tried modifying other parameters such as the query cache, tmp_table_size, and max_heap_table_size, but none of them made any difference.
Can you please let me know if there's any other option that we can try?
Here's a copy of my.cnf:
lower-case-table-names=1
myisam-recover=FORCE
key_buffer_size=2000M
Some things you can look at: whether the two servers have the same amount of RAM - the old server may have more RAM and so can cache more in memory.
Also look at how you are connecting to the MySQL server - is it over a network? Is the network speed/quality different? Is one server accessed locally and the other over a network?
You tried tuning some good parameters, but in case there are ones you're missing, you can run mysql -e 'SHOW VARIABLES' on both servers and then use a tool like WinMerge to compare the values and see what differs that you might not have thought of.
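A sketch of that comparison without any extra tooling, using sorted variable dumps (the hostnames are placeholders; the two printf lines below just fake tiny dumps so the comparison step is runnable):

```shell
# Dump the variables from each server into a sorted "name<TAB>value" file:
#   mysql -h old-host -N -B -e 'SHOW VARIABLES' | sort > vars-old.txt
#   mysql -h new-host -N -B -e 'SHOW VARIABLES' | sort > vars-new.txt
# Fake two tiny dumps here for illustration:
printf 'key_buffer_size\t2097152000\ntmp_table_size\t16777216\n' > vars-old.txt
printf 'key_buffer_size\t8388608\ntmp_table_size\t16777216\n' > vars-new.txt

# Print only the variables whose values differ between the two servers.
join vars-old.txt vars-new.txt | awk '$2 != $3 {print $1": "$2" -> "$3}'
# prints: key_buffer_size: 2097152000 -> 8388608
```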
After trying multiple options, here are the configurations that worked for us. There may be other solutions as well but this worked for us.
Option 1:
Turned off the performance schema
Added a couple of jdbc connection parameters: useConfigs=maxPerformance&maintainTimeStats=false
Option 2:
Migrating to MariaDB. From a performance perspective this worked out really well - about 5% better than MySQL for our system - but we couldn't pursue this option for non-technical reasons.
Thank you for your inputs.

MySQL changing large table to InnoDB

I have a MySQL server running on CentOS which houses a large (>12 GB) DB. I have been advised to move to InnoDB for performance reasons, as we are experiencing lockups: the application that relies on the DB becomes unresponsive when the server is busy.
I have been reading around and can see that the ALTER command that converts a table to InnoDB is likely to take a long time and hammer the server in the process. As far as I can see, the only change required is the following command:
ALTER TABLE t ENGINE=InnoDB
I have run this on a test server and it seems to complete fine, taking about 26 minutes on the largest of the tables that needs to be converted.
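If other tables still need converting, information_schema will list them, largest first, so you can schedule the long-running ALTERs sensibly:

```sql
-- Find the remaining MyISAM tables, ordered by on-disk size.
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
ORDER BY data_length + index_length DESC;
```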
Having never run this on a production system I am interested to know the following:
What changes are recommended to the MySQL config to take advantage of the additional performance of InnoDB tables? The server currently has 3 GB assigned to the InnoDB buffer pool - I was thinking of increasing this to 15 GB once the additional RAM is installed.
Is there anything else I should do to the server with this change?
I would really recommend using either Percona MySQL or MariaDB. Both have tools that will help you get the most out of InnoDB, as well as some tools to help you diagnose and optimize your database further (for example, Percona's Online Schema Change tool could be used to alter your tables without downtime).
As far as optimization of InnoDB, I think most would agree that innodb_buffer_pool_size is one of the most important parameters to tune (and typically people set it around 70-80% of total available memory, but that's not a magic number). It's not the only important config variable, though, and there's really no magic run_really_fast setting. You should also pay attention to innodb_buffer_pool_instances (and there's a good discussion about this topic on https://dba.stackexchange.com/questions/194/how-do-you-tune-mysql-for-a-heavy-innodb-workload)
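As a my.cnf sketch for, say, a host with 16 GB of RAM dedicated to MySQL (applying the 70-80% rule of thumb above; the numbers are illustrative, not recommendations):

```ini
[mysqld]
innodb_buffer_pool_size      = 12G  # ~75% of a dedicated 16 GB host
innodb_buffer_pool_instances = 8    # split the pool to reduce mutex contention
```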
Also, you should definitely check out the tips in the MySQL documentation itself (http://dev.mysql.com/doc/refman/5.6/en/optimizing-innodb.html). It's also a good idea to pay attention to your InnoDB buffer pool hit ratio (Rolando over at DBA Stack Exchange has a great answer on this topic, e.g., https://dba.stackexchange.com/questions/65341/innodb-buffer-pool-hit-rate) and to analyze your slow query logs carefully. Toward that latter end, I would definitely recommend taking a look at Percona again: their slow query analyzer is top notch and can really give you a leg up when it comes to optimizing SQL performance.

Why is PostgreSQL so slow on Windows?

We had an application running on MySQL. We found MySQL was not suitable for our app after discovering that it didn't support some of the GIS capabilities that PostGIS has (note: MySQL only supports minimum-bounding-rectangle GIS search).
So we changed our DB to PostgreSQL. We then found that PostgreSQL 8.2 running on Windows is much slower than MySQL 5.1 - roughly 4-5 times slower.
Why is this? Is there something in the configuration that we need to change?
UPDATE: We found that the cause of the slowness is the BLOBs we are inserting into the DB. We need to be able to insert BLOBs at a sustained rate of 10-15 MB/s. We are using libpq's lo_read and lo_write for each BLOB we insert/read. Is that the best way? Has anyone used PostgreSQL to insert large BLOBs at a high rate before?
EDIT: I heard that PostgreSQL was only recently ported to Windows. Could this be one of the reasons?
There are cases where PostgreSQL on Windows pays an additional overhead compared to other solutions, due to tradeoffs made when we ported it.
For example, PostgreSQL uses a process per connection, MySQL uses a thread. On Unix, this is usually not a noticeable performance difference, but on Windows creating new processes is very expensive (due to the lack of the fork() system call). For this reason, using persistent connections or a connection pooler is much more important on Windows when using PostgreSQL.
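One way to get that pooling is to put PgBouncer (or a pool built into your application stack) in front of the server. A minimal sketch, with appdb as a placeholder database name:

```ini
; pgbouncer.ini - clients connect to port 6432 and share a small set of
; real server connections, avoiding Windows' expensive process creation
; on every new connection.
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr       = 127.0.0.1
listen_port       = 6432
auth_type         = md5
auth_file         = userlist.txt
pool_mode         = transaction
default_pool_size = 20
```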
Another issue I've seen is that early PostgreSQL on Windows would by default make sure its writes went through the write cache - even if it was battery-backed. AFAIK, MySQL does not do this, and it greatly affects write performance. Now, this is actually required if you have unsafe hardware, such as a cheap drive, but if you have a battery-backed write cache you want to change this to a regular fsync. Modern versions of PostgreSQL (certainly 8.3) default to open_datasync instead, which should remove this difference.
You also mention nothing about how you have tuned the configuration of the database. By default, the configuration file shipped with PostgreSQL is very conservative. If you haven't changed anything there, you definitely need to take a look at it. There is some tuning advice available on the PostgreSQL wiki.
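For instance, the first parameters most tuning guides point at look like this in postgresql.conf (illustrative values for an 8.2-era server with a few GB of RAM, not recommendations):

```ini
shared_buffers       = 256MB  # the shipped default is far smaller
work_mem             = 16MB   # per-sort/hash memory
effective_cache_size = 2GB    # planner hint about the OS page cache
checkpoint_segments  = 16     # fewer, larger checkpoints (pre-9.5 setting)
```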
To give any more details, you will have to provide a lot more details about exactly what runs slow, and how you have tuned your database. I'd suggest an email to the pgsql-general mailinglist.
While the Windows port of PostgreSQL is relatively recent, my understanding is that it performs about as well as the other versions. But it's definitely a port; almost all developers work primarily or exclusively on Unix/Linux/BSD.
You really shouldn't be running 8.2 on Windows. In my opinion, 8.3 was the first Windows release that was truly production-ready; 8.4 is better yet. 8.2 is rather out of date anyway, and you'll reap several benefits if you can manage to upgrade.
Another thing to consider is tuning. PostgreSQL requires more tuning than MySQL to achieve optimal performance. You may want to consider posting to one of the mailing lists for help with more than basic tweaking.
PostgreSQL tends to be slower than MySQL up to a certain point (it is actually faster once you have a very large database). Just FYI: this isn't what's causing your problem, but keep it in mind.