MySQL application runs slow after general usage

I am currently using MySQL. I have noticed several times that my front-end application runs slow after some usage. When I checked the server status in MySQL Workbench, I noticed that the InnoDB buffer usage was going to 100%, so I increased the innodb_buffer_pool_size parameter to 1G in the my.ini file of XAMPP. But InnoDB is not flushing the buffer, and the application still runs slow after some time. Are there any other parameters to change as well?

Consider setting innodb_buffer_pool_size to 70%-80% of available RAM. Depending on how big your dataset is, you may need to increase the size further.
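For example, on a machine with 4 GB of RAM dedicated to MySQL, the [mysqld] section of my.ini might contain something like this (the value is only an illustration; size it against your own RAM and dataset):

[mysqld]
# roughly 70-80% of RAM on a dedicated database server
innodb_buffer_pool_size = 3G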

Related

MySQL disconnects when innodb_buffer_pool_size > 75% RAM

For now innodb_buffer_pool_size is set to 12GB (out of 16GB of memory), and when I try to increase this value (to 12.5GB or even up to 13GB) to max out performance, MySQL suddenly disconnects itself from the client. I'm having a hard time figuring out what the issue could be here.
MySQL uses memory for needs other than the InnoDB buffer pool. By increasing the buffer pool, you are causing MySQL to run out of memory. When MySQL runs out of memory, it automatically shuts down and restarts.
You can see this info in the MySQL error log.
To increase performance, look for slow queries (slower than 1-2 seconds) instead, and analyze their explain plans and indexes. Queries that scan many rows and don't use proper indexes cause severe performance issues. Fixing them will help a lot more than increasing the buffer pool.
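A minimal way to find and inspect such queries, assuming you have privileges to change global variables (the threshold and the sample query are illustrative):

-- log statements that take longer than 1 second
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;

-- then analyze the plan of any query that shows up in the log, e.g.:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;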

Ubuntu 14 - MySQL14 - Allocating more RAM

What are the recommended configurations for allocating RAM to MySQL?
My environment: Ubuntu 14 machine with 96G RAM, MySQL Ver 14.14 Distrib 5.7.
And how to set those configurations?
Thank you in advance!
Assuming you are using InnoDB as the engine, please see here: https://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
MySQL InnoDB Settings
From 5.5 onward InnoDB is the default engine, so these parameters are even more important for performance than before. The most important ones are listed below; a consolidated my.cnf sketch follows the list.
innodb_buffer_pool_size: InnoDB relies heavily on the buffer pool, so be sure to allocate enough memory to it. Typically a good value is 70%-80% of available memory. More precisely, if you have more RAM than your dataset, setting it a bit larger than the dataset is appropriate; keep your database growth in mind and re-adjust the buffer pool size accordingly. Further, there are code improvements for InnoDB buffer scalability if you are using Percona Server 5.1 or Percona Server 5.5; you can read more about it here.
innodb_buffer_pool_instances: Multiple InnoDB buffer pools were introduced in InnoDB 1.1 and MySQL 5.5. In MySQL 5.5 the default value was 1, which was changed to a new default of 8 in MySQL 5.6. innodb_buffer_pool_instances must lie between 1 (minimum) and 64 (maximum). Multiple buffer pool instances are useful for highly concurrent workloads, as they may reduce contention on the global mutexes.
Dump/Restore Buffer Pool: This feature speeds up restarts by saving and restoring the contents of the buffer pool. It was first introduced in Percona Server 5.5; you can read about it here. Vadim also benchmarked this feature; you can read more about it in this post. Oracle MySQL introduced it as well in version 5.6. To automatically dump the buffer pool at shutdown and reload it at startup, set the innodb_buffer_pool_dump_at_shutdown and innodb_buffer_pool_load_at_startup parameters to ON.
innodb_log_file_size: Large enough InnoDB transaction logs are crucial for good, stable write performance, but larger log files also mean that the recovery process will be slower in case of a crash. However, this is not such a big issue since the great improvements in 5.5. The default value was changed in MySQL 5.6 to 50 MB from 5 MB (the old default), but it is still too small for many workloads. Also, in MySQL 5.6, if innodb_log_file_size is changed between restarts, MySQL will automatically resize the logs to match the new desired size during startup. The maximum combined log file size was increased to almost 512 GB in MySQL 5.6 from 4 GB. To get the optimal log file size, please check this blog post.
innodb_log_buffer_size: InnoDB writes changed data records into its log buffer, which is kept in memory. This saves disk I/O for large transactions, as the log of changes does not need to be written to disk before the transaction commits. 4 MB - 8 MB is a good start unless you write a lot of huge blobs.
innodb_flush_log_at_trx_commit: When innodb_flush_log_at_trx_commit is set to 1, the log buffer is flushed to the log file on disk on every transaction commit; this provides maximum data integrity but also has a performance impact. Setting it to 2 means the log buffer is flushed only to the OS file cache on every transaction commit. A value of 2 is optimal and improves performance if you are not concerned about full ACID guarantees and can afford to lose the last second or two of transactions if the OS crashes.
innodb_thread_concurrency: With improvements to the InnoDB engine, it is recommended to let the engine control the concurrency by keeping this at its default value (which is zero). If you see concurrency issues, you can tune this variable; a recommended value is 2 times the number of CPUs plus the number of disks. It is a dynamic variable, meaning it can be set without restarting the MySQL server.
innodb_flush_method: Direct I/O relieves I/O pressure because it bypasses the filesystem cache; setting this to O_DIRECT avoids double buffering between the InnoDB buffer pool and the filesystem cache. Use it given that you have a hardware RAID controller with a battery-backed write cache.
innodb_file_per_table: innodb_file_per_table is ON by default from MySQL 5.6. This is usually recommended, as it avoids having a huge shared tablespace and allows you to reclaim space when you drop or truncate a table. A separate tablespace per table also benefits the XtraBackup partial backup scheme.
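Pulling these recommendations together, a my.cnf sketch for a dedicated 16 GB InnoDB server might look like the following; every value is an illustrative assumption, to be sized against your own workload:

[mysqld]
# ~70-80% of RAM on a dedicated database server
innodb_buffer_pool_size = 12G
innodb_buffer_pool_instances = 8
# large enough for stable write performance; see the log-sizing post above
innodb_log_file_size = 512M
innodb_log_buffer_size = 8M
# 1 = full durability; 2 trades the last second or two for speed
innodb_flush_log_at_trx_commit = 1
# 0 lets InnoDB manage concurrency itself
innodb_thread_concurrency = 0
# avoids double buffering; assumes a RAID controller with battery-backed write cache
innodb_flush_method = O_DIRECT
innodb_file_per_table = ON
# MySQL 5.6+: keep the buffer pool warm across restarts
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_load_at_startup = ON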

Weird spikes in MySQL query times

I'm running Node.js with MySQL (InnoDB) for a game server (player info, save data, and so on). The server is HTTP(S) based, so nothing realtime.
I'm having these weird spikes, as you can see from the graphs below (the first graph is requests/sec and the last graph is queries/sec).
On the response time graph you can see max response times in purple and average response times in blue. Even with those 10-20k peaks, the average stays at 50-100 ms, as do 95% of the requests.
I've been digging around and found that the slow queries are nothing special: usually an update query writing save data (a blob of ~2 kB), or a player profile update that modifies something like the username. No joins or anything like that. We're talking about tables with less than 100k rows.
Server is running in Azure on Ubuntu 14.04 with MySQL 5.7 using 4 cores and 7GB of RAM.
MySQL settings:
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
innodb_buffer_pool_instances=4
innodb_log_buffer_size=4M
query_cache_type=0
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=32M
wait_timeout=300
interactive_timeout=300
innodb_file_per_table=ON
Edit: It turned out that the problem was never MySQL performance but Node.js performance before the SQL queries were ever issued. More info here: Node.js multer and body-parser sometimes extremely slow
Check your swappiness (it is supposed to be 0 on MySQL machines that maximize RAM usage):
> sysctl -A|grep swap
vm.swappiness = 0
With only 7G of RAM and 4G of buffer pool alone, your machine will swap if swappiness is not zero.
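If it is not zero, you can change it at runtime and persist it across reboots (standard sysctl usage):

> sysctl -w vm.swappiness=0
> echo 'vm.swappiness = 0' >> /etc/sysctl.conf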
Could you post your swap graph and used memory? A 4G buffer pool is "over the edge" for 7G of RAM. For 8G of RAM I would give it 3G, as you have about +1G for everything else MySQL-wise plus 2G for the OS.
Also, you have 1G per transaction log file, and I assume you have two log files. Do you have enough writes to need files that large? You can use this guide: https://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
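The linked guide's method, roughly: sample the InnoDB log sequence number a minute apart and size the combined logs to hold about an hour of writes. A sketch of the measurement from the mysql client:

mysql> pager grep sequence
mysql> SHOW ENGINE INNODB STATUS\G SELECT SLEEP(60); SHOW ENGINE INNODB STATUS\G

The difference between the two reported "Log sequence number" values approximates bytes written per minute; multiply by 60 for the hourly write volume.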

MySQL max_connections

I have a setup with Linux, Debian Jessie, and MySQL 5.7.13 installed.
I have set the following settings in my.cnf: default_storage_engine = innodb, innodb_buffer_pool_size = 44G
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my load test, which sends a lot of traffic to the DB server, mostly consisting of slow/bad queries.
The result I expected was that I would get close to 1000 connections, but somehow MySQL limits it to 462 connections, and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea, or could point me in a direction where you think the error might be, it would be really helpful.
What load test did you use? Are you sure that it can actually open that many connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB buffer pool size is large, your DB still needs to read the data into the cache first, and if your entire DB is large that will not help you.
I can recommend you perform the test one more time and track your disk performance during the load test using the iostat or iotop utility.
Look here for more examples of server performance troubleshooting.
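For example, to watch extended per-device statistics every 5 seconds while the load test runs (a standard iostat invocation from the sysstat package):

> iostat -dx 5

A device sitting near 100% utilization for the duration of the test points to disk I/O as the bottleneck.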
I found the issue. It was due to a limitation of the Apache server: there is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
Thank you!
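For reference, the prefork MPM configuration caps how many worker processes, and therefore how many client database connections, Apache can open. The values below are typical stock defaults for that file, not the poster's actual settings:

<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers     150
    MaxConnectionsPerChild  0
</IfModule>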

Max MySQL Connections Per Load

I am running a WordPress website on a VULTR VPS with 1GB RAM and an SSD.
My website has 20,000+ posts and now it is slow even on a 4GB RAM VPS. I think this is just maxing out MySQL, right? I'm a noob at programming, so please help me figure this out: how can I make my website load faster with these 20,000+ posts, or what should I configure on the server?
You provided very little info so it's impossible to diagnose the problem.
First you should monitor the system: CPU, memory, and I/O, and check if any of these are close to their limits.
Second you should monitor the database: do you have access to the DB server? Do you have access to any monitoring facilities?
If the performance decreased as the posts increased, it's possible that the problem is the DB, but you must understand what exactly: a missing index? An outdated statistic?
Anyway, nothing can be said without proper monitoring.
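As a starting point for that monitoring, a few stock Linux commands (no specific thresholds assumed):

> free -m        # memory and swap usage
> vmstat 5       # CPU, memory, and swap activity every 5 seconds
> iostat -dx 5   # per-disk I/O utilization (sysstat package)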
1GB RAM -- that is tiny by today's standards.
Check for swapping. That is a killer for MySQL.
Which "Engine" are your tables? (Do SHOW CREATE TABLE for a typical table.)
If ENGINE=MyISAM, look in my.cnf for key_buffer_size; it should be something like 50M. (400M for 4GB of RAM)
If ENGINE=InnoDB, look in my.cnf for innodb_buffer_pool_size; it should be something like 150M (1200M for 4GB of RAM) and key_buffer_size should be about 10M.
If your settings are significantly smaller than those, that is likely to be the problem. To double-check the settings, do (from phpMyAdmin, the mysql command-line tool, or wherever):
SHOW VARIABLES LIKE '%buffer%';
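If those come back much smaller than the figures above, a my.cnf sketch for InnoDB on the 1GB VPS would look like this (values are the illustrative sizes from this answer; use the 4GB figures on the larger VPS):

[mysqld]
innodb_buffer_pool_size = 150M   # ~1200M on a 4GB VPS
key_buffer_size = 10M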