I have a 2GB RAM machine running MySQL.
When I set my innodb_buffer_pool_size in my.cnf to "1G", the MySQL process uses around 1.3GB which is the expected behaviour. However, when I set it to "1000M" the RAM runs out and the MySQL process crashes.
According to the documentation "M" stands for Megabytes and "G" for Gigabytes. So, 1000M should be the same as 1G. Is there anything I am missing?
For only 2GB of RAM, neither 1000M nor 1G is a reasonable value for innodb_buffer_pool_size. When MySQL swaps, it gets a lot slower, so it is better to avoid swapping than to set a high buffer pool value. I recommend only 500M.
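A minimal my.cnf sketch for such a 2GB machine, using the 500M figure from the answer above (the section name and syntax are standard; the value is the answer's recommendation, not a benchmark result):

```ini
[mysqld]
# ~25% of a 2GB machine; leaves headroom for the OS and per-connection buffers
innodb_buffer_pool_size = 500M
```

After a restart, you can confirm the value with `SHOW VARIABLES LIKE 'innodb_buffer_pool_size';` (reported in bytes).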
I am using MySQL 5.5.57 with InnoDB.
Currently I am facing performance issues that cause our system to crash, and I get many errors such as lock wait timeouts and deadlocks on many tables.
I am trying to solve that.
I think that changing the default InnoDB configuration settings could also improve system performance, but I don't know which variables need to change, so please help me.
Below are our current InnoDB settings in my.cnf:
innodb_file_format=barracuda
innodb_file_format_max=barracuda
innodb_file_per_table=1
query-cache-size = 64M
thread_cache_size = 8
default-time-zone = '+05:30'
query_cache_limit = 10M
character_set_server=utf8mb4
collation_server=utf8mb4_general_ci
innodb_thread_concurrency=8
key_buffer_size=183500800
group_concat_max_len=50000
innodb_log_file_size=178257920
#innodb_lock_wait_timeout=150
innodb_buffer_pool_size=134M
innodb_thread_sleep_delay=10000
innodb_concurrency_tickets=500
innodb_lock_wait_timeout=150
We have 132GB of RAM on the server, with 2 processors of 6 cores each.
16GB should be more than enough for MySQL.
Please help me set a proper size for each parameter I listed here.
I suggest you read this article: https://www.percona.com/blog/2013/09/20/innodb-performance-optimization-basics-updated/
You will find an example for each variable there; I think this is what you need.
I got a significant performance boost by setting:
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=2G
innodb_log_file_size=256M
innodb_log_buffer_size=8M
innodb_buffer_pool_instances=1
Specifically, innodb_flush_log_at_trx_commit=0 speeds up loading a dump file from minutes to seconds! This setting is not recommended for mission-critical environments, but it is fine for a development machine.
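As a sketch, the same durability trade-off can be toggled per session without editing my.cnf (the dump file name is a placeholder; SET GLOBAL requires the SUPER privilege):

```sql
-- Relax durability only for the duration of the import
SET GLOBAL innodb_flush_log_at_trx_commit = 0;
SOURCE dump.sql;  -- placeholder file name, run from the mysql client
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```

Restoring the value to 1 afterwards brings back full ACID durability for normal operation.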
I'm running Node.js with MySQL (InnoDB) for a game server (player info, save data, and so on). The server is HTTP(S) based, so nothing realtime.
I'm having these weird spikes, as you can see from the graphs below (the first graph is requests/sec and the last graph is queries/sec).
On the response-time graph you can see max response times in purple and average response times in blue. Even with those 10-20k peaks, the average stays at 50-100ms, as do 95% of the requests.
I've been digging around and found that the slow queries are nothing special: usually an update query with save data (a blob of ~2kb) or a player profile update that modifies something like the username. No joins or anything like that. We're talking about tables with fewer than 100k rows.
Server is running in Azure on Ubuntu 14.04 with MySQL 5.7 using 4 cores and 7GB of RAM.
MySQL settings:
innodb_buffer_pool_size=4G
innodb_log_file_size=1G
innodb_buffer_pool_instances=4
innodb_log_buffer_size=4M
query_cache_type=0
tmp_table_size=64M
max_heap_table_size=64M
sort_buffer_size=32M
wait_timeout=300
interactive_timeout=300
innodb_file_per_table=ON
Edit: It turned out that the problem was never MySQL performance but Node.js performance before the SQL queries. More info here: Node.js multer and body-parser sometimes extremely slow
Check your swappiness (it should be 0 on MySQL machines that maximize RAM usage):
> sysctl -A|grep swap
vm.swappiness = 0
With only 7G of RAM and a 4G buffer pool, your machine will swap if swappiness is not zero.
Could you post your swap and used-memory graphs? A 4G buffer pool is "over the edge" for 7G of RAM. For 8G of RAM I would give 3G, since you have about 1G for everything else MySQL-wise plus 2G for the OS.
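If swappiness turns out to be non-zero, it can be lowered at runtime and persisted across reboots (paths are the usual Linux defaults; adjust for your distribution):

```shell
sudo sysctl -w vm.swappiness=0                              # takes effect immediately
echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf     # survives reboots
```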
You also have 1G per transaction log file, and I assume you have two log files. Do you have enough writes to justify such large files? You can use this guide: https://www.percona.com/blog/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
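The guide's calculation can be sketched as follows: read the InnoDB log sequence number (LSN) twice, a minute apart, and size the redo logs to hold roughly an hour of writes. The LSN values below are made-up examples, not measurements from this server:

```shell
# On a live server, the LSN comes from:
#   mysql -e "SHOW ENGINE INNODB STATUS\G" | grep 'Log sequence number'
lsn_start=3836410803      # example value
lsn_end=3899410803        # example value, read 60 seconds later
interval=60
bytes_written=$((lsn_end - lsn_start))
# Scale to one hour and convert to MB
mb_per_hour=$((bytes_written * 3600 / interval / 1024 / 1024))
echo "Redo written per hour: ${mb_per_hour} MB"
# With the default two log files, each file holds half of the total:
echo "innodb_log_file_size ~ $((mb_per_hour / 2)) MB"
```

With these example numbers, a 1G-per-file setting would only be justified by a very write-heavy workload.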
I am currently using MySQL. I have noticed several times that my front-end application runs slow after some usage. When I checked server status in MySQL Workbench, I noticed that InnoDB buffer usage was going to 100%, so I increased innodb_buffer_pool_size to 1G in the my.ini file of XAMPP. But InnoDB is not flushing the buffer, and the application still runs slow after some time. Are there any other parameters to change as well?
Consider setting innodb_buffer_pool_size to 70%-80% of available RAM on a dedicated database server. Depending on how big your dataset is, you may need to increase it further.
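One way to quantify "buffer usage" is to compare free pages to total pages. On a live server the counters come from SHOW GLOBAL STATUS; the page counts below are made-up examples for a 1G pool with the 16KB default page size:

```shell
# Live counters: mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%'"
pages_total=65536   # example: 1G pool / 16KB page size
pages_free=1024     # example value
used_pct=$(( (pages_total - pages_free) * 100 / pages_total ))
echo "Buffer pool utilization: ${used_pct}%"
```

A pool that sits near 100% utilized is not by itself a problem; it only suggests the working set may be larger than the pool.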
I'm not sure if Stack Overflow is the right place to ask this, but I recently upgraded from Percona 5.5 to 5.6 and my memory usage has skyrocketed!
This is from ps:
mysql 4598 0.0 29.5 1583356 465312 ? Sl Oct17 9:07 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib6
I'm on a dedicated VPS.
My server only has a gig of RAM... how is this memory usage only 30% according to ps?
I have my RAM limits set in the config to be less than this, and when I run MySQLTuner I get:
[OK] Maximum possible memory usage: 338.9M (22% of installed RAM)
So how am I using almost 500MB of physical memory and over a gig and a half of virtual?
Is this a bug in MySQL or something with my server?
I found out that in MySQL 5.6, performance_schema is on by default. It was disabled by default in 5.5 and earlier, and has been enabled by default since 5.6.6.
Adding performance_schema=off to my config file fixes the issue.
I'd imagine anyone who doesn't have the memory to run performance_schema would not be using it anyway.
This might affect other distributions of MySQL 5.6.6+ as well.
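For reference, the fix above as a my.cnf fragment (the section name is the standard one):

```ini
[mysqld]
# Performance Schema is ON by default since MySQL 5.6.6; disabling it
# reclaims its instrumentation memory on small machines
performance_schema = OFF
```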
I had this problem, and fixing a few cache values that had been increased in my.ini sorted the problem out.
table_definition_cache - set to 400
From http://bugs.mysql.com/bug.php?id=68287, where this is discussed:
Yes, there are thresholds based on table_open_cache, table_definition_cache, and max_connections, and crossing the thresholds produces a big increase in RAM used. The thresholds work by first deciding whether the server size is small, medium, or large.
Small: all three are the same as or less than the defaults (2000, 400, 151).
Large: any of the three is more than twice its default.
Medium: everything else.
From memory, mine was set to 2000+ and dropping it sorted the problem.
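Under the threshold rule quoted above, a my.cnf fragment that keeps the server in the "small" class would hold all three settings at or below their defaults (the values are the defaults quoted in the bug report, not tuned recommendations):

```ini
[mysqld]
table_open_cache       = 2000
table_definition_cache = 400
max_connections        = 151
```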
What helped me on CentOS was changing the memory allocator:
yum install jemalloc-devel
and add to my.cnf:
[mysqld_safe]
malloc-lib = /usr/lib64/libjemalloc.so.1
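To confirm that the running mysqld actually picked up jemalloc, you can check its memory maps (assumes a Linux system with pgrep; the library path may differ per distribution):

```shell
# Prints a line mentioning libjemalloc if the allocator was loaded
grep -m1 jemalloc /proc/$(pgrep -x mysqld)/maps
```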
I have a server with 12GB RAM, and max_heap_table_size in my.cnf is set to 6GB. ("max_heap_table_size=6442450944"). I restarted the MySQL server after setting this.
The trouble is, whenever my table gets to just 2GB during inserts I get error "table full". Why is it not letting me add more than 2GB worth of data? (The 2GB figure is what is shown as the size in phpMyAdmin)
A 32-bit MySQL server (or any 32-bit application, for that matter) only has 2-3GB (depending on the OS, etc.) of virtual address space available, and thus can't address more memory. You need a 64-bit OS and a 64-bit MySQL server to take advantage of more memory.
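You can check which build you are running from the server itself; version_compile_machine is a standard server variable (a 64-bit build reports something like x86_64):

```sql
SHOW VARIABLES LIKE 'version_compile_machine';
```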