InnoDB crashes, MySQL doesn't start until server reboot

I have the following issue.
I have an AWS EC2 t2.micro instance with LAMP installed on it.
My web app uses InnoDB tables. When there are a lot of users, MySQL crashes because InnoDB cannot allocate enough memory for the buffer pool:
170511 10:32:05 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
I set innodb_buffer_pool_size to 750M (if I set it higher than 1 GB, MySQL doesn't start at all) and ran a stress test of my web app with LoadImpact. From 30 concurrent users and up, InnoDB tried to allocate about 800 MB and crashed MySQL again:
170511 12:16:04 InnoDB: Initializing buffer pool, size = 752.0M
InnoDB: mmap(807010304 bytes) failed; errno 12
Another problem is that I cannot just run sudo service mysql start or restart after MySQL crashes; it says the job failed to start.
How can I make my server more stable? What options should I use?
How do I prevent this crashy behaviour? Can I make the server and the mysql process slow down or freeze under load instead of crashing? I don't want to run sudo reboot every time there are a lot of people on the website.

A t2.micro instance has only 1 GB of RAM. If you run MySQL and Apache on the same box, you have to allocate memory sparingly.
It looks like the Apache processes use up all the memory, so MySQL gets less than the necessary minimum.
I would suggest starting by adding swap, if you haven't added it yet:
dd if=/dev/zero of=/swapfile bs=1024 count=65536
chmod 0600 /swapfile
mkswap /swapfile
swapon /swapfile
and add it to /etc/fstab.
/swapfile swap swap defaults 0 0
Next, limit the number of Apache workers to make sure MySQL always gets its innodb_buffer_pool_size. The exact configuration differs depending on whether you use the prefork or the worker MPM, but you get the idea.
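For example, with the prefork MPM the cap could look something like the fragment below. The numbers are purely illustrative for a ~1 GB box that also runs MySQL; compute your own MaxRequestWorkers as (RAM left for Apache) divided by the average size of one Apache process.

```apacheconf
# /etc/apache2/mods-available/mpm_prefork.conf (illustrative values)
# Keep total Apache memory under (RAM - innodb_buffer_pool_size - OS overhead).
<IfModule mpm_prefork_module>
    StartServers             2
    MinSpareServers          2
    MaxSpareServers          4
    MaxRequestWorkers       10
    MaxConnectionsPerChild 1000
</IfModule>
```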

Related

mysql server log file is increasing in size with warnings

Version: MySQL Server 8.0, installed on Windows Server 2019.
Problem statement: The error log file in C:\programdata\mysql80\data\ keeps growing and is now about 100 GB.
Error log statements are :
[Warning] [MY-011959] [InnoDB] Difficult to find free blocks in the buffer pool (6091 search iterations)! 6091 failed attempts to flush a page! Consider increasing the buffer pool size. It is also possible that in your Unix version fsync is very slow, or completely frozen inside the OS kernel. Then upgrading to a newer version of your operating system may help. Look at the number of fsyncs in diagnostic info below. Pending flushes (fsync) log: 0; buffer pool: 0. 1738 OS file reads, 217 OS file writes, 45 OS fsyncs. Starting InnoDB Monitor to print further diagnostics to the standard output.
Scenario:
There was a failure to write more than 4 MB of data to the database, so we changed max_allowed_packet from the default 4M to 256M in the mysql.ini file.
After restarting the mysqld service with this setting, everything worked.
But after 2 days we started facing the issue mentioned above; as a result the disk is filling up and DB connections are queuing.
We tried stopping the mysql80 service, deleting the error.log file, and reverting max_allowed_packet to its default size.
But even with that change, the error file is being created again with the same warning.
What could be the possible cause, and how can it be fixed?

What is the best MySQL configuration for an instance with many databases and many tables?

I have a MySQL instance with more than 3000 databases, each containing more than 200 tables, and more than 100 GB of data in total at present. The server runs Windows Server 2012 R2 with 4 GB of RAM. RAM utilization was always very high, so I tried to restart the system, but the restart isn't working: it has been showing "restarting" for a long time. When I checked the logs I saw there is a memory issue. I want to restart my MySQL instance and continue. What is the best configuration for MySQL with the architecture above? What do I need to do to make this work without failure in the future?
[Warning] InnoDB: Difficult to find free blocks in the buffer pool (1486 search iterations)! 1486 failed attempts to flush a page! Consider increasing the buffer pool size. It is also possible that in your Unix version fsync is very slow, or completely frozen inside the OS kernel. Then upgrading to a newer version of your operating system may help. Look at the number of fsyncs in diagnostic info below. Pending flushes (fsync) log: 0; buffer pool: 0. 26099 OS file reads, 1 OS file writes, 1 OS fsyncs. Starting InnoDB Monitor to print further diagnostics to the standard output.

MySQL crashes often

I have a droplet on DigitalOcean created using Laravel Forge, and since a few days ago the MySQL server just crashes; the only way to make it work again is by rebooting the server (MySQL makes the server unresponsive).
When I run htop to see the list of processes, it shows several instances of /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysql.pid (currently 33 of them).
The error log is bigger than 1 GB (yes, I know!) and shows this message hundreds of times:
[Warning] InnoDB: Difficult to find free blocks in the buffer pool (21
search iterations)! 21 failed attempts to flush a page! Consider
increasing the buffer pool size. It is also possible that in your Unix
version fsync is very slow, or completely frozen inside the OS kernel.
Then upgrading to a newer version of your operating system may help.
Look at the number of fsyncs in diagnostic info below. Pending flushes
(fsync) log: 0; buffer pool: 0. 167678974 OS file reads, 2271392 OS
file writes, 758043 OS fsyncs. Starting InnoDB Monitor to print
further diagnostics to the standard output.
This droplet has been running for 6 months, but the problem only started last week. The only thing that changed recently is that we now send weekly notifications to customers (only the ones that subscribed) to let them know about certain events happening in the current week. This is a somewhat intensive process, because we have a few thousand customers, but we use Laravel Queues to process everything.
Is this a MySQL-settings related issue?
Try increasing innodb_buffer_pool_size in my.cnf
The recommendation for a dedicated DB server is about 80% of RAM; if you're already at that level, you should consider moving to a bigger instance type.
In my.cnf, set these values:
innodb_buffer_pool_size = 12G
innodb_buffer_pool_instances = 12
innodb_page_cleaners = 12
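The 12G above only makes sense on a machine with roughly 16 GB of RAM. As a rough way to apply the 80% rule on any Linux box, the pool size can be derived from /proc/meminfo; this is just a sketch, and the 80% figure assumes a dedicated database server:

```shell
# Read total RAM in MB from /proc/meminfo and take ~80% of it as a
# starting point for innodb_buffer_pool_size on a dedicated DB server.
total_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
pool_mb=$((total_mb * 80 / 100))
echo "innodb_buffer_pool_size = ${pool_mb}M"
```

On a shared box (Apache + MySQL), the starting fraction should be much lower than 80%.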

MySQL - Out of Memory

I'm not even close to being a server expert. I used XAMPP to install phpMyAdmin/MySQL and everything ran smoothly until a few weeks ago. I was able to put a band-aid on the issue when these errors started by doubling some of the memory parameters (buffer size etc.), but now the issues are occurring again. Nothing else runs on this machine except MySQL, so I'm not sure how we are running out of memory. I'm willing to try anything; I haven't found very clear instructions on how to use ulimit. Any suggestions from experienced MySQL server admins on what might be causing this issue and what I can try to fix it?
2016-08-24 12:26:48 2456 [ERROR] C:\xampp\mysql\bin\mysqld.exe: Out of memory (Needed 767416 bytes)
2016-08-24 12:26:48 2456 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use 'ulimit' to allow mysqld to use more memory or you can add more swap space
2016-08-24 12:26:48 2456 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use 'ulimit' to allow mysqld to use more memory or you can add more swap space
2016-08-24 13:41:33 6188 [Note] Plugin 'FEDERATED' is disabled.
2016-08-24 13:41:33 16d0 InnoDB: Warning: Using innodb_additional_mem_pool_size is DEPRECATED. This option may be removed in future releases, together with the option innodb_use_sys_malloc and with the InnoDB's internal memory allocator.
http://mysql.rjweb.org/doc.php/memory
All my answers were there. Again, I'm no server admin, but basically my settings in the my.ini file were all wrong. My innodb_buffer_pool_size was way too low, and my key_buffer_size was much too high. I'm guessing the original settings were enough to get us by while we were building the databases, but as tables/views/threads increased, performance was impacted.
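For reference, the shape of that fix in my.ini looks something like the fragment below. The numbers are illustrative, not the actual values from this server, and should be sized to your RAM and data:

```ini
[mysqld]
# Most of the memory budget goes to InnoDB when the data is InnoDB:
innodb_buffer_pool_size = 2G
# Keep the MyISAM key buffer small if MyISAM is only used incidentally:
key_buffer_size = 16M
```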

Maddening Intermittent Wordpress: Error establishing database connection

Every day or so our Wordpress sites stop responding, the pages begin returning the dreaded 'Error establishing database connection'. There is nothing in the MySQL logs, and I'm at a loss as to what could be causing the issue. We don't have a lot of site visitors, and the machine is a Medium EC2 instance. Anyone have any ideas on how to resolve this?
There's not a whole lot to work with here. But ... I had the same issue with my micro instance. My problem was that the server kept running out of memory and then the mysql server would stop. It would start again when restarting the computer but it was only a matter of time before it would crash again.
Here's what I was getting in my MySQL logs.
151023 6:15:44 InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: mmap(137363456 bytes) failed; errno 12
151023 6:15:44 InnoDB: Completed initialization of buffer pool
151023 6:15:44 InnoDB: Fatal error: cannot allocate memory for the buffer pool
151023 6:15:44 [ERROR] Plugin 'InnoDB' init function returned error.
151023 6:15:44 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
151023 6:15:44 [ERROR] Unknown/unsupported storage engine: InnoDB
151023 6:15:44 [ERROR] Aborting
You might want to check for something similar. I use Ubuntu and the log is at /var/log/mysql/ by default.
I solved the problem by setting up a swap file as per Amazon EC2, mysql aborting start because InnoDB: mmap (x bytes) failed; errno 12. The AWS instances don't come with a swap space setup by default (whereas the install I downloaded from Ubuntu back in the day did). You need to set it up manually. Here's the method -
ssh into your AWS instance. Then:
Run dd if=/dev/zero of=/swapfile bs=1M count=1024
Run chmod 0600 /swapfile (so the swap file isn't world-readable)
Run mkswap /swapfile
Run swapon /swapfile
Add this line /swapfile swap swap defaults 0 0 to /etc/fstab
Read the linked question for more details. Hope that helps!
I had a similar issue with intermittent crashing of MySQL. It turned out to be Apache configurations. Bots were brute forcing the site and eventually caused Apache to crash (check your logs: $ cat /var/log/apache2/access.log). Apache automatically restarts, but there isn't enough spare memory to restart MySQL too, hence the Database Connection error. The short fix is to reduce the number of RequestWorkers in Apache to better fit the amount of RAM you have.
You can run a diagnostic on your apache configuration using Apache2Buddy. It'll calculate how many Apache Workers you can run given the amount of RAM you have and how big your application is:
$ curl -L http://apache2buddy.pl/ | perl
It's probably going to recommend changing MaxRequestWorkers (or MaxClients on older Apache systems) in your MPM-Prefork configurations. That file is at /etc/apache2/mods-available/mpm_prefork.conf on my system. After changing the value to what Apache2Buddy recommends, just restart Apache and you should be good to go.
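The arithmetic behind that recommendation can be sketched by hand. All the numbers below are hypothetical (1024 MB of RAM, 256 MB reserved for MySQL and the OS, ~50 MB per Apache child); measure your own averages with ps before using a value:

```shell
# Hypothetical sizing: leave room for MySQL and the OS, then divide the
# remainder by the average resident size of one Apache prefork child.
ram_mb=1024        # total RAM on the box (example)
reserved_mb=256    # MySQL (buffer pool etc.) + OS overhead (example)
child_mb=50        # average size of one Apache worker (example)
max_workers=$(( (ram_mb - reserved_mb) / child_mb ))
echo "MaxRequestWorkers $max_workers"   # → MaxRequestWorkers 15
```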
I wrote an article on this situation if you want a deeper explanation, a method to stress test, or ideas on how to block some of the bot traffic: http://brunzino.github.io/blog/2016/05/21/solution-how-to-debug-intermittent-error-establishing-database-connection/
When I tried to install WordPress locally, the same error
error establishing database connection
occurred because I forgot to stop the MySQL and Apache services that were started in XAMPP. I stopped them and reinstalled WordPress for Windows, and it worked.