CloudSQL database crashes periodically (Out of memory) - mysql

We are having a problem where our Cloud SQL database crashes periodically.
The error we are seeing in the logs is:

    [ERROR] InnoDB: Write to file ./ib_logfile1 failed at offset 237496832,
    1024 bytes should have been written, only 0 were written. Operating system
    error number 12. Check that your OS and file system support files of this
    size. Check also that the disk is not full or a disk quota exceeded.

From what I understand, error number 12 means 'Cannot allocate memory'. Is there a way we can configure Cloud SQL to leave a larger buffer of free memory? The alternative would be to upgrade to an instance with more memory, but from what I understand Cloud SQL automatically uses all the memory available to it... Is that likely to reduce the problem, or would it continue in the same way?
Are there any other things we can do to reduce this issue?

It is possible your system is running out of disk space rather than memory, especially if you are running in an HA configuration.
(If disk isn't the issue, you should file a GCP support ticket rather than asking here.)
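As a quick sanity check before filing a ticket, you can compare your data footprint against the disk provisioned for the instance. A minimal sketch using standard information_schema SQL (nothing here is Cloud SQL-specific):

    -- Approximate on-disk size per schema (data + indexes), largest first;
    -- compare the total against the instance's provisioned disk, and remember
    -- that binary logs and the InnoDB redo logs consume disk on top of this.
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM information_schema.tables
    GROUP BY table_schema
    ORDER BY size_gb DESC;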

Related

What is the best MySQL configuration for an instance with a lot of databases and a lot of tables inside?

I have a MySQL instance with more than 3,000 databases, each containing more than 200 tables. Altogether these databases hold more than 100 GB of data at present. I am using Windows Server 2012 R2 with 4 GB of RAM. The server's RAM utilization was always very high, so I tried to restart the system, but the restart is not working: it shows 'restarting' for a long time and never comes back. When I checked the logs I understood that there is a memory issue. I want to restart my MySQL instance and continue. What is the best configuration for MySQL with the above architecture? What do I need to do to make this work without failure in future?
    [Warning] InnoDB: Difficult to find free blocks in the buffer pool
    (1486 search iterations)! 1486 failed attempts to flush a page!
    Consider increasing the buffer pool size. It is also possible that in your
    Unix version fsync is very slow, or completely frozen inside the OS kernel.
    Then upgrading to a newer version of your operating system may help.
    Look at the number of fsyncs in diagnostic info below.
    Pending flushes (fsync) log: 0; buffer pool: 0.
    26099 OS file reads, 1 OS file writes, 1 OS fsyncs.
    Starting InnoDB Monitor to print further diagnostics to the standard output.
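With roughly 600,000 tables and only 4 GB of RAM, the usual levers are the buffer pool size and the table caches. A minimal my.cnf sketch under those assumptions; the values are illustrative starting points for a 4 GB box, not tuned recommendations:

    [mysqld]
    # Leave headroom for the OS and per-connection buffers on a 4 GB box;
    # a buffer pool much larger than this risks swapping.
    innodb_buffer_pool_size = 2G
    # One .ibd file per table keeps the shared tablespace from growing without bound.
    innodb_file_per_table = 1
    # Caches for open table handles and definitions; with ~600k tables they
    # can only ever hold a fraction, so keep them moderate rather than huge.
    table_open_cache = 4000
    table_definition_cache = 4000
    # Cap connections so per-connection memory stays bounded.
    max_connections = 100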

Zabbix - The buffer pool utilization is too low

I'm using Zabbix as my Linux monitoring solution.
It shows the warning 'MySQL: The buffer pool utilization is less than 50% in the last 5 minutes. This means that there is a lot of unused RAM allocated for the buffer pool, which you can easily reallocate at the moment.'
Should I worry about this?
How can I overcome this issue?
You have configured MySQL with more RAM than it needs. Check your configuration (my.cnf, the my.cnf.d directory, and so on) for innodb_buffer_pool_size and lower it.
How much lower? That depends on the effective usage, which you can see in your Zabbix graphs.
Don't forget to restart the mysql service!
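To see the number Zabbix is computing, you can read the buffer pool page counters directly; a quick sketch using standard InnoDB status variables:

    -- Pages currently holding data vs. total pages in the buffer pool;
    -- utilization = pages_data / pages_total, and below 0.5 matches the warning.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';

Note that on MySQL 5.7.5 and later innodb_buffer_pool_size is a dynamic variable, so it can also be resized online with SET GLOBAL before you persist the new value in my.cnf.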
If you are not swapping, and nothing else would benefit from using the RAM that this is wasting, then don't worry. (There's an old saying: "If it ain't broke, don't fix it.")

NDB Cluster data node consuming too much RAM

I have set up an NDB Cluster at my office. There are two physical machines with 128 GB of RAM each; the database size is around 2 GB. We are an ISP and we keep the RADIUS database in the cluster.
What worries me at the moment is that on both systems the data node process is consuming 122 GB out of 128, which I find shocking.
I am quite new to databases, so I am having trouble debugging the issue.
The memory used by NDB data nodes is defined by your cluster configuration. So even if the database is only 2 GB in size, if you have configured it to run with up to 64 GB of memory, this memory is preallocated to ensure that it is there when it is needed. So look into your config.ini file to see how you configured the NDB data nodes.
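The parameters that preallocate that memory live in config.ini on the management node. A minimal sketch; the values are examples, not your actual settings:

    [ndbd default]
    # Preallocated at data node startup, regardless of how much data exists.
    DataMemory = 64G
    # Separate preallocated pool for hash indexes (merged into DataMemory
    # and deprecated in NDB 7.6 and later).
    IndexMemory = 8G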

MySQL max_connections

I have a setup running Debian Jessie (Linux) with MySQL 5.7.13 installed.
I have set the following in my.cnf: default_storage_engine = innodb, innodb_buffer_pool_size = 44G.
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my load test, which sends a lot of traffic to the DB server, mostly consisting of slow/bad queries.
I expected to get close to 1000 connections, but somehow MySQL limits it to 462 connections and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea, or could point me in a direction where you think the error might be, it would be really helpful.
What load test did you use? Are you sure it can actually open on the order of a thousand connections?
You may be maxing out your server's resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB buffer pool is large, the server still needs to read the data into the cache first, and if the entire database is large that will not help you.
I recommend performing the test once more and tracking your disk performance during the load test using the iostat or iotop utility, as sketched below.
Look here for more examples of server performance troubleshooting.
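A minimal sketch of that disk-tracking step (iostat ships in the sysstat package; the 5-second interval is just an example):

    # Extended per-device statistics in MB/s, refreshed every 5 seconds while
    # the load test runs; watch the %util and await columns for saturation.
    iostat -dxm 5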
I found the issue: it was due to a limitation of the Apache server. There is a 'hidden' setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the settings inside /etc/apache2/apache2.conf.
Thank you!
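For reference, the prefork settings in question look like the sketch below. The answer above doesn't say which directive was the culprit, but MaxRequestWorkers (capped by ServerLimit) is what bounds the number of Apache worker processes, and with one DB connection per worker it effectively caps MySQL connections too; the values shown are illustrative defaults, not the asker's actual configuration:

    <IfModule mpm_prefork_module>
        StartServers              5
        MinSpareServers           5
        MaxSpareServers          10
        # Each prefork worker holds at most one DB connection, so this
        # effectively caps concurrent MySQL connections from Apache.
        MaxRequestWorkers       150
        ServerLimit             150
    </IfModule>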

How big can MySQL data be on a PC?

I have a Mac Pro with an i7 processor, 16 GB of RAM, and sufficient storage, running Windows 8.1 via Parallels on top of OS X Yosemite. I have 23 GB of MySQL data and I am wondering whether I can load that much data into MySQL on my PC. I started to import the data, but it stops after an hour throwing the error:

    Error 1114 (HY000) at line 223: The table X is full.

I googled the error and found it discussed on Stack Overflow (though not with this much data). I tried to resolve it using the given solutions but failed. MySQL imports about 3 GB of data and then throws the error.
Now, here are my 3 main questions:
Is my data much bigger than a MySQL storage engine can handle on a PC?
If that is not the case and I am good to go with that much data, what configuration do I need to run a 23 GB database on my PC?
Finally, how big is too big to run on one's own machine? Is it only a matter of being able to store the data locally, or does it need something else?
Of course MySQL on Windows can handle 23GB of data. That's not even close to its limit.
Keep in mind that a database takes lots of disk space for indexes and other things. 23GB of raw data probably will need 100GB of disk space to load, to index, and to get running. If you are loading it into an InnoDB table you will also need transaction rollback space for the load.
It seems likely that your Windows 8.1 virtual machine running on Parallels is running out of disk space. You can allocate more of your Mac's disk for use by Parallels; read this: http://kb.parallels.com/en/113972
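Besides VM disk space, 'the table is full' can also come from a fixed-size InnoDB system tablespace. A quick sketch of the two settings worth checking (standard MySQL variables; the expected values in the comments are the common recommendations, not a diagnosis of this particular setup):

    -- Does the system tablespace autoextend, and does each table get its own file?
    SHOW VARIABLES LIKE 'innodb_data_file_path';   -- look for ':autoextend'
    SHOW VARIABLES LIKE 'innodb_file_per_table';   -- ON avoids one giant ibdata1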
Your answers can be found within the MySQL reference:

    The effective maximum table size for MySQL databases is usually determined
    by operating system constraints on file sizes, not by MySQL internal limits.
    The following table lists some examples of operating system file-size limits.
    This is only a rough guide and is not intended to be definitive. For the most
    up-to-date information, be sure to check the documentation specific to your
    operating system.