AWS RDS Memory Issue - MySQL

I have a database running on Amazon's RDS platform, and it does not seem to be using the full amount of memory available.
The instance type is db.m4.xlarge, which should give me 16 GiB of memory, but when I look at the monitoring page it shows I am reaching the threshold with a current value of 2460 MB.
When I look at the DB parameter group, it shows that innodb_buffer_pool_size should be 3/4 of DBInstanceClassMemory. However, when I check the actual value (by logging into the database and running SHOW GLOBAL VARIABLES), it is set to 12465471488 (I assume this is bytes?).
Does anyone know why this is, and what options I should set to make the RDS instance take full advantage of the memory that is available?

The number shown in the console is free memory -- not used memory. It's arguably counter-intuitive, but that's what's being shown here. Note that the small bar graph adjacent to the number is mostly full, not mostly empty.
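As for the 12465471488 value, here is a small sketch of where it likely comes from, assuming the default parameter-group formula {DBInstanceClassMemory*3/4} and the nominal 16 GiB of a db.m4.xlarge. Note that DBInstanceClassMemory is somewhat less than the nominal RAM (the OS and RDS processes reserve some), which would explain why the observed value falls a bit short of an exact 3/4:

```javascript
// Sketch: deriving the RDS default innodb_buffer_pool_size.
// Assumptions: nominal 16 GiB for db.m4.xlarge and the default
// parameter-group formula {DBInstanceClassMemory*3/4}.
const GiB = 1024 ** 3;
const instanceMemory = 16 * GiB;                    // 17179869184 bytes
const defaultBufferPool = (instanceMemory * 3) / 4; // 12884901888 bytes

// The value observed in the question:
const observed = 12465471488;

// Fraction of nominal RAM the observed buffer pool occupies:
console.log((observed / instanceMemory).toFixed(3)); // ≈ 0.726, i.e. roughly 3/4
```

So the buffer pool is in fact sized at about three quarters of the instance's memory; the instance is using its RAM as configured.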

Related

AWS RDS MySQL Cluster not Scaling Automatically on Write Queries

I have an AWS RDS MySQL cluster. I'm trying to auto-scale on mass write operations but am unable to do so; when I'm running read queries, it scales properly. I'm getting a "Too many connections" error on writes. Can anyone let me know what I'm doing wrong? Thanks in advance.
[Edit: 1]
Screenshot of AWS RDS Cluster Config
I've kept the connection limit at 2 because I was testing.
When I'm sending multiple read requests to AWS RDS, I can see new instances being launched in my RDS instances section.
I've also set the scale-in cooldown to 0 so that it will launch a new instance instantly. When I'm reading from the database using the read endpoint, auto scaling works properly, but when I'm trying to insert data using the write endpoint, auto scaling does not kick in.
Your question is short on specifics, so I will list some possible ways to figure this out and possibly solve it.
RDS scaling takes time, so you cannot expect that your DB will increase in capacity instantly when a sudden spike of traffic exceeds its current capacity.
The maximum number of connections to a MySQL instance is set by max_connections in your parameter group. How many connections are happening, and what is the max_connections value? This value affects memory usage, so review any changes carefully. Note: increasing this value does not always help if there is a bug in your client code that erroneously creates too many connections. If the number of peak connections is exceeding the max_connections value, sometimes you just need to scale up to a larger instance. The details determine the correct solution.
Use MySQL's Global Status History and look into what happens and when. This is useful for detecting locking or memory issues.
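To ballpark what max_connections should be on a given instance, here is a sketch assuming the RDS MySQL parameter-group default of {DBInstanceClassMemory/12582880} and approximating DBInstanceClassMemory by the nominal instance RAM (the real value is slightly lower):

```javascript
// Sketch: estimating the RDS MySQL default max_connections.
// Assumption: the parameter-group default formula is
// {DBInstanceClassMemory/12582880}, and DBInstanceClassMemory is
// approximated here by the nominal instance RAM.
const GiB = 1024 ** 3;

function defaultMaxConnections(ramBytes) {
  return Math.floor(ramBytes / 12582880);
}

// A 16 GiB instance (e.g. db.m4.xlarge):
console.log(defaultMaxConnections(16 * GiB)); // → 1365
```

Compare that against your actual peak usage, which you can see in MySQL with `SHOW GLOBAL STATUS LIKE 'Max_used_connections';` -- if your limit is set to 2, as in the screenshot, any write burst will hit "Too many connections" long before auto scaling can react.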

Why does the Amazon RDS memory monitor threshold show red?

Today I noticed that my Amazon RDS instance's memory monitor threshold shows a red line. I have attached a screenshot of it here.
So, my question is: what is that memory threshold, and why is it crossing the limit? Is anything wrong with my instance? What is the solution to decrease or control this spike?
The red line you see is a threshold set by AWS. If the RDS instance is crossing that threshold repeatedly, there might be a performance issue that you need to look into.
MySQL tries to use all available memory as needed. However, the limits are defined by RDS server parameters, which you can modify, so you may not need to scale up your server.
RDS instances are created with default values for those parameters (the most relevant of them being innodb_buffer_pool_size) to optimize memory usage. To see which server variables are applied to your instance, connect to it and execute the SHOW GLOBAL VARIABLES command.
It is normal for that number to go up and down as a matter of course.
If you are seeing performance issues and you have no more freeable memory, then you should be looking at causes or upgrading to a larger instance.
Those values may not be right for all workloads, but you can adjust them as needed using parameter groups. This document explains how you can use parameter groups:
http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

gCloud / GCE Disk Size warning - is it meaningful?

When I create a boot disk with gCloud less than 200GB in size, I see this error:
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks/persistent-disks#pdperformance.
I don't, however, see the details about this 200GB threshold that the warning alludes to anywhere on the page at that URL.
Should I care about this warning at all? I wonder if it is more of a ploy to make more money by encouraging you to lease more space.
Note: I'm using a standard disk, not a solid-state one. The only disk access performance I'm concerned about is via MySQL, with very small reads/writes 99% of the time and occasional blobs in the range of roughly 1 to 100 MB.
It looks like the documentation has shifted around a little, and the warning is out of date.
There is a section of the Block Storage page that explains the relationship between persistent disk size and performance.
We'll fix the URL in gcloud.
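The warning is not a ploy: persistent disk performance scales linearly with provisioned size, so small disks really do get proportionally less I/O. A sketch of the relationship, using per-GB rates for standard persistent disks as documented around that time (assumptions -- check the current docs for exact figures):

```javascript
// Sketch: GCE persistent disk performance scales with size.
// The per-GB rates below are assumptions based on the standard
// persistent disk documentation of the era, not guaranteed figures.
const perGB = { readIOPS: 0.75, writeIOPS: 1.5, throughputMBps: 0.12 };

function diskPerf(sizeGB) {
  return {
    readIOPS: sizeGB * perGB.readIOPS,
    writeIOPS: sizeGB * perGB.writeIOPS,
    throughputMBps: sizeGB * perGB.throughputMBps,
  };
}

console.log(diskPerf(50));  // a 50 GB disk: ~37.5 read IOPS, ~6 MB/s
console.log(diskPerf(200)); // a 200 GB disk: ~150 read IOPS, ~24 MB/s
```

For a MySQL workload of mostly tiny reads and writes, it is the IOPS column that matters, and a sub-200 GB standard disk gives you very few of them.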

Node.js high memory usage

I'm currently running a node.js server that communicates with a remote MySQL database as well as performs webrequests to various APIs. When the server is idle, the CPU usage ranges from 0-5% and RAM usage at around 300MB. Yet when the server is under load, the RAM usage linearly goes up and CPU usage jumps all around and even up to 100% at times.
I set up a snapshot solution that would take a snapshot of the heap when a leak was detected, using node-memwatch. I downloaded three different snapshots when the server was at 1 GB, 1.5 GB, and 2.5 GB of RAM usage and attempted to analyze them, yet I have no idea where the problem is, because the total amount of storage in the analytics seems to add up to something much lower.
Here is one of the snapshots, when the server had a memory usage of 1107MB.
https://i.gyazo.com/e3dadeb727be3bdb4eeb833094291ebf.png
Does that match up? From what I see, there is only a maximum of 500 MB allocated to objects there. Also, would anyone have any ideas about the crazy CPU usage that I'm getting? Thanks.
What you need is a better tool to properly diagnose that leak. It looks like you can get some help using N|Solid (https://nodesource.com/products/nsolid); it will help you visualize and monitor your app, and it is free to use in a development environment.
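One likely reason the snapshot totals don't match: heap snapshots only cover the V8 heap, while the RSS the OS reports also includes Buffers and other off-heap allocations. A minimal sketch using Node's built-in process.memoryUsage() to break the numbers down:

```javascript
// Sketch: heap snapshots only cover the V8 heap, so they can total far
// less than the process RSS. Buffers live off-heap and show up in
// `external`/`rss` but not in a snapshot.
const big = Buffer.alloc(200 * 1024 * 1024); // 200 MB allocated off the V8 heap

const usage = process.memoryUsage();
console.log({
  rssMB: Math.round(usage.rss / 1024 / 1024),
  heapUsedMB: Math.round(usage.heapUsed / 1024 / 1024),
  externalMB: Math.round(usage.external / 1024 / 1024),
});
// rssMB exceeds heapUsedMB by roughly the Buffer's size -- which is how a
// heap snapshot can show ~500 MB while the process reports 1107 MB.
```

If your server accumulates Buffers (e.g. from the MySQL driver or web request bodies), the leak would be invisible to a heap-object analysis even though RSS climbs linearly.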

Does couchbase actually support datasets larger than memory?

Couchbase documentation says that "Disk persistence enables you to perform backup and restore operations, and enables you to grow your datasets larger than the built-in caching layer," but I can't seem to get it to work.
I am testing Couchbase 2.5.1 on a three-node cluster, with a total of 56.4 GB of memory configured for the bucket. After ~124,000,000 100-byte objects -- about 12 GB of raw data -- it stops accepting additional puts. One replica is configured.
Is there a magic "go ahead and spill to disk" switch that I'm missing? There are no suspicious entries in the errors log.
It does support data greater than memory - see Ejection and working set management in the manual.
In your instance, what errors are you getting from your application? When you start to reach the low-memory watermark, items need to be ejected from memory to make room for newer items.
Depending on the disk speed and the rate of incoming items, this can result in TEMP_OOM errors being sent back to the client, telling it to temporarily back off before performing the set, but these should generally be rare in most instances. Details on handling these can be found in the Developer Guide.
My guess would be that it's not the raw data that is filling up your memory, but the metadata associated with it. Couchbase 2.5 needs 56 bytes per key, so in your case that would be approximately 7 GB of metadata -- much less than your memory quota.
But metadata can become fragmented in memory. If you batch-inserted all 124M objects in a very short time, I would assume that you got at least 90% fragmentation. That means that with only 7 GB of useful metadata, the space required to hold it has filled up your RAM, with lots of unused parts in each allocated block.
The solution to your problem is to defragment the data. Compaction can either be run manually or triggered automatically via the bucket's auto-compaction settings.
If you need more insights about why compaction is needed, you can read this blog article from Couchbase.
Even if none of your documents is stored in RAM, Couchbase still stores all the document IDs and metadata in memory (this will change in version 3), and it also needs some available memory to run efficiently. The relevant section in the docs:
http://docs.couchbase.com/couchbase-manual-2.5/cb-admin/#memory-quota
Note that when you use a replica you need twice as much RAM. The formula is roughly:
(56 + avg_size_of_your_doc_ID) * nb_docs * 2 (replica) * (1 + headroom) / (high_water_mark)
So, depending on your configuration, it's quite possible that 124,000,000 documents require 56 GB of memory.
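The formula above can be sketched out numerically. The 56-byte per-key overhead comes from the Couchbase 2.5 documentation; the key size, headroom, and high-water mark below are assumed values you should replace with your own:

```javascript
// Sketch of the metadata-sizing formula from the answer above:
// (56 + avg_doc_ID_size) * nb_docs * 2 (replica) * (1 + headroom) / high_water_mark
// The 56-byte overhead is per the Couchbase 2.5 docs; key size, headroom,
// and high-water mark here are assumptions.
function requiredRamGB({ nbDocs, avgKeyBytes, replicas = 1, headroom = 0.25, highWaterMark = 0.85 }) {
  const bytes =
    ((56 + avgKeyBytes) * nbDocs * (1 + replicas) * (1 + headroom)) / highWaterMark;
  return bytes / 1024 ** 3;
}

// 124M docs with ~40-byte keys and one replica:
console.log(requiredRamGB({ nbDocs: 124e6, avgKeyBytes: 40 }).toFixed(1)); // → "32.6"
```

With 40-byte keys the requirement is already around 33 GB for 12 GB of raw data; with document IDs around 110 bytes, the same formula lands close to the 56 GB quota in the question.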