We have an m3.2xlarge MySQL RDS instance in production: 8 cores and 30 GB of RAM.
Freeable memory was at approximately 4,000 MB when we hit huge DB timeouts. Is there a direct correlation between the two? In other words, is a DB with less than 4,000 MB of freeable memory unhealthy?
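A freeable-memory figure on its own is hard to judge; what usually matters is whether it dips toward zero (and swap climbs) at the exact moments the timeouts occur. A minimal sketch for pulling the metric around the incident window, assuming boto3 with configured credentials; the region and the "production-db" identifier are placeholders:

    # Pull the FreeableMemory metric around the incident window.
    # Assumes boto3 with configured credentials; the region and the
    # "production-db" identifier are placeholders.
    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=6)

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="FreeableMemory",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "production-db"}],
        StartTime=start,
        EndTime=end,
        Period=300,               # 5-minute buckets
        Statistics=["Minimum"],   # worst case within each bucket
    )

    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        mb = point["Minimum"] / (1024 * 1024)
        print(f"{point['Timestamp']:%Y-%m-%d %H:%M}  {mb:,.0f} MB freeable")

If the minimum stays well above zero through the timeout window, the cause is more likely connections, locks, or I/O than memory pressure.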
Questions I referred to, to no avail:
Amazon RDS running out of freeable memory. Should I be worried?
Below is the freeable-memory graph for our RDS instances. We restarted the DB to increase the freeable memory.
There is unusual CPU activity on an AWS RDS instance running MySQL 5.7
Server Config:
Instance class: db.m4.2xlarge
vCPU: 8
RAM: 32 GB
Despite this, CPU utilization averages 25-50%, with occasional periods of 100% utilization, even though there are no more than 10 active connections at a time. Any troubleshooting methods are greatly appreciated.
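With so few connections, a handful of expensive statements is the usual suspect. One low-effort check is to rank statements by cumulative execution time in performance_schema, which is on by default in MySQL 5.7. A minimal sketch, assuming the mysql-connector-python package; the endpoint and credentials are placeholders:

    # Rank statements by cumulative execution time. performance_schema
    # timers are in picoseconds, hence the 1e12 divisor. The endpoint
    # and credentials are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(
        host="your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
        user="admin",
        password="placeholder",
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT DIGEST_TEXT,
               COUNT_STAR,
               ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_seconds
        FROM performance_schema.events_statements_summary_by_digest
        ORDER BY SUM_TIMER_WAIT DESC
        LIMIT 10
    """)
    for digest, calls, seconds in cur.fetchall():
        print(f"{seconds:>10}s  {calls:>8} calls  {(digest or '')[:80]}")
    conn.close()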
I am running MariaDB on a micro instance with 600 MB of memory; it is a very low-usage instance used only for testing.
Is there a way to reduce the memory consumption of this test database instance?
I tried turning off the performance schema and the other options suggested in the link below, but memory consumption is still about 600 MB.
Reducing memory consumption of mysql on ubuntu / aws micro instance
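For a rough picture of where the 600 MB could be going, one can apply the classic worst-case estimate: global buffers plus per-connection buffers times max_connections. A sketch under those assumptions (mysql-connector-python installed; credentials are placeholders; the formula is a heuristic, not exact accounting):

    # A rough sketch of the classic worst-case estimate:
    # global buffers + (per-connection buffers * max_connections).
    # Credentials are placeholders; treat the result as a heuristic.
    import mysql.connector

    GLOBAL_BUFFERS = ["innodb_buffer_pool_size", "key_buffer_size",
                      "innodb_log_buffer_size"]
    PER_CONNECTION = ["sort_buffer_size", "read_buffer_size",
                      "read_rnd_buffer_size", "join_buffer_size",
                      "thread_stack"]

    conn = mysql.connector.connect(host="127.0.0.1", user="root",
                                   password="placeholder")
    cur = conn.cursor()

    def var(name):
        # SHOW ... LIKE takes a pattern; the driver interpolates %s client-side.
        cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
        return int(cur.fetchone()[1])

    global_bytes = sum(var(v) for v in GLOBAL_BUFFERS)
    per_conn_bytes = sum(var(v) for v in PER_CONNECTION)
    max_conn = var("max_connections")

    print(f"global buffers : {global_bytes / 2**20:8.1f} MB")
    print(f"per connection : {per_conn_bytes / 2**20:8.1f} MB")
    print(f"worst case at {max_conn} connections: "
          f"{(global_bytes + per_conn_bytes * max_conn) / 2**20:.1f} MB")
    conn.close()

On a test box, shrinking innodb_buffer_pool_size and lowering max_connections typically gives the biggest wins.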
I have two database servers.
Server A has 128 GB of memory with 75% allocated to the buffer pool.
Server B has 64 GB of memory with 25% allocated to the buffer pool.
There is no activity on Server A except an ALTER on a 220 GB table.
There is replication activity on Server B during the ALTER on the same 220 GB table.
Server B completes in half the time.
Can someone explain what might cause this behavior? All settings on Servers A and B are similar except for memory and buffer pool allotments.
Both run identical OSes; Server A has a 16-core CPU, while Server B has 8 cores.
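Buffer pool size is only one of several knobs that affect a large ALTER; redo log size, I/O capacity, and the online-ALTER buffers can matter as much. A minimal sketch that diffs the most relevant InnoDB settings across the two servers, assuming mysql-connector-python; hostnames and credentials are placeholders, and the variable list is a starting point, not exhaustive:

    # Diff InnoDB settings relevant to a big ALTER across two servers.
    # Hostnames/credentials are placeholders; the list is not exhaustive.
    import mysql.connector

    VARS = [
        "innodb_buffer_pool_size",
        "innodb_log_file_size",
        "innodb_io_capacity",
        "innodb_sort_buffer_size",
        "innodb_online_alter_log_max_size",
        "innodb_flush_log_at_trx_commit",
    ]

    def settings(host):
        conn = mysql.connector.connect(host=host, user="admin",
                                       password="placeholder")
        cur = conn.cursor()
        placeholders = ",".join(["%s"] * len(VARS))
        cur.execute("SHOW GLOBAL VARIABLES WHERE Variable_name IN ({})"
                    .format(placeholders), VARS)
        out = dict(cur.fetchall())
        conn.close()
        return out

    a = settings("server-a.example.com")
    b = settings("server-b.example.com")
    for name in VARS:
        va, vb = a.get(name, "?"), b.get(name, "?")
        marker = "  <-- differs" if va != vb else ""
        print(f"{name:34} A={va:>12}  B={vb:>12}{marker}")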
Not everything is main memory; in my case, factors like the OS and CPU made a big difference. I tested the same DB on different machines (for a project I worked on) and found better general performance on a Linux i5-6200U machine with 8 GB of DDR4 than on a Windows 7 i7-4000 machine with 16 GB of DDR3 (around 20% better).
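A toy version of that kind of comparison: time an identical query workload against two hosts. The hosts, credentials, and query below are placeholders; a real benchmark would use something like sysbench with warm-up runs:

    # Toy timing comparison across two hosts; hosts, credentials, and
    # the query are placeholders, not a rigorous benchmark.
    import time
    import mysql.connector

    QUERY = "SELECT COUNT(*) FROM test.big_table WHERE payload LIKE '%x%'"

    def run(host, iterations=20):
        conn = mysql.connector.connect(host=host, user="root",
                                       password="placeholder")
        cur = conn.cursor()
        start = time.perf_counter()
        for _ in range(iterations):
            cur.execute(QUERY)
            cur.fetchall()
        elapsed = time.perf_counter() - start
        conn.close()
        return elapsed

    for host in ("linux-i5.example.com", "win7-i7.example.com"):
        print(f"{host}: {run(host):.2f}s for 20 runs")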
I am using Couchbase Server in a staging environment. Things were working fine until yesterday, but since today I have been observing high CPU usage when the load increases moderately (see the attached graph).
Couchbase cluster configuration:
3-node cluster running 4.5.1-2844 Community Edition (build-2844),
each node an m4.2xlarge (8 cores, 32 GB RAM) AWS machine.
Data RAM quota: 25,000 MB
Index RAM quota: 2,048 MB
It has 9 buckets, and the bucket in use has a 9 GB RAM quota (i.e., 3 GB per node).
Note: since we are using the Community Edition, each node runs the Data, Full Text, Index, and Query services.
Let me know if I've misconfigured something or if any optimization is required.
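As a first check, it can help to confirm which node is hot and whether the pressure is CPU or memory, for example by reading per-node system stats from the cluster REST API's /pools/default endpoint. A minimal sketch assuming the requests package; the host and credentials are placeholders:

    # Per-node CPU/memory from the Couchbase REST API; the host and
    # credentials are placeholders. Requires the requests package.
    import requests

    resp = requests.get(
        "http://cb-node1.example.com:8091/pools/default",
        auth=("Administrator", "placeholder"),
        timeout=10,
    )
    resp.raise_for_status()

    for node in resp.json()["nodes"]:
        stats = node["systemStats"]
        print(f"{node['hostname']:30} "
              f"cpu={stats['cpu_utilization_rate']:5.1f}%  "
              f"mem_free={stats['mem_free'] / 2**30:5.1f} GiB  "
              f"services={','.join(node.get('services', []))}")

Since every Community Edition node runs all four services, an index rebuild or N1QL workload on one node can drive CPU without the data service being at fault.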
So I have an RDS MariaDB server running with the following specs.
Instance Class: db.m4.2xlarge
Storage Type: Provisioned IOPS (SSD)
IOPS: 4000
Storage: 500 GB
My issue is that when the SQL server experiences heavy load (connections in excess of 200), it starts to refuse new connections.
However, according to the monitoring stats, it should be able to handle far more connections than that. At its peak load, these are the stats:
CPU Utilization: 18%
DB Connections: 430
Write Operations: 175/sec
Read Operations: 1/sec (reads are served from Memcached)
Memory Usage: 1.2GB
The DB instance has the following hardware specs:
8 vCPUs
32 GB Mem
1000 Mbps EBS Optimized
Also, from what I can tell, RDS sets MySQL's max_connections to 2,664.
So I can't understand why it is rejecting new connections at such a comparatively low count. Is there another setting that controls this, either in RDS or in MariaDB?
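Before hunting for another setting, it is worth confirming whether the server itself ever hits its limit: RDS derives the MySQL/MariaDB default from the parameter expression {DBInstanceClassMemory/12582880}, which matches the 2,664 figure. If Connection_errors_max_connections stays at 0 while clients are being refused, the bottleneck is likely elsewhere (an application-side pool, OS limits, or the listen backlog). A minimal sketch, assuming mysql-connector-python; the endpoint and credentials are placeholders:

    # Check the effective connection limit and whether it is actually
    # being hit. The endpoint and credentials are placeholders.
    import mysql.connector

    conn = mysql.connector.connect(
        host="your-instance.xxxxxxxx.us-east-1.rds.amazonaws.com",
        user="admin",
        password="placeholder",
    )
    cur = conn.cursor()

    for stmt in (
        "SHOW GLOBAL VARIABLES LIKE 'max_connections'",
        "SHOW GLOBAL STATUS LIKE 'Threads_connected'",
        "SHOW GLOBAL STATUS LIKE 'Max_used_connections'",
        "SHOW GLOBAL STATUS LIKE 'Connection_errors_max_connections'",
        "SHOW GLOBAL STATUS LIKE 'Aborted_connects'",
    ):
        cur.execute(stmt)
        name, value = cur.fetchone()
        print(f"{name:40} {value}")

    conn.close()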