AWS RDS Concurrent Connections Issue - MySQL

So I have an RDS MariaDB server running with the following specs:
Instance Class: db.m4.2xlarge
Storage Type: Provisioned IOPS (SSD)
IOPS: 4000
Storage: 500 GB
My issue is that when the server experiences heavy load (connections in excess of 200), it starts to refuse new connections.
However, according to the monitoring stats, it should be able to handle far more connections than that. At its peak load, these are the stats:
CPU Utilization: 18%
DB Connections: 430
Write Operations: 175/sec
Read Operations: 1/sec (reads are mostly served by Memcache)
Memory Usage: 1.2GB
The DB instance has the following hardware specs:
8 vCPUs
32 GB Mem
1000 Mbps EBS Optimized
Also, from what I can tell, RDS has the max_connections setting in MySQL set to 2,664.
So I can't understand why it is rejecting new connections at such a comparatively low count. Is there another setting that controls this, either in RDS or in MariaDB itself?
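For reference, this is a sketch of the kind of check I've been running from a shell with the MySQL client; the endpoint and user are placeholders, and the variables are standard MySQL/MariaDB ones:

# What the server itself reports: its connection limit, current
# connections, the high-water mark, and refused/aborted attempts
mysql -h <rds-endpoint> -u admin -p -e "
  SHOW GLOBAL VARIABLES LIKE 'max_connections';
  SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Threads_connected', 'Max_used_connections', 'Aborted_connects');"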

Related

"Storage-full " problem with aws RDS MYSQL read replica

I have one master RDS instance with 50 GB of storage and created a read replica of the master DB with the same configuration.
I use this read replica only for SELECT operations, nothing else. But suddenly the read replica DB ran into a storage-full problem. The master DB is working properly; how is it possible that the replica's storage is full when both have the same size?
I get the error below when executing a complex query (inner joins, GROUP BY):
ERROR 3 (HY000): Error writing file '/rdsdbdata/tmp/MY7U2XRf' (Errcode: 28 - No space left on device)
In the AWS Console, the slave DB status is "Storage-full" and the event log message is: The free storage capacity for DB Instance: example-slave is low at 1% of the provisioned storage [Provisioned Storage: 49.07 GB, Free Storage: 527.80 MB]. You may want to increase the provisioned storage to address this issue.
I checked the used size on both the master and replica instances with the query below; it's almost the same.
MySQL:
SELECT table_schema, ROUND(SUM(data_length+index_length)/1024/1024/1024,2) "size in GB"
FROM information_schema.tables
GROUP BY 1
ORDER BY 2 DESC;
The master DB uses 23.86 GB out of 50 GB, and the slave DB uses 24 GB out of 50 GB.
What probably happened is that you enabled storage auto-scaling on the master but not on the slave/read replica.
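Independent of auto-scaling, note that the error points at /rdsdbdata/tmp rather than at table data, so it is also worth checking whether that complex query is spilling internal temporary tables to disk on the replica. A sketch using standard MySQL status variables (the endpoint is a placeholder):

# If Created_tmp_disk_tables grows quickly relative to Created_tmp_tables,
# the joins/GROUP BYs are materialising intermediate results under tmpdir
mysql -h <replica-endpoint> -u admin -p -e "
  SHOW GLOBAL STATUS LIKE 'Created_tmp%';
  SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('tmp_table_size', 'max_heap_table_size', 'tmpdir');"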

High CPU usage on Couchbase server with moderate load

I am using Couchbase Server in a staging environment. Things were working fine until yesterday, but since today I am observing high CPU usage when the load increases moderately.
Couchbase cluster configuration:
3-node cluster running 4.5.1-2844 Community Edition (build-2844), each node an m4.2xlarge (8 cores, 32 GB RAM) AWS machine.
Data RAM quota: 25000 MB
Index RAM quota: 2048MB
It has 9 buckets, and the bucket in use has a 9 GB RAM quota (i.e. 3 GB per node).
Note: since we are using Community Edition, each node is running the Data, Full Text, Index, and Query services.
Let me know if I've misconfigured something or if any optimization is required.
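One way to narrow this down is to see which service is actually burning the CPU; since Community Edition runs every service on every node, they all compete for the same 8 cores. A sketch, assuming the standard Couchbase 4.x process names:

# On one of the nodes: map CPU usage to Couchbase service binaries.
# memcached = data, indexer = index, cbq-engine = query,
# cbft = full text, beam.smp = cluster manager
ps -eo pcpu,pmem,comm --sort=-pcpu | grep -E 'memcached|indexer|cbq-engine|cbft|beam' | head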

RDS freeable memory vs. error rates in DB

We have an m3.2xlarge MySQL RDS box in production: 8 cores and 30 GB of RAM.
Freeable memory was at approximately 4000 MB when we got huge DB timeouts. Is there any direct correlation between the two? In other words, is a DB with less than 4000 MB of freeable memory unhealthy?
Questions I referred to, in vain:
Amazon RDS running out of freeable memory. Should I be worried?
Below is the freeable memory for RDS instances. We restarted the DB to increase the freeable memory.
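For what it's worth, FreeableMemory is an OS-level metric, and on a dedicated DB host most RAM is deliberately held by the InnoDB buffer pool (on RDS the default parameter group sizes it at roughly three quarters of instance RAM). A sketch of what can be checked inside MySQL itself; the endpoint is a placeholder:

# Buffer pool size, plus whether reads are served from the pool or fall
# through to disk (the ratio reads / read_requests should stay small)
mysql -h <rds-endpoint> -u admin -p -e "
  SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
  SHOW GLOBAL STATUS WHERE Variable_name IN
    ('Innodb_buffer_pool_reads', 'Innodb_buffer_pool_read_requests');"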

Couchbase 4.0 Community Edition benchmark

We are benchmarking Couchbase and observing very strange behaviour.
Setup phase:
Couchbase cluster machines:
2 x EC2 r3.xlarge with general-purpose 80 GB SSD (not EBS-optimised), IOPS 240/3000.
Couchbase settings:
Cluster:
Data Ram Quota: 22407 MB
Index Ram Quota: 2024 MB
Index Settings (default)
Bucket:
Per Node Ram Quota: 22407 MB
Total Bucket Size: 44814 MB (22407 x 2)
Replicas enabled (1)
Disk I/O Optimisation (Low)
Each node runs all three services
Couchbase client:
1 x EC2 m4.xlarge with general-purpose 20 GB SSD (EBS-optimised), IOPS 60/3000.
The client is running the 'YCSB' benchmark tool.
ycsb load couchbase -s -P workloads/workloada -p recordcount=100000000 -p core_workload_insertion_retry_limit=3 -p couchbase.url=http://HOST:8091/pools -p couchbase.bucket=test -threads 20 | tee workloadaLoad.dat
P.S. All the machines reside within the same VPC and subnet.
Results:
While everything works as expected:
The average ops/sec is ~21000
The 'disk write queue' graph floats between 200K and 600K (periodically drained).
The 'temp OOM per sec' graph is at constant 0.
When things start to get weird:
After about ~27M documents have been inserted, we start seeing the 'disk write queue' rising constantly (not getting drained).
At about ~8M disk queue size, OOM failures start to show themselves and the client receives 'Temporary failure' from Couchbase.
After 3 retries per YCSB thread, the client stops after inserting only ~27% of the overall documents.
Even after the YCSB client has stopped, the 'disk write queue' moves towards 0 only asymptotically and is fully drained only after ~15 min.
P.S. When we benchmark locally on a MacBook with 16 GB of RAM + an SSD disk (local client + one-node server), we do not observe such behaviour and the 'disk write queue' is drained constantly and predictably.
Thanks.
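For anyone reproducing this: the disk write queue shown in the UI can also be sampled per node from the shell. A sketch assuming the default install path and the 'test' bucket from the YCSB command above; ep_queue_size and ep_flusher_todo are standard ep-engine stats:

# Items on this node still waiting to be persisted for the 'test' bucket
/opt/couchbase/bin/cbstats localhost:11210 all -b test | grep -E 'ep_queue_size|ep_flusher_todo'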

Rails + maximum connection pool size in database.yml

What is the maximum pool size I can set in my database.yml? I'm using a MySQL DB.
I have 20 Unicorn processes running on a 24-core, 32 GB RAM machine.
The default max_connections in recent MySQL versions is 151 (older versions defaulted to 100).
From the MySQL 5.5 documentation:
Linux or Solaris should be able to support at least 500 to 1000 simultaneous connections routinely and as many as 10,000 connections if you have many gigabytes of RAM available and the workload from each is low or the response time target undemanding. Windows is limited to (open tables × 2 + open connections) < 2048 due to the Posix compatibility layer used on that platform.
You'll have to figure it out yourself; I don't think the number of simultaneous connections to the DB will be your bottleneck.
From Rails ConnectionPool:
A connection pool synchronizes thread access to a limited number of database connections. The basic idea is that each thread checks out a database connection from the pool, uses that connection, and checks the connection back in. ConnectionPool is completely thread-safe, and will ensure that a connection cannot be used by two threads at the same time, as long as ConnectionPool’s contract is correctly followed. It will also handle cases in which there are more threads than connections: if all connections have been checked out, and a thread tries to checkout a connection anyway, then ConnectionPool will wait until some other thread has checked in a connection.
So the only thing you should not do is let the total number of connections across your Rails pools (processes × pool size) exceed what your MySQL configuration allows.
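To make that arithmetic concrete: 20 Unicorn workers with pool: 5 each can open up to 20 × 5 = 100 connections, already within sight of the 151 default. A sketch for checking and, if necessary, raising the server-side cap; the 500 below is purely illustrative:

# Current cap and the high-water mark actually reached
mysql -u root -p -e "
  SHOW GLOBAL VARIABLES LIKE 'max_connections';
  SHOW GLOBAL STATUS LIKE 'Max_used_connections';"

# Raise at runtime; lost on restart unless also set in my.cnf
mysql -u root -p -e "SET GLOBAL max_connections = 500;"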