There is unusual CPU activity on an AWS RDS instance running MySQL 5.7
Server Config:
Instance class: db.m4.2xlarge
vCPU: 8
RAM: 32 GB
Despite the instance size, CPU utilization averages between 25% and 50%, with occasional periods of 100% utilization, even though there are never more than 10 active connections at a time. Any troubleshooting methods are greatly appreciated.
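For reference, here is what I plan to capture during the next spike (a sketch, assuming the Performance Schema that RDS MySQL 5.7 ships with is enabled; the endpoint and user below are placeholders):

# placeholder endpoint/user; run while CPU is spiking to see what is actually executing
mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p -e "SHOW FULL PROCESSLIST;"
# top statement digests by rows examined since the last restart
mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u admin -p -e "
SELECT DIGEST_TEXT, COUNT_STAR, SUM_ROWS_EXAMINED, SUM_CREATED_TMP_DISK_TABLES
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_ROWS_EXAMINED DESC LIMIT 10;"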
I'm in the process of both virtualizing and updating an old Linux server running a reporting system developed in-house (Apache, MySQL, PHP).
The old physical server runs 64-bit Ubuntu 10.04.3 LTS, MySQL 5.1.41 and PHP 5.3, on an Intel Xeon X3460 @ 2.80 GHz (4 cores) with 4 GB RAM.
We have ESXi 5.5 running on an HP DL380 G6 with 2 x Intel Xeon X5650 (6 cores, 2.66 GHz) and 32 GB RAM.
I created a new VM with 4 cores and 4 GB RAM, did a clean install of 64-bit Ubuntu 16.04.4 LTS, MySQL 5.7.21 and PHP 7.0, migrated our app, and everything is running much slower. I believe MySQL is the culprit: the same direct query that takes 1 second on the old physical server can take 8 seconds on the new VM. The tables all have appropriate indexes, and running EXPLAIN on each server gives the same plan, yet one is substantially slower. When a page runs numerous complex queries, it can take a minute or more to load instead of a few seconds.
Any idea why this might be? Same dataset, same query, same engine (MyISAM). The VM has much more recent versions of everything, the same number of cores, and the same amount of RAM. I even tried doubling the VM CPU (2 sockets, 4 cores) and the RAM to 8 GB, and it doesn't seem to have a substantial impact.
I've compared the MySQL configuration on both servers and nothing is jumping out at me as being very different.
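For what it's worth, this is roughly how I've been comparing them (hostnames are placeholders); I'm mostly watching the MyISAM-related variables, since quite a few defaults changed between 5.1 and 5.7:

# dump every server variable on each box and diff the sorted output (hostnames are placeholders)
mysql -h old-server -u root -p -N -e "SHOW GLOBAL VARIABLES;" | sort > old_vars.txt
mysql -h new-server -u root -p -N -e "SHOW GLOBAL VARIABLES;" | sort > new_vars.txt
diff old_vars.txt new_vars.txt | grep -Ei "key_buffer|query_cache|tmp_table|table_open_cache"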
What might I be missing here? Is it the virtual host hardware?
Did you upgrade the VMware Tools? If not, do so.
Also, are there other VMs on this ESXi host? If so, I would advise you to create a resource pool and then configure the resource limits.
The resources you assign to the pool are, in a way, reserved for your VM, so it won't have to share them with the other VMs.
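On Ubuntu 16.04 a quick check from inside the guest might look like this (a sketch, assuming the distro-packaged open-vm-tools rather than the legacy VMware installer):

# print the running tools version, or show whether the package is installed at all
vmware-toolbox-cmd -v || dpkg -l open-vm-tools
# install or upgrade the distro-packaged tools
sudo apt-get install open-vm-tools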
So I have an RDS MariaDB server running with the following specs:
Instance Class: db.m4.2xlarge
Storage Type: Provisioned IOPS (SSD)
IOPS: 4000
Storage: 500 GB
My issue is that when the server experiences heavy load (connections in excess of 200), it starts to refuse new connections.
However, according to the monitoring stats, it should be able to handle far more connections than that. At its peak load, these are the stats:
CPU Utilization: 18%
DB Connections: 430
Write Operations: 175/sec
Read Operations: 1/sec (This Utilizes MemCache)
Memory Usage: 1.2GB
The DB has the following hardware specs
8 vCPUs
32 GB Mem
1000 Mbps EBS Optimized
Also, from what I can tell, RDS has the max_connections setting in MySQL set to 2,664.
So I can't understand why it is rejecting new connections at such a comparatively low number. Is there another setting that controls this, either in RDS or in MariaDB itself?
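A sketch of the checks I plan to run next (the endpoint and user are placeholders): comparing the configured limit with what the server has actually seen, plus the aborted-connection counter:

# placeholder endpoint/user
mysql -h mydb.xxxxxxxx.rds.amazonaws.com -u admin -p -e "
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';"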
We are benchmarking Couchbase and observing some very strange behaviour.
Setup phase:
Couchbase cluster machines:
2 x EC2 r3.xlarge with general-purpose 80 GB SSD (not EBS-optimised), IOPS 240/3000.
Couchbase settings:
Cluster:
Data RAM Quota: 22407 MB
Index RAM Quota: 2024 MB
Index Settings (default)
Bucket:
Per Node RAM Quota: 22407 MB
Total Bucket Size: 44814 MB (22407 x 2)
Replicas enabled (1)
Disk I/O Optimisation (Low)
Each node runs all three services
Couchbase client:
1 x EC2 m4.xlarge with general-purpose 20 GB SSD (EBS-optimised), IOPS 60/3000.
The client is running the 'YCSB' benchmark tool.
ycsb load couchbase -s -P workloads/workloada -p recordcount=100000000 -p core_workload_insertion_retry_limit=3 -p couchbase.url=http://HOST:8091/pools -p couchbase.bucket=test -threads 20 | tee workloadaLoad.dat
PS: All the machines are residing within the same VPC and subnet.
Results:
While everything works as expected:
The average ops/sec is ~21000
The 'disk write queue' graph is floating between 200K - 600K(periodically drained).
The 'temp OOM per sec' graph is at constant 0.
When things start to get weird:
After about ~27M documents have been inserted, we start seeing the 'disk write queue' constantly rising (not getting drained).
At a disk queue size of about ~8M items, the OOM failures start to show themselves and the client receives 'Temporary failure' errors from Couchbase.
After 3 retries per YCSB thread, the client stops, having inserted only ~27% of the overall documents.
Even after the YCSB client has stopped running, the 'disk write queue' only asymptotically approaches 0 and is drained only after ~15 min.
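For reference, we are also sampling the write queue from the command line on one of the nodes, roughly like this (a sketch; 'test' is the YCSB bucket from the command above, and the exact stat names may differ slightly between Couchbase versions):

# sample data-service stats on one node and watch the disk queue and temp-OOM counters
/opt/couchbase/bin/cbstats localhost:11210 -b test all | grep -E "ep_diskqueue_items|ep_queue_size|ep_tmp_oom_errors"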
P.S.
When we benchmark locally on a MacBook with 16 GB of RAM and an SSD (local client + a single-node server), we do not observe this behaviour and the 'disk write queue' is drained constantly in a predictable manner.
Thanks.
I have an application on my server that makes many database requests to a reasonably simple and small database (about 10 MB in size).
The number of simultaneous requests can be around 500. I have an Apache & MySQL server running on Linux with 8 GB RAM and 3 cores.
I've upgraded the server recently (from 512 MB to 8 GB), but this is not having any effect. It seems that the additional CPU and RAM are not being used. Before, the CPU hit 100%, but after the upgrade I still get a WARN status at only 40% CPU usage:
Free RAM: 6736.94 MB
Free Swap: 1023.94 MB
Disk i/o: 194 io/s
In the process list, the mysqld CPU usage is 100%.
I can't figure out what the right settings are to make the hardware upgrade pay off for MySQL and the MyISAM engine.
I have little experience with setting up and configuring a server, so detailed comments or help are very welcome.
UPDATE #1
The MySQL requests are a mix of reads and writes, coming from a large number of PHP scripts.
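Since MyISAM only supports table-level locks, writers can block all readers on the same table; a quick way to check for that kind of contention (a sketch, to be run locally on the server; credentials are placeholders):

# a high Table_locks_waited count relative to Table_locks_immediate points to MyISAM lock contention
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Table_locks%';"
# the key buffer is the main MyISAM memory setting; its default is tiny compared to 8 GB of RAM
mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';"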