How to find CPU utilization of a MySQL RDS instance - mysql

How do I find which process/query is consuming CPU on an Amazon MySQL RDS instance? I have a medium instance on Amazon RDS for MySQL, and it was working smoothly before, but since yesterday it has been throwing a 'connection timeout' error when accessing the RDS instance. When I checked CloudWatch, it showed high CPU utilization during that period. Now I want to find out what the problem is. Can someone tell me how to check it?
thanks

Use SHOW PROCESSLIST in MySQL. With this you can see which queries are in what state, doing what, and since when.
Also check the slow query log:
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
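For example, a minimal diagnostic session could look like the sketch below (the thread id passed to KILL is made up; substitute the Id column of the offending thread from your own output):
-- List every connected thread with its full statement text,
-- its state, and how long (in seconds) it has been running.
SHOW FULL PROCESSLIST;
-- If a single runaway query is pinning the CPU, terminate it by Id
-- (36 here is a hypothetical example).
KILL 36;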

Using SHOW PROCESSLIST you can only see information about currently running threads, but given the context of your question you want historical status. You can get that by enabling the slow query log and setting long_query_time to 1 second. You can also publish the slow query logs to CloudWatch and set alerts according to your DB system load and the types of queries.
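A rough sketch of the relevant settings (on RDS these are normally changed through a DB parameter group rather than with SET GLOBAL, so treat the SET statements as illustrative):
-- Inspect the current slow-log configuration.
SHOW VARIABLES LIKE 'slow_query_log';
SHOW VARIABLES LIKE 'long_query_time';
-- On a server where you are allowed to change them at runtime:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- log any query slower than 1 second
-- If log_output includes 'TABLE', recent slow queries can be read back directly:
SELECT start_time, query_time, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;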

Related

First query or connection to AWS RDS is very very slow

I have a product built with Laravel, with multi-tenancy.
It is deployed on an EC2 instance and uses AWS RDS as the database server.
I currently have around 100 databases in production.
Laravel's hyn tenancy module is handling the connections.
Now, the problem: for each tenant, after some idle time, the first request takes too long, around 15-20 seconds. After that, it works smoothly.
In the test environment we are not using RDS but a local MySQL instance, and the problem does not occur there. The only difference between test and production is AWS RDS.
I have looked into max connections, query cache, and so on, but no luck so far.
Any suggestions?
The solution will depend on what kind of RDS you have.
I assume it's serverless (the more common case). In that case there is a setting for minimum and maximum ACU. By default it will (I believe) scale down to zero if the DB is not accessed for a while. Check that setting and see whether it is configured properly.
If you have a provisioned DB, then it's more complex. It will start caching things once queries are executed, but until a particular query is run you will be waiting for the DB to "wake up" and run a full query.
Check this page for relevant info.
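If pausing on idle turns out to be the cause, one common workaround is a periodic keep-alive so the cluster never scales down to zero (a sketch; run it from cron or an application health check, at an interval shorter than the pause timeout):
-- Trivial keep-alive query; it touches no data but keeps the
-- connection path and the serverless cluster warm.
SELECT 1;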

AWS MySQL RDS instance becomes unresponsive and getting restarted automatically

We have an AWS MySQL RDS instance which is about 1.7 TB in size. Sometimes it becomes unresponsive and no operations can be performed.
CPU utilization, write IOPS, read IOPS, queue depth, write throughput, write latency, and read latency all drop to zero.
The number of connections piles up.
"Show engine innodb status" hangs.
There are lots of queries (around 25 of each) by rdsadmin which are in a hung state:
SELECT count(*) from mysql.rds_replication_status WHERE action = 'reset slave' and master_host is NULL and master_port is NULL GROUP BY action_timestamp,called_by_user,action,mysql_version,master_host,master_port ORDER BY action_timestamp LIMIT 1;
SELECT NAME, VALUE FROM mysql.rds_configuration;
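(For reference, one way to see how long threads like these have been stuck is to query the process list as a table; a sketch using information_schema.PROCESSLIST:)
-- Every thread that has been in its current state for over a minute,
-- longest-running first, so hung rdsadmin queries stand out.
SELECT id, user, db, time, state, info
FROM information_schema.PROCESSLIST
WHERE time > 60
ORDER BY time DESC;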
After some time, the instance gets rebooted automatically with the following error:
MySQL restart initiated to address MySQL induced log backup issues. Note that as part of this resolution, a DB Snapshot will be performed after MySQL completes restarting.
What can the issue be? This happens quite often. Sometimes, to our surprise, it happens at off-peak times too.
I faced the same issue and raised it with AWS Support. I got the following explanation:
The RDS monitoring service discovered an issue with backing up the binary logs of your databases, which is critical for the Point in Time Restore (PITR) feature. To mitigate this issue, and in order to avoid data corruption, RDS monitoring restarted the RDS instance, hence a restart was automatically triggered. To make sure there was no data loss, it took a snapshot of the DB instance.
Although the RDS instance was Multi-AZ, it didn't fail over for the following reason:
Multi-AZ has two criteria:
1. Single-box experience, which means the customer always finds their data, even after a failover.
2. Higher availability than a single AZ.
Both criteria have to hold when the AWS monitoring service decides to fail over to the standby instance. In your case, the monitoring service noticed a risk of data loss after a failover, which is why it decided to reboot instead of failing over.
Hope this helps. This has happened to me 3 times in the last week, though.
Check your DB maintenance window timing, i.e. when your scheduled maintenance happens, and note at what time this issue occurs: does it happen at a regular interval or randomly?
Check both the MySQL error logs and the slow query logs.
If possible, paste the suspected issue here.
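Two quick checks from a SQL session that help correlate the restarts against the logs (a sketch; on RDS the error log itself is read from the console or CLI rather than from SQL):
-- Seconds since the last restart; a small number confirms a recent reboot.
SHOW GLOBAL STATUS LIKE 'Uptime';
-- Where MySQL writes its general/slow logs (FILE, TABLE, or both).
SHOW VARIABLES LIKE 'log_output';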
We were able to resolve this issue by upgrading the instances to MySQL 5.6.34.

AWS MySQL RDS becomes unresponsive time to time

We have two MySQL RDS instances (a master and a read replica). As usual, we write to the master and read from the slave.
The master server works fine, but we observed that the slave server becomes unresponsive from time to time.
Observations:
Monitoring Graphs
CPU utilization drops to 0
Increase in the number of connections
Write IOPS, read IOPS, queue depth, write throughput, write latency, and read latency all drop to 0
This can be resolved with a restart, but we are interested in finding the root cause. Basically, when this happens we can still log in at the mysql prompt, but we can't execute any queries. The AWS console shows the instance as healthy, and no errors are shown.
According to the graphs, there is no abnormal activity or increase in resource utilization just before this happens. Everything looks normal.
(The small climbs in the attached graphs are normal; they are in line with the business pattern. Historically the instance has survived much larger mountains.)
Please let me know if you have come across such a situation.
Thanks.
Note:
Instance Information
db.m4.xlarge
IOPS 2000
Size 50G
Basically, the instance is underutilized when the issue happens.
Note:
If we wait without restarting the instance, it gets restarted automatically with the following error:
MySQL restart initiated to address MySQL induced log backup issues. Note that as part of this resolution, a DB Snapshot will be performed after MySQL completes restarting.

Amazon RDS CPU spikes

Does anybody have an idea why we have these hourly spikes in CPU usage on our Amazon RDS database?
We don't have any crons running every hour, so it seems to be some internal RDS thing, because it happens exactly every hour.
Does RDS do some index updates or something every hour?
What's the best way to find out what is causing this?
We have a number of RDS instances running, from micros up to larges, and I don't see this pattern anywhere, so I suspect it is something in your code.
Probably the best option for figuring out what is going on is to be logged into the database at the top of the hour and monitor what is connecting and what it is doing. For MySQL, you can probably start with SHOW FULL PROCESSLIST. There are monitoring tools that will do this for you.
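If you can't be at a prompt exactly on the hour, a scheduled capture is one option. A hedged sketch using the MySQL event scheduler (the snapshot table and event names are made up, and event_scheduler must be ON, which on RDS is controlled via the parameter group):
-- Empty table with the same shape as the process list, plus a timestamp.
CREATE TABLE IF NOT EXISTS processlist_snapshots AS
SELECT NOW() AS captured_at, p.*
FROM information_schema.PROCESSLIST p
WHERE 1 = 0;
-- Snapshot all non-idle threads once a minute.
CREATE EVENT IF NOT EXISTS capture_processlist
ON SCHEDULE EVERY 1 MINUTE
DO
  INSERT INTO processlist_snapshots
  SELECT NOW(), p.*
  FROM information_schema.PROCESSLIST p
  WHERE p.command <> 'Sleep';
-- Later, review what was running just after the top of each hour.
SELECT * FROM processlist_snapshots
WHERE MINUTE(captured_at) < 5
ORDER BY captured_at;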
You can enable the slow and general logs on the MySQL server and analyze them using pt-query-digest.
There are also some commercial tools, like MEM and MONyog, which can be handy for monitoring SQL queries.
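For the general log, the toggles look like this (a sketch; on RDS these, too, are set through the parameter group, and the general log is extremely verbose, so enable it only for a short window):
-- Record every statement the server receives.
SET GLOBAL general_log = 'ON';
-- Write to files, the format log-analysis tools typically consume.
SET GLOBAL log_output = 'FILE';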

Amazon RDS (Mysql2::Error 110)

I've had a Rails application running in production for the past 6 months, with weekly deployments, without any issue.
Now, I've been having a recurring issue for about 3 weeks, and it seems to get worse every week.
When my app boots and reaches the point where it tries to connect to the DB, I get this error:
Can't connect to MySQL server on '***.amazonaws.com' (110) (Mysql2::Error)
AFAIK, this error tells me that I've reached MySQL's max connections limit.
From the configs, I should be able to open 296 connections. My app is set to run 7 instances, each with a database connection pool of 5, so it can't really exceed 70 connections, even when deploying a new instance.
I've never seen the connection count go above 20 in either the AWS RDS Console or the SHOW PROCESSLIST command.
I don't think it has anything to do with either Rails or my application server (Puma), since I can't connect through the MySQL command-line tool when the issue occurs.
Has anyone had a similar issue with MySQL on RDS or MySQL itself?
The database pool isn't per application, it's per process. If it's threaded or running multiple processes per instance, it could be using more connections than that. Have you tried restarting MySQL? It sounds like you have some hanging connections for whatever reason.
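A quick way to check whether you are actually near the connection ceiling (a sketch; run it while the problem is happening if you can get a session at all):
-- The configured limit and the high-water mark since the last restart.
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
-- Current connections grouped by user, to spot who is holding them.
SELECT user, COUNT(*) AS connections
FROM information_schema.PROCESSLIST
GROUP BY user
ORDER BY connections DESC;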
I've been getting these issues recently. Could it be related to the pending-restart change of parameter group on my RDS instance? I sure hope not. As I understand it, a pending change should have no effect on current performance.