Amazon RDS CPU utilization due to COUNT query - MySQL

I have published my website on Amazon EC2 (Singapore region) and I have used MySQL RDS medium instance for data storage in the same region.
In my case, most of the select queries have some COUNT functionality. These queries are showing very slow results. I have already created appropriate indexes on the table and I checked the EXPLAIN command to analyze these queries. It shows me that full table scans are necessary to get results.
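For reference, a check of the kind described above looks roughly like this (the table and column names are placeholders, not the actual schema from the question):
EXPLAIN SELECT COUNT(*) FROM orders WHERE status = 'active';
-- type = ALL and key = NULL in the EXPLAIN output indicate a full table scan;
-- a query resolved entirely from an index shows "Using index" in the Extra column.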
On my RDS medium instance, I have configured the custom parameter group with the following settings.
log_queries_not_using_indexes = true,
slow_query_log = true,
long_query_time = 2 sec,
max_connections = 303,
innodb_buffer_pool_size = {DBInstanceClassMemory*3/4}
Yesterday my CPU utilization went above 95% and my site crashed due to this. There was no major increase in traffic.
Also, I dumped the data on my local system, and tested one of the COUNT queries. While it takes about 1.5 seconds for it to run on RDS, it takes only about 400 milliseconds for it to run on my local system. The configuration on my local system (4GB RAM, Intel core 2 duo 2.8GHz) is:
max_connections = 100,
slow_query_log = true,
long_query_time = 2 sec,
innodb_buffer_pool_size = 72351744
So, what could be the reason for the spike in CPU utilization as well as the difference in performance times between RDS and my local system?
Thanks,

Depending on the table size: the RDS instance uses EBS to store the data, so if you're doing a table scan it's going to have to pull the data from EBS rather than from a locally cached, in-memory key, and then scan it. So you're likely seeing the added network lag between the RDS instance, where the CPU resides, and the EBS data on the SAN. When you run the same query on your local computer, the only lag is the disk head seek time.
Then there is the difference in CPU time: an m1.medium has less CPU time (and therefore less opportunity to scan the results) than the Core 2 Duo, based on Amazon's definition of EC2 compute units.
HTH - in general, I'd try to avoid COUNTs in your queries, as they are terribly inefficient (as you've seen) and can and will continue to cause nasty, undesired results when the DB is under real-time varying levels of load.
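One common way to avoid repeated COUNTs (not part of the answer above, just a sketch of the general idea with made-up table and column names) is to maintain a pre-computed counter and update it as rows change:
CREATE TABLE order_counts (
  status VARCHAR(20) PRIMARY KEY,
  cnt    BIGINT NOT NULL DEFAULT 0
);
-- Maintained by application code or triggers, e.g. after each insert:
UPDATE order_counts SET cnt = cnt + 1 WHERE status = 'active';
-- The expensive COUNT(*) then becomes a single-row lookup:
SELECT cnt FROM order_counts WHERE status = 'active';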
R

Related

Slow COUNT WHERE performance on RDS MySQL compared to MariaDB on a local server

I'm trying to figure out what to look at to try to understand why I'm seeing much slower performance of COUNT WHERE queries on an AWS RDS MySQL database compared to the same query on a MariaDB database running on a local CentOS server.
The queries look like:
SELECT COUNT(serial) FROM devices
WHERE device_family="foo"
AND serial > 1000
AND serial < 10000000;
On the local instance queries like this return in a small number of seconds even when there are 20M records or so for the device family. On RDS it's taking many minutes.
My DB experience is limited, and I'm wondering how to understand what's happening here.
The RDS instance is db.m5.xlarge, 4 vCPU, 16 GB RAM, Provisioned IOPS (SSD) 1000 IOPS. I revved the IOPS up to 10K and only saw modest improvements.
The data in the relevant table was migrated from the local server to RDS and is essentially the same: 150M records with a handful of fields, no relationships or foreign keys (it's currently the only table in the DB).
The indexes (SHOW INDEXES FROM ...) are consistent.
Not sure what else is relevant or where to go from here?
There are many reasons why there could be a discrepancy between your local and RDS instances. Besides running EXPLAIN on your query in both environments, you may consider adding the following index:
CREATE INDEX idx ON devices (device_family, serial);
This index, if used, would completely cover the WHERE clause and should speed up the query. You may also try swapping the order of the two columns in the index.
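A quick way to check whether the index is actually picked up, assuming the schema from the question:
EXPLAIN SELECT COUNT(serial) FROM devices
WHERE device_family = 'foo' AND serial > 1000 AND serial < 10000000;
-- You want to see key = idx, type = range, and "Using index" in Extra (a covering index scan).
-- The alternative column order mentioned above would be:
CREATE INDEX idx_serial_family ON devices (serial, device_family);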

AWS RDS MySQL performance

I am running MySQL 8 on AWS RDS. I have an InnoDB table with 260,000 rows in it; nothing extraordinary in data size.
My development server features 1GB RAM, 1vCPU, and my AWS RDS server is t3.small.
SELECT COUNT operations take too long (33 seconds on average) and my data tables in my Laravel project time out. What could be the problem?
select count('special_cargo_id') from special_cargos
33 seconds
Is special_cargo_id your PK, what is its type, and does it fit into innodb_buffer_pool_size?
Run:
select count(1) from special_cargos;
a few times. Does it run quickly after the first time? If it does, then the reason it slows down sometimes is because you are memory starved and other data pushes your PK on that table out of the innodb_buffer_pool. If it is always slow, the PK most likely never fits into the buffer pool.
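A rough way to check whether the table's primary key index fits in the buffer pool (a sketch; run it in the schema that holds special_cargos):
-- Size of the clustered index (row data) and secondary indexes, in bytes:
SELECT data_length, index_length
FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'special_cargos';
-- Compare against the configured buffer pool size:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';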
If you're trying to debug performance of your Database, RDS has a great built in tool for that.
With RDS Performance Insights you should be able to identify where the bottleneck is.

How to diagnose extremely slow AWS RDS MySQL Performance?

My DB has around 15 tables, each with 40 columns and about 10,000 rows.
Most of it with VARCHAR, some indexes and foreign keys.
Sometimes I need to reconstruct my database (design flaw, working on it), which takes about 40 seconds locally. Now I'm trying to do the same to an AWS RDS MySQL 5.7 instance, but it takes forever, something like 40-50 minutes. The last time I had to do this same process it took no more than 5 minutes, still way more than the local 40 seconds, but I'm happy with that.
My internet speed is at about 35 Mbps Download / 5 Mbps Upload.
I know it's not fast, but it's consistent, and it hasn't changed since my last rebuilt.
I enabled the general log, but all I can see are the INSERT queries and occasionally some "SELECT 1".
I do have some room for improvement in my code, but still, going from 00:00:40 to 00:50:00, it seems that there's something else going on.
Any ideas on how to diagnose and find the bottleneck?
Thanks
--
Additional relevant information:
It is a micro instance from AWS, and all of the relevant monitoring indicators are basically flat: CPU at 4%, Free Storage Space at 20,000 MB, Freeable Memory at 200 MB, Write IOPS at around 2.5. The server runs MySQL 5.7.25 with 1 vCPU, 1 GB of RAM and 20 GB of SSD. This is the same as 3 months ago, when I last rebuilt the database.
SHOW GLOBAL STATUS: https://pastebin.com/jSrAzYZP
SHOW GLOBAL VARIABLES: https://pastebin.com/YxD7dVhR
SHOW ENGINE INNODB STATUS: https://pastebin.com/r5wffB5t
SHOW PROCESS LIST: https://pastebin.com/kWwiyGwf
SELECT * FROM information_schema...: https://pastebin.com/eXGBmetP
I haven't made any big changes to the server configuration, except enabling logs, maxing out max_allowed_packet, and saving logs to a file.
In my backend I have a Flask app running, when it receives the API call, it takes a bunch of pickled objects and adds them all to the database (appending the Flask SQLAlchemy class to a list) and then running db.session.add_all(entries), trying to run a bulk operation. The code is the same, both for localhost and my remote server.
It does get slower on three specific tables, most of them with VARCHAR columns, but nothing different from my last inserts. It seems odd that the problem would be the data or the way the code is structured; at least it doesn't seem reasonable that this would turn 20 seconds (localhost) into 40 minutes (hosted server), especially when the rest of the tables work mostly the same.
Enable the slow log, set long_query_time=0, run your code, then put the resulting log through mysqldumpslow.
Establish which queries contribute most to slowness and take it from there.
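A minimal sketch of the variables involved; on RDS these normally live in the DB parameter group rather than being set with SET GLOBAL:
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0;   -- log every statement while reproducing the problem
-- Then run the workload and feed the slow log file to mysqldumpslow to rank queries by total time.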
Compare the config between your old server and your new one.
Also, are they the same version of MySQL? 5.6, 5.7 and 8.0 can produce very different execution plans (with 5.6 usually coming up with the sane one if they differ).
Rate Per Second = RPS
Suggestions to consider for your AWS RDS parameter group
thread_cache_size=24 # from 8 to reduce threads_created count
innodb_io_capacity=1900 # from 200 to enable more use of SSD IOPS capacity
read_rnd_buffer_size=128K # from 512K to reduce handler_read_rnd_next RPS of 21
query_cache_size=0 # from 1M since you have QC turned off with query_cache_type=OFF
Determine why com_flush is running 13 times per hour and get it stopped to avoid table open thrashing.
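The counters those suggestions are based on can be checked directly, e.g.:
SHOW GLOBAL STATUS LIKE 'Threads_created';
SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';
SHOW GLOBAL STATUS LIKE 'Com_flush';
SHOW GLOBAL STATUS LIKE 'Uptime';
-- Dividing each counter by Uptime gives the rate per second (RPS) referred to above.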
I found that after migrating to RDS, all my database indexes were gone! They weren't migrated along with the schema and data. Make sure your indexes are there.
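One way to verify this is to list every index in the migrated schema and compare it with the source (the schema name below is a placeholder):
SELECT table_name, index_name,
       GROUP_CONCAT(column_name ORDER BY seq_in_index) AS columns
FROM information_schema.statistics
WHERE table_schema = 'your_schema'
GROUP BY table_name, index_name;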
Also, MySQL query cache is OFF by default in RDS. This won't help the performance of your initial query, but it may speed things up in general.
You can set query_cache_type to 1 and define a value for query_cache_size. I also changed thread_cache_size from 8 to 24 and innodb_io_capacity from 200 to 1900; I don't know if that will help you.
Also, creating AWS DB parameter groups helped me a lot with configuring and tuning DB variables. You can read more here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

Query caching and taking the database in-memory in SQL Server, like MySQL

In MySQL there is a feature to cache a large chunk of the database in memory, using the MySQL configuration file (my.ini).
Database in RAM: InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and row data. The bigger you set this, the less disk I/O is needed to access data in tables. On a dedicated database server you may set this parameter up to 80% of the machine's physical memory. Do not set it too large, though, because competition for physical memory may cause paging in the operating system. Note that on 32-bit systems you might be limited to 2-3.5 GB of user-level memory per process, so do not set it too high.
innodb_buffer_pool_size=6000M
Query cache size in my.ini
The query cache is used to cache SELECT results and later return them without actually executing the same query again. Having the query cache enabled may result in significant speed improvements if you have a lot of identical queries and rarely changing tables. See the "Qcache_lowmem_prunes" status variable to check whether the current value is high enough for your load.
query_cache_type = 1
query_cache_size = 80M
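After enabling it, the status counters mentioned above show whether the cache is actually helping and whether query_cache_size is big enough:
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';  -- a steadily growing value means the cache is too small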
These drastically boost DB performance for a medium-scale database.
Do we have similar features in SQL Server?

Improving MySQL I/O Performance (Hardware & Partitioning)

I need to improve I/O performance for my database. I'm using the "2xlarge" HW described below & considering upgrading to the "4xlarge" HW (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages anywhere from 0.5 to 2.0 (though I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in the queue (I suppose this means waiting to access the disk) is far too high. I've configured MySQL to flush to disk every 1 second instead of on every write, which helps, but not enough. Profiling shows there are a handful of tables that are the culprits for most of the load (both read and write operations). Queries are already indexed and optimized, but not partitioned. Average MySQL states are: Sending data ~45%, Statistics ~20%, Updating ~15%, Sorting result ~8%.
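The "flush to disk every 1 second" behaviour described above is presumably innodb_flush_log_at_trx_commit (the question doesn't name the setting, so this is an assumption):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write the log at commit, flush to disk roughly once per second
-- The default of 1 flushes the log on every commit, which is safest but generates the most I/O.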
Questions:
How much performance will I get by upgrading HW?
Same question, but if I partition the high-load tables?
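For the second question, a hypothetical sketch of range-partitioning one of the high-load tables (table and column names are made up; MySQL also requires the partitioning column to be part of every unique key on the table):
ALTER TABLE events
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
  PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
-- Partition pruning only helps if the hot queries filter on the partitioning column.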
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 Gb Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
8 vCPU
26 ECU
68.4 Gb Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing the tmpdir to it.
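A quick check of where temporary tables currently go and how often they spill to disk (moving tmpdir itself is a my.cnf change and needs a restart):
SHOW GLOBAL VARIABLES LIKE 'tmpdir';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
-- A high and growing Created_tmp_disk_tables count is what a tmpfs-backed tmpdir helps with.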
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My Setup is:
-Dual 2.66 GHz CPU, 8-core box with a 6-drive RAID-1E array - 1.3TB.
-InnoDB logs on separate SSD drives.
-tmpdir is on a 2GB tmpfs partition.
-32GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (This is a slave machine - 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current Queries per second avg: 5185.650
I am using Percona Server, which is quite a bit faster than other MySQL builds in my testing.