Slow performance for multiple requests in RDS - MySQL

I have started using AWS RDS MySQL for my use case.
I am using RDS to store user-uploaded server information from a website.
As a test, I uploaded 1 file with 500 records; it took 20 seconds. Testing again, I uploaded 9 files simultaneously with a total of 500 records; that took 55 seconds.
I don't know what the possible reason for this is. I am using a db.t3.large instance.
Which RDS metrics or Performance Insights metrics should I be looking at?
Is this caused by the lower baseline I/O performance of the gp2 volume?

Check the following metrics:
Write IOPS (Count/Second)
Write Latency (Milliseconds)
Network Receive Throughput
Are you running the operation over the internet or within the VPC? If you are going over the internet, network latency will add to every request.
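If it helps, the metrics above can also be pulled programmatically from CloudWatch. Below is a minimal sketch using boto3; the region and the instance identifier "my-db-instance" are placeholders for your own values:

```python
import boto3
from datetime import datetime, timedelta, timezone

# RDS publishes its metrics under the AWS/RDS CloudWatch namespace.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is an assumption

def rds_metric_average(metric_name, instance_id, hours=1):
    """Fetch 1-minute averages for an RDS metric over the last `hours` hours."""
    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName=metric_name,  # e.g. "WriteIOPS", "WriteLatency", "NetworkReceiveThroughput"
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": instance_id}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=60,
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda d: d["Timestamp"])

# "my-db-instance" is a placeholder for your DB instance identifier.
for point in rds_metric_average("WriteLatency", "my-db-instance"):
    print(point["Timestamp"], point["Average"])
```

If WriteLatency climbs while WriteIOPS plateaus, that pattern is consistent with exhausting the gp2 volume's baseline IOPS.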

Related

Network performance issues in AWS RDS with bulk insert

I need to load a large CSV file (16 GB in total) into AWS RDS (MariaDB).
I tried loading it via LOAD DATA LOCAL INFILE.
It runs very fast in my local environment, but very slowly against RDS (about 0.01 MB/s).
Of course, I loaded the 16 GB file in separate parts, the connection to the DB was released normally in the code I wrote, and there was nothing unusual when checking the current processes with SHOW PROCESSLIST.
I would like to know what to check and fix in this situation to improve network performance.
Thank you.
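For reference, a minimal sketch of how one chunk of such a load might be issued from Python with pymysql; the host, credentials, file path, and table name are placeholders, and LOAD DATA LOCAL INFILE requires local_infile to be enabled on both the client connection and the RDS parameter group:

```python
import pymysql

# Host, credentials, database, file path, and table name are placeholders.
conn = pymysql.connect(
    host="mydb.xxxxxx.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="mydb",
    local_infile=True,  # client side; the server also needs local_infile=1
)
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            LOAD DATA LOCAL INFILE '/data/chunk_01.csv'
            INTO TABLE big_table
            FIELDS TERMINATED BY ','
            LINES TERMINATED BY '\\n'
            IGNORE 1 LINES
            """
        )  # IGNORE 1 LINES skips a header row, if the chunk has one
    conn.commit()
finally:
    conn.close()
```

If the throughput stays around 0.01 MB/s even for a single small chunk, the bottleneck is almost certainly the network path to RDS rather than the statement itself.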

MySQL / Web Server Bottleneck

I’m trying to determine a MySQL / Web server bottleneck.
I have three servers: a web server running Nginx, a remote MySQL server with my WordPress DB, and another remote MySQL server storing our data.
The bottleneck I'm trying to find is between the second MySQL server (the one storing our data) and my web server.
We have a page with three DataTables on it (three separate queries). It loads very slowly, if it loads at all. Occasionally I get a gateway timeout error.
I don't think the queries themselves are the issue; from DataGrip, all three average between 200-500 ms. Currently the tables aren't indexed, as I've been told the plugin can't take advantage of indexes, but I might try anyway.
Hardware and Setup:
My MySQL server is an AWS r6g.large: 2 cores and 16 GB RAM, SSD with 150 IOPS and 128 MB/s throughput. innodb_page_size is 32 KB, innodb_buffer_pool_size is 11000M, innodb_buffer_pool_instances is 10, and innodb_log_file_size is 1G.
The web server is an AWS c6g.xlarge: 4 cores and 8 GB RAM, SSD with 150 IOPS and 128 MB/s throughput. It uses PHP-FPM and OPcache.
I've tried monitoring with top on both servers, but to be honest I'm not sure I have the knowledge to properly interpret the information.
I'd really like to determine whether it's hardware or software, and if it's hardware, whether there is a way to isolate which part. I have no problem upgrading hardware if that's actually the problem.
I'm not sure if this is allowed on Stack, but I figured it might be easier to see what's going on if I recorded my screen with top running on both servers, so I added a video to my public Google Drive. The video shows both my MySQL server (on top) and my web server (Nginx, on the bottom). I loaded the page (at the 3-second mark in the video) and recorded the outcome. The video is 1:05 long, which is how long it took for the last table to appear. It was recorded while the site was in maintenance mode, so no other IP / traffic could reach either server.
My google drive link:
https://drive.google.com/drive/folders/1NtdE1Z4875i1Xx2Wy2EXGgknt9yuY1IN?usp=sharing
Hopefully someone can help.
Aimee
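One way to separate hardware from software here is to time the same query from different places. Below is a rough sketch (the connection details are placeholders, and the sample query should be replaced with one of the three DataTables queries): run it once from the web server and once from a shell on the MySQL host itself. If the server-local timing matches DataGrip's 200-500 ms but the web-server timing balloons, the problem sits between the two machines rather than in MySQL.

```python
import time
import pymysql

# Connection details are placeholders; run this once on the web server
# and once on the MySQL host, then compare the two sets of numbers.
conn = pymysql.connect(host="db-host", user="user", password="pass", database="mydb")
QUERY = "SELECT COUNT(*) FROM information_schema.tables"  # replace with a DataTables query

with conn.cursor() as cur:
    # Trivial query: approximates pure network + protocol round-trip cost.
    t0 = time.perf_counter()
    cur.execute("SELECT 1")
    cur.fetchall()
    print(f"round trip:    {(time.perf_counter() - t0) * 1000:.1f} ms")

    # Real query: execution time plus transfer of the full result set.
    t0 = time.perf_counter()
    cur.execute(QUERY)
    rows = cur.fetchall()
    print(f"query + fetch: {(time.perf_counter() - t0) * 1000:.1f} ms, {len(rows)} rows")

conn.close()
```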

How do I reduce the latency to an AWS RDS database?

I have a Laravel application running on AWS Elastic Beanstalk, and a MySQL RDS database. On localhost, a query that takes 0.002 seconds takes 0.2 seconds on the live server; that's a 100x performance hit. This becomes incredibly problematic when I have to load several data points: a simple table that takes half a second to load on localhost now takes 13 seconds live.
I figured out the problem is the latency between the instances, so I created an RDS database coupled directly to the Elastic Beanstalk application, and voilà! Even faster than localhost. However, having your database tied to your EC2 instance is awful and not scalable.
I have tried everything: reducing the number of queries, trying different queries, even bypassing Eloquent and using raw MySQL. None of that solves the problem; the queries are not the problem, the latency is!
I am sure I am doing something wrong. The two instances are in the same time zone. I tried higher-tier databases and the latency does get reduced, but not significantly; Amazon Aurora reduced it from 13 seconds to 3 seconds, which is still unacceptable. So the question is: how do I configure my Elastic Beanstalk + RDS so that it performs well, at least as well as localhost? How do I reduce that latency?
Thanks!
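For what it's worth, latency like this usually hurts most when a page issues many small queries, because every query pays the full EC2-to-RDS round trip: 500 queries at 0.2 s each is 100 s of pure waiting. A sketch of the effect, assuming hypothetical connection details and a hypothetical `users` table:

```python
import time
import pymysql

# Connection details and the `users` table are hypothetical.
conn = pymysql.connect(host="mydb.xxxxxx.rds.amazonaws.com", user="user",
                       password="pass", database="app")
ids = list(range(1, 201))

with conn.cursor() as cur:
    # N+1 pattern: one network round trip per row.
    t0 = time.perf_counter()
    for user_id in ids:
        cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        cur.fetchall()
    print(f"200 single queries: {time.perf_counter() - t0:.2f} s")

    # Batched: the round trip is paid once for all 200 rows.
    t0 = time.perf_counter()
    placeholders = ", ".join(["%s"] * len(ids))
    cur.execute(f"SELECT id, name FROM users WHERE id IN ({placeholders})", ids)
    cur.fetchall()
    print(f"one batched query:  {time.perf_counter() - t0:.2f} s")

conn.close()
```

In Laravel terms, this is the difference between lazily loading a relation per row and eager loading it with `with()`.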

Slow HTTPS requests after a sudden spike in MySQL queries

I have a WordPress website running on a VPS that handles about 150 MySQL queries per second.
Occasionally, when we see a spike in traffic to about 200 MySQL queries per second, HTTPS requests to the site become extremely slow.
The site loads decently over HTTP, but over HTTPS it takes 20+ seconds.
Over the course of an hour after the spike, load times gradually improve and then return to normal.
The server load and memory look fine. There is only a spike in MySQL queries, firewall traffic, and eth0 traffic. There are no slow MySQL queries.
Any help would be appreciated.
Thank You
I think your answer is in the "disk latency" and "disk utilization" charts.
MySQL works well under small loads, when it can cache all the data it needs. But when your results or queries get too big, or you issue too many of them, it starts doing a lot of disk I/O. Disk I/O is what lets it handle huge loads and very big data, but the moment you exceed the memory allocated to MySQL, it has to read and write everything through the disk.
This would not be so bad if you were running on a local SSD. But from the device name I can see that you are running on an EBS volume, which is not a real local disk. It is a network-attached drive, so all of that I/O traffic also loads your network connection.
You have several options:
1.) Install mysqltuner, let the server operate for some time, then run it and see what it suggests. My guess is that it will suggest increasing your MySQL memory pool, decreasing the number of parallel connections, or restructuring your queries (a quick buffer-pool check is sketched after this list).
2.) Use an EC2 instance type with actual local storage (e.g., m3 or r3) and write to the local SSD. You can RAID several SSD drives to make it even faster.
3.) Use an EBS-optimized instance (dedicated EBS network bandwidth) and the correct volume type (some EBS volume types have I/O credits, similar to the CPU credits of t-type instances, and when you run out of them, your operations slow to a crawl).
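As a quick check on point 1, you can compare logical buffer pool reads with reads that actually had to hit disk. A minimal sketch, assuming placeholder connection details; a hit rate well below ~99% under load suggests the buffer pool is too small for the working set:

```python
import pymysql

# Connection details are placeholders.
conn = pymysql.connect(host="localhost", user="user", password="pass")

with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
    status = {name: int(value) for name, value in cur.fetchall()}

# Innodb_buffer_pool_read_requests: logical reads (memory or disk).
# Innodb_buffer_pool_reads: reads that missed the buffer pool and hit disk.
requests = max(status["Innodb_buffer_pool_read_requests"], 1)
misses = status["Innodb_buffer_pool_reads"]
print(f"buffer pool hit rate: {100 * (1 - misses / requests):.2f}%")

conn.close()
```

mysqltuner performs a similar calculation among its checks, so this is just a faster first look.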

Is it normal for MySQL to be slow when connecting to a remote host?

Is it normal for MySQL to be slow when connecting to a remote host, or should it have the same performance as connecting to a local host?
I noticed a small performance difference when I tried to connect to a remote host, so I'm wondering if that's normal.
Assuming the remote machine is equal in processing power to your local machine, the primary difference in speed should be network latency: the round-trip time for network traffic. If you are sending huge amounts of data (e.g., reading or writing large BLOBs), then network bandwidth can come into play as well and slow things down. But in general, the round-trip cost is often the biggest factor. If you are executing a large number of small queries, this cost difference can be fairly significant when comparing a local connection to a remote one.
Out of curiosity, I just ran a test I had already built that simply runs a bunch of UPDATE queries. It does not use MySQL but another client/server DBMS, so the results would likely differ, but the idea is the same and I would imagine the relative difference would be similar.
Local host (using IPC comm): 5.3 seconds
Remote host (UDP comm): 20.2 seconds
This involved about 50,000 operations. The remote host was 2 hops away on the LAN, with (if I measured it correctly) a round-trip latency of approximately 0.25 ms for a packet with a 1-byte payload. Note that 50,000 round trips at 0.25 ms each is about 12.5 seconds, which accounts for most of the roughly 15-second difference above.
It depends entirely on the network connection between the program and the MySQL database server. A slow network will make the database appear slow.
I'd expect a "small performance difference" (as you described it) to be normal for a remote connection.
By default, the MySQL server performs a reverse DNS lookup the first time a given client connects, then stores the result in its cache. This can cause a performance hit depending on the speed of the reverse DNS resolution. (The skip_name_resolve server option disables these lookups.)
It can depend on how many MySQL queries you're doing: Slow MySQL Remote Connection
You can optimize your code by converting many small queries into larger ones.
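To illustrate that last point, here is a sketch of collapsing per-row INSERTs into batched ones with pymysql; the table and connection details are hypothetical. pymysql's `executemany` rewrites a simple `INSERT ... VALUES` into multi-row statements, so thousands of rows cost a handful of round trips instead of one per row:

```python
import pymysql

# Table and connection details are hypothetical.
conn = pymysql.connect(host="db-host", user="user", password="pass", database="mydb")
rows = [(f"item-{i}", i) for i in range(10_000)]

with conn.cursor() as cur:
    # Slow pattern: one statement per row, each paying a full round trip.
    # for name, qty in rows:
    #     cur.execute("INSERT INTO items (name, qty) VALUES (%s, %s)", (name, qty))

    # Batched: pymysql sends multi-row INSERT statements instead,
    # so the round-trip cost is paid per batch rather than per row.
    cur.executemany("INSERT INTO items (name, qty) VALUES (%s, %s)", rows)

conn.commit()
conn.close()
```

Very large batches are still bounded by the server's max_allowed_packet, so chunking into a few thousand rows per call is a common compromise.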