MySQL / Web Server Bottleneck

I’m trying to determine a MySQL / Web server bottleneck.
I have three servers. A Web server running Nginx, a remote MySQL server with my Wordpress DB and another remote MySQL server storing our data.
The bottleneck I’m trying to find is between my second MySQL server storing our data and my Web server.
We have a page with three DataTables on it (three separate queries). It's loading very slowly, if it loads at all. Occasionally I get a gateway timeout error.
I don't think the queries themselves are the issue; from DataGrip all three average between 200-500 ms. Currently the tables aren't indexed, as I've been told the plugin cannot take advantage of indexes, but I might try adding indexes anyway.
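If I do try an index, I assume it would look something like this (placeholder database, table, and column names, not my real schema):

    # Hypothetical names; adjust to the actual reporting schema.
    mysql -h data-db.example.internal -u app -p \
      -e "ALTER TABLE report_rows ADD INDEX idx_created_at (created_at);" reporting
    # Then EXPLAIN the DataTables query to confirm the new index is actually used.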
Hardware and Setup:
My MySQL server is an AWS r6g.large: 2 cores and 16 GB RAM, with an SSD provisioned at 150 IOPS and 128 MB/s throughput. innodb_page_size is 32K, innodb_buffer_pool_size is 11000M, innodb_buffer_pool_instances is 10, and innodb_log_file_size is 1G.
The web server is an AWS c6g.xlarge: 4 cores and 8 GB RAM, with an SSD provisioned at 150 IOPS and 128 MB/s throughput. It uses PHP-FPM and OPcache.
I've tried monitoring with top on both servers, but to be honest I'm not sure I have the knowledge to properly interpret the information.
I'd really like to determine whether it's hardware or software, and if it's hardware, whether there is a way to isolate which component. I have no problem increasing hardware if that's actually the problem.
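In case it helps, this is the kind of thing I can run beyond top (a sketch assuming standard Linux tools and the MySQL client on each box):

    # On the MySQL server: is the disk (150 IOPS) or the CPU the limit?
    iostat -x 5        # watch %util and await per device
    vmstat 5           # r = runnable tasks, b = blocked on I/O, si/so = swapping
    # Live MySQL counters, refreshed every 5 seconds, shown as deltas:
    mysqladmin -u root -p -r -i 5 extended-status | grep -E 'Threads_running|Innodb_data_reads'
    # On the web server: are connections queuing while PHP-FPM workers are busy?
    ss -s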
I'm not sure if this is allowed on Stack, but I figured it might be easier to see what's going on if I recorded my screen with top running on both servers, so I added a video to my public Google Drive. The video shows both my MySQL server (top) and my web server (Nginx, bottom). I loaded the page (at the 3-second mark in the video) and recorded the outcome. The video is 1:05 long, which is how long it took for the last table to appear. It was recorded while my site was in maintenance mode, so no other IP / traffic could reach either server.
My google drive link:
https://drive.google.com/drive/folders/1NtdE1Z4875i1Xx2Wy2EXGgknt9yuY1IN?usp=sharing
Hopefully someone can help.
Aimee

Related

Applications down due to heavy MySQL server load

We have a 2 GB DigitalOcean server dedicated to MySQL, serving two other PHP servers. We are running Percona Server for MySQL 5.6 on it. We have configured MySQL replication, and that configuration is working fine.
Our issue is that our site monitoring tools sometimes report that some of the URLs hosted behind this server are down (maybe once every week or two). When I check, I can see that the MySQL master server load is very high (perhaps 35-40), so MySQL is not responding. At that point I usually restart the MySQL service; the restart brings the server load back to normal and the sites start working again.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal, and some custom applications).
Here are my questions,
Why does the server load go back down on its own after a spike happens?
Is there any way to see which database is causing the issue, so that I can identify the application as well?
How can I identify the root cause of this issue?
Depending upon your working dataset, a 2 GB server backing 20-25 PHP applications (WordPress, Drupal, and some custom applications) could itself be the issue.
For example, if you have a 1.4 GB buffer pool (assuming all tables are InnoDB) and 10 GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
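A quick way to check that ratio yourself (a sketch, run with the MySQL client on the database server):

    # Configured InnoDB buffer pool, in MB:
    mysql -e "SELECT @@innodb_buffer_pool_size/1024/1024 AS buffer_pool_mb;"
    # Total InnoDB data + index size across all schemas, in MB:
    mysql -e "SELECT ROUND(SUM(data_length+index_length)/1024/1024) AS data_mb
              FROM information_schema.tables WHERE engine='InnoDB';"
    # If data_mb is several times buffer_pool_mb, the applications are competing
    # for buffer pool pages and the spill-over becomes disk I/O.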
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something you should concern yourself with when viewed in isolation.
"The use of the uninterruptible state has since grown in the Linux kernel, and nowadays includes uninterruptible lock primitives. If the load average is a measure of demand in terms of running and waiting threads (and not strictly threads wanting hardware resources), then they are still working the way we want them to."
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
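In other words, a high load average can be CPU demand, I/O wait, or both. A rough way to see which during a spike (a sketch using standard tools):

    uptime                        # the 1/5/15-minute load averages for context
    # Count runnable (R) vs uninterruptible-sleep (D) processes right now:
    ps -eo state= | sort | uniq -c
    # Many "D" entries point at disk or network I/O rather than CPU; "R" counts
    # near or above the core count point at CPU saturation.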
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM should give you the insight you need to find it (it also works with Oracle MySQL, MariaDB, Aurora, etc.). You can try a demo to see the insights you can gain: https://pmmdemo.percona.com. The software is open source and free to use.
You can look in QAN to find the most expensive queries, whilst looking at Prometheus data to give an insight into the host itself. There are some recommendations to get the most from PMM, depending upon your flavour of MySQL.
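If you want something lighter-weight than full PMM while you investigate, a sketch of the same idea using the built-in slow query log (standard server variables; the log path is whatever slow_query_log_file reports, and pt-query-digest comes from Percona Toolkit):

    # Enable the slow query log at runtime (needs the SUPER privilege):
    mysql -e "SET GLOBAL slow_query_log = ON; SET GLOBAL long_query_time = 1;"
    mysql -e "SHOW VARIABLES LIKE 'slow_query_log_file';"
    # Summarize the heaviest queries; adjust the path to what the variable shows:
    pt-query-digest /var/lib/mysql/HOSTNAME-slow.log | less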

Slow https requests after sudden spike in mysql queries

I have a WordPress website running on a VPS, handling about 150 MySQL queries per second.
Occasionally, when we see a spike in traffic to about 200 MySQL queries per second, HTTPS requests to the site become extremely slow.
The site loads decently over HTTP, but over HTTPS it takes 20+ seconds.
Gradually, over a period of about an hour after the spike, the load times get better and then everything returns to normal.
The server load and memory look fine. There is only a spike in MySQL queries, firewall traffic, and eth0 requests. There are no MySQL slow queries.
Any help would be appreciated.
Thank You
I think your answer is in "disk latency" and "disk utilization" charts.
MySQL works well under small loads, when it can cache all the data it needs. But when your result sets or queries get too big, or you issue too many of them, it starts doing many disk I/O operations. Disk access lets you handle huge loads and very big data, but the moment you exceed the memory allocated to MySQL, it has to read and write everything to disk.
This would not be so bad if you were running on a local SSD drive. But from the device name I can see that you are running on an EBS volume, which is not a real local disk; it is a networked drive, so all of that traffic also loads your network connection.
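To confirm that from the instance itself, a minimal sketch (iostat comes with the sysstat package):

    iostat -x 5
    # On the EBS device: await / w_await climbing into the tens or hundreds of ms,
    # or %util pinned near 100%, means the volume (or its I/O credit balance) is
    # the bottleneck rather than CPU or RAM.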
You have several options:
1.) Install mysqltuner, let the server operate for some time, then run it and see what it suggests (see the sketch after this list). My guess is that it will suggest increasing your MySQL memory pool, decreasing the number of parallel connections, or restructuring your queries.
2.) Use an EC2 instance type with actual local storage (e.g. m3 or r3) and write to the local SSD. You can RAID several SSD drives together to make it even faster.
3.) Use an EBS-optimized instance (dedicated EBS network bandwidth) and the correct volume type (some EBS volume types have I/O credits, similar to CPU credits for t-type instances, and when you run out of them your operations slow to a crawl).
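A sketch of option 1 in practice (mysqltuner is a single Perl script; the URL below is the project's published download shortcut):

    wget http://mysqltuner.pl/ -O mysqltuner.pl
    perl mysqltuner.pl      # prompts for MySQL admin credentials if it can't log in
    # Run it only after the server has been up under normal traffic for a while,
    # so the counters it reads reflect the real workload.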

How to configure Amazon EC2 t1.micro instance for Buddypress

We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access the site at the same time it goes down because of a RAM issue: httpd (Apache) consumes the maximum memory. How do we overcome this? Is there any configuration we need to change in httpd.conf, or any network / traffic limiting tool we need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number. You might try 4. When this number is low, you also won't have many simultaneous clients connecting from Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
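A minimal sketch of that change, assuming Apache 2.4 with the prefork MPM (on Apache 2.2 the equivalent directive is MaxClients); the file path and surrounding values are illustrative, so adjust for your distribution:

    cat <<'EOF' | sudo tee /etc/httpd/conf.d/lowmem-mpm.conf
    <IfModule mpm_prefork_module>
        StartServers           1
        MinSpareServers        1
        MaxSpareServers        2
        MaxRequestWorkers      4
        MaxConnectionsPerChild 500
    </IfModule>
    EOF
    sudo service httpd restart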
Then you should probably look at MySQL's my.cnf file and see what you can tune there.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512 MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find the memory_limit setting in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M. That will make each PHP instance use less RAM. But if your WordPress installation is complex (lots of plugins, a fancy theme), it may make some pages fail to load; WordPress will report the problem as memory running out. http://php.net/memory-limit
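A sketch of that edit (the php.ini path varies by install; php --ini shows which file is loaded):

    sudo sed -i 's/^memory_limit = .*/memory_limit = 64M/' /etc/php.ini
    sudo service httpd restart    # mod_php re-reads php.ini when Apache restarts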
Then, jump into MySQL's my.cnf. The standard MySQL install ships a file called my-small.cnf (my-small.ini on Windows), which contains configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousand rows, not hundreds of thousands. Save your old config, copy the contents of the small example over it, and restart your MySQL server.
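As a sketch, on a typical Linux install that swap looks something like this (file names and locations vary by MySQL version and distribution):

    sudo cp /etc/my.cnf /etc/my.cnf.bak
    sudo cp /usr/share/mysql/my-small.cnf /etc/my.cnf
    sudo service mysqld restart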
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.

Using 2 servers for MySQL

I have 2 servers for my website:

Web Server - for sending dynamic content, mostly created with PHP. A lot of RAM and a fast processor, only a few GB of hard drive space.

File Server - for sending static content: images, videos, etc. A few TB of hard drive space, not as much RAM, and a slower processor.
I want to use the speed of the web server but the space of the file server. However, I've heard the overhead of NFS will make it so slow that it won't matter...
I will be using MySQL, and I want to know how I should set up the database so I can keep the data on the file server but have the queries performed and processed by the web server.
The advice you received is correct in my experience: running mysqld on one box and using a remote server via NFS for file storage is not very fast (if the remote storage were a SAN, that would be a different matter).
You can reduce the number of times your database is hit and leverage the RAM on the web server by caching on that tier. Look into introducing something like memcached to help with the most expensive MySQL operations.
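A sketch of getting memcached in place on the web server (Debian/Ubuntu package names assumed); WordPress can then use it via a persistent object cache plugin, or custom code can wrap the most expensive queries with the Memcached PHP class:

    sudo apt-get install memcached php-memcached
    # Verify it is answering:
    printf 'stats\nquit\n' | nc localhost 11211 | head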
If some of your tables are small but used frequently, you could consider running a second instance of MySQL on your web server just for those tables. Keep in mind, though, that you will have two separate points of database failure that need to be managed (appropriate backups, security updates, etc.).

Tomcat and MySql on VPS

I am doing some experimentation with a VPS before moving my application from private Tomcat hosting to the cloud. It is a read-intensive app built on Struts 2 + Spring + Hibernate + MySQL. It's a moderately popular app in India with 1,500 visitors and 10,000 pageviews per day. I have some basic questions about choosing a server configuration.
1) Would 256 MB of RAM be enough to run both Tomcat and MySQL? I won't be running anything else other than SSH - no Apache, FTP, etc. My current heap size is 190 MB. Can I still set the heap size to 190 MB with 256 MB of RAM? What are the pros and cons?
2) Is it better to have two 256 MB servers, one with Tomcat and one with MySQL, or one 512 MB server running both?
I am open to any suggestions. Thanks!
1)
I think it could be done; I've seen a similarly sized app running on a 256 MB Linux VPS.
However, you're leaving very little memory for MySQL, which will force it to go to disk often. It could be quite slow. (There is a sketch after point 2 showing one way to cap the Tomcat heap so MySQL keeps some room.)
2)
One server is better than two. You have less to configure and you don't pay the OS + virtual machine container overhead twice. Also, your app server and your database may not use equal amounts of memory, so being on separate machines could be an inefficient use of memory.
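On point 1, a minimal sketch of capping Tomcat's heap on a 256 MB box (the setenv.sh location assumes a standard Tomcat layout; the values are illustrative):

    cat <<'EOF' | sudo tee /usr/local/tomcat/bin/setenv.sh
    # Sourced by catalina.sh at startup; keep the heap well under total RAM.
    CATALINA_OPTS="-Xms96m -Xmx160m"
    EOF
    sudo chmod +x /usr/local/tomcat/bin/setenv.sh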
I don't believe a 256 MB VPS can effectively run a Java web app. You need at least a 512 MB VPS, and I wouldn't consider splitting until you go past a 1 GB container. When you split, you have the overhead of the OS running twice. One advantage of splitting, though, is that you may have more burstable capacity: if the two VPSes are on separate hardware and you get a rush of traffic, you may be able to use more CPU cycles than a single container on a single host could. This depends on the load of the other containers, of course, and on your VPS provider's policy on bursting.