Error establishing connection when multiple users are using the site - MySQL

We have an ASP.NET application deployed internally that is used by almost 500 users. At times, when many users are on the site, we get a connection error.
We have tried several options, such as connection pooling and optimizing our code, but we still don't know whether this has something to do with the server. Right now we are on shared hosting (with a dedicated 1.5 GB of RAM). Do you think upgrading the server will resolve this issue?
Also, does database performance depend on RAM and the number of cores?
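One way to check whether the connection limit itself is what is being hit, before paying for an upgrade, is to compare Threads_connected against max_connections during peak hours. A minimal sketch (it assumes Node with the mysql2 driver and placeholder credentials; the two statements work the same from any MySQL client, including ADO.NET):

```typescript
import mysql, { RowDataPacket } from 'mysql2/promise';

// Compare the server's connection limit with the number of connections
// actually in use while the errors are happening.
async function checkConnectionHeadroom(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'db.example.internal', // placeholder: your MySQL host
    user: 'monitor',
    password: 'secret',
  });

  const [limit] = await conn.query<RowDataPacket[]>(
    "SHOW VARIABLES LIKE 'max_connections'",
  );
  const [inUse] = await conn.query<RowDataPacket[]>(
    "SHOW STATUS LIKE 'Threads_connected'",
  );

  console.log(`max_connections:   ${limit[0].Value}`);
  console.log(`threads_connected: ${inUse[0].Value}`);
  await conn.end();
}

checkConnectionHeadroom().catch(console.error);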
Thanks!

Related

AWS: Too many connections

I have an RDS instance hosting a MySQL database. The instance size is db.t2.micro.
I also have an ExpressJS backend connecting to the MySQL RDS instance via a connection pool.
Additionally, I have a mobile app (the client) consuming the ExpressJS API.
The issue I'm facing is that, whether via the mobile app or via Postman, there are times when I get a 'Too many connections' error and several requests fail.
On the RDS instance's current activity I sometimes see 65 connections, showing it is reaching the limit. What I need clarity on is:
When 200 mobile app instances connect to the API, and through it to the RDS instance, does that register as 200 connections or as one connection from ExpressJS?
Is it normal to be reaching the RDS instance's 65-connection limit?
Is this just a matter of my using the db.t2.micro instance size, which is not recommended for production? Will upgrading the instance size resolve this issue?
Is there something I'm doing wrong with my requests?
Thank you; your feedback is appreciated.
If your app creates a connection pool of 100, that is the number of database connections it will try to open, and it must be lower than your MySQL connection limit (max_connections).
Typically, connection pools open all of their connections up front so they are ready when a client calls the HTTP API. If few clients are using the API at a given moment, most of those connections may not be running any SQL queries, but they are nevertheless still connected to the database.
It's sort of like when you ssh to a remote Linux server and just sit at the shell prompt for a while before running any command: you're still connected.
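To make that concrete, here is a minimal pool sketch (assuming a Node/ExpressJS backend with the mysql2 driver, since the question's pool code isn't shown; the endpoint and credentials are placeholders):

```typescript
import mysql from 'mysql2/promise';

// With db.t2.micro's max_connections around 66, the pool limit multiplied
// by the number of backend processes must stay safely below that number.
const pool = mysql.createPool({
  host: 'my-db.xxxx.rds.amazonaws.com', // placeholder RDS endpoint
  user: 'app',
  password: 'secret',
  database: 'appdb',
  connectionLimit: 10,      // at most 10 connections from this process
  waitForConnections: true, // extra requests wait in a queue...
  queueLimit: 0,            // ...of unbounded length
});

// Every API request borrows a connection and returns it to the pool.
// 200 mobile clients therefore share these 10 connections; they do not
// open 200 of their own.
export async function getUser(id: number) {
  const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
  return rows;
}
```

This also bears on the first numbered question: the RDS instance sees at most connectionLimit connections from the ExpressJS process, not one connection per mobile client.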
You asked whether a db.t2.micro instance is recommended for production. I would agree that it is not. It's tempting to use the smallest instance possible to save money, but in my opinion a db.t2.micro is too small for anything but light testing.
In fact, I would not use any t2 instance for production, regardless of size. The t2 type uses "burstable" performance, which means it can provide only brief periods of good performance. Once the instance depletes its performance credits, they recharge slowly, and while they recharge the instance's performance is very low. That is fine for testing, but not for production, if you expect consistent performance at all times.
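If you want to confirm credit exhaustion rather than guess, t2 instances publish a CPUCreditBalance metric to CloudWatch. A sketch using the AWS SDK for JavaScript v3 (the region and instance identifier are placeholders):

```typescript
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from '@aws-sdk/client-cloudwatch';

// Pull the last 6 hours of CPU credit balance for the RDS instance.
// A balance stuck near zero while the errors occur points at throttling.
async function main(): Promise<void> {
  const cw = new CloudWatchClient({ region: 'us-east-1' }); // placeholder region

  const res = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: 'AWS/RDS',
      MetricName: 'CPUCreditBalance',
      Dimensions: [{ Name: 'DBInstanceIdentifier', Value: 'my-db' }], // placeholder
      StartTime: new Date(Date.now() - 6 * 60 * 60 * 1000),
      EndTime: new Date(),
      Period: 300, // one datapoint per 5 minutes
      Statistics: ['Average'],
    }),
  );

  for (const p of res.Datapoints ?? []) {
    console.log(p.Timestamp?.toISOString(), p.Average);
  }
}

main().catch(console.error);
```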

MySQL / Web Server Bottleneck

I’m trying to determine a MySQL / Web server bottleneck.
I have three servers: a web server running Nginx, a remote MySQL server with my WordPress DB, and another remote MySQL server storing our data.
The bottleneck I’m trying to find is between my second MySQL server storing our data and my Web server.
We have a page with three DataTables on it (three separate queries). It loads very slowly, if it loads at all. Occasionally I'll get a gateway timeout error.
I don't think the queries themselves are the issue. From DataGrip, all three average between 200 and 500 ms. The tables aren't currently indexed, as I've been told the plugin cannot take advantage of indexes, but I might try anyway.
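A quick way to separate query cost from everything else is to run the same three queries from the web server itself, over the same network path PHP uses, and time them there. A sketch (it assumes Node with the mysql2 driver purely for illustration; the queries shown are runnable stand-ins for the three real DataTable queries, and the host is a placeholder):

```typescript
import mysql from 'mysql2/promise';

// Replace these stand-ins with the three real DataTable queries.
const queries = [
  'SELECT 1 /* table 1 query */',
  'SELECT 1 /* table 2 query */',
  'SELECT 1 /* table 3 query */',
];

async function main(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'data-db.example.internal', // placeholder: the data MySQL server
    user: 'app',
    password: 'secret',
    database: 'data',
  });

  // If these timings match DataGrip's 200-500 ms, the database and network
  // are fine, and the minute-long load is in PHP / the plugin / Nginx.
  for (const sql of queries) {
    const start = process.hrtime.bigint();
    await conn.query(sql);
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${ms.toFixed(1)} ms  ${sql}`);
  }
  await conn.end();
}

main().catch(console.error);
```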
Hardware and Setup:
My MySQL server is an AWS r6g.large: 2 cores and 16 GB RAM, with an SSD of 150 IOPS and 128 MB/s throughput. innodb_page_size is 32 KB, innodb_buffer_pool_size is 11000M, innodb_buffer_pool_instances is 10, and innodb_log_file_size is 1G.
The web server is an AWS c6g.xlarge: 4 cores and 8 GB RAM, with an SSD of 150 IOPS and 128 MB/s throughput. It uses PHP-FPM and OPcache.
I've tried monitoring with top on both servers, but to be honest I'm not sure I have the knowledge to interpret the output properly.
I'd really like to determine whether it's hardware or software, and if it's hardware, is there a way to isolate which resource? I have no problem upgrading the hardware if that's actually the problem.
I'm not sure if this is allowed on Stack, but I figured it might be easier to see what's going on if I recorded my screen with top running on both servers, so I added a video to my public Google Drive. The video shows both my MySQL server (on the top) and my web server (Nginx, on the bottom). I loaded the page (at the 3-second mark in the video) and recorded the outcome. The video is 1:05 long, which is how long it took for the last table to appear. It was recorded while my site was in maintenance mode, so no other IP / traffic could reach either server.
My google drive link:
https://drive.google.com/drive/folders/1NtdE1Z4875i1Xx2Wy2EXGgknt9yuY1IN?usp=sharing
Hopefully someone can help.
Aimee

Applications down due to heavy MySQL server load

We have a 2 GB DigitalOcean server dedicated to running MySQL for two other PHP servers. We are using Percona Server for MySQL 5.6 on it. We have configured MySQL replication, and that configuration is working fine.
Our issue is that sometimes our site monitoring tools report that some of the URLs hosted on this server are down (maybe once every week or two). When I check, I can see that the MySQL master server's load is very high (maybe 35-40), so the MySQL server is not responding. After that I usually restart the MySQL service; the restart brings the server load back to normal, and the sites start working again.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal, and some custom applications).
Here are my questions,
Why does the server load automatically go back down after a spike happens?
Is there any way to tell whether the database is causing the issue, so that I can identify the responsible application too?
How can I identify the root cause of these issues?
Depending upon your working dataset, a 2 GB server providing access for 20-25 PHP applications (WordPress, Drupal, and some custom applications) could itself be the issue.
For example, if you have a 1.4 GB buffer pool (assuming all tables are InnoDB) and 10 GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
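One quick check of whether the buffer pool is genuinely under pressure is the ratio of logical reads to reads that had to go to disk. A sketch (it assumes the Node mysql2 driver and placeholder credentials; the two counters can be read from any MySQL client):

```typescript
import mysql, { RowDataPacket } from 'mysql2/promise';

// Innodb_buffer_pool_read_requests = logical reads;
// Innodb_buffer_pool_reads = reads that missed the pool and went to disk.
// On a busy server, a hit ratio well below ~99% suggests the working set
// no longer fits in the 1.4 GB pool.
async function bufferPoolHitRatio(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'localhost', // placeholder: the master DB server
    user: 'monitor',
    password: 'secret',
  });

  const [rows] = await conn.query<RowDataPacket[]>(
    "SHOW GLOBAL STATUS WHERE Variable_name IN " +
      "('Innodb_buffer_pool_read_requests', 'Innodb_buffer_pool_reads')",
  );

  const s = Object.fromEntries(rows.map((r) => [r.Variable_name, Number(r.Value)]));
  const ratio = 1 - s.Innodb_buffer_pool_reads / s.Innodb_buffer_pool_read_requests;
  console.log(`buffer pool hit ratio: ${(ratio * 100).toFixed(2)}%`);
  await conn.end();
}

bufferPoolHitRatio().catch(console.error);
```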
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something you should be overly concerned about when viewed in isolation:
"The use of the uninterruptible state has since grown in the Linux kernel, and nowadays includes uninterruptible lock primitives. If the load average is a measure of demand in terms of running and waiting threads (and not strictly threads wanting hardware resources), then they are still working the way we want them to."
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening about once a week, it is starting to sound like a batch process or a cache-expiration issue: too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM (Percona Monitoring and Management) should give you the insight you need to find it; it also works with Oracle MySQL, MariaDB, Aurora, etc. You can try the demo to see the insights you can gain:
https://pmmdemo.percona.com. The software is open source and free to use.
You can look in QAN (Query Analytics) to find the most expensive queries, while the Prometheus data gives insight into the host itself. There are some recommendations for getting the most from PMM, depending upon your flavour of MySQL.
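If you want a first pass before PMM is set up, a rough stand-in for QAN's "most expensive queries" view can be pulled straight from performance_schema. A sketch (it assumes MySQL/Percona Server 5.6+ with statement digest instrumentation enabled, plus the Node mysql2 driver and placeholder credentials):

```typescript
import mysql from 'mysql2/promise';

// Top 10 statement digests by total execution time.
// SUM_TIMER_WAIT is in picoseconds, so divide by 1e12 for seconds.
const sql = `
  SELECT DIGEST_TEXT,
         COUNT_STAR            AS executions,
         SUM_TIMER_WAIT / 1e12 AS total_seconds
  FROM performance_schema.events_statements_summary_by_digest
  ORDER BY SUM_TIMER_WAIT DESC
  LIMIT 10`;

async function main(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'localhost', // placeholder: the master DB server
    user: 'monitor',
    password: 'secret',
  });
  const [rows] = await conn.query(sql);
  console.table(rows);
  await conn.end();
}

main().catch(console.error);
```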

ActiveRecord Connection Pool Optimization for a Highly Cached System

I have a Rails ActiveRecord project that has been scaled out to serve approximately 60-100k requests per minute. We use AWS, and it takes about five c4.xlarge EC2 instances to serve this many requests. We have optimized the system to serve 99.99% of those requests from a Redis cache, leaving our MySQL DB barely used.
This is great, but we keep running into MySQL connection limits. Amazon RDS limits the number of connections we can have, and it seems silly to upgrade our RDS instance size just so that we can hold a larger number of sleeping connections. We profiled the RDS server, and it gets maybe 10-50 queries a day, depending on how many times we update the system.
Is there any way to keep the ActiveRecord connection pool from reserving connections?
We tried simply lowering the connection pool size, and it helped reduce the number of connections, but then we started getting:
(ActiveRecord::ConnectionTimeoutError) "could not obtain a database connection within 5 seconds"
What I would like to achieve is for the Rails project to stop trying to pre-allocate connections and only open them up when they are necessary. I'm not well-versed enough in the Rails framework and ActiveRecord to understand how the system is reserving connections and why we are getting ConnectionTimeoutErrors even though the application isn't making any DB calls.
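As a point of comparison, the behaviour being asked for (open connections only on demand, and close idle ones) is what some pools in other stacks do out of the box. A concept sketch using Node's mysql2 driver (recent versions; the endpoint and credentials are placeholders), illustrating the target behaviour rather than any ActiveRecord setting:

```typescript
import mysql from 'mysql2/promise';

// Concept sketch: this pool opens connections lazily on first checkout
// (up to connectionLimit) and closes idle ones, so a service answering
// 99.99% of requests from cache holds almost no sleeping connections.
const pool = mysql.createPool({
  host: 'my-db.xxxx.rds.amazonaws.com', // placeholder RDS endpoint
  user: 'app',
  password: 'secret',
  database: 'appdb',
  connectionLimit: 10, // hard ceiling under load
  maxIdle: 1,          // keep at most one sleeping connection around
  idleTimeout: 30_000, // close idle connections after 30 seconds
});
```

In Rails itself, ActiveRecord checks connections out lazily, so a small pool plus an idle_timeout (available in newer Rails versions) gets close to this; the ConnectionTimeoutError appears when more threads want a connection at the same moment than the lowered pool allows.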

AppHarbor MySQL Yocto connection error

My AppHarbor web instance uses the free MySQL Yocto 20 MB plan. Of late I'm seeing "Unable to connect to any of the specified MySQL hosts". Is this because the database used up the free 20 MB, or is MySQL really down right now? I am not able to figure it out.
Is there any straightforward way of finding the current database size, so that I can compare it against the published plan limits for the MySQL add-on?
We've had problems with our MySQL service in connection with a big Amazon Web Services outage. The MySQL service should be restored now, at least to read-only mode. We're tracking the situation closely.
The error you're seeing is unlikely to be related to database size issues.
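On the second part of the question, the current size of the database can be read from information_schema and compared against the 20 MB plan. A sketch (it assumes the Node mysql2 driver and the add-on's connection details, which are placeholders here; any MySQL client can run the same query):

```typescript
import mysql from 'mysql2/promise';

// Data plus index size per schema, in MB, to compare against the plan limit.
const sql = `
  SELECT table_schema,
         ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
  FROM information_schema.tables
  GROUP BY table_schema`;

async function main(): Promise<void> {
  const conn = await mysql.createConnection({
    host: 'mysql.appharbor.example', // placeholder: the add-on's hostname
    user: 'app',
    password: 'secret',
  });
  const [rows] = await conn.query(sql);
  console.table(rows);
  await conn.end();
}

main().catch(console.error);
```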