AppHarbor MySql Yocto connection error

My AppHarbor web instance uses the free MySql Yocto 20 MB plan. Of late I'm seeing "Unable to connect to any of the specified MySQL hosts". Is this because the database used up the free 20 MB, or is MySql really down right now? I am not able to figure it out.
Is there any straightforward way of finding the current database size, so that it can be compared against the limits of the published MySql add-on plans?

We've had problems with our MySQL service in connection with a big Amazon Web Services outage. The MySQL service should be restored now, at least to read-only mode. We're tracking the situation closely.
The error you're seeing is unlikely to be related to database size issues.
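If you can get a connection through intermittently, you can also answer the size question yourself from INFORMATION_SCHEMA. A minimal sketch using Node and the mysql2 library (the host and credentials below are placeholders, not AppHarbor-specific values):

    import mysql from 'mysql2/promise';

    // Placeholder settings; substitute the values from your add-on's
    // configuration page.
    const conn = await mysql.createConnection({
      host: 'your-mysql-host',
      user: 'user',
      password: 'password',
    });

    // Total data + index size per schema, in MB.
    const [rows] = await conn.query(
      `SELECT table_schema,
              ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
       FROM information_schema.tables
       GROUP BY table_schema`
    );
    console.table(rows); // compare size_mb against the 20 MB plan limit
    await conn.end();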

Related

AWS: Too many connections

I have an RDS instance hosting a MySQL database. The instance size is db.t2.micro.
I also have an ExpressJS backend connecting to the MySQL RDS instance via a connection pool.
Additionally, I have a mobile app, the client, feeding off the ExpressJS API.
The issue I'm facing is that, either via the mobile app or via Postman, there are times when I get a 'Too many connections' error and several requests fail.
On the RDS instance's current activity view, I sometimes see 65 connections, showing it's reaching the limit. What I need clarity on is:
When 200 mobile app instances connect to the API, and thus to the RDS instance, does that register as 200 connections or as 1 connection from ExpressJS?
Is it normal to be reaching the RDS instance's 65-connection limit?
Is this just a matter of me using the db.t2.micro instance size, which is not recommended for production? Will upgrading the instance size resolve this issue?
Is there something I'm doing wrong with my requests?
Thank you; your feedback is appreciated.
If your app creates a connection pool of 100, that's the number of database connections it will try to open. It must be lower than your MySQL connection limit.
Typically a connection pool opens all of its connections up front, so they are ready when a client calls the HTTP API. Many of them may be running no SQL queries at a given moment, if there are not many clients using the API, but the database connections are nevertheless open.
Sort of like when you ssh to a remote Linux server but just sit at the shell prompt for a while before running any command: you're still connected.
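To make the pattern concrete: the whole ExpressJS process shares one pool created at startup, and each request borrows a connection from it, so 200 mobile clients map onto at most connectionLimit database connections, not 200. A rough sketch using the Node mysql2 library (the endpoint, credentials, and limit of 10 are illustrative values, not recommendations):

    import mysql from 'mysql2/promise';

    // One pool for the whole process, created once at startup.
    // connectionLimit must stay below the server's max_connections
    // (around 65 on a db.t2.micro, per the question), with headroom
    // for any other processes that also open pools.
    const pool = mysql.createPool({
      host: 'your-endpoint.rds.amazonaws.com', // placeholder
      user: 'app',
      password: 'secret',
      database: 'appdb',
      connectionLimit: 10,      // max simultaneous DB connections
      waitForConnections: true, // extra requests queue instead of failing
    });

    // Every API handler reuses the same pool.
    export async function getUser(id: number) {
      const [rows] = await pool.query('SELECT * FROM users WHERE id = ?', [id]);
      return rows;
    }

If you are still hitting the 65-connection ceiling with a setup like this, the usual suspects are a pool limit set too high, or several processes (e.g. one per cluster worker) each opening their own full-size pool.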
You asked if a db.t2.micro instance was not recommended for production. Yes, I would agree with that. It's tempting to use the smallest instance possible to save money, but a db.t2.micro is too small for anything but light testing, in my opinion.
In fact, I would not use any t2 instance for production, regardless of size. The t2 type uses "burstable" performance. This means it can provide only brief periods of good performance. Once the instance depletes its performance credits, they recharge slowly, and while they recharge, the performance of that instance is very low. This is okay for testing, but not for production, if you expect to provide consistent performance at any time.

Applications down due to heavy MySQL server load

We have a 2GB DigitalOcean server dedicated to running a MySQL server for two other PHP servers. We are using Percona MySQL Server 5.6 on this server. We have configured MySQL replication, and this configuration is working fine.
Our issue is that our site monitoring tools sometimes report that some of the URLs hosted on this server are down (maybe once every week or two). When I check, I can see that the MySQL master server's load is far too high (maybe 35-40), so the MySQL server is not responding. After that, I usually restart the MySQL service; this brings the server load back to normal, and the sites start working again after the restart.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal, and some custom applications).
Here are my questions:
Why does the server load go back down automatically after a spike happens?
Is there any way to tell whether the database is causing the issues, so that I can identify the offending application too?
How can I identify the root cause of these issues?
Depending upon your working dataset, a 2GB server providing access for 20-25 PHP applications (WordPress, Drupal, and some custom applications) could itself be the issue.
For example, if you have a 1.4GB buffer pool (assuming all tables are InnoDB) and 10GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
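To check whether the working set actually fits in memory, you can compare the buffer pool size against the total InnoDB data + index size. A hedged sketch using Node's mysql2 library (credentials are placeholders):

    import mysql from 'mysql2/promise';

    const conn = await mysql.createConnection({
      host: 'db-host', user: 'monitor', password: 'secret', // placeholders
    });

    // Configured buffer pool size, in bytes.
    const [poolRows]: any[] = await conn.query(
      "SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size'"
    );

    // Total InnoDB data + index size, in GB.
    const [sizeRows]: any[] = await conn.query(
      `SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
       FROM information_schema.tables
       WHERE engine = 'InnoDB'`
    );

    // If total_gb is several times the buffer pool, the applications are
    // competing for cache pages and I/O, which fits the spike pattern.
    console.log(poolRows[0].Value, sizeRows[0].total_gb);
    await conn.end();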
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something you should concern yourself with when viewed in isolation.
The use of the uninterruptible state has since grown in the Linux kernel, and nowadays includes uninterruptible lock primitives. If the load average is a measure of demand in terms of running and waiting threads (and not strictly threads wanting hardware resources), then they are still working the way we want them to.
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM should give you the insight needed to find it (PMM also works with Oracle MySQL, MariaDB, Aurora, etc.). You can try a demo to see the insights that you can gain:
https://pmmdemo.percona.com. The software is Open Source and free to use.
You can look in QAN to find the most expensive queries, whilst looking at Prometheus data to give an insight into the host itself. There are some recommendations to get the most from PMM, depending upon your flavour of MySQL.
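If you want a quick look before standing up PMM, and assuming performance_schema is enabled (it is by default in 5.6), the statement digest table gives a rough equivalent of QAN's most-expensive-queries view. A minimal sketch, again with Node's mysql2 library and placeholder credentials:

    import mysql from 'mysql2/promise';

    const conn = await mysql.createConnection({
      host: 'db-host', user: 'monitor', password: 'secret', // placeholders
    });

    // Top 10 statement patterns by total execution time.
    // SUM_TIMER_WAIT is in picoseconds, hence the division.
    const [rows] = await conn.query(
      `SELECT DIGEST_TEXT,
              COUNT_STAR AS executions,
              ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_time_s
       FROM performance_schema.events_statements_summary_by_digest
       ORDER BY SUM_TIMER_WAIT DESC
       LIMIT 10`
    );
    console.table(rows);
    await conn.end();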

AWS MySql RDS: "Got error 28 from storage engine"

I’m working on a relatively small application, serving about 1,500 users and running on a Mysql database that is about 300 megs. The entire system runs on AWS with a single dedicated EC2 node running the Grails application on Tomcat 8 and a single dedicated Mysql RDS instance. The system has been running live in production for about three years with no database issues. The two largest tables contain about 40k records. The application is built using Grails and Java 1.7.
Yesterday our application began throwing the following exception, with the underlying error message of:
"Got error 28 from storage engine"
The logs available from the RDS admin web console are empty.
Googling has not revealed any promising leads, other than most messages pointing out that the disk is out of space. Given that most search results refer to disk space, and being software developers rather than DBAs with significant Mysql expertise, we boosted the storage space of the Mysql RDS instance. Unfortunately, today our application is still sporadically throwing the same exception. Having created our Mysql RDS instance with 15 gigs of space -- far more than our application makes use of -- we are at a loss as to the root cause of this issue. Our guess is that we are possibly hitting some out-of-the-box Mysql limitation, but we have no idea what it may be or how to solve it. Indeed, the whole reason we host on RDS was to avoid issues of this type.
Doing some Googling, this seems like a somewhat common Mysql error, but one that does not leave any concrete trail for us to follow. Most suggestions talk about checking the filesystem or "inode" space. Given that this is a hosted Mysql RDS instance on AWS, I am unsure if or how to check such things. Looking at CloudWatch for the RDS instance, I can see that the CPU is idle and that the instance is dramatically under the 15 gig storage limit.
Does anyone have any suggestions for us to investigate?
Given that we are new to RDS, can you please point us to any documentation, or -- even better -- suggest which settings we can tweak in the RDS console to help prevent this error from occurring? We moved to RDS partly in the belief that if this was a Mysql sizing or scaling issue, RDS would solve the problem. As a last resort, this morning we deleted about 20k rows of unessential data. Unfortunately, the issue persists.
A few questions:
Are there any RDS settings we can adjust to avoid this issue?
Can this be solved by moving to a larger RDS instance, perhaps with more memory?
Would we experience this issue if we moved to Aurora?
From your comment, this is definitely a low-storage issue; 13 GB is very little storage. You can check the free available storage in the dashboard: look at the "Storage" metric under Monitoring, and if the free space falls below the red line you will start getting Error 28. You will have to increase the storage of your RDS instance or free up some space. I suggest increasing the storage to avoid this issue in the future.
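Error 28 can also be triggered by things other than table data, such as binary logs or large on-disk temporary tables, so before (or instead of) resizing it is worth seeing where the space is going. A rough sketch listing the largest tables, using Node's mysql2 library with placeholder credentials:

    import mysql from 'mysql2/promise';

    const conn = await mysql.createConnection({
      host: 'your-instance.rds.amazonaws.com', // placeholder
      user: 'admin', password: 'secret',
    });

    // Largest tables by data + index size; data_free hints at space
    // that could be reclaimed with OPTIMIZE TABLE.
    const [rows] = await conn.query(
      `SELECT table_schema, table_name,
              ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb,
              ROUND(data_free / 1024 / 1024, 2) AS free_mb
       FROM information_schema.tables
       ORDER BY data_length + index_length DESC
       LIMIT 10`
    );
    console.table(rows);
    await conn.end();

The same free-storage number is also exposed as the FreeStorageSpace metric in CloudWatch, which you can alarm on so you are warned before hitting Error 28 again.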

Why is large MySQL database crashing app?

I have a webapp which I'm trying to deploy to OpenShift, which serves historical plots of market data. The MySQL database is around 600MB and has >12,000,000 rows over about 6 tables. If I deploy and run the app, it runs at most a couple of minutes, then crashes with exceptions related to null database connections. I tried restarting mysql and looking at the quota (~750MB of 1024MB) to no avail. If I reduce the database size drastically, the webapp runs fine, so far a couple of hours with no crashes. I also have another webapp on OpenShift that uses a small MySQL database and runs just fine.
From what I can tell from reading the documentation, a gear provides up to 1 GB of storage, and I'm under that amount. If I lower my data footprint drastically, MySQL behaves fine. Any ideas why the app fails with a 600MB database, and which limitations are causing the failure? The webapp has also been running solidly on a VPS for about 2 years with no issues, so I'm pretty confident the problem lies within OpenShift's MySQL cartridge.
Edit: Additional Information
The MySQL cartridge is on a small gear and shared with the webapp.
The number of connections is 5, drawn from a connection pool.
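One way to narrow down whether MySQL itself is dying or the pool is handing out dead connections is to validate each connection as it is borrowed and log exactly when failures start. A rough sketch of the pattern, shown here with Node's mysql2 library purely for illustration (the question doesn't specify the app's stack, and all names are placeholders):

    import mysql from 'mysql2/promise';

    const pool = mysql.createPool({
      host: '127.0.0.1', user: 'app', password: 'secret', // placeholders
      database: 'marketdata',
      connectionLimit: 5, // matches the pool size above
    });

    export async function queryWithCheck(sql: string, params: unknown[] = []) {
      const conn = await pool.getConnection();
      try {
        await conn.ping(); // throws if the server has dropped the connection
        const [rows] = await conn.query(sql, params);
        return rows;
      } catch (err) {
        console.error('DB connection failed at', new Date().toISOString(), err);
        throw err;
      } finally {
        conn.release();
      }
    }

If the logged failures line up with MySQL restarting in the gear's logs, the cartridge may be running out of memory rather than storage; a 600MB data set with default InnoDB settings can need more memory than a small gear allows.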

Amazon RDS (Mysql2::Error 110)

I've had a Rails application running in production for the past 6 months, with weekly deployments, without any issue.
Now, I've been having a recurring issue for about 3 weeks, and it seems to get worse every week.
When my app boots and reaches the point where it's trying to connect to the DB, I get this error :
Can't connect to MySQL server on '***.amazonaws.com' (110) (Mysql2::Error)
AFAIK, this error tells me that I've reached MySQL's max connections limit.
From the configs, I should be able to open 296 connections. My app is set to run 7 instances, each with a database connection pool of 5, so it can't really exceed 70 connections even when deploying a new set of instances.
I've never seen the connection count go above 20 in either the AWS RDS Console or the SHOW PROCESSLIST command.
I don't think it has anything to do with either Rails or my application server (Puma), since I can't connect through the MySQL Command-Line Tool when the issue occurs.
Has anyone had a similar issue with MySQL on RDS or MySQL itself?
The database pool isn't per application, it's per process. If each instance is multi-threaded or runs multiple processes, it could be using more connections than that. Have you tried restarting mysql? It sounds like you have some hanging connections for whatever reason.
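When the error occurs, you can confirm both the configured limit and the live connection count directly. A minimal sketch with Node's mysql2 library and placeholder credentials (the arithmetic in the comment restates the numbers from the question):

    import mysql from 'mysql2/promise';

    const conn = await mysql.createConnection({
      host: 'your-db.amazonaws.com', user: 'app', password: 'secret', // placeholders
    });

    const [limitRows]: any[] = await conn.query("SHOW VARIABLES LIKE 'max_connections'");
    const [usedRows]: any[] = await conn.query("SHOW STATUS LIKE 'Threads_connected'");

    // Expected ceiling: 7 app instances x 5 pool connections = 35,
    // doubling to 70 during a rolling deploy, still far below 296.
    console.log(`max_connections=${limitRows[0].Value}, in use=${usedRows[0].Value}`);
    await conn.end();

Note that error (110) is the OS-level connection timeout, so if even the mysql command-line client can't connect, it may be worth checking security groups and network reachability as well as the connection count.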
I've been getting these issues recently. Could it be related to the pending-restart change of parameter group on my RDS instance? I sure hope not. As I understand it, a pending change should have no effect on current performance.