How to fix ResourceNumberLimitExceededException when creating Glue job in AWS? - aws-sdk

I am trying to create an AWS Glue job and the creation fails with the following exception:
{"service":"AWSGlue","statusCode":400,"errorCode":"ResourceNumberLimitExceededException","requestId":"XXXX","errorMessage":"Failed to meet resource limits for operation","type":"AwsServiceError"}
I have tried reducing the DPU count, but the issue persists.
How can I fix this?

By default you are allowed only 25 jobs per account. Please verify whether you have reached that limit. If you need to create more jobs, you have to raise a limit increase request with AWS. See here for information on limits: https://docs.aws.amazon.com/glue/latest/dg/troubleshooting-service-limits.html
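As a quick check, here is a minimal boto3 sketch (the region is a placeholder) that counts the jobs already defined in the account, so you can confirm whether CreateJob is failing on the job quota rather than on DPUs:

import boto3

# Count existing Glue jobs in this account/region to compare against the quota.
glue = boto3.client("glue", region_name="us-east-1")

job_names = []
kwargs = {}
while True:
    resp = glue.list_jobs(**kwargs)
    job_names.extend(resp.get("JobNames", []))
    if "NextToken" not in resp:
        break
    kwargs["NextToken"] = resp["NextToken"]

print(f"Existing Glue jobs: {len(job_names)}")

If the count is at the limit, deleting unused jobs or requesting a quota increase are the only options.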

Related

How to fix this error shown in the GeoServer layer preview?

I use MySQL (via phpMyAdmin) as the database and GeoServer to publish my layers. When I have big layers, the layer preview shows me this error. What is the problem with big layers and the MySQL-to-GeoServer connection, and how can I fix this error?
Assuming that this is not caused by the database simply crashing, there are a number of settings in the store that you can use to help maintain connections to a remote database. You may need some or all of them to overcome this issue. First I would turn up the connection timeout (in seconds); then I would look at increasing the max connections, as you may be running out of them. If none of these help, you will need to examine the GeoServer logs (at GeoTools Debug level) to find the actual query being sent to the database, and then use the database tools to see if you can speed up that query (perhaps by adding an index).
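If you prefer to script the change rather than click through the store page, the same pooling settings can usually be updated through the GeoServer REST config API. The sketch below is only illustrative: the URL, credentials, workspace/store names, and the exact parameter keys ("max connections", "Connection timeout") are assumptions and vary by store type, and the JSON shape shown is what a typical JDBC store returns, so GET the store first and adjust accordingly.

import requests

GEOSERVER = "http://localhost:8080/geoserver"   # placeholder URL
AUTH = ("admin", "geoserver")                   # placeholder credentials
store = f"{GEOSERVER}/rest/workspaces/my_ws/datastores/my_mysql_store.json"

# Fetch the current store definition, bump the pooling parameters, write it back.
data = requests.get(store, auth=AUTH).json()
entries = data["dataStore"]["connectionParameters"]["entry"]   # may be a dict if only one entry
for e in entries:
    if e["@key"] == "max connections":
        e["$"] = "20"          # default is usually 10
    if e["@key"] == "Connection timeout":
        e["$"] = "60"          # seconds

resp = requests.put(store, json=data, auth=AUTH)
resp.raise_for_status()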

Too many connections error on AWS EC2 (Laravel, MySQL) - SQLSTATE[HY000] [1040]

I want to handle 1,000 to 10,000 async requests at a time, and each request consists of 2 MySQL queries. The problem is that the server is able to receive the requests (confirmed through the HTTPS logs) but is unable to process them due to a "Too many connections" error.
Questions:
Should I increase the "max user connections" for MySQL and add the necessary hardware, or should I contact a DB admin?
How should I handle this error? (I am unable to connect to the server to restart MySQL when this error occurs, and it does not come back to a normal state until I restart the whole server.)
I am currently stuck in this situation. If anyone has an idea about even one of these questions, please answer so that it can be a heads-up for the remaining ones. Since I am a developer, paid AWS support will be my last option.
Maybe you need to review your app architecture.
If I were you, instead of trying to process all the requests at the same time, I'd do the following:
- each time the app receives a request, I'd dispatch a job that will execute those SQL queries;
- I'd have multiple background workers processing those jobs, and limit the number of parallel processes to how many the DB can handle.
This way, you'll ensure that all requests are processed without handling them all at the exact same time.
For more information: https://laravel.com/docs/6.x/queues
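For illustration only, here is a language-agnostic sketch of that pattern in Python (the real implementation would use Laravel queued jobs; the host, credentials, and table names are made up): requests only enqueue work, and a fixed pool of workers runs the two MySQL queries, so open DB connections never exceed the worker count.

import queue
import threading
import pymysql

NUM_WORKERS = 10                 # tune this to what the DB instance can handle
jobs = queue.Queue()

def worker():
    # One connection per worker, so at most NUM_WORKERS connections are ever open.
    conn = pymysql.connect(host="db-host", user="app",
                           password="secret", database="app")
    while True:
        payload = jobs.get()
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO requests (body) VALUES (%s)", (payload,))
                cur.execute("UPDATE request_stats SET total = total + 1")
            conn.commit()
        finally:
            jobs.task_done()

for _ in range(NUM_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# The web layer only has to do: jobs.put(request_payload)

In Laravel the equivalent is dispatching a queued job per request and running a fixed number of queue:work processes under a supervisor.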

AWS RDS MySQL Cluster not Scaling Automatically on Write Queries

I have an AWS RDS MySQL cluster. I'm trying to auto scale on mass write operations, but I am unable to do so. However, when I'm running read queries it scales properly. I'm getting a "Too many connections" error on writes. Can anyone let me know what I'm doing wrong? Thanks in advance.
[Edit: 1]
Screenshot of AWS RDS Cluster Config
I've kept the connection limit at 2 because I was testing.
When I'm sending multiple read requests to AWS RDS, I can see new instances being launched in my RDS Instances section.
I've also set the scale-in cooldown time to 0 so that it will launch a new instance instantly. When I'm reading from the database using the read endpoint, auto scaling works properly. But when I'm trying to insert data using the write endpoint, auto scaling is not working.
Your question is short on specifics, so I will list some possible ways to figure this out and possibly solve it.
- RDS scaling takes time, so you cannot expect your DB to increase in capacity instantly when a sudden spike of traffic exceeds its current capacity.
- The maximum number of connections to a MySQL instance is set by max_connections in your parameter group. How many connections are happening, and what is the max_connections value? This value affects memory usage, so review any changes carefully. Note: increasing this value does not always help if there is a bug in your client code that erroneously creates too many connections. If the number of peak connections exceeds the max_connections value, sometimes you just need to scale up to a larger instance; the details determine the correct solution. (A quick way to check is sketched after this answer.)
- Use MySQL's Global Status History to look into what happens and when. This is useful for detecting locking or memory issues.
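For the connection-limit point, here is a minimal sketch (host and credentials are placeholders; run it against the cluster's writer endpoint) that compares the configured max_connections with what the server is actually seeing:

import pymysql

conn = pymysql.connect(host="my-cluster.cluster-xyz.us-east-1.rds.amazonaws.com",
                       user="admin", password="secret")

with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'max_connections'")
    print(cur.fetchone())          # e.g. ('max_connections', '2')
    cur.execute("SHOW STATUS LIKE 'Threads_connected'")
    print(cur.fetchone())          # connections open right now
    cur.execute("SHOW GLOBAL STATUS LIKE 'Max_used_connections'")
    print(cur.fetchone())          # high-water mark since the last restart

If Max_used_connections is pinned at the limit while writes fail, raising max_connections in the parameter group or pooling connections in the application is the first thing to try. Also note that Aurora replica auto scaling only adds readers behind the reader endpoint, so it cannot absorb traffic sent to the writer endpoint.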

Codeigniter 3 Sessions and RDS - Random Logouts

Can someone explain or tell me if some kind of 'setting' in Codeigniter will solve this?
System : Amazon AWS EC2 (AWS Linux Centos 6.9)
Database : Amazon AWS RDS (Mysql Compatible 5.6.34)
Framework : Codeigniter 3.0.6
PHP : 5.6
Issue: When running my website, I get random logouts that can happen after as little as 5 minutes. This means that after I start my computer, the session is cleared after some random amount of time and I find myself logged out. This continues for the rest of the day (random logouts).
This does NOT happen when setting CodeIgniter sessions to the files driver.
Setting it to the database driver causes this (logouts happen within 24 hours).
My pages use active AJAX which hits the CodeIgniter system all the time. I read that there is an AJAX race condition that can cause this, BUT after reading the CI code I've noticed that it's related to this command:
if ($this->_db->query("SELECT GET_LOCK('".$arg."', 300) AS ci_session_lock")->row()->ci_session_lock)
It appears that this 'locks' during an AJAX race condition, and that causes CI to drop my session cookie info (and of course logs me out via a `$this->fail()` call). We are using the RDS system, so I suspect that GET_LOCK on RDS is slightly different from GET_LOCK on a true MySQL system.
Anyone have thoughts / ideas? And yes, I tried a ton of combinations of sess_expiration and sess_time_to_update, and the only way to fix it is to go back to files.
As I expect my system to run on multiple servers in the future, files might not be desirable (if you know CodeIgniter you know why; too complicated to explain here).
Can anyone give some suggestions / answers on why RDS has an issue with GET_LOCK?
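One way I can test this suspicion directly is to run GET_LOCK by hand against the RDS endpoint from two separate connections and watch the return values (1 = acquired, 0 = timed out, NULL = error). A minimal sketch, with placeholder host/credentials and a hypothetical lock name:

import pymysql

def get_lock(conn, name, timeout):
    with conn.cursor() as cur:
        cur.execute("SELECT GET_LOCK(%s, %s)", (name, timeout))
        return cur.fetchone()[0]

HOST = "my-rds-instance.us-east-1.rds.amazonaws.com"   # placeholder
a = pymysql.connect(host=HOST, user="app", password="secret")
b = pymysql.connect(host=HOST, user="app", password="secret")

print(get_lock(a, "ci_session:abc123", 300))   # expect 1: first connection gets the lock
print(get_lock(b, "ci_session:abc123", 2))     # expect 0 after ~2s: second connection waits, then times out

Keep in mind that on MySQL 5.6 a session can hold only one named lock at a time, so a second GET_LOCK call on the same connection silently releases the previous lock, which is worth checking if other code on that connection also takes locks.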

Error when trying to increase storage on RDS MySQL instance

I'm trying to increase the allocated storage from 2000GB to 2260GB (and IOPS from 6000 to 7000) on an RDS MySQL instance and I'm getting the following error message:
null (Service: AmazonRDS; Status Code: 500; Error Code: InternalFailure; Request ID: ea593451-3454-11e5-bc38-b7fa8a060cf1)
The read replica for this instance has had its storage and IOPS increased, so that's not the issue.
Any ideas what might be causing this? If I've missed any key info, please let me know in the comments.
This error was caused by the fact that Memcached was enabled in the option group assigned to the RDS instance whose storage I was trying to increase.
From AWS support engineer:
You got in contact as you were unable to initiate a storage scale for your RDS instance olympus - you were receiving an internal error.
After reviewing this further on my side, I let you know that the following error was being reported:
The option 'MEMCACHED' cannot be deleted as instance olympus has Read Replica which has the MEMCACHED present.
This has been brought up with the RDS team as an issue. They are aware of it and are working on a fix; unfortunately I'm unable to give a timeframe on this.
To work around the issue I suggested:
- Modify the instance so that it is using the default option group, i.e. disable Memcached.
- Then modify the allocated storage on the instance.
- Once the scale is complete, re-add the Memcached option group.
After carrying out the above suggested steps, I was able to resize the instance and all is well now.
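For anyone who wants to script the workaround instead of using the console, here is a hedged boto3 sketch of the same three steps. The region and option group names are placeholders (the default option group name depends on engine and version), each modification has to finish applying before the next call (poll describe_db_instances), and per the error above the read replica may need the same option group change first.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# 1. Move the instance off the memcached-enabled option group.
rds.modify_db_instance(DBInstanceIdentifier="olympus",
                       OptionGroupName="default:mysql-5-6",
                       ApplyImmediately=True)

# 2. Once that change has finished applying, scale storage and IOPS.
rds.modify_db_instance(DBInstanceIdentifier="olympus",
                       AllocatedStorage=2260,
                       Iops=7000,
                       ApplyImmediately=True)

# 3. After the scale completes, re-attach the original option group.
rds.modify_db_instance(DBInstanceIdentifier="olympus",
                       OptionGroupName="memcached-enabled-group",   # placeholder name
                       ApplyImmediately=True)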