Cannot create persistent volume claim "mysql". persistentvolumeclaims "mysql" is forbidden: exceeded quota: 6176696e6173686b616e74616b-noncompute, requested: requests.storage=1Gi, used: requests.storage=0, limited: requests.storage=0
I have a paid account on OpenShift 3.
Can anyone help get this resolved?
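For anyone hitting the same thing, the project's storage quota can be inspected with oc (a sketch only; <project> is a placeholder, and the quota name is taken from the error above — this shows the limits, it does not raise them):

oc get resourcequota -n <project>
oc describe resourcequota 6176696e6173686b616e74616b-noncompute -n <project>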
This is the first time I am working with OpenShift. I have successfully installed Red Hat OpenShift on a single EC2 server (single node).
I will be installing IBM Cloud Data Pak on this OpenShift server.
I was trying to create a separate admin user for it.
I executed the following commands:
oc login -u system.admin
Then
oc create user bob
But I am facing the following error:
Error from server (Forbidden): users.user.openshift.io is forbidden: User "system.admin" cannot create users.user.openshift.io at the cluster scope: no RBAC policy matched
I am not able to understand the root cause of the issue.
It would be great if someone could help me resolve this and understand why it happens.
It's "system:admin", not "system.admin"
And be sure client-cert and client-key is present for system:admin user in your .kube/config
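For reference, a minimal sketch of the corrected commands (bob is the user from the question; the cluster-admin grant is an assumption based on the stated goal of creating a separate admin user):

oc login -u system:admin
# create the user, then (optionally) grant it cluster-admin rights
oc create user bob
oc adm policy add-cluster-role-to-user cluster-admin bob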
When trying to deploy my application, I recently got the following error:
ERROR: Service:AmazonCloudFormation, Message:Stack named
'awseb-e-123-stack' aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS'
Reason: The following resource(s) failed to update: [AWSEBRDSDatabase].
ERROR: Updating RDS database named: abcdefg12345 failed
Reason: DB Security Groups can no longer be associated
with this DB Instance. Use VPC Security Groups instead.
ERROR: Failed to deploy application.
How do you switch over a DB Security Group to a VPC Security Group? Steps for using the Elastic Beanstalk Console would be greatly appreciated.
For anyone arriving via Google, here's how you do it via CloudFormation:
The official docs contain an example at the very bottom: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.DeleteDBVPCGroups
SecurityGroup:
  Type: "AWS::EC2::SecurityGroup"
  Properties:
    VpcId: <vpc_id>
    GroupDescription: Explain your SG
    SecurityGroupIngress:
      - Description: Ingress description
        CidrIp: 10.214.0.0/16
        IpProtocol: tcp
        FromPort: 3306
        ToPort: 3306

RDSDb:
  Type: 'AWS::RDS::DBInstance'
  Properties:
    VPCSecurityGroups:
      - Fn::GetAtt:
          - SecurityGroup
          - GroupId
Had the same issue but was able to fix it by doing the following:
Created an RDS DB instance from the RDS console
Created a snapshot of the instance
From the Elastic Beanstalk console, under Configuration/Database, create the RDS DB from that snapshot
Once the new RDS DB instance was created by Elastic Beanstalk, add the DB environment properties under Configuration/Software
I hope this helps you resolve the issue.
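As a sketch of that last step, the DB environment properties can also be set with the EB CLI; the RDS_* property names follow the usual convention for Elastic Beanstalk apps, and the placeholder values below are purely illustrative, not taken from the question:

# set the connection details the application reads at runtime
eb setenv RDS_HOSTNAME=<rds-endpoint> RDS_PORT=3306 RDS_DB_NAME=<db-name> RDS_USERNAME=<user> RDS_PASSWORD=<password>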
When experimenting with OpenShift v3, I could create and deploy a very simple web application with WildFly and PostgreSQL.
When trying to create a very simple Spring Boot application (as a WAR) with MySQL (with one table), the MySQL volume storage immediately exceeds the quota. As a result, the very simple application cannot run properly.
Error creating: pods "springbootmysql-8-" is forbidden: exceeded
quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi,
used: limits.cpu=2,limits.memory=1Gi, limited:
limits.cpu=2,limits.memory=1Gi 19 times in the last 11 minutes
Update: I have now configured both pods with 480Mi of memory, so the memory quotas are not exceeded.
I now get an error message stopping the build and deployment:
Error creating: pods "springbootmysql6-2-" is forbidden: exceeded
quota: compute-resources, requested:
limits.cpu=957m,limits.memory=490Mi, used:
limits.cpu=1914m,limits.memory=980Mi, limited:
limits.cpu=2,limits.memory=1Gi
On OpenShift Online Starter, if you are running both a database and a frontend, each using 512Mi, you only have enough resources to use the Recreate deployment strategy. You will need to go into the deployment configuration for the frontend and change the deployment strategy from Rolling to Recreate.
If after making the change it is still having the same issue, scale down the number of replicas of the front end to 0, and then back to 1. This will ensure that Kubernetes is not stuck in the prior state since it was still trying to deploy under the old settings. Things should then be okay.
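If you prefer the CLI over the console, a rough sketch of those two steps (springbootmysql6 is the deployment config name implied by the error above; adjust it to your app):

# switch the frontend from Rolling to Recreate
oc patch dc/springbootmysql6 -p '{"spec":{"strategy":{"type":"Recreate"}}}'
# cycle the replicas so the stuck rollout is cleared
oc scale dc/springbootmysql6 --replicas=0
oc scale dc/springbootmysql6 --replicas=1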
I am trying to setup a hadoop cluster on AWS with two datanodes and one namenode.
I followed the TutorialsPoint multi-node cluster setup. I started the NameNode, Secondary NameNode, and DataNodes from the NameNode server.
The DataNode is started but is not able to connect to the NameNode. I get the following error:
17/02/06 13:31:06 INFO ipc.Client: Retrying connect to server: hadoop-master/172.31.3.137:9000. Already tried 0 time(s); maxRetries=45
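A couple of checks that often narrow this symptom down (the host and port come from the error above; this is a sketch, not a confirmed fix):

# from the datanode: is the namenode reachable on port 9000?
nc -zv 172.31.3.137 9000
# on the namenode: is anything listening on 9000, and on which address?
sudo netstat -tlnp | grep 9000
# what address is the cluster actually configured to use?
hdfs getconf -confKey fs.defaultFS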
I am running Resque on Heroku, and my database is ClearDB. I am getting this error:
"Mysql2::Error: User 'bdb2aedbee2c38' has exceeded the 'max_user_connections' resource (current value: 10): SHOW FULL FIELDS FROM projects"
That error is coming from my Resque admin of my Heroku app.
How can I figure out how many connections Resque is making to ClearDB?
How can I tell ClearDB to either allow more connections, or tell Resque to create fewer?
Does "current value: 10" refer to how many connections ClearDB is allowing, or is this how many current connections Resque is trying to make?
Thanks!
Your application server dynos or Resque workers are consuming more connections than your database plan provides.
You have two options:
Scale up your database by upgrading to a higher ClearDB plan (http://dashboard.heroku.com)
Scale down your application by reducing the number of dynos/workers (heroku ps:scale command)
The first link when I googled your add-on points to the page describing the service and its pricing tiers. You are on the free, 10-connection tier.
https://addons.heroku.com/cleardb
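A rough sketch of the second option with the Heroku CLI (the process type worker and the target count are illustrative; yours may differ):

# see how many dynos/workers are running now
heroku ps -a <your-app>
# reduce the Resque worker dynos so fewer connections are opened
heroku ps:scale worker=1 -a <your-app>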