Does anyone know why the 3scale APIManager demands so much resource when I install it from the OpenShift Operator?
The default install requests 4 CPUs and approximately 9Gi of memory, but with limits totalling around 90Gi, so it will not install unless my project has over 90Gi of memory allocated.
If I load test with 100 threads, the most I can drive the load up to is 2 CPUs and 4Gi of memory across all the pods.
Here's my APIManager YAML:
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  annotations:
    apps.3scale.net/apimanager-threescale-version: '2.9'
    apps.3scale.net/threescale-operator-version: 0.6.0
  name: apimanager
  generation: 2
  namespace: user-greg-clinker-sandbox # <-- TODO
spec:
  imageStreamTagImportInsecure: false
  resourceRequirementsEnabled: true
  system:
    appSpec:
      replicas: 1
    database:
      postgresql:
        persistentVolumeClaim:
          storageClassName: nfs-non-vdi-retain-backup-enc
    fileStorage:
      persistentVolumeClaim:
        storageClassName: nfs-non-vdi-retain-backup-enc
    redisPersistentVolumeClaim:
      storageClassName: nfs-non-vdi-retain-backup-enc
    sidekiqSpec:
      replicas: 1
    sphinxSpec: {}
  appLabel: 3scale-api-management
  zync:
    appSpec:
      replicas: 1
    queSpec:
      replicas: 1
  backend:
    cronSpec:
      replicas: 1
    listenerSpec:
      replicas: 1
    redisPersistentVolumeClaim:
      storageClassName: nfs-non-vdi-retain-backup-enc
    workerSpec:
      replicas: 1
  tenantName: 3scale
  apicast:
    managementAPI: status
    openSSLVerify: false
    productionSpec:
      replicas: 3
    registryURL: 'http://apicast-staging:8090/policies'
    responseCodes: true
    stagingSpec:
      replicas: 3
  wildcardDomain: 3scale2.apps.ocp.net
Here's my resource usage afterwards (from the project's ResourceQuota):
spec:
  hard:
    limits.cpu: '24'
    limits.memory: 128Gi
    requests.cpu: '6'
    requests.memory: 13743895347200m
  scopes:
  - NotTerminating
status:
  hard:
    limits.cpu: '24'
    limits.memory: 128Gi
    requests.cpu: '6'
    requests.memory: 13743895347200m
  used:
    limits.cpu: 16100m
    limits.memory: 85048677Ki
    requests.cpu: 3950m
    requests.memory: '8832423808'
You can find the compute resources needed for each 3scale component here when you set resourceRequirementsEnabled: true. These are the minimums required when deploying 3scale for production purposes. If you're only deploying a staging environment, setting resourceRequirementsEnabled: false will be fine.
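For reference, a minimal sketch of an APIManager for such a staging-style install might look like the following (the name and namespace are placeholders, not values from your cluster):
apiVersion: apps.3scale.net/v1alpha1
kind: APIManager
metadata:
  name: apimanager-staging        # placeholder
  namespace: my-3scale-project    # placeholder
spec:
  wildcardDomain: 3scale2.apps.ocp.net
  resourceRequirementsEnabled: false   # skips the production-sized requests/limits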
Note that 3scale is already at version 2.11 (2.12 will be released soon).
I hope this information is helpful for you.
Related
I created a .yaml file to create a MySQL service on Kubernetes for my internal application, but it's unreachable. I can reach the application and also phpMyAdmin, but the database isn't working properly: the mysql pod is stuck in Pending status.
.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: mysql
          imagePullPolicy: Never
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: flaskapi-cred
                  key: db_root_password
          ports:
            - containerPort: 3306
              name: db-container
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: cust
  labels:
    app: db
spec:
  ports:
    - port: 3306
      protocol: TCP
      name: mysql
  selector:
    app: db
  type: LoadBalancer
kubectl get all output is:
NAME READY STATUS RESTARTS AGE
pod/flaskapi-deployment-59bcb745ff-gl8xn 1/1 Running 0 117s
pod/mysql-99fb77bf4-sbhlj 0/1 Pending 0 118s
pod/phpmyadmin-deployment-5fc964bf9d-dk59t 1/1 Running 0 118s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flaskapi-deployment 1/1 1 1 117s
deployment.apps/mysql 0/1 1 0 118s
deployment.apps/phpmyadmin-deployment 1/1 1 1 118s
I already did docker pull mysql.
Edit
Name: mysql-99fb77bf4-sbhlj
Namespace: z2
Priority: 0
Node: <none>
Labels: app=db
pod-template-hash=99fb77bf4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-99fb77bf4
Containers:
mysql:
Image: mysql
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'db_root_password' in secret 'flaskapi-secrets'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gmbnd (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-gmbnd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gmbnd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
Warning FailedScheduling 3m44s default-scheduler persistentvolumeclaim "mysql-pv-claim" not found
You are missing the volume to attach to the pod/deployment. A PVC is required because your deployment configuration references it.
You can see it clearly: persistentvolumeclaim "mysql-pv-claim" not found
You can apply the YAML below and try.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
As the error message clearly shows, persistentvolumeclaim "mysql-pv-claim" not found, you need to provision a PersistentVolumeClaim (PVC).
There are static & dynamic provisioning; I'll explain static provisioning here as it is relatively easy to understand & set up. You need to create a PersistentVolume (PV) which the PVC will bind to. There are various types of Volumes, which you can read about here.
Which type of Volume to create is your choice, depending on your environment and needs. A simple example is a volume of type hostPath.
Create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: cust
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # The configuration here specifies that the volume is at /tmp/data on the cluster's Node
    path: "/tmp/data"
And then create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: cust
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeName: mysql-pv-volume
Once the PVC is successfully created and bound, your deployment should go through.
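As a quick check (a suggestion on my part, assuming the cust namespace from your Deployment and a hypothetical file name holding the two manifests above), you can confirm the claim binds before the pod retries scheduling:
kubectl apply -f mysql-pv-pvc.yaml            # hypothetical file containing the PV and PVC above
kubectl get pvc mysql-pv-claim -n cust        # STATUS should show Bound
kubectl get pods -n cust                      # the mysql pod should move from Pending to Running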
Currently I am trying to set up volume persistence for my MySQL database on Kubernetes with kubeadm.
The environment is based on an Amazon EC2 instance using EBS storage disks.
As you can see below, a StorageClass, a PersistentVolume, and a PersistentVolumeClaim have been created to provide MySQL persistence.
However, an error occurs when I try to deploy the mysql pod (see the events further down).
mysql-pv.yml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 5Gi
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
mysql.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.7.30
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: MYPASSWORD
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 31306
  selector:
    app: mysql
This is my mysql pod description:
Name: mysql-5c9788fc65-jq2nh
Namespace: default
Priority: 0
Node: ip-172-31-31-210/172.31.31.210
Start Time: Sat, 23 May 2020 12:19:24 +0000
Labels: app=mysql
pod-template-hash=5c9788fc65
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/mysql-5c9788fc65
Containers:
mysql:
Container ID:
Image: mysql:5.7.30
Image ID:
Port: 3306/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: MYPASS
Mounts:
/data/ from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-cshk2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-cshk2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-cshk2
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Here is the error I get:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler
Successfully assigned default/mysql-5c9788fc65-jq2nh to ip-172-31-31-210
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv
Output: Running scope as unit: run-r11fefbbda1d241c2985931d3adaaa969.scope
mount: /var/lib/kubelet/pods/29d5cee7-da11-4a0c-b5aa-e262f919d1ba/volumes/kubernetes.io~aws-ebs/mysql-pv: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-06212746d87534157 does not exist.
Warning FailedMount 39m kubelet, ip-172-31-31-210 MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
Can someone help me?
Check the state of the PV and the PVC to see whether the PVC is actually bound:
kubectl describe pvc mysql-pv-claim
kubectl describe pv mysql-pv
Do you have the EBS CSI driver installed?
Another possible reason: I think you missed adding the --cloud-provider=aws option, which is required by the CCM (cloud controller manager) for the nodes. Check out this similar issue.
The following link has all the IAM permissions and a working example of how to create and mount an EBS volume in Kubernetes, from the docs on configuring a cluster to use EBS.
With kubeadm, the kubelet configuration is defined in /var/lib/kubelet/config.yaml and /var/lib/kubelet/kubeadm-flags.env.
If the cluster was deployed using kubeadm, define the environment variable on all nodes in kubeadm-flags.env.
To resolve this manually, add the --cloud-provider=aws flag to kubeadm-flags.env and restart the services, which will resolve the issue:
systemctl daemon-reload && systemctl restart kubelet
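For reference, the flag sits alongside whatever kubelet arguments are already present in /var/lib/kubelet/kubeadm-flags.env; the other flags below are only an illustration and will differ on your nodes:
KUBELET_KUBEADM_ARGS="--cloud-provider=aws --cgroup-driver=systemd --network-plugin=cni"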
Or provide the following configuration to kubeadm (the example below uses aws as the cloud provider; swap it for openstack or another provider if that is what you run on).
Check the following blog for better understanding.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
controllerManager:
  extraArgs:
    cloud-provider: "aws"
    #cloud-config: "/etc/kubernetes/cloud.conf"
  extraVolumes:
  - name: cloud
    hostPath: "/etc/kubernetes/cloud.conf"
    mountPath: "/etc/kubernetes/cloud.conf"
Here is some additional information:
kubectl describe pvc mysql-pv-claim :
Name: mysql-pv-claim
Namespace: default
StorageClass: standard
Status: Bound
Volume: mysql-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 5Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-5c9788fc65-jq2nh
Events: <none>
kubectl describe pv mysql-pv :
Name: mysql-pv
Labels: type=amazonEBS
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: default/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-06212746d87534157
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
lsblk :
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 18M 1 loop /snap/amazon-ssm-agent/1566
loop1 7:1 0 93.9M 1 loop /snap/core/9066
loop2 7:2 0 93.8M 1 loop /snap/core/8935
nvme0n1 259:0 0 10G 0 disk
nvme1n1 259:1 0 15G 0 disk
└─nvme1n1p1 259:2 0 15G 0 part /
I want to use nvme0n1.
I don't have kubelet or kube-controller-manager log files.
I have the deployment YAML file on the cluster, the connection name of the SQL instance, and its public IP address. What should I add, and where, in order to connect the instance and the cluster? I want data added on the cluster side to be automatically saved to the instance, and vice versa.
This is the deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp:
  generation: 1
  labels:
    app: mysql
  name: mysql
  namespace: default
  resourceVersion: "1420"
  selfLink:
  uid:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mysql
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name:
          valueFrom:
            secretKeyRef:
              key:
              name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql
          name:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name:
        persistentVolumeClaim:
          claimName:
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime:
    lastUpdateTime:
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime:
    lastUpdateTime:
    message: ReplicaSet "mysql" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
You should use the Cloud SQL proxy, adding it as a sidecar to the application that makes queries against the Cloud SQL instance. Google has a suggested best practice documented here.
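As a rough sketch (the instance connection name, secret name, and proxy version below are placeholders, not values from your setup), the proxy is usually added as a second container in the Deployment's containers list, and the application then connects to 127.0.0.1:3306:
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command: ["/cloud_sql_proxy",
                  "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials   # Secret holding a service-account key (placeholder name)
          mountPath: /secrets/cloudsql
          readOnly: true
The credentials volume would come from a Secret containing a service account key with the Cloud SQL Client role; the linked best-practice page covers the exact setup.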
After deploying OCP 3.11 in all-in-one mode, creating an application with S2I for registry.redhat.io/jboss-webserver-3/webserver31-tomcat8-openshift:1.2 fails in that no service is created.
The steps I used are described in this link:
https://access.redhat.com/documentation/en-us/red_hat_jboss_web_server/3.1/html-single/red_hat_jboss_web_server_for_openshift/index#Create-an-OpenShift-application-using-existing-maven-binaries
I installed OCP 3.11 on RHEL 7.6; no errors occurred during installation.
I set up an external Docker registry, and it works.
I changed the /etc/origin/master/master-config.yaml file, updating internalRegistryHostname: docker-registry.default.svc:5000 to my external registry.
After running S2I for jws31, the pod starts up, but there is no svc shown by oc get svc.
I checked the dc and found no ports defined in it, but I don't know why. If there are no ports, I suspect that is why no svc was created.
The dc is:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2019-04-12T09:20:44Z
  generation: 2
  labels:
    app: myjws
  name: myjws
  namespace: jws-tomcat
  resourceVersion: "162555"
  selfLink: /apis/apps.openshift.io/v1/namespaces/jws-tomcat/deploymentconfigs/myjws
  uid: 3fbc2413-5d04-11e9-aef5-080027a7340f
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    app: myjws
    deploymentconfig: myjws
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: myjws
        deploymentconfig: myjws
    spec:
      containers:
      - image: master311.example.com:5555/jws-tomcat/myjws#sha256:5c65d07aba3ba4e1946a92198588d2d30c5eaef9ea7fe2c209b4db4479e2d130
        imagePullPolicy: Always
        name: myjws
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  test: false
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - myjws
      from:
        kind: ImageStreamTag
        name: myjws:latest
        namespace: jws-tomcat
      lastTriggeredImage: master311.example.com:5555/jws-tomcat/myjws#sha256:5c65d07aba3ba4e1946a92198588d2d30c5eaef9ea7fe2c209b4db4479e2d130
    type: ImageChange
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-04-12T09:20:52Z
    lastUpdateTime: 2019-04-12T09:20:52Z
    message: Deployment config has minimum availability.
    status: "True"
    type: Available
  - lastTransitionTime: 2019-04-12T09:20:53Z
    lastUpdateTime: 2019-04-12T09:20:53Z
    message: replication controller "myjws-1" successfully rolled out
    reason: NewReplicationControllerAvailable
    status: "True"
    type: Progressing
  details:
    causes:
    - type: ConfigChange
    message: config change
  latestVersion: 1
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
A svc should be created so that a route can be exposed.
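If it helps as a workaround (not part of the original question, and assuming the JWS/Tomcat container listens on port 8080), the service and route can also be created by hand:
oc expose dc/myjws --port=8080   # creates the missing service for the dc
oc expose svc/myjws              # creates a route pointing at that service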
Running under Kubernetes 1.2.0/CoreOS 991.1.0/Google Compute Engine, Heapster 0.18.2 fails due to not recognizing the source kubernetes.summary_api. How do I solve this?
The Log of the Failing Heapster Controller
I0415 07:23:58.623481 1 heapster.go:55] /heapster --source=kubernetes.summary_api:'' --sink=gcm --sink=gcmautoscaling --sink=gcl --stats_resolution=30s --sink_frequency=1m
I0415 07:23:58.623616 1 heapster.go:56] Heapster version 0.18.2
F0415 07:23:58.623654 1 heapster.go:62] Unknown source: kubernetes.summary_api
The Heapster ReplicationController spec:
apiVersion: v1
kind: ReplicationController
metadata:
name: heapster-v10
namespace: kube-system
labels:
k8s-app: heapster
version: v10
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: heapster
version: v10
template:
metadata:
labels:
k8s-app: heapster
version: v10
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/heapster:v0.18.2
name: heapster
resources:
limits:
cpu: 100m
memory: 300Mi
command:
- /heapster
- --source=kubernetes.summary_api:''
- --sink=gcm
- --sink=gcmautoscaling
- --sink=gcl
- --stats_resolution=30s
- --sink_frequency=1m
volumeMounts:
- name: ssl-certs
mountPath: /etc/ssl/certs
readOnly: true
- name: usrsharecacerts
mountPath: /usr/share/ca-certificates
readOnly: true
volumes:
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: usrsharecacerts
hostPath:
path: /usr/share/ca-certificates
That's a bug in the manifest. Support for the kubelet summary API was not added until a later version (starting at v0.20.0-alpha8). You can either change to a more recent heapster version (the default manifest uses v1.0.2), or you can revert to the old (cAdvisor API) source: --source=kubernetes:''
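Concretely, if you stay on the v0.18.2 image, only the source flag in the command list of the manifest above needs to change; the rest stays the same:
        command:
        - /heapster
        - --source=kubernetes:''      # cAdvisor-API source understood by v0.18.2
        - --sink=gcm
        - --sink=gcmautoscaling
        - --sink=gcl
        - --stats_resolution=30s
        - --sink_frequency=1m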