How to resolve a path conflict in nginx ingress? - kubernetes-ingress

I'm running Sentry on my EKS cluster, and according to the official documentation it can only be exposed on the root path "/". I'm also exposing Keycloak on "/auth", which is its default web context.
So I deployed an nginx ingress controller and ingress resources to match these paths, but the problem I ran into is that the Sentry path ("/") is always redirected to "/auth", Keycloak's default path, which causes a conflict. In my case I'm not allowed to change Keycloak's web context, so I tried to deploy a second nginx ingress controller for Sentry with the same class, but I couldn't figure out how, since every example uses ingress controllers with different classes. So I'd like to know whether it's possible to deploy a second nginx ingress controller that is pretty much the same as the first one, or whether there is another solution.
Here is the nginx ingress controller I use:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ''
resources:
- nodes
verbs:
- get
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- update
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- apiGroups:
- ''
resources:
- endpoints
verbs:
- create
- get
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.31.1
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=ingress-nginx/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=ingress-nginx/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission
namespace: ingress-nginx
webhooks:
- name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- extensions
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
clientConfig:
service:
namespace: ingress-nginx
name: ingress-nginx-controller-admission
path: /extensions/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: jettech/kube-webhook-certgen:v1.0.0
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
- --namespace=ingress-nginx
- --secret-name=ingress-nginx-admission
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: jettech/kube-webhook-certgen:v1.0.0
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=ingress-nginx
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-2.0.1
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.31.1
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx
Here are the ingress resources:
Keycloak:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "keycloak-ingress"
annotations:
kubernetes.io/ingress.class: nginx
labels:
app: keycloak-ingress
spec:
rules:
- http:
paths:
- path: /auth
backend:
serviceName: keycloak
servicePort: 8080
Sentry:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "sentry-ingress"
namespace: "tools"
annotations:
kubernetes.io/ingress.class: sentry-nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
labels:
app: sentry-ingress
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: "sentry"
servicePort: 9000

Keycloak is exposed on "/auth", which is its default web context.
I understand that, as the default web context, everything sent to / is meant to be redirected to Keycloak.
So you need to set a different target for Sentry, like /sentry.
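A minimal sketch of that idea, reusing the service name and port from your Sentry ingress; the /sentry prefix still has to be stripped before the request reaches Sentry, which nginx-ingress can do with its rewrite-target annotation (the second capture group in the path becomes the upstream path):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sentry-ingress
  namespace: tools
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the /sentry prefix
spec:
  rules:
  - http:
      paths:
      - path: /sentry(/|$)(.*)
        backend:
          serviceName: sentry
          servicePort: 9000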
kubernetes.io/ingress.class: sentry-nginx does not match the class your controller serves (it was started with --ingress-class=nginx), which is probably why that ingress is not being picked up.
Only one deployment of nginx-ingress is needed to proxy traffic to multiple apps.
The trick here is to expose Sentry as mydomain.com/sentry while the app itself receives the request on /, as it requires.
To achieve this you can use rewrite-target; learn more here.
It creates a capture group and sends the rewritten request to the appropriate service.
This is what your ingress should look like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "my-ingress"
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
labels:
app: my-ingress
spec:
rules:
- http:
paths:
- path: /(.*)
backend:
serviceName: keycloak
servicePort: 8080
- path: /sentry(/|$)(.*)
backend:
serviceName: sentry
servicePort: 9000
This will do the following:
Requests to / will be delivered to keycloak as /
Requests to /auth will be delivered to keycloak as /
Requests to /auth/foo will be delivered to keycloak as /foo
Requests to /sentry will be delivered to sentry as /
Requests to /sentry/bar will be delivered to sentry as /bar
Nginx Ingress uses Path Priority:
In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
Example:
This is the ingress used in my example, echo-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: echo-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
rules:
- host: mydomain.com
http:
paths:
- path: /(.*)
backend:
serviceName: echo1-svc
servicePort: 80
- path: /sentry(/|$)(.*)
backend:
serviceName: echo2-svc
servicePort: 80
I created two echo apps to demonstrate it:
echo1-deploy.yaml (emulates your Keycloak):
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo1-deploy
spec:
selector:
matchLabels:
app: echo1-app
template:
metadata:
labels:
app: echo1-app
spec:
containers:
- name: echo1-app
image: mendhak/http-https-echo
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: echo1-svc
spec:
selector:
app: echo1-app
ports:
- protocol: TCP
port: 80
targetPort: 80
echo2-deploy.yaml (emulates your Sentry):
apiVersion: apps/v1
kind: Deployment
metadata:
name: echo2-deploy
spec:
selector:
matchLabels:
app: echo2-app
template:
metadata:
labels:
app: echo2-app
spec:
containers:
- name: echo2-app
image: mendhak/http-https-echo
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: echo2-svc
spec:
selector:
app: echo2-app
ports:
- protocol: TCP
port: 80
targetPort: 80
Let's apply and test the outcome:
$ kubectl apply -f echo1-deploy.yaml
deployment.apps/echo1-deploy created
service/echo1-svc created
$ kubectl apply -f echo2-deploy.yaml
deployment.apps/echo2-deploy created
service/echo2-svc created
$ kubectl apply -f echo-ingress.yaml
ingress.networking.k8s.io/echo-ingress created
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
echo-ingress mydomain.com 35.188.7.149 80 48s
$ tail -n 1 /etc/hosts
35.188.7.149 mydomain.com
$ curl mydomain.com/sentry
{"path": "/",
...suppressed output...
"os": {"hostname": "echo2-deploy-7bcb8f8d5f-dwzkr"}
}
$ curl mydomain.com/auth
{"path": "/",
...suppressed output...
"os": {"hostname": "echo1-deploy-764d5df7cf-6m5nz"}
}
$ curl mydomain.com
{"path": "/",
"os": {"hostname": "echo1-deploy-764d5df7cf-6m5nz"}
}
We can see that the requests were correctly forwarded to the pod responsible for each app, as defined in the ingress with the rewrite target.
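If you want to double-check what the controller actually generated from these rules, one option (a sketch, assuming the controller labels from the manifest above) is to dump its live nginx configuration with nginx -T and look at the regex location blocks:
$ kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller
$ kubectl -n ingress-nginx exec <controller-pod-name> -- nginx -T | grep 'location ~\*'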
Considerations:
Sentry "can only be exposed on rootPath "/"
I found out that Sentry can be exposed on other paths, check here and here, it might be worth checking.
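Another option, if you would rather not rewrite paths at all, is to keep the single controller and give Sentry its own hostname while Keycloak keeps its existing /auth ingress. A hedged sketch (the resource name and hostname are placeholders; DNS for the hostname would point at the same load balancer):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sentry-host-ingress
  namespace: tools
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: sentry.mydomain.com
    http:
      paths:
      - path: /        # Sentry keeps the root path it requires
        backend:
          serviceName: sentry
          servicePort: 9000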
If I got your environment wrong or you have any questions, let me know in the comments and I'll update the answer =)

Related

Problem with mysql pod and nodejs on kubernetes (EAI_AGAIN)

I'm having trouble with my Kubernetes cluster (hosted on AWS), where I'm trying to let my two pods communicate through services. I have one pod from a deployment based on Node.js and one pod from a deployment based on MySQL. This is my YAML configuration file for the deployments and the services (all in one):
apiVersion: apps/v1
kind: Deployment
metadata:
name: db-deployment-products
namespace: namespace-private
labels:
app: productsdb
spec:
replicas: 1
selector:
matchLabels:
app: productsdb
template:
metadata:
labels:
app: productsdb
spec:
containers:
- name: productsdb
image: training-registry.com/library/productsdb:latest
env:
- name: DB_HOST
value: "productsdb-service.namespace-private.svc.cluster.local"
- name: DB_NAME
value: "products_db"
- name: DB_USER
value: "root"
- name: DB_PWD
value: "productsPWD"
- name: MYSQL_DATABASE
value: "products_db"
- name: MYSQL_ROOT_USER
value: "root"
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: productsdb-secret
key: MYSQL_ROOT_PASSWORD
- name: DB_DIALECT
value: "mysql"
- name: LOG_LEVEL
value: "debug"
- name: ES_LOG_LEVEL
value: "debug"
- name: ES_CLIENT
value: "http://elasticsearch:9200"
- name: ES_INDEX
value: "demo-uniroma3-products"
- name: ES_USER
value: "elastic"
- name: ES_PWD
value: "elastic"
- name: LOGGER_SERVICE
value: "products-service"
- name: DB_PORT
value: "3306"
- name: SERVER_PORT
value: "5000"
ports:
- containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
name: productsdb-service
namespace: namespace-private
spec:
selector:
app: productsdb
ports:
- protocol: TCP
port: 3306
targetPort: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: products-service-metaname
namespace: namespace-private
labels:
app: products-service
spec:
replicas: 1
selector:
matchLabels:
app: products-service
template:
metadata:
labels:
app: products-service
spec:
containers:
- name: products-service
image: training-registry.com/library/products-service:latest
env:
- name: DB_HOST
value: "productsdb-service.namespace-private.svc.cluster.local"
- name: DB_NAME
value: "products_db"
- name: DB_USER
value: "root"
- name: DB_PWD
value: "productsPWD"
- name: MYSQL_DATABASE
value: "products_db"
- name: MYSQL_ROOT_USER
name: "root"
- name: MYSQL_ROOT_PASSWORD
value: "productsPWD"
- name: DB_DIALECT
value: "mysql"
- name: ES_USER
value: "elastic"
- name: ES_PWD
value: "elastic"
- name: LOGGER_SERVICE
value: "products-service"
- name: DB_PORT
value: "3306"
- name: SERVER_PORT
value: "5000"
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: products-service-service
namespace: namespace-private
spec:
selector:
app: products-service
type: LoadBalancer
ports:
- protocol: TCP
port: 5000
targetPort: 5000
nodePort: 30001
As you can see, I created the two services and used the fully qualified name of the DB service as the DB_HOST variable, but when I try to test the connection with port-forward at "localhost:5000/products", the browser tells me:
{"success":false,"reason":{"name":"SequelizeConnectionError","parent":{"errno":-3001,"code":"EAI_AGAIN","syscall":"getaddrinfo","hostname":"productsdb-service.namespace-private.svc.cluster.local","fatal":true},"original":{"errno":-3001,"code":"EAI_AGAIN","syscall":"getaddrinfo","hostname":"productsdb-service.namespace-private.svc.cluster.local","fatal":true}}}
I tried changing the DB_HOST env variable to just the service name and to the service IP, but nothing seems to work. Do you know why, and how I can resolve this? Thank you in advance.
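EAI_AGAIN from getaddrinfo is a DNS resolution failure, so a useful first check is whether the service name resolves, and whether the MySQL port is reachable, from inside the cluster. A sketch using a throwaway busybox pod (pod names are placeholders):
$ kubectl -n namespace-private run dns-test --rm -it --restart=Never --image=busybox -- nslookup productsdb-service.namespace-private.svc.cluster.local
$ kubectl -n namespace-private run port-test --rm -it --restart=Never --image=busybox -- nc -zv productsdb-service 3306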

Error with Mapping values. Can't figure out where the issue is

This is my mysql-deployment.yaml. I am trying to get this to run on Kubernetes, but I am getting errors, which I have listed below the deployment YAML.
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
tier: database
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessMode:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
apps: mysql
tier: database
spec:
containers:
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-root-credentials
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef::
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretkeyRef:
name: db-credentials
key: password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: dbbuddyto_mstr_local
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
PersistentVolumeClaim:
claimName: mysql-pv-claim
I am getting two errors:
error parsing mysql-deployment.yml: error converting YAML to JSON: yaml: line 24: mapping values are not allowed in this context
and the second error is
Error from server (BadRequest): error when creating "mysql-deployment.yml": PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: strict decoding error: unknown field "spec.accessMode"
I am trying to build a Kubernetes deployment for Angular, Spring, and MySQL, and the errors above are the ones I am currently facing.
The issue with your PVC is a typo: it needs to be spec.accessModes; you missed the s at the end.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
Edit:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
tier: database
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
apps: mysql
tier: database
spec:
containers:
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-root-credentials
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: dbbuddyto_mstr_local
key: name
resources: {}
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
PersistentVolumeClaim:
claimName: mysql-pv-claim
Besides the typo with accessModes, ports and volumes were not indented enough; they are both elements of a container. I also fixed the secretKeyRef typo:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
tier: database
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: database
spec:
containers:
- image: mysql:5.7
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-root-credentials
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: dbbuddyto-mstr-local
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Update:
I also fixed the labels to match matchLabels, the case error in persistentVolumeClaim, and the name of the ConfigMap (dbbuddyto-mstr-local). This is important: "_" is not allowed in resource names.
On minikube there is no error now.
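As a general tip, both kinds of error shown here (YAML syntax and unknown fields) can usually be caught before anything reaches the cluster with a client-side dry run (on older kubectl releases the flag is plain --dry-run):
$ kubectl apply --dry-run=client -f mysql-deployment.yml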

Error while deploying Hyperledger fabric on openshift

I am using Hyperledger Fabric 1.0.1, OpenShift v3.4.1.44, and Kubernetes v1.4.0.
My deployment has 2 organizations, 4 peers, 1 orderer, and 2 CAs.
I am deploying the following YAML on OpenShift to create the pods and services.
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca0
name: ca0
spec:
ports:
- name: "7054"
port: 7054
targetPort: 7054
selector:
io.kompose.service: ca0
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca1
name: ca1
spec:
ports:
- name: "8054"
port: 8054
targetPort: 7054
selector:
io.kompose.service: ca1
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: orderer
name: orderer
spec:
ports:
- name: "7050"
port: 7050
targetPort: 7050
selector:
io.kompose.service: orderer
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer01
name: peer01
spec:
ports:
- name: "7051"
port: 7051
targetPort: 7051
- name: "7053"
port: 7053
targetPort: 7053
selector:
io.kompose.service: peer01
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer02
name: peer02
spec:
ports:
- name: "9051"
port: 9051
targetPort: 7051
- name: "9053"
port: 9053
targetPort: 7053
selector:
io.kompose.service: peer02
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer11
name: peer11
spec:
ports:
- name: "8051"
port: 8051
targetPort: 7051
- name: "8053"
port: 8053
targetPort: 7053
selector:
io.kompose.service: peer11
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12
name: peer12
spec:
ports:
- name: "10051"
port: 10051
targetPort: 7051
- name: "10053"
port: 10053
targetPort: 7053
selector:
io.kompose.service: peer12
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca0
name: ca0
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca0
spec:
containers:
- args:
- sh
- -c
- fabric-ca-server start --ca.certfile /var/code/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
--ca.keyfile /var/code/peerOrganizations/org1.example.com/ca/PK-KEY
-b admin:adminpw -d
env:
- name: FABRIC_CA_HOME
value: /etc/hyperledger/fabric-ca-server
- name: FABRIC_CA_SERVER_CA_NAME
value: ca-org1
- name: FABRIC_CA_SERVER_TLS_CERTFILE
value: /var/code/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem
- name: FABRIC_CA_SERVER_TLS_ENABLED
value: "false"
- name: FABRIC_CA_SERVER_TLS_KEYFILE
value: /var/code/peerOrganizations/org1.example.com/ca/PK-KEY
image: hyperledger/fabric-ca:x86_64-1.0.1
name: ca-peerorg1
ports:
- containerPort: 7054
resources: {}
volumeMounts:
- mountPath: /etc/hyperledger
name: ca0-claim0
- mountPath: /var/fabricdeploy
name: common-claim
restartPolicy: Always
volumes:
- name: ca0-claim0
persistentVolumeClaim:
claimName: ca0-pvc
- name: common-claim
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca0-pvc
name: ca0-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca1
name: ca1
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca1
spec:
containers:
- args:
- sh
- -c
- fabric-ca-server start --ca.certfile /var/code/peerOrganizations/org2.example.com/ca/ca.org2.example.com-cert.pem
--ca.keyfile /var/code/peerOrganizations/org2.example.com/ca/PK-KEY
-b admin:adminpw -d
env:
- name: FABRIC_CA_HOME
value: /etc/hyperledger/fabric-ca-server
- name: FABRIC_CA_SERVER_CA_NAME
value: ca-org2
- name: FABRIC_CA_SERVER_TLS_CERTFILE
value: /var/code/peerOrganizations/org2.example.com/ca/ca.org2.example.com-cert.pem
- name: FABRIC_CA_SERVER_TLS_ENABLED
value: "false"
- name: FABRIC_CA_SERVER_TLS_KEYFILE
value: /var/code/peerOrganizations/org2.example.com/ca/PK-KEY
image: hyperledger/fabric-ca:x86_64-1.0.1
name: ca-peerorg2
ports:
- containerPort: 7054
resources: {}
volumeMounts:
- mountPath: /etc/hyperledger
name: ca1-claim0
- mountPath: /var/fabricdeploy
name: common-claim
restartPolicy: Always
volumes:
- name: ca1-claim0
persistentVolumeClaim:
claimName: ca1-pvc
- name: common-claim
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: ca1-pvc
name: ca1-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: orderer
name: orderer
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: orderer
spec:
containers:
- args:
- orderer
env:
- name: ORDERER_GENERAL_GENESISFILE
value: /var/fabricdeploy/fabric-samples/first-network/channel-artifacts/genesis.block
- name: ORDERER_GENERAL_GENESISMETHOD
value: file
- name: ORDERER_GENERAL_LISTENADDRESS
value: 0.0.0.0
- name: ORDERER_GENERAL_LOCALMSPDIR
value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/msp
- name: ORDERER_GENERAL_LOCALMSPID
value: OrdererMSP
- name: ORDERER_GENERAL_LOGLEVEL
value: debug
- name: ORDERER_GENERAL_TLS_CERTIFICATE
value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
- name: ORDERER_GENERAL_TLS_ENABLED
value: "false"
- name: ORDERER_GENERAL_TLS_PRIVATEKEY
value: /var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
- name: ORDERER_GENERAL_TLS_ROOTCAS
value: '[/var/code/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt]'
image: hyperledger/fabric-orderer:x86_64-1.0.1
name: orderer
ports:
- containerPort: 7050
resources: {}
volumeMounts:
- mountPath: /var/fabricdeploy
name: common-claim
- mountPath: /var
name: ordererclaim1
workingDir: /opt/gopath/src/github.com/hyperledger/fabric
restartPolicy: Always
volumes:
- name: common-claim
persistentVolumeClaim:
claimName: fabric-deploy
- name: ordererclaim1
persistentVolumeClaim:
claimName: orderer-pvc
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: orderer-pvc
name: orderer-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer01
name: peer01
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer01
spec:
containers:
- args:
- peer
- node
- start
env:
- name: CORE_LOGGING_LEVEL
value: DEBUG
- name: CORE_PEER_ADDRESS
value: peer01.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: peer01.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_ID
value: peer0.org1.example.com
- name: CORE_PEER_LOCALMSPID
value: Org1MSP
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_TLS_KEY_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
- name: CORE_PEER_MSPCONFIGPATH
value: /var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp
image: hyperledger/fabric-peer:x86_64-1.0.1
name: peer01
ports:
- containerPort: 7051
- containerPort: 7053
resources: {}
volumeMounts:
- mountPath: /var
name: peer01claim0
- mountPath: /var/fabricdeploy
name: common-claim
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
restartPolicy: Always
volumes:
- name: peer01claim0
persistentVolumeClaim:
claimName: peer01-pvc
- name: common-claim
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer01-pvc
name: peer01-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer02
name: peer02
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer02
spec:
containers:
- args:
- peer
- node
- start
env:
- name: CORE_LOGGING_LEVEL
value: DEBUG
- name: CORE_PEER_ADDRESS
value: peer02.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: peer02.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_ID
value: peer0.org2.example.com
- name: CORE_PEER_LOCALMSPID
value: Org2MSP
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_TLS_KEY_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
- name: CORE_PEER_MSPCONFIGPATH
value: /var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp
image: hyperledger/fabric-peer:x86_64-1.0.1
name: peer02
ports:
- containerPort: 7051
- containerPort: 7053
resources: {}
volumeMounts:
- mountPath: /var
name: peer02claim0
- mountPath: /var/fabricdeploy
name: common-claim
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
restartPolicy: Always
volumes:
- name: peer02claim0
persistentVolumeClaim:
claimName: peer02-pvc
- name: common-claim
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer02-pvc
name: peer02-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer11
name: peer11
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer11
spec:
containers:
- args:
- peer
- node
- start
env:
- name: CORE_LOGGING_LEVEL
value: DEBUG
- name: CORE_PEER_ADDRESS
value: peer11.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: peer01.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: peer11.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_ID
value: peer1.org1.example.com
- name: CORE_PEER_LOCALMSPID
value: Org1MSP
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_TLS_KEY_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
- name: CORE_PEER_MSPCONFIGPATH
value: /var/code/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp
image: hyperledger/fabric-peer:x86_64-1.0.1
name: peer11
ports:
- containerPort: 7051
- containerPort: 7053
resources: {}
volumeMounts:
- mountPath: /var
name: peer11claim0
- mountPath: /var/fabricdeploy
name: peer11claim1
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
restartPolicy: Always
volumes:
- name: peer11claim0
persistentVolumeClaim:
claimName: peer11-pvc
- name: peer11claim1
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer11-pvc
name: peer11-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12
name: peer12
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12
spec:
containers:
- args:
- peer
- node
- start
env:
- name: CORE_LOGGING_LEVEL
value: DEBUG
- name: CORE_PEER_ADDRESS
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: peer12.first-network.svc.cluster.local:7051
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_ID
value: peer1.org2.example.com
- name: CORE_PEER_LOCALMSPID
value: Org2MSP
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_PEER_TLS_CERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_TLS_KEY_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
- name: CORE_PEER_TLS_ROOTCERT_FILE
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
- name: CORE_PEER_MSPCONFIGPATH
value: /var/code/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp
image: hyperledger/fabric-peer:x86_64-1.0.1
name: peer12
ports:
- containerPort: 7051
- containerPort: 7053
resources: {}
volumeMounts:
- mountPath: /var
name: peer12claim0
- mountPath: /var/fabricdeploy
name: peer12claim1
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
restartPolicy: Always
volumes:
- name: peer12claim0
persistentVolumeClaim:
claimName: peer12-pvc
- name: peer12claim1
persistentVolumeClaim:
claimName: fabric-deploy
status: {}
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: peer12-pvc
name: peer12-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
status: {}
kind: List
metadata: {}
When I tried to execute the steps of script.sh https://github.com/hyperledger/fabric-samples/tree/release/first-network/scripts (Hyperledger Fabric - Building Your First Network) to build the network, I got an error at the installChaincode step.
:/var/fabricdeploy/fabric-samples/first-network/scripts$ ./script.sh
Build your first network (BYFN) end-to-end test
Channel name : mychannel
Creating channel...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
CORE_PEER_TLS_KEY_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
.
.
.
2017-08-31 13:56:02.520 UTC [main] main -> INFO 021 Exiting.....
===================== Channel "mychannel" is created successfully =====================
Having all peers join the channel...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:02.565 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: F98AD2F3EFC2B7B6916C149E819B7F322C29595623D48A90AB14899C0E2DDD51
2017-08-31 13:56:02.591 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:02.591 UTC [main] main -> INFO 007 Exiting.....
===================== PEER0 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:04.669 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:04.669 UTC [main] main -> INFO 007 Exiting.....
===================== PEER1 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:06.760 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:06.760 UTC [main] main -> INFO 007 Exiting.....
===================== PEER2 joined on the channel "mychannel" =====================
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:08.844 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
2017-08-31 13:56:08.844 UTC [main] main -> INFO 007 Exiting.....
===================== PEER3 joined on the channel "mychannel" =====================
Updating anchor peers for org1...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:10.934 UTC [main] main -> INFO 010 Exiting.....
===================== Anchor peers for org "Org1MSP" on "mychannel" is updated successfully =====================
Updating anchor peers for org2...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
.
.
.
2017-08-31 13:56:11.006 UTC [main] main -> INFO 010 Exiting.....
===================== Anchor peers for org "Org2MSP" on "mychannel" is updated successfully =====================
Installing chaincode on org1/peer0...
CORE_PEER_TLS_ROOTCERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
CORE_PEER_TLS_KEY_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
CORE_PEER_LOCALMSPID=Org1MSP
CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
CORE_PEER_TLS_CERT_FILE=/var/code/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
CORE_PEER_TLS_ENABLED=false
CORE_PEER_MSPCONFIGPATH=/var/code/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp
CORE_PEER_ID=cli
CORE_LOGGING_LEVEL=DEBUG
CORE_PEER_ADDRESS=peer01.first-network.svc.cluster.local:7051
2017-08-/opt/go/src/runtime/panic.go:566 +0x95EBU 001 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.30.217.53:7051: getsockopt: connection refused";
runtime.sigpanic()peer01.first-network.svc.cluster.local:7051 <nil>}
fatal er/opt/go/src/runtime/sigpanic_unix.go:12 +0x2ccn
[signal SIGSEGV: segmentation violation code=0x1 addr=0x47 pc=0x7fb7242db259]
goroutine 20 [syscall, locked to thread]:
runtime.cgocall(0xb08d50, 0xc4200265f8, 0xc400000000)
runtime./opt/go/src/runtime/cgocall.go:131 +0x110 fp=0xc4200265b0 sp=0xc420026570
net._C2f??:0 +0x68 fp=0xc4200265f8 sp=0xc4200265b0018d6e0, 0xc42013c158, 0x0, 0x0, 0x0)
net.cgoL/opt/go/src/net/cgo_unix.go:146 +0x37c fp=0xc420026718 sp=0xc4200265f8
net.cgoI/opt/go/src/net/cgo_unix.go:198 +0x4d fp=0xc4200267a8 sp=0xc420026718
runtime./opt/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200267b0 sp=0xc4200267a8
created /opt/go/src/net/cgo_unix.go:208 +0xb4
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/clientconn.go:434 +0x856
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.Dial(0xc420018092, 0x2b, 0xc420357300, 0x4, 0x4, 0xc420357300, 0x2, 0x4)
github.c/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/clientconn.go:319 +0x960018092, 0x2b, 0xc420357300, 0x4, 0x4, 0x0, 0x0, 0x0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/comm/connection.go:191 +0x2a9b, 0x490001, 0x0, 0x0, 0xc, 0xc420018092, 0x2b)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/peer/peer.go:500 +0xbe018092, 0x2b, 0xc420018092, 0x2b, 0xc4201a5988)
github.c/opt/gopath/src/github.com/hyperledger/fabric/core/peer/peer.go:475 +0x4e4201a59c0, 0x0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/common/common.go:114 +0x29 0x0, 0xc4200001a0)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/common.go:240 +0x77a
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/install.go:166 +0x5a8 0xd9d943, 0x5)
github.c/opt/gopath/src/github.com/hyperledger/fabric/peer/chaincode/install.go:54 +0x54, 0x0, 0x6, 0x0, 0x0)
!!!!!!!!!!!!!!! Chaincode installation on remote peer PEER0 has Failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
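The "connection refused" when dialing peer01.first-network.svc.cluster.local:7051 suggests the peer01 service had no ready backend listening on 7051 at that moment. A sketch of the usual checks (assuming the project/namespace is first-network; the pod name is a placeholder):
$ kubectl -n first-network get svc peer01
$ kubectl -n first-network get endpoints peer01
$ kubectl -n first-network get pods -l io.kompose.service=peer01
$ kubectl -n first-network logs <peer01-pod-name>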

OpenShift create template from existing setup

I was trying to generate a template from my existing setup with
oc export dc,svc,bc --selector="microservice=imagesvc" -o yaml --as-template=imagesvc
The problem is that the template points the container source to my registry. I would like to modify the template so that the build configuration builds the container from source and then attaches it to the DeploymentConfig. How can I achieve something like that?
This is the config I currently have. When I apply it I get various errors; for example, in Builds I get "Invalid output reference".
Any help with this would be greatly appreciated.
apiVersion: v1
kind: Template
metadata:
creationTimestamp: null
name: imagesvc
objects:
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
replicas: 1
selector:
deploymentconfig: gcsimageupload
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
app: gcsimageupload
deploymentconfig: gcsimageupload
microservice: imagesvc
spec:
containers:
- imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: gcsimageupload
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
volumeMounts:
- mountPath: /secret
name: gcsimageupload-secret
readOnly: true
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: gcsimageupload-secret
secret:
defaultMode: 420
secretName: gcsimageupload-secret
test: false
triggers:
- imageChangeParams:
automatic: true
containerNames:
- gcsimageupload
from:
kind: ImageStreamTag
name: gcsimageupload:latest
namespace: web
type: ImageChange
- type: ConfigChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
replicas: 1
selector:
deploymentconfig: imagesvc
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
creationTimestamp: null
labels:
app: imagesvc
deploymentconfig: imagesvc
microservice: imagesvc
spec:
containers:
- imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
name: imagesvc
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
test: false
triggers:
- imageChangeParams:
automatic: true
containerNames:
- imagesvc
from:
kind: ImageStreamTag
name: imagesvc:latest
namespace: web
type: ImageChange
- type: ConfigChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
generation: 1
labels:
app: imaginary
microservice: imagesvc
name: imaginary
spec:
replicas: 1
selector:
app: imaginary
deploymentconfig: imaginary
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imaginary
deploymentconfig: imaginary
microservice: imagesvc
spec:
containers:
- image: h2non/imaginary
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: imaginary
ports:
- containerPort: 9000
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /health
port: 9000
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- imaginary
from:
kind: ImageStreamTag
name: imaginary:latest
namespace: web
type: ImageChange
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
deploymentconfig: gcsimageupload
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
service.alpha.openshift.io/dependencies: '[{"name":"gcsimageupload","namespace":"","kind":"Service"},{"name":"imaginary","namespace":"","kind":"Service"}]'
creationTimestamp: null
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
deploymentconfig: imagesvc
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imaginary
microservice: imagesvc
name: imaginary
spec:
ports:
- name: 9000-tcp
port: 9000
protocol: TCP
targetPort: 9000
selector:
deploymentconfig: imaginary
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
- apiVersion: v1
kind: BuildConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: gcsimageupload
microservice: imagesvc
name: gcsimageupload
spec:
nodeSelector: null
output:
to:
kind: ImageStreamTag
name: gcsimageupload:latest
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: master
uri: https://github.com/un1x86/openshift-ms-gcsFileUpload.git
type: Git
strategy:
sourceStrategy:
env:
- name: GCS_PROJECT
value: ${GCS_PROJECT_ID}
- name: GCS_KEY_FILENAME
value: ${GCS_KEY_FILENAME}
- name: GCS_BUCKET
value: ${GCS_BUCKET}
from:
kind: ImageStreamTag
name: nodejs:4
namespace: openshift
type: Source
triggers:
- github:
secret: f9928132855c5a30
type: GitHub
- generic:
secret: 77ece14f810caa3f
type: Generic
- imageChange: {}
type: ImageChange
- type: ConfigChange
status:
lastVersion: 0
- apiVersion: v1
kind: BuildConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftWebConsole
creationTimestamp: null
labels:
app: imagesvc
microservice: imagesvc
name: imagesvc
spec:
nodeSelector: null
output:
to:
kind: ImageStreamTag
name: imagesvc:latest
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: master
uri: https://github.com/un1x86/openshift-ms-imagesvc.git
type: Git
strategy:
sourceStrategy:
env:
- name: IMAGINARY_APPLICATION_DOMAIN
value: http://imaginary:9000
- name: GCSIMAGEUPLOAD_APPLICATION_DOMAIN
value: http://gcsimageupload:8080
from:
kind: ImageStreamTag
name: nodejs:4
namespace: openshift
type: Source
triggers:
- generic:
secret: 945da12357ef35cf
type: Generic
- github:
secret: 18106312cfa8e2d1
type: GitHub
- imageChange: {}
type: ImageChange
- type: ConfigChange
status:
lastVersion: 0
parameters:
- description: "GCS Project ID"
name: GCS_PROJECT_ID
value: ""
required: true
- description: "GCS Key Filename"
name: GCS_KEY_FILENAME
value: /secret/keyfile.json
required: true
- description: "GCS Bucket name"
name: GCS_BUCKET
value: ""
required: true
You will need to create two ImageStreams named "imagesvc" and "gcsimageupload". You can do that from the CLI with "oc create is <name>" or by adding them to the template:
- kind: ImageStream
apiVersion: v1
metadata:
name: <name>
spec:
lookupPolicy:
local: false
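Once the ImageStreams are part of the template, instantiating it is just a matter of processing it with its required parameters and creating the result; a sketch (the file name and parameter values are placeholders, and on older oc releases the parameter flag may be -v instead of -p):
$ oc process -f imagesvc-template.yaml -p GCS_PROJECT_ID=my-gcs-project -p GCS_BUCKET=my-bucket | oc create -f -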

Volume not persistent in Google Cloud

I'm trying to build a mysql pod on Google Cloud. However, when I create a database and then restart the pod, the database that I created is not persisted.
I followed this official tutorial: https://github.com/kubernetes/kubernetes/tree/master/examples/mysql-wordpress-pd
mysql-deployment.yaml:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: myApp
env: production
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: myapp-backend-production-mysql
spec:
replicas:
template:
metadata:
labels:
app: myapp
role: backend
env: production
type: mysql
spec:
containers:
- name: backend-mysql
image: mysql:5.6
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password.txt
ports:
- name: backend
containerPort: 3306
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
And the volume.yaml:
# Not applied for all build
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-production-pv-1
labels:
app: myapp
env: production
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: mysql-production-1
fsType: ext4
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-production-pv-2
labels:
app: myapp
env: production
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteOnce
gcePersistentDisk:
pdName: mysql-production-2
fsType: ext4
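One thing worth verifying with this setup: gcePersistentDisk volumes only work if the underlying GCE disks already exist, and it is worth confirming which PersistentVolume the claim actually bound to. A sketch of the checks (the zone is a placeholder):
$ gcloud compute disks create mysql-production-1 mysql-production-2 --size=20GB --zone=<your-cluster-zone>
$ kubectl get pv
$ kubectl get pvc mysql-pv-claim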