Unable to create a tree in Trillian log mysql database - mysql

I am using an on-premises Kubernetes cluster (with Istio) to integrate my application with Trillian. I have deployed a MySQL database together with a personality, a server, and a signer, but I am not able to create a tree using the command from here: https://github.com/google/trillian/blob/master/examples/deployment/kubernetes/provision_tree.sh#L27
echo TREE=$(curl -sb -X POST ${LOG_URL}/v1beta1/trees -d '{ "tree":{ "tree_state":"ACTIVE", "tree_type":"LOG", "hash_strategy":"RFC6962_SHA256", "signature_algorithm":"ECDSA", "max_root_duration":"0", "hash_algorithm":"SHA256" }, "key_spec":{ "ecdsa_params":{ "curve":"P256" } } }')
When I execute the command, I get 404 page not found as the result.
The .yaml file of the trillian-server is defined as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tr-server-list
data: # TODO optional add env parameter initialization
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tr-server
  labels:
    name: tr-server
    app: tr-server-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: tr-server-pod
      db: trdb
      app: tr-server-app
  template:
    metadata:
      labels:
        name: tr-server-pod
        db: trdb
        app: tr-server-app
    spec:
      containers:
      - name: trillian-log-server
        image: docker.io/fortissleviathan123/trillian-log-server:latest
        imagePullPolicy: IfNotPresent
        args: [
          "--storage_system=mysql",
          "--mysql_uri=test:zaphod@tcp(trdb.default.svc.cluster.local:3306)/test",
          "--rpc_endpoint=0.0.0.0:8090",
          "--http_endpoint=0.0.0.0:8091",
          "--alsologtostderr",
        ]
        envFrom:
        - configMapRef:
            name: tr-server-list
        ports:
        - name: grpc
          containerPort: 8090
        - name: https
          containerPort: 8091
---
apiVersion: v1
kind: Service
metadata:
  name: tr-server
  labels:
    name: tr-server
    app: tr-server-app
spec:
  ports:
  - name: http
    port: 8091
    targetPort: 8091
  - name: grpc
    port: 8090
    targetPort: 8090
  selector:
    name: tr-server-pod
    db: trdb
    app: tr-server-app
The services are running:
trdb-0 2/2 Running 6 (70m ago) 40h
tr-personality-5ffbfb44cb-2vp89 2/2 Running 3 (69m ago) 11h
tr-server-59d8bbd4c-kxkxs 2/2 Running 14 (69m ago) 38h
tr-signer-78b74df645-j5p7j 2/2 Running 15 (69m ago) 38h
Is there anything wrong with this deployment?

The solution is to use the createtree tool provided by Google, since the server's REST API is no longer supported. The answer can be found here: https://github.com/google/trillian/issues/2675
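As a sketch of that approach (assuming the admin gRPC endpoint is on port 8090 and the service name from the manifest above; adjust for your cluster):

```shell
# Build the createtree client from the Trillian repo
go install github.com/google/trillian/cmd/createtree@latest

# Make the server's gRPC admin endpoint reachable locally
kubectl port-forward svc/tr-server 8090:8090 &

# Provision a log tree over gRPC instead of the removed REST API
createtree --admin_server=localhost:8090
```

The command prints the ID of the newly created tree, which the personality and signer then use.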

Related

Wordpress+Mysql deployment doesn't get an IP address from another pool

My deployment is WordPress and MySQL. I defined a new pool and a new namespace, and I was trying to make my pods get an IP address from this newly defined pool, but they never get one.
My namespace YAML file:
apiVersion: v1
kind: Namespace
metadata:
  name: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
My IPPool definition:
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: ippool
spec:
  cidr: 192.169.0.0/24
  blockSize: 29
  ipipMode: Always
  natOutgoing: true
EOF
My mysql deployment is
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 31066
  selector:
    app: wordpress
    tier: mysql
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  namespace: produccion
  annotations:
    cni.projectcalico.org/ipv4pools: ippool
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: PASS
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: "/var/lib/mysql"
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
My wordpress deployment
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
    nodePort: 30999
  selector:
    app: wordpress
    tier: frontend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: produccion
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress
        name: wordpress
        env:
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_HOST
          value: IP_Address:31066
        - name: WORDPRESS_DB_USER
          value: root
        - name: WORDPRESS_DB_PASSWORD
          value: PASS
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: "/var/www/html"
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-persistent-storage
Additionally, I also have two PV yaml files, one for each service (mysql and wordpress).
When I apply any of the deployments, the pods stay in ContainerCreating and the IP column stays at none.
produccion wordpress-mysql-74578f5d6d-knzzh 0/1 ContainerCreating 0 70m <none> dockerc8.tecsinfo-ec.com
If I describe this pod I get the following errors:
Normal Scheduled 88s default-scheduler Successfully assigned produccion/wordpress-mysql-74578f5d6d-65jvt to dockerc8.tecsinfo-ec.com
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cdb1460246562cac11a57073ab12489dc169cb72aa3371cb2e72489544812a9b" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "672a2c35c2bb99ebd5b7d180d4426184613c87f9bc606c15526c9d472b54bd6f" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
Warning FailedCreatePodSandBox <invalid> kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de4d7669206f568618a79098d564e76899779f94120bddcee080c75f81243a85" network for pod "wordpress-mysql-74578f5d6d-65jvt": networkPlugin cni failed to set up pod "wordpress-mysql-74578f5d6d-65jvt_produccion" network: invalid character 'i' looking for beginning of value
I was using some guides from the Internet, like this one: https://www.projectcalico.org/calico-ipam-explained-and-enhanced/
but even this doesn't work in my lab.
I am pretty new to Kubernetes and I don't know what else to do or check.
Your error is due to an invalid value in the YAML, per the Project Calico documentation (see "Using Kubernetes annotations").
You need to provide a list of IP pools as the annotation value instead of a single string. The following snippet should work for you:
cni.projectcalico.org/ipv4pools: "[\"ippool\"]"
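The "invalid character 'i' looking for beginning of value" message is a JSON parse error: Calico expects the annotation value to be a JSON list, so the bare string `ippool` fails at its first character. A quick illustration in Python (standing in for Calico's JSON parser) of why the bare value fails while the list form parses:

```python
import json

# The bare annotation value is not valid JSON: parsing stops at 'i',
# which is exactly what the CNI error message reports.
try:
    json.loads("ippool")
except json.JSONDecodeError as e:
    print("bare value fails:", e)

# The quoted-list form is valid JSON and yields a list of pool names.
pools = json.loads('["ippool"]')
print(pools)
```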

Kubernetes Inject Env Variable with File in a Volume

Just for training purposes, I'm trying to inject these env variables via this ConfigMap into my WordPress and MySQL app by using a file in a volume.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
  namespace: ex2
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
  namespace: ex2
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
        ports:
        - containerPort: 3306
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
  namespace: ex2
spec:
  ports:
  - nodePort: 30999
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
  namespace: ex2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress
        name: wordpress
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
        ports:
        - containerPort: 80
          protocol: TCP
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
When I deploy the app the mysql pod fails with this error:
kubectl -n ex2 logs mysql-56ddd69598-ql229
2020-12-26 19:57:58+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD
I don't understand, because I have specified everything in my ConfigMap. I have also tried using envFrom and single env variables, and that works just fine. I'm only having an issue with a file in a volume.
@DavidMaze is correct; you're mixing two useful features.
Using test.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wordpress-mysql
data:
  wordpress.conf: |
    WORDPRESS_DB_HOST mysql
    WORDPRESS_DB_USER admin
    WORDPRESS_DB_PASSWORD "1234"
    WORDPRESS_DB_NAME wordpress
    WORDPRESS_DB_PREFIX wp_
  mysql.conf: |
    MYSQL_DATABASE wordpress
    MYSQL_USER admin
    MYSQL_PASSWORD "1234"
    MYSQL_RANDOM_ROOT_PASSWORD "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: busybox
        name: test
        args:
        - ash
        - -c
        - while true; do sleep 15s; done
        volumeMounts:
        - name: config
          mountPath: "/etc/env"
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: wordpress-mysql
Then:
kubectl apply --filename=./test.yaml
kubectl exec --stdin --tty deployment/test -- ls /etc/env
mysql.conf wordpress.conf
kubectl exec --stdin --tty deployment/test -- more /etc/env/mysql.conf
MYSQL_DATABASE wordpress
MYSQL_USER admin
MYSQL_PASSWORD "1234"
MYSQL_RANDOM_ROOT_PASSWORD "1"
NOTE: the files are missing (and should probably include) an `=` between each variable and its value, e.g. MYSQL_DATABASE=wordpress
So, what you have is a ConfigMap that represents 2 files (mysql.conf and wordpress.conf); if you use e.g. busybox and mount the ConfigMap as a volume, you can see that it includes 2 files and that the files contain the configurations.
So, if you can run e.g. WordPress or MySQL and pass a configuration file to them, you're good. But what you probably want is to reference the ConfigMap entries as environment variables, per @DavidMaze's suggestion, i.e. run Pods with environment variables set from the ConfigMap entries:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
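For example (a hypothetical sketch, not the poster's manifest: the ConfigMap name and keys here are made up), the trick is to make the ConfigMap keys the environment variable names themselves, and then inject them with `envFrom`:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-env   # hypothetical name
data:
  MYSQL_DATABASE: wordpress
  MYSQL_USER: admin
  MYSQL_PASSWORD: "1234"
  MYSQL_RANDOM_ROOT_PASSWORD: "1"
---
# Then, in the Deployment's container spec, every key becomes an env var:
#   envFrom:
#   - configMapRef:
#       name: mysql-env
```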
I would really suggest not using a ConfigMap for WordPress. You can directly use the official repo https://github.com/docker-library/wordpress/tree/master/php7.4/apache; it has a docker-entrypoint.sh which you can use to inject the env values from the deployment.yaml directly, or if you use Vault that works perfectly too.

MySQL router in kubernetes as a service

I want to deploy MySQL-router in Kubernetes working as a service.
My plan:
Deploy MySQL Router inside k8s and expose it as a service using a LoadBalancer (MetalLB).
Applications running inside k8s see the mysql-router service as their database.
MySQL Router sends application data to the external InnoDB cluster.
I tried to deploy using:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "192.168.123.130"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root@123"
        imagePullPolicy: Always
        ports:
        - containerPort: 6446
192.168.123.130 is MySQL cluster Master IP.
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - protocol: TCP
    port: 6446
  type: LoadBalancer
  loadBalancerIP: 192.168.123.123
When I check mysql-router container logs, I see something like this:
Waiting for mysql server 192.168.123.130 (0/12)
Waiting for mysql server 192.168.123.130 (1/12)
Waiting for mysql server 192.168.123.130 (2/12)
....
After setting my external MySQL cluster info in the deployment, I get the following errors:
Successfully contacted mysql server at 192.168.123.130. Checking for cluster state.
Can not connect to database. Exiting.
I cannot deploy mysql-router without specifying MYSQL_HOST. What am I missing here?
My ideal deployment
Of course you have to provide the MySQL host. You can do this with Kubernetes DNS, which is set up via the Services.
MySQL Router is middleware that provides transparent routing between your application and any backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers.
Examples
For the examples below I use dynamic volume provisioning for data using openebs-hostpath, and a StatefulSet for the MySQL server.
Deployment
MySQL Router :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "mariadb-galera.galera-cluster"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root@123"
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
MySQL Server
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: galera-cluster
  name: mariadb-galera
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  selector:
    matchLabels:
      app: mariadb-galera
  serviceName: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      restartPolicy: Always
      securityContext:
        fsGroup: 1001
        runAsUser: 1001
      containers:
      - command:
        - bash
        - -ec
        - |
          # Bootstrap from the indicated node
          NODE_ID="${MY_POD_NAME#"mariadb-galera-"}"
          if [[ "$NODE_ID" -eq "0" ]]; then
              export MARIADB_GALERA_CLUSTER_BOOTSTRAP=yes
              export MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP=no
          fi
          exec /opt/bitnami/scripts/mariadb-galera/entrypoint.sh /opt/bitnami/scripts/mariadb-galera/run.sh
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: BITNAMI_DEBUG
          value: "false"
        - name: MARIADB_GALERA_CLUSTER_NAME
          value: galera
        - name: MARIADB_GALERA_CLUSTER_ADDRESS
          value: gcomm://mariadb-galera.galera-cluster
        - name: MARIADB_ROOT_PASSWORD
          value: root@123
        - name: MARIADB_DATABASE
          value: my_database
        - name: MARIADB_GALERA_MARIABACKUP_USER
          value: mariabackup
        - name: MARIADB_GALERA_MARIABACKUP_PASSWORD
          value: root@123
        - name: MARIADB_ENABLE_LDAP
          value: "no"
        - name: MARIADB_ENABLE_TLS
          value: "no"
        image: docker.io/bitnami/mariadb-galera:10.4.13-debian-10-r23
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - bash
            - -ec
            - |
              exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
          failureThreshold: 3
          initialDelaySeconds: 120
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: mariadb-galera
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        - containerPort: 4567
          name: galera
          protocol: TCP
        - containerPort: 4568
          name: ist
          protocol: TCP
        - containerPort: 4444
          name: sst
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - bash
            - -ec
            - |
              exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        volumeMounts:
        - mountPath: /opt/bitnami/mariadb/.bootstrap
          name: previous-boot
        - mountPath: /bitnami/mariadb
          name: data
        - mountPath: /opt/bitnami/mariadb/conf
          name: mariadb-galera-config
      volumes:
      - emptyDir: {}
        name: previous-boot
      - configMap:
          defaultMode: 420
          name: my.cnf
        name: mariadb-galera-config
  volumeClaimTemplates:
  - apiVersion: v1
    metadata:
      name: data
    spec:
      storageClassName: openebs-hostpath
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
Services
MySQL Router Service
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - protocol: TCP
    port: 3306
  type: LoadBalancer
  loadBalancerIP: 192.168.123.123
MySQL Service
apiVersion: v1
kind: Service
metadata:
  namespace: galera-cluster
  name: mariadb-galera
  labels:
    app: mariadb-galera
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mariadb-galera
---
apiVersion: v1
kind: Service
metadata:
  namespace: galera-cluster
  name: mariadb-galera-headless
  labels:
    app: mariadb-galera
spec:
  type: ClusterIP
  ports:
  - name: galera
    port: 4567
  - name: ist
    port: 4568
  - name: sst
    port: 4444
  selector:
    app: mariadb-galera
What you need is (1) communication from App1-x to the MySQL router, and (2) a VIP/LB from the MySQL router to the external MySQL instances.
Let's start with (2), the configuration of the VIP for the MySQL instances. You will need a Service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql-service
subsets:
- addresses:
  - ip: 192.168.123.130
  - ip: 192.168.123.131
  - ip: 192.168.123.132
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
You don't need a LoadBalancer here, because you will connect only from inside the cluster; use ClusterIP instead.
(1) Create the MySQL Router Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-router
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
        version: v1
    spec:
      containers:
      - name: mysql-router
        image: mysql/mysql-router
        env:
        - name: MYSQL_HOST
          value: "mysql-service"
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_USER
          value: "root"
        - name: MYSQL_PASSWORD
          value: "root@123"
        imagePullPolicy: Always
        ports:
        - containerPort: 6446
To connect to the external MySQL instances through the VIP/ClusterIP, use the mysql-service Service. If the Deployment and Service are in the same namespace, use mysql-service as the hostname, or use the ClusterIP from `kubectl get service mysql-service`.
apiVersion: v1
kind: Service
metadata:
  name: mysql-router-service
  namespace: mysql-router
  labels:
    app: mysql-router
spec:
  selector:
    app: mysql-router
  ports:
  - name: mysql
    port: 6446
    protocol: TCP
    targetPort: 6446
  type: ClusterIP
Within the Kubernetes cluster you can connect to the mysql-router-service hostname in the same namespace, or to mysql-router-service.namespace.svc from other namespaces; from outside the Kubernetes cluster, use a NodePort or LoadBalancer.
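For instance, from a pod in another namespace, a client could reach the router like this (a sketch; it assumes a `mysql` client binary is available in that pod):

```shell
# Fully-qualified service DNS name: <service>.<namespace>.svc.<cluster-domain>
mysql -h mysql-router-service.mysql-router.svc.cluster.local -P 6446 -u root -p
```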

How to connect a Cloud SQL instance to a SQL cluster?

I have the deployment yaml file on the cluster, plus the connection name and the public IP address of the SQL instance. What should I add, and where, to connect the instance and the cluster? I want to be able to add something to the SQL cluster and have it automatically saved to the instance, and vice versa.
This is the deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp:
  generation: 1
  labels:
    app: mysql
  name: mysql
  namespace: default
  resourceVersion: "1420"
  selfLink:
  uid:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mysql
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mysql
    spec:
      containers:
      - env:
        - name:
          valueFrom:
            secretKeyRef:
              key:
              name: mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        name: mysql
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql
          name:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name:
        persistentVolumeClaim:
          claimName:
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime:
    lastUpdateTime:
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime:
    lastUpdateTime:
    message: ReplicaSet "mysql" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
You should use the Cloud SQL proxy and add it as a sidecar to your application making queries to the Cloud SQL instance. Google has a suggested best practice found here.
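A minimal sketch of that sidecar pattern, added next to the application container in the Pod spec (the proxy image tag and the project:region:instance connection name below are placeholders you must replace with your own):

```yaml
# Extra container in the Deployment's spec.template.spec.containers (sketch)
- name: cloud-sql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.33.2   # placeholder version
  command:
    - /cloud_sql_proxy
    - "-instances=project:region:instance=tcp:3306"  # placeholder connection name
```

The application then connects to 127.0.0.1:3306, and the proxy handles authenticated, encrypted connectivity to the Cloud SQL instance.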

How to set INGRESS_HOST and INGRESS_PORT and access GATEWAY_URL

How do I set INGRESS_HOST and INGRESS_PORT for a sample YAML file whose Istio sidecar was created using automatic sidecar injection?
I am using Windows 10 with a Docker - Kubernetes - Istio configuration, with kubectl and istioctl installed.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit my sample service.
Please first verify that your selector labels are correct and that your Service is connected to the Deployment's Pods.
You have 'version: v1' and 'version: v2' in the Deployment selectors but not in the Service; that's why the Service is returning 503 Unavailable. If the issue were in the Pod or Service, it would give a 502 Bad Gateway or similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000 # <-- change
Welcome to SO, @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside manifest files (static files) that you use to create Deployments/Pods on a K8S cluster. They serve just as placeholders to give end-users easy access, from outside, to an application deployed on an Istio-enabled Kubernetes cluster. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during creation of cluster resources, and available only in live objects.
Where the ingress takes its IP address from, you can read in the official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 => 5000).
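For reference, on a cluster where the ingress gateway receives an external load-balancer IP, the Istio documentation derives these values from the live istio-ingressgateway Service roughly like this:

```shell
# Read the live Service object to fill in the placeholders
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

# Then hit the sample route exposed by the Gateway/VirtualService
curl -s "http://${GATEWAY_URL}/hello"
```

On environments without a load balancer (e.g. Docker Desktop), use the node IP and the Service's NodePort instead.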