I noticed that when I create an Ingress on GKE, several annotations are automatically generated, like so:
annotations:
  ingress.kubernetes.io/backends: '{"k8s-be-30266--edf23f6631e3474e":"HEALTHY"}'
  ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nginx-ingress--edf23f6631e3474e
  ingress.kubernetes.io/target-proxy: k8s-tp-default-nginx-ingress--edf23f6631e3474e
  ingress.kubernetes.io/url-map: k8s-um-default-nginx-ingress--edf23f6631e3474e
Is there any way these annotations can be viewed? That would help me understand them better. TIA
Shaun
For this you need to use kubectl get with the -o yaml flag.
For example:
kubectl get deployment myapp-deployment -o yaml
The result in the terminal will be YAML with the whole configuration of the deployment myapp-deployment, including its annotations.
You can find more useful commands in the official documentation.
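For the Ingress from the question, the same pattern should work; the resource name nginx-ingress below is inferred from the forwarding-rule annotation above:
kubectl get ingress nginx-ingress -o yaml
kubectl get ingress nginx-ingress -o jsonpath='{.metadata.annotations}'
The second form uses JSONPath to print only the annotations map.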
I have deployed ipfs-cluster as a StatefulSet with 3 replicas in an Azure Kubernetes cluster. I created a LoadBalancer service for ipfs and used the load balancer service IP to add files using ipfs-cluster-ctl, like below:
ipfs-cluster-ctl --host /ip4/20.94.98.25/tcp/9094 add test.txt
The above command provides a CID as output. A sample output looks like:
added Qmeeyj7hldjsj9XCoLSK6dY7ZTVTt8YcjfHXAuTzhCrz test.txt
Now I have created an ingress using HAProxy for the ipfs-cluster service and tried to access the added files using the ingress URL. A sample URL looks like:
http://ipfs.testing.example.com/ipfs/Qmeeyj7hldjsj9XCoLSK6dY7ZTVTt8YcjfHXAuTzhCrz
The above URL works fine and shows the file content of test.txt.
But now I need to use the ingress URL instead of the load balancer IP to add files using ipfs-cluster-ctl. I can't find any reference on how to achieve this. Can anyone please guide me on adding files to IPFS using the ingress URL?
Thanks in Advance!
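One possible approach (untested, and not confirmed in this thread): the --host flag takes a multiaddress, and multiaddresses can reference DNS names. Assuming an ingress rule forwards this hostname to the cluster's REST API port (9094) rather than to the gateway, something like the following might work; the hostname is simply the one from the question:
ipfs-cluster-ctl --host /dns4/ipfs.testing.example.com/tcp/80/http add test.txt
Here /dns4 resolves the hostname and /http tells the client to speak plain HTTP on that port.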
I'm trying to put together a helm chart for provisioning namespaces/projects in OpenShift.
Helm version is 3.9.3
The templates folder has YAML files for the namespace, compute quota, docker pull secret, and a rolebinding for a service account.
The testvalues.yaml file is very simple:
namespace:
  name: "mytest"
  team: "DevOps"
  description: "Test Namespace Created with Helm"
When I try to run helm upgrade --install testnamespace ./namespaceChart --values testvalues.yaml, I get the error "namespaces 'mytest' not found".
However, if I remove the quota, secret, and rolebinding files from the templates directory (leaving only namespace.yaml) and run the same command, it works fine and an empty namespace is created. I then re-add the other resource YAML files, run the same command a third time, and it works, adding the missing resources accordingly.
The namespace is supposed to be created first, correct? It seems like it's not creating the namespace correctly, or not waiting until it is done before trying to create the other resources.
I've tried adding the --create-namespace option to the command and that doesn't work either.
Is there something I'm missing? Can I target only the namespace.yaml file on the first run, then just run the command again to complete the rest?
Realized my problem while typing this question up.
My namespace yaml was using:
kind: Project
apiVersion: project.openshift.io/v1
Because that is what our current project spaces show when I inspect their YAML in the Console UI.
Once I switched to:
kind: Namespace
apiVersion: v1
Everything gets set up perfectly fine in one shot. I'm guessing this is because Helm doesn't recognize the "Project" kind as equivalent to a namespace, so it doesn't place it at the top of the creation order; hence the "not found" error, because the quota was actually the first resource Helm tried to build.
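For reference, a minimal sketch of what the working templates/namespace.yaml could look like with the values file above; the team label and the openshift.io/description annotation are illustrative choices, not taken from the question:
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace.name }}
  labels:
    team: {{ .Values.namespace.team | quote }}
  annotations:
    openshift.io/description: {{ .Values.namespace.description | quote }}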
I defined a template (let's call it template.yaml) with a service, deploymentconfig, buildconfig and imagestream, applied it with oc apply -f template.yaml, and ran oc new-app app-name to create a new app from the template. What the app basically does is build a Node.js application with S2I, write it to a new ImageStream, and deploy it to a pod with the necessary service exposed.
Now I've decided to make some changes to the template and have applied it on OpenShift. How do I go about ensuring that all resources in the said template also get reconfigured, without having to delete all resources associated with the template and recreate them?
I think the template is only used to create the related resources the first time. Even if you modify the template, it's not associated with the resources it created, so you have to recreate or modify each resource that changed.
But you can simply modify all resources created by the template using the following command:
# oc apply -f template_modified.yaml | oc replace -f -
I hope it helps you.
The correct command turned out to be:
$ oc apply -f template_modified.yaml
$ oc process -f template_modified.yaml | oc replace -f -
That worked for me on OpenShift 3.9.
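As a usage note: if the template declares parameters, they can be passed to oc process in the same pipeline; the parameter name APP_NAME here is hypothetical:
oc process -f template_modified.yaml -p APP_NAME=app-name | oc replace -f -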
Now I want to run a machine learning pod in OpenShift, but I need to upload some data, like a training set, to the pod, and preferably to a PV for persistence. Are there any APIs that would help with this?
Attach the PV to the pod. Then you can use kubectl cp.
For example
kubectl cp /tmp/foo_dir <some-pod>:/your_pv/bar_dir
/your_pv should be specified in the Pod's spec.volumeMounts to use your PVC, as in the sketch below.
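For illustration, a minimal Pod spec matching the kubectl cp destination above; the pod name, image, and PVC name are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: ml-pod
spec:
  containers:
  - name: trainer
    image: registry.example.com/ml-image:latest  # placeholder image
    volumeMounts:
    - name: training-data
      mountPath: /your_pv  # matches the kubectl cp destination above
  volumes:
  - name: training-data
    persistentVolumeClaim:
      claimName: training-data-pvc  # assumed existing PVC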
We have a Java web application that is supposed to be moved from a regular deployment model (installed on a server) into an OpenShift environment (deployed as a docker container). Currently this application consumes a set of Java key stores (.jks files) holding client certificates for communicating with third-party web interfaces. We have one key store per interface.
These jks files get manually deployed on production machines and are occasionally updated when third-party certificates need to be updated. Our application has a setting with a path to the key store files and on startup it will read certificates from them and then use them to communicate with the third-party systems.
Now, when moving to an OpenShift deployment, we have one docker image with the application that is going to be used for all environments (development, test and production). All configuration is given as environment variables. However, we cannot pass jks files as environment variables; they need to be mounted into the docker container's file system.
As these certificates are a secret we don't want to bake them into the image. I scanned the OpenShift documentation for some clues on how to approach this and basically found two options: using Secrets or mounting a persistent volume claim (PVC).
Secrets don't seem to work for us, as they are pretty much just key-value pairs that you can mount as a file or have handed in as environment variables. They also have a size limit. Using a PVC would theoretically work; however, we'd need some way to get the JKS files into that volume in the first place. A simple way would be to just start a shell container mounting the PVC and copy the files into it manually using the OpenShift command line tools, but I was hoping for a somewhat less manual solution.
Have you found a clever solution to this or a similar problem where you needed to get files into a container?
It turns out that I misunderstood how secrets work. They are indeed key-value pairs that you can mount as files. The value can, however, be any base64-encoded binary, which will be mapped as the file contents. So the solution is to first encode the contents of the JKS file to base64:
cat keystore.jks | base64
Then you can put this into your secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: my-namespace
data:
  keystore.jks: "<base 64 from previous command here>"
Finally you can mount this into your docker container by referencing it in the deployment configuration:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
      - name: "my-container"
        ...
        volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
      volumes:
      - name: secrets
        secret:
          secretName: "my-secret"
          items:
          - key: keystore.jks
            path: keystore.jks
This will mount the secret volume secrets at /mnt/secrets and make the entry with the name keystore.jks available as the file keystore.jks under /mnt/secrets.
I'm not sure if this is really a good way of doing this, but it is at least working here.
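One quick sanity check, assuming the pod is running; the pod name is a placeholder:
oc rsh <pod-name> ls -l /mnt/secrets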
You can add and mount the secrets as stated by Jan Thomä, but it's easier like this, using the oc command line tool:
./oc create secret generic crnews-keystore --from-file=keystore.jks=$HOME/git/crnews-service/src/main/resources/keystore.jks --from-file=truststore.jks=$HOME/git/crnews-service/src/main/resources/truststore.jks --type=opaque
This can then be added via the UI: Applications -> Deployments -> [your deployment name] -> "Add config files", where you can choose which secret you want to mount where.
Note that the name=value pairs (e.g. truststore.jks=) will be used as filename=base64-decoded content.
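To verify the round trip, you can pull an entry back out of the secret and decode it; note the escaped dot in the JSONPath key, and the output filename is arbitrary:
oc get secret crnews-keystore -o jsonpath='{.data.keystore\.jks}' | base64 -d > keystore_check.jks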
My generated base64 was multiline, and I was getting the same error. The trick is to use the -w0 argument to base64 so that the whole encoded output is on one line!
base64 -w0 ssl_keystore.jks > test
The above will create a file named test containing the base64 on a single line. Copy and paste it into a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: staging-ssl-keystore-jks
  namespace: staging-space
type: Opaque
data:
  keystore.jks: your-base64-in-one-line
Building upon what both Frischling and Jan Thomä said, and in agreement with Frischling, as his way was easier and also took care of the trust cert keystores: after adding the keystores as a secret, go to Applications -> Deployments -> [your deployment name], click the Environment link, and add the following system properties:
Name: JAVA_OPTS_APPEND
Value: -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks
As indicated, this effectively appends the keystore file paths and passwords to the Java options used by the application, for example JBoss/WildFly or Tomcat.
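If you prefer the command line over the UI, the same variable should also be settable with oc set env on the deployment config; the dc name is a placeholder:
oc set env dc/<your-deployment> JAVA_OPTS_APPEND='-Djavax.net.ssl.keyStore=/mnt/keystores/your_cert_key_store.jks -Djavax.net.ssl.keyStorePassword=changeme -Djavax.net.ssl.trustStore=/mnt/keystores/your_ca_key_store.jks -Djavax.net.ssl.trustStorePassword=changeme'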