I want to create a MachineSet using the vSphere provider. In the YAML file I see:
credentialsSecret:
  name: vsphere-cloud-credentials
What should this secret look like? Should it be a key/value secret with the vSphere login as the key and the password as the value? What if my login contains a "#"? For the field data[test#test] I get the error:
Invalid value: "test#test": a valid config key must consist of alphanumeric characters, '-', '_' or '.' (e.g. 'key.name', or 'KEY_NAME', or 'key-name', regex used for validation is '[-._a-zA-Z0-9]+')
kind: Secret
apiVersion: v1
metadata:
  name: vsphere-cloud-credentials
  namespace: openshift-machine-api
  annotations:
    cloudcredential.openshift.io/credentials-request: openshift-cloud-credential-operator/openshift-machine-api-vsphere
data:
  vsphere-api-url.domain.com.password: xxxxx (base64-encoded)
  vsphere-api-url.domain.com.username: xxxxx (base64-encoded)
type: Opaque
I think it should look something like this; this is what the secret looks like if you install a cluster with the vSphere provider and credentials in the install-config.yaml. May I ask why you don't configure this at install time with the vSphere provider?
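To answer the "#" part directly: the error appears because the login was used as a data key. Keys must match the regex in the error message, but values are base64-encoded and may contain any character, so a "#" in the username is fine as long as it lives in the value. A minimal sketch of creating the secret this way, assuming your vCenter hostname is vsphere-api-url.domain.com (replace with your own):
oc create secret generic vsphere-cloud-credentials \
  --namespace=openshift-machine-api \
  --from-literal='vsphere-api-url.domain.com.username=test#test' \
  --from-literal='vsphere-api-url.domain.com.password=my-password'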
I'm trying to use JSON Patch on one of my Kubernetes yaml file.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
    - projectRef:
        external: "projects/12345"
    - projectRef:
        external: "projects/123456"
    restrictedServices:
    - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
      - "storage.googleapis.com"
      - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure that my path is not correct because it's a nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using so I can't test this, but I think this should work:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/01234567"}}}]'
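For what it's worth, the original command failed for two reasons: in JSON Patch, "-" may only appear as the final path token (it means "append to this array"), and the value has to be the entire array element, not a fragment of one. So an equivalent append that doesn't hard-code the index would be something like:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]'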
I have been racking my brain over the following:
I have a set of BuildConfigs that build images and create imagestreams for them in the "openshift" namespace. This gives me, for example, the netclient-userspace imagestream.
krist#MacBook-Pro netmaker % oc get is netclient-userspace
NAME IMAGE REPOSITORY TAGS UPDATED
netclient-userspace image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace latest About an hour ago
What I have however not been able to figure out is how to use this imagestream in a deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
  - name: netclient
    image: netclient-userspace:latest
When I deploy this I get errors...
Failed to pull image "netclient-userspace:latest": rpc error: code =
Unknown desc = reading manifest latest in
docker.io/library/netclient-userspace: errors: denied: requested
access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the imagestream here?
When using an ImageStreamTag for a Deployment image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in a Deployment with the value of an ImageStreamTag (and to redeploy it when the ImageStreamTag changes in the future).
Importantly, note both the annotation and the image: ' ' value, with its explicit space character, in the YAML below.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(@.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
      - command:
        - ...
        image: ' '
        name: netclient
      ...
I will also mention that, in order to pull images from different namespaces, it may be required to authorize the Deployment's service account to do so: OpenShift Docs.
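For completeness, a sketch of that grant, assuming the pod runs under the default service account in kvb-netclient-test and the imagestream lives in the openshift namespace (run this as a user with rights on that namespace):
oc policy add-role-to-user system:image-puller system:serviceaccount:kvb-netclient-test:default --namespace=openshift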
As an admin with a lot of parameterized OpenShift templates, I am struggling to create parameterized Secret objects in the templates of type kubernetes.io/dockerconfigjson or kubernetes.io/dockercfg, so that the secret can be used for docker pulls.
Challenge: Everything is pre-base64-encoded in JSON format for a normal dockerconfigjson setup, and I'm not sure how to change it.
The ask: how to create a Secret template that takes parameters ${DOCKER_USER}, ${DOCKER_PASSWORD}, ${DOCKER_SERVER}, and ${DOCKER_EMAIL}, and then creates the actual secret that can be used to pull docker images from a private/secured docker registry.
This is to replace command-line "oc create secret docker-registry ..." techniques by putting them in a template file stored in GitLab/GitHub, for a GitOps-style deployment pattern.
Thanks!
The format of the docker configuration secrets can be found in the documentation (or in your cluster via oc export secret/mysecret) under the Using Secrets section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker-config.json>
One method would be to accept the pre-base64-encoded contents of the JSON file in your template parameters and stuff them into the data section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ${BASE64_DOCKER_JSON}
Another method would be to use the stringData field of the secret object. As noted on the same page:
Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: ${REGULAR_DOCKER_JSON}
The format of the actual value of the .dockerconfigjson key is the same as the contents of the .docker/config.json file. So in your specific case you might do something like:
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: '{"auths": {"${REGISTRY_URL}": {"auth": "${BASE64_USERNAME_COLON_PASSWORD}"}}}'
Unfortunately, the template language OpenShift uses isn't quite powerful enough to base64-encode the parameter values for you, so you can't escape encoding the username:password pair outside of the template itself, but your CI/CD tooling should be more than capable of doing this with raw username/password strings.
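If it helps, here is a minimal sketch of a complete Template built around that last variant (the template name and parameter names are illustrative, not a fixed convention):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: docker-registry-secret
parameters:
- name: REGISTRY_URL
  required: true
- name: BASE64_USERNAME_COLON_PASSWORD
  required: true
objects:
- apiVersion: v1
  kind: Secret
  metadata:
    name: aregistrykey
  type: kubernetes.io/dockerconfigjson
  stringData:
    .dockerconfigjson: '{"auths": {"${REGISTRY_URL}": {"auth": "${BASE64_USERNAME_COLON_PASSWORD}"}}}'
You would process it with something like:
oc process -f docker-registry-secret.yaml -p REGISTRY_URL=registry.example.com -p BASE64_USERNAME_COLON_PASSWORD="$(echo -n 'user:pass' | base64)" | oc apply -f -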
I have configured a webhook on Bitbucket Server that points to OpenShift.
I want to get the git repo URL, git branch, etc. from the webhook payload in my inline Jenkinsfile, but I don't know how to retrieve them. (The webhook does trigger the build.)
Is it possible?
Here is my BuildConfig
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  labels:
    application: spring-demo
    template: openjdk18-web-basic-s2i
  name: spring-demo
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'spring--demo:latest'
  runPolicy: Serial
  source:
    type: None
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
            agent {
                label 'maven'
            }
            stages {
                stage('Build') {
                    steps {
                        sh "echo ${env.BRANCH_NAME}"  // <<<<------------- this is null
                    }
                }
            }
        }
    type: JenkinsPipeline
  triggers:
  - bitbucket:
      secretReference:
        name: yoyo
    type: Bitbucket
--Thanks.
According to this Stack Overflow question and some Jenkins documentation, you need to install the git plugin on your Jenkins instance. Then you will have the variables GIT_BRANCH and GIT_URL available to you via ${env.GIT_BRANCH} and ${env.GIT_URL}. Make sure your branch names do not have slashes in them (e.g. release/1.2.3), as this confuses a lot of key tools in Jenkins.
Alternatively, as a last resort, in a Jenkins scripted pipeline you can set your own environment variables via parameters or defaults (like "master") if you know you won't change your branches often.
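A minimal sketch of that fallback in the inline Jenkinsfile, assuming the git plugin is installed (the 'master' default is just an illustration):
pipeline {
    agent {
        label 'maven'
    }
    stages {
        stage('Build') {
            steps {
                script {
                    // GIT_BRANCH/GIT_URL come from the git plugin; fall back when unset
                    def branch = env.GIT_BRANCH ?: 'master'
                    def repo = env.GIT_URL ?: 'unknown'
                    sh "echo Building ${branch} from ${repo}"
                }
            }
        }
    }
}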
We use Beanstalk to deploy Node applications, and it works very well. I've created a couple of config files in an .ebextensions directory to apply configuration to our apps when we load them up. Again, this mostly works well. The one thing that does not is defining the application health check URL; I can't get it to go. One odd thing about it: it seems to be the only parameter I have come across so far that has spaces in it, and I'm wondering about that. I have tried enclosing the value in quotes, just to see if that is the problem, but it still doesn't work.
Has anyone done this before, and can you tell me whether it works, and whether there is something syntactically incorrect here? As I said, the rest of the params get set correctly in Beanstalk; just the last one doesn't. Note that #environment# gets replaced by a grunt script before this gets deployed.
Here's the config file:
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: NODE_ENV
    value: #environment#
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: NodeVersion
    value: 0.10.10
  - namespace: aws:autoscaling:trigger
    option_name: LowerThreshold
    value: 40
  - namespace: aws:autoscaling:trigger
    option_name: MeasureName
    value: CPUUtilization
  - namespace: aws:autoscaling:trigger
    option_name: UpperThreshold
    value: 60
  - namespace: aws:autoscaling:trigger
    option_name: Unit
    value: Percent
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /load_balance_test
Adding this worked for me:
# .ebextensions/healthcheckurl.config
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /health
  - namespace: aws:elasticbeanstalk:environment:process:default
    option_name: HealthCheckPath
    value: /health
I discovered the second setting by running eb config, which gives a nice overview of the environment settings that can be overridden with option_settings in .ebextensions/yet-another.config files.
The spaces in this property name are weird, but it works when used with the alternative shorthand syntax for options:
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /
I use CloudFormation for EB, and in CF the syntax for that parameter is very strange. If that config file works the same as CF, the following string should work for you:
HTTP:80/load_balance_test
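For reference, a sketch of where that string goes in a CloudFormation template; the logical name is illustrative, and other required properties of the resource are omitted for brevity:
SampleEnvironment:
  Type: AWS::ElasticBeanstalk::Environment
  Properties:
    # ApplicationName and other required properties omitted
    OptionSettings:
      - Namespace: aws:elasticbeanstalk:application
        OptionName: Application Healthcheck URL
        Value: HTTP:80/load_balance_test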
If you're using Terraform, then just make sure you have spaces in the name, and it will work fine:
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/api/health"
}
I just tried it, and it worked for me. Only the format specified in the original question worked for me, i.e.:
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /api/v1/health/
You also might want to set the health check type to ELB instead of the default EC2.
This is how I configured mine:
$ cat .ebextensions/0090_healthcheckurl.config
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: "ELB"
      HealthCheckGracePeriod: "600"
option_settings:
  - namespace: aws:elasticbeanstalk:application
    option_name: Application Healthcheck URL
    value: /_status