I would like to add some labels (commit hash, branch name, ...) to images I create using the OpenShift source-to-image binary build. These labels will naturally have different values for every build.
Currently oc start-build does not even seem to support -e flags to add environment variables for binary builds. (It works for Git source; is that a bug?)
And binary builds do not support --build-arg to pass arguments to the Dockerfile.
The only way I was able to accomplish this was to call oc set env bc [build-name] and then start the build, using LABEL instructions in the Dockerfile that take their values from environment variables.
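For illustration, roughly what I mean (the names are just examples, and it relies on the variable being visible to the Dockerfile at build time):
oc set env bc/my-app GIT_COMMIT=abc1234
Then in the Dockerfile:
# hypothetical snippet: bake the injected value into a label
ARG GIT_COMMIT
LABEL git-commit=$GIT_COMMIT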
My question is: isn't there a better way to do this, ideally one where the Dockerfile does not have to change? Doesn't s2i support passing --label to the docker build it runs behind the scenes?
Thank you.
Do you want to add an environment variable when you run oc start-build? I ask because you mentioned oc set env bc [build-name].
If so, you can use the --env=<key>=<value> option; refer to Starting a Build for more details.
$ oc start-build <buildconfig_name> --env=<key>=<value>
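For the binary build in the question, that could look like this (the BuildConfig name and variable are illustrative; whether the value reaches a binary build on your version is exactly what's worth verifying):
$ oc start-build my-app --from-dir=. --env=GIT_COMMIT=$(git rev-parse HEAD)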
I hope this helps.
I've found version 0.6.0 of the Operator Framework's Operator Lifecycle Manager (OLM) to be lacking and see that 0.12.0 is available with lots of new updates. How can I upgrade to that version?
Also, what do I need to consider regarding this upgrade? Will I lose any integrations, etc.?
One thing that needs to be considered is that in OpenShift 3, OLM runs from a namespace called operator-lifecycle-manager. In future versions that becomes simply olm. Some things to consider:
Do you have operators running right now, and if you make this change, will your catalog names change? That will need to be reflected in your subscriptions.
Do you want to change any of the default install configuration?
Look into values.yaml to configure your OLM
Look into the yaml files in step 2 and adjust them if needed.
1) First, turn off OLM 0.6.0 or whatever version you might have.
You can delete that namespace or, as I did, stop the deployments within it and scale the ReplicaSets down to 0 pods, which effectively turns OLM 0.6.0 off.
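A sketch of the scale-down approach (namespace as in OpenShift 3, per the note above; check your actual namespace first):
# scale every deployment in the OLM namespace down to zero replicas
oc scale deployment --all --replicas=0 -n operator-lifecycle-manager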
2) Install OLM 0.12.0
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/crds.yaml
oc create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.12.0/olm.yaml
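Afterwards you can sanity-check that the new pods come up (olm being the namespace name mentioned above):
oc get pods -n olm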
alt 2) If you'd rather just install the latest from the repo's master branch:
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml
So now you have OLM 0.12.0 installed. You should be able to see in the logs that it picks up where 0.6.0 left off. You'll need to start learning about OperatorGroups though, as that's new and going to start impacting your operation of operators pretty quickly. The Cluster Console's ability to show your catalogs does seem to be lost, but you can still view that information through the command line with oc get packagemanifests.
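For reference, a minimal OperatorGroup looks roughly like this (an illustrative sketch; the names are placeholders, and you should check the API version your OLM release ships with):
oc apply -f - <<EOF
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-group
  namespace: my-namespace
spec:
  targetNamespaces:
  # the namespaces the member operators will watch
  - my-namespace
EOF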
I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). In the instructions, it says:
Install your connector
Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
Does confluent-hub not come installed via those Helm charts?
Do I have to install confluent-hub myself?
If so, which pod do I have to install it on?
Ideally this should be configurable as part of the Helm chart, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect Docker image. Download the connector manually and extract the contents into a folder, then copy the contents of this folder to a path in the container. Something like below.
Contents of Dockerfile
# Start from Confluent's Kafka Connect image
FROM confluentinc/cp-kafka-connect:5.2.1
# Copy the extracted connector into Kafka Connect's default plugin location
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new location (plugin.path) during your Helm installation.
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the helm installation.
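For example (the image name and tag are illustrative):
# build the image from the Dockerfile above and push it to your registry
docker build -t myrepo/cp-kafka-connect-debezium:5.2.1 .
docker push myrepo/cp-kafka-connect-debezium:5.2.1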
Here is the path to the values.yaml file. You can find the image and plugin.path values here.
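A sketch of overriding those values at install time (the exact keys are assumptions; confirm them against the chart's values.yaml):
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --set cp-kafka-connect.image=myrepo/cp-kafka-connect-debezium \
  --set cp-kafka-connect.imageTag=5.2.1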
Just an add-on to Jegan's comment above: https://stackoverflow.com/a/56049585/6002912
You can choose to use the Dockerfile below (recommended).
FROM confluentinc/cp-server-connect-operator:5.4.0.0
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.0.0
Or you can use Docker's multi-stage build instead.
FROM confluentinc/cp-server-connect-operator:5.4.0.0
COPY --from=debezium/connect:1.0 \
/kafka/connect/debezium-connector-postgres/ \
/usr/share/confluent-hub-components/debezium-connector-postgres/
This will help you save time getting the right jar files for your plugins, like debezium-connector-postgres.
From Confluent documentation: https://docs.confluent.io/current/connect/managing/extending.html#create-a-docker-image-containing-c-hub-connectors
The Kafka Connect pod should already have confluent-hub installed. That is the pod you should run the commands on.
The cp-kafka-connect pod has 2 containers; one of them is a cp-kafka-connect-server container. That container has confluent-hub installed. You can log in to that container and run your connector commands there. To log in to that container, run the following command:
kubectl exec -it {pod-name} -c cp-kafka-connect-server -- /bin/bash
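Once inside the container, the install command from the question should work, e.g.:
confluent-hub install --no-prompt debezium/debezium-connector-mysql:0.9.2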
As of the latest version of the chart, this can be achieved using customEnv.CUSTOM_SCRIPT_PATH.
See README.md
The script can be passed as a secret and mounted as a volume.
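A rough sketch of that approach (the mount path and value keys are assumptions; the chart's README.md has the authoritative details):
# create a secret from your install script
kubectl create secret generic connect-script --from-file=install-connectors.sh
# point the chart at the script; the secret must also be mounted at that path
# via the chart's volume settings
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --set cp-kafka-connect.customEnv.CUSTOM_SCRIPT_PATH=/etc/scripts/install-connectors.sh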
I'm trying to access the ContextBroker configuration file in the path /etc/sysconfig/contextBroker and it's empty. What is the problem?
https://fiware-orion.readthedocs.io/en/master/admin/running/index.html
I'm using Docker.
Also, I am testing the installation with yum on CentOS, and it tells me that the repository is wrong.
The repository configuration is copied from here:
https://github.com/telefonicaid/fiware-orion/blob/master/doc/manuals/admin/yum.md
The /etc/sysconfig/contextBroker file is used in RPM-based deployments. The Docker deployment is based on compiling Context Broker directly from sources, as you can see in the Dockerfile.
So, in this case, you have to use CLI-based configuration. Note the Docker image is built with some of those options:
ENTRYPOINT ["/usr/bin/contextBroker","-fg", "-multiservice", "-ngsiv1Autocast" ]
But you can add additional ones. For example, in the reference docker-compose.yml we set -dbhost, and more could be added in the same way.
command: -dbhost mongo
I guess that using docker run you could also add arguments in the same way command works in docker-compose.yml, although I don't know the details. Maybe some Docker expert could add more info :)
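If so, it would look something like this (an untested sketch):
# args after the image name are appended to the image's ENTRYPOINT
docker run -d --name orion fiware/orion -dbhost mongo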
We are running OSE 3.2 and I'm trying to rsh with oc to various pods. I'm using Cygwin. As long as I pass it a command, it works, so I assume it's unable to give me a shell. I've tried setting my TERM environment variable to vt100, xterm, and ansi with no luck. I am able to rsh into pods with oc using the Windows cmd prompt with TERM not set at all, but I really don't like that thing and would prefer to use Cygwin for all functions. I've searched quite a bit for a solution to this, but have come up empty. Thanks much.
I am setting up Jenkins to replace our current TeamCity CI build.
I have created a free-style software project so that I can execute a shell script.
The Shell script runs the mvn command.
But the build fails complaining that the 'mvn' command cannot be found.
I have figured that this is because Jenkins is running the build in a different shell, which does not have Maven on its path.
My question is: how do I add the path so 'mvn' is found in my shell script? I've looked around but can't spot where the right place might be.
Thanks for your time.
I solved this by exporting and setting the PATH in the Jenkins job configuration where you can enter shell commands. So I set the environment variable before I execute my shell script; works a treat.
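Something like this in the "Execute shell" build step (the Maven location matches the example used further down; the script name is just a placeholder):
# make mvn visible to the non-login shell Jenkins spawns
export PATH=$PATH:/usr/local/apache-maven/apache-maven-3.0.5/bin
./my-build-script.sh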
Some possible solutions:
You can call Maven with an absolute path (see the sketch after this list)
You can configure a global environment variable in the Jenkins system settings with the absolute path to your Maven instance, and use it in your script call (if you use an inline shell script; I don't know whether such variables are passed through to a called script, you'd have to test)
You use a maven project and configure your maven instance in the jenkins system settings
PS: Usually /bin/sh is used by Jenkins; if you want to switch to e.g. bash, you can configure this in the Jenkins system settings, which is also where you would configure global environment variables.
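For the first option, a sketch (the installation directory is an assumption, reusing the example path from elsewhere in this thread):
# call Maven by absolute path instead of relying on PATH
/usr/local/apache-maven/apache-maven-3.0.5/bin/mvn clean install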
You can use envInject plugin. It's very powerful.
I use it to install rbenv, and it can inject environment variables into your current job.
Another option, besides Dag's suggestion: if you're only using a single version of Maven, on each slave server you could do either of the following:
* add PATH=${PATH}:
* symlink mvn into /usr/bin with: sudo ln -s /usr/bin
I'm not at a Jenkins box at the moment, but I can find some more detailed examples if you'd like.
Jenkins is using sh by default and not bash.
This is my first time defining a Jenkins Maven job. I also followed some regular Maven instructions (for running from the command line...) and tried to update ~/.bashrc with M2_HOME, M2, and PATH, but it didn't work because Jenkins used sh and not bash. Then I found out that there is a simpler and better way built into Jenkins.
After installing maven, I was supposed to configure my maven installation in jenkins.
To configure your maven installation in Jenkins:
1) Log in to the Jenkins web console
2) Click Manage Jenkins --> Configure System
3) Under Maven, click the "Maven Installations..." button
4) a. Give it some name
b. Under MAVEN_HOME, set the path to where you installed Maven, for example "/usr/local/apache-maven/apache-maven-3.0.5"
5) Click the Save button
Then define a job with a Maven target:
1) Edit your job
2) Click "Add build step"
3) For Maven Version, enter the name you gave your Maven installation (step 4 above)
4) Set some goal, like clean install