The OpenShift CLI command to start an S2I (source-to-image) build looks like this:
oc start-build buildname --from-dir=./someDirectory --wait=true
But how can we execute some shell commands? oc start-build is going to create the image (described in the build definition) and copy someDirectory into it, but what if we need additional configuration of that image, not only to push the compiled source code there?
There are a couple of options:
Provide a layer in the Dockerfile
Override assemble script
Options for overriding the assemble script:
Provide .s2i/bin/assemble into the code repository
Specify spec.strategy.sourceStrategy.scripts in the BuildConfig with the URL of a directory containing the scripts (see "override builder image scripts"); a sketch of this follows below
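For the second option, a minimal sketch using oc patch; the URL is a placeholder for wherever you host your scripts:
oc patch bc/buildname --type=merge -p \
  '{"spec":{"strategy":{"sourceStrategy":{"scripts":"https://example.com/s2i/bin"}}}}'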
You can check an example of an assemble script and the s2i workflow in s2i, or here is a simple example:
#!/bin/bash
# Run additional build before steps
# Execute original assemble script.
/usr/libexec/s2i/assemble
# Run additional build after steps
In addition, there is the postCommit build hook, which executes after the image is committed and before it is pushed to the registry. It executes in a temporary container, so it can only be used for things like running tests.
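The hook can also be attached from the CLI; a minimal sketch, assuming the BuildConfig is named buildname and using a placeholder test command:
oc set build-hook bc/buildname --post-commit --command -- /bin/sh -c "make test"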
I need to do the following:
Change environment variables according to the published env. Set up cron jobs according to the dev env. I would like to run just one command line, "eb deploy dev" or something similar.
Use setenv
You can set environment variables with setenv. These will then be remembered for that environment.
More details: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
Example
For example, suppose you have created an EB environment called 'staging' and you want to set the variable DB to 'localhost'. You can use:
eb setenv DB=localhost -e staging
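To confirm the value was stored for that environment, you can print the variables back:
eb printenv staging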
Crons
Now that you have different environment variables, you can check them in a script, etc., to decide whether the cron should be set up.
Note that the crons may not actually have access to your environment variables, so you need to set those again for the cron while setting it up.
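For example, a cron entry (here in /etc/cron.d style, with hypothetical paths and values) can set the variable inline, since the job will not inherit the EB environment:
# DB is re-set for the job itself because cron does not inherit it
0 * * * * root DB=localhost /usr/local/bin/nightly-task.sh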
This is my solution to the problem. It took some time to set up, but now I can do all the changes with one command line.
Make your own folder with all the files for all the environments.
In the .ebextensions folder, set up empty config files for eb.
npm runs a script named "deploy.js" together with the flag of the specific env.
The script will do the following (see the sketch after this list):
copy the requested env data into the empty files according to the env
git stash the changes to the .ebextensions folder (eb deploys using git)
eb use env
eb deploy
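A rough bash equivalent of that flow, with hypothetical folder and environment names; note that eb deploy ships the HEAD commit by default, so this sketch uses the --staged flag as one concrete way to make the uncommitted .ebextensions changes visible to the deploy:
cp envs/dev/*.config .ebextensions/   # fill the empty config files for this env
git add .ebextensions                 # stage the changes so eb can pick them up
eb use my-app-dev                     # point the CLI at the target environment
eb deploy --staged                    # deploy the staged changes
git stash                             # put .ebextensions back to its clean state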
So now I can run npm run deploy:dev and everything runs.
I wanted clarification on the possible scripts that can be added in the .s2i/bin directory in my project repo.
The docs say that when you add these files, they will override the default files of the same name when the project is built. For example, if I place my own "assemble" file in the .s2i/bin directory, will the default assemble file also run, or will it be totally replaced by my script? What if I want some of the behavior of the default file? Do I have to copy the default "assemble" contents into my file so both will be executed?
You will need to call the original "assemble" script from your own, similar to this:
#!/bin/bash -e
# The assemble script builds the application artifacts from a source and
# places them into appropriate directories inside the image.
# Execute the default S2I script
source ${STI_SCRIPTS_PATH}/assemble
# You can write S2I scripts in any programming language, as long as the
# scripts are executable inside the builder image.
Using OpenShift, I want to execute my own run script (run).
So, in the src of my application, I added a file at ./s2i/run
that slightly changes the default run file
https://github.com/sclorg/nginx-container/blob/master/1.20/s2i/bin/run
Here is my run file
#!/bin/bash
source /opt/app-root/etc/generate_container_user
set -e
source ${NGINX_CONTAINER_SCRIPTS_PATH}/common.sh
process_extending_files ${NGINX_APP_ROOT}/src/nginx-start ${NGINX_CONTAINER_SCRIPTS_PATH}/nginx-start
if [ ! -v NGINX_LOG_TO_VOLUME -a -v NGINX_LOG_PATH ]; then
/bin/ln -sf /dev/stdout ${NGINX_LOG_PATH}/access.log
/bin/ln -sf /dev/stderr ${NGINX_LOG_PATH}/error.log
fi
# nginx will start using the custom nginx.conf from the configmap
exec nginx -c /opt/mycompany/mycustomnginx/nginx-conf/nginx.conf -g "daemon off;"
Then, I changed the Dockerfile to execute my run script as follows.
The CMD instruction can appear only once and dictates which script is executed when the deployment pod starts.
FROM registry.access.redhat.com/rhscl/nginx-120
# Add application sources to the directory where the assemble script expects them
# and set permissions so that the container runs without root access
USER 0
COPY dist/my-portal /tmp/src
COPY --chmod=0755 s2i /tmp/
RUN ls -la /tmp
USER 1001
# Let the assemble script install the dependencies
RUN /usr/libexec/s2i/assemble
# Run script uses standard ways to run the application
#CMD /usr/libexec/s2i/run
# here we override the script that will be executed when the deployment pod starts
CMD /tmp/run
What is the standard way of using the "oc apply" command inside Java code using the "Fabric8" library?
We have been using client.buildConfigs().inNamespace ..... etc. so far. However, I felt using the "oc apply" command could simplify the code.
Thanks
San
Do you mean inside a Jenkins Pipeline or just in general purpose Java code?
We have the equivalent of kubernetes apply with:
new Controller().apply(entity);
in the kubernetes-api module in the fabric8io/fabric8 git repo.
There's a similar API in kubernetes-client too, which lets you apply a file or URI directly.
In pipelines we tend to use the kubernetesApply(file) function to do a similar thing.
But inside Jenkins Pipelines you can also just use oc or kubectl directly, using the clients docker image via the clientsNode() function in the fabric8-pipeline-library, like these examples. Then you can run oc commands directly via this code in a Jenkinsfile.
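If you take the CLI route, the pipeline step ultimately boils down to something like this (the resource file path is a placeholder):
oc apply -f target/openshift/resources.yml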
I am running a script in Jenkins that generates a junit.xml report file and other files. However, all those files are zipped by the script, so Jenkins cannot find them.
Is there a way to make Jenkins unzip the .zip file, find my particular junit file, and generate the run results?
All this is in Linux.
Thanks
Jenkins has the ability to execute arbitrary shell commands as a build step, just add the 'Execute Shell' step to your build and put in the commands you want (presumably 'unzip' would be among them).
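A minimal sketch of that shell step, with placeholder archive and folder names:
unzip -o results.zip -d unzipped-results
# then point the JUnit plugin at: unzipped-results/**/junit*.xml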
Once you've extracted the xml, provided your internal tool generates it using this schema the JUnit plugin just needs the path you extracted to and it will show the results on the build page.
If you have the option, I would really suggest executing your tests via gradle or maven, as outputs from those tasks will produce a report that Jenkins (and other tools) can already read, and can streamline the job setup process for your users. But, if you can't get away with that, the above should work for you.
Hudson allows us to specify the Ant version that we want to use. Now, what I want to do is have Hudson call my perl script, which then will call Ant. How do I do this?
When I tried to call ant from perl script that was called from Hudson, I got the following error:
Can't exec "ant": No such file or directory at ...........
I understand that this error means "ant" is not in the path. The thing is, if I provide the exact location or add ant to the path, then that means I am not using the Ant version that I specified in Hudson.
Can anyone help?
It looks like you don't actually want to invoke Ant as a Hudson build step. How about making it a parameterized build with a Choice parameter ANT_VERSION with options /opt/apache-ant-1.8.0, /opt/apache-ant-x.y.z, etc.? Then in your Execute Shell build step do:
my-perl-script.pl --ant-version=${ANT_VERSION}
where your script takes the path to the Ant version as a parameter.
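Inside the script, the selected path just needs to be used explicitly rather than relying on PATH; conceptually (with hypothetical values) the invocation ends up as:
# with --ant-version=/opt/apache-ant-1.8.0, call the chosen Ant directly
/opt/apache-ant-1.8.0/bin/ant -f build.xml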