I am trying to create and run a BuildConfig YAML file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Spring Boot web UI applications which I need to deploy on OpenShift.
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
Instead, I would like to have a single set of parameterized YAML files to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile:
FROM org/rhelImage
USER root
# Install yum packages
RUN yum -y install \
      net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this for greater flexibility: let's say a new variable is introduced in the Dockerfile; I should NOT need to change buildconfig.yml.
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I do next:
Version Two
Dockerfile:
FROM org/rhelImage
USER root
# Install yum packages
RUN yum -y install \
      net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
    - configMap:
        name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the config map file did not get copied into the build directory.
Can you please suggest what to do next?
I think you are misunderstanding OpenShift a bit.
The first thing you say is:
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
But that's how Kubernetes/OpenShift works. If your resource files look the same and only use a different git source or image, for example, then you are probably looking for OpenShift Templates.
Instead, I would like to have a single set of parameterized YAML files to which I can pass custom parameters to set up each individual application.
Yep, I think OpenShift Templates is what you are looking for. If you upload your template to the service catalog, then whenever you have a new application to deploy, you can fill in some variables in a UI and click deploy.
An OpenShift Template is just a parameterized file for all of your OpenShift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterize those variables.
But also take a look at OpenShift's Source-to-Image solution (I'm not sure which version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own resource files.
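For illustration, here is a minimal Template sketch (the template name, parameter names, and the single ConfigMap object are hypothetical placeholders, not taken from the question):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template
parameters:
  - name: PARAM_APP_NAME
    required: true
  - name: PARAM_ARTIFACT_URL
    required: true
objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${PARAM_APP_NAME}-configmap
    data:
      ARTIFACTURL: ${PARAM_ARTIFACT_URL}
You would add your BuildConfig, Service, Route, etc. to objects in the same way, then instantiate the template per application with something like: oc process -f myapp-template.yml -p PARAM_APP_NAME=myapp -p PARAM_ARTIFACT_URL=https://myorg/1.2.3.4/myApp-1.2.3.4.jar | oc apply -f-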
Related
I am running the command
podman machine init
in the /entrypoint.sh script which I reference in my Dockerfile:
# Container image that runs your code
FROM ubuntu:latest
# Copies your code file from your action repository to the filesystem path `/` of the container
COPY entrypoint.sh /entrypoint.sh
RUN apt-get update -y
RUN apt-get install -y git gcc python3-dev musl-dev libffi-dev podman
# Code file to execute when the docker container starts up (`entrypoint.sh`)
ENTRYPOINT ["/entrypoint.sh"]
The Dockerfile and the entrypoint.sh script are in my GitHub repo where I have GitHub Actions configured. The problem I see in my log output from the GitHub Action is this:
2022-11-29T04:56:58.6855699Z + podman machine init
2022-11-29T04:56:59.1867174Z Downloading VM image: fedora-coreos-37.20221106.2.1-qemu.x8…
...
2022-11-29T04:57:06.9133659Z Extracting compressed file
2022-11-29T04:57:36.0586753Z Error: exit status 1
Extracting compressed file? Did I run out of disk space? What? I am not even sure where to begin to debug this.
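As a starting point for debugging (a suggestion, not part of the original post), you could make entrypoint.sh print disk usage and run podman with verbose logging, since the failure happens while extracting a multi-gigabyte VM image:
#!/bin/sh
set -x
# Show free disk space on the runner; the Fedora CoreOS VM image is large
df -h
# Re-run with debug logging to see exactly where the extraction fails
podman --log-level=debug machine init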
I'm building a Docker image on pull requests in my GitHub Actions setup. The images are built and pushed to Azure Container Registry. Often it's only a small update to the code, and if I could reuse the layers from the previous build (pushed to ACR), I could save a lot of time.
As shown in the Dockerfile below, yarn install could be skipped, since new changes occur only in the COPY statement below it:
FROM node:16
# create dirs and chown
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock tsconfig.json ./
USER node
# install node modules
RUN yarn install --pure-lockfile
# ensure ownership
COPY --chown=node:node . .
# set default env
RUN mv .env.example .env
EXPOSE 3001
# entrypoint is node
# see https://github.com/nodejs/docker-node/blob/main/docker-entrypoint.sh
# default command: prod start
CMD ["yarn", "start"]
How can I download the previous image from ACR and use its layers? Simply pulling the previous image (with a different tag) does not seem to work.
You need to provide the --cache-from flag to the docker build command if you want to use the downloaded image as a cache source.
https://docs.docker.com/engine/reference/commandline/build/#options
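A minimal sketch of how that could look (the registry, repository, and tags are placeholders):
# Pull the previous image so its layers are available locally
docker pull myregistry.azurecr.io/myapp:latest || true
# Build using the pulled image as a layer cache, then tag the new build
docker build --cache-from myregistry.azurecr.io/myapp:latest -t myregistry.azurecr.io/myapp:pr-build .
Note that when building with BuildKit, the cached image must itself have been built with --build-arg BUILDKIT_INLINE_CACHE=1 so that --cache-from can find its layer metadata.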
How can we use Bitbucket Pipelines to update an ASP.NET Core website on AWS Elastic Beanstalk?
I know this is a late answer, but I did the same thing a few days ago, so here is an example of how I did it.
First, you have to enable Pipelines in Bitbucket and choose .NET Core.
In bitbucket-pipelines.yml you need to write something like this:
image: microsoft/dotnet:sdk
pipelines:
  branches:
    staging:
      - step:
          name: build publish prepare and zip
          caches:
            - dotnetcore
          script:
            - apt-get update && apt-get install --yes zip
            - export PROJECT_NAME=<your-project-name>
            - dotnet restore
            - dotnet build $PROJECT_NAME
            - dotnet publish --self-contained --runtime win-x64 --configuration Release
            - zip -j site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/* -x aws-windows-deployment-manifest.json
            - zip -r -j application.zip site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/aws-windows-deployment-manifest.json
          artifacts:
            - application.zip
      - step:
          name: upload to elasticbeanstalk
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:0.5.0
              variables:
                APPLICATION_NAME: '<application-name>'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                #COMMAND: 'upload-only'
                ZIP_FILE: 'application.zip'
                ENVIRONMENT_NAME: '<environment-name>'
                WAIT: 'true'
In Settings -> Pipelines -> Variables you have to set the AWS keys: access key, secret key, and region; these are referenced with $ (e.g. $AWS_SECRET_ACCESS_KEY).
Additionally, you will have to create an S3 bucket named "-elasticbeanstalk-deployments" (if you don't create it, the environment will show an error with the bucket name "not found" when it tries to upload your zip, so just copy the name from the error and create the bucket in S3).
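For example (a sketch; substitute the exact bucket name reported in the error message and your own region):
aws s3 mb s3://<bucket-name-from-error> --region <your-region>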
I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as a source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
  contextDir: ...
  git:
    uri: ...
  sourceSecret:
    name: my_secret
  type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
  openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
  --build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
  git:
    uri: https://github.com/openshift/nodejs-ex.git
  secrets:
    - destinationDir: .
      secret:
        name: secret-npmrc
  type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets -- these are for interacting with container registries) to a build configuration with the command line argument --source (as well as --push/--pull), but what about input secrets? I have not found out yet.
So I have these questions:
How can I add my_secret as an input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
- Leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (w/o path).
- Add the input secret to the build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc.
- Sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret.
- Access the secret inside the Dockerfile in a preliminary way (written out as a snippet below): COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p').
- Rebuild and redeploy the image.
- Sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /).
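The Dockerfile lines from the fourth step, written out (destinationDir is secret, so the key is copied from ./secret relative to the build context):
# Copy the input secret's key out of its destination directory and restrict permissions
COPY secret/ssh-privatekey secret/my_secret
RUN chmod 0640 secret/my_secret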
The comments on the question mention patching the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched
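To verify that the patch took effect, the jsonpath sanity check from the answer above should work here as well:
oc get bc/tag-realworld -o jsonpath='{.spec.source.secrets}'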
I am writing a sample program to deploy into OpenShift with a ConfigMap. I have the following ConfigMap YAML in the source code folder, so that when DevOps is set up, Jenkins should pick up this YAML and create/update the configs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find a command that would create the ConfigMap, or update it if it already exists (similar to the kubectl apply command). Can you help with the correct command, which would create the resource when the job runs for the first time and update it otherwise?
I also want to create/update the Services and Routes from the YAML files in the src repository.
Thanks.
You can use the oc apply command to update resources that already exist, as in the example below:
$ oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
If you have the configmap in a YAML file, or stored somewhere else, you can replace it instead:
oc replace --force -f config-map.yaml will update the existing configmap (it actually deletes it and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc
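To confirm the variables landed on the deployment config (a suggested check, not from the original answer), oc set env can list the resulting environment:
oc set env dc/example-dc --list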