helm provide values file from variable - parameter-passing

I have a CI/CD pipeline that holds a YAML file, containing secrets, in memory. I don't want to store the file on disk, since I have no guarantee that the file will be cleaned up or is safe on the drive.
I would like to install a Helm chart using helm install. Normally I would just provide the file using -f filename.yaml. But as I said, the file is not stored on disk. Is there any alternative way to pass a whole YAML file as a string to a helm install command?

To inline values.yaml in your command line, you can use the following:
helm install <chart-name> -f - <<EOF
<your-inlined-values-yaml>
EOF
For example:
helm install --name my-release hazelcast/hazelcast -f - <<EOF
service:
  type: LoadBalancer
EOF
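Since the values only exist in memory, the same -f - mechanism also works with a pipe instead of a heredoc. A minimal sketch, assuming the YAML is held in a hypothetical VALUES_YAML shell variable (on Helm 3, drop the --name flag):
printf '%s' "$VALUES_YAML" | helm install --name my-release hazelcast/hazelcast -f -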

Related

Reduce build time in GitOps by using Docker image layers from previous build with Azure Registry

I'm building a Docker Image on Pull Requests in my Github Actions setup. The images are built and pushed to Azure Container Registry. Often, it's only a small update in the code, and if I could reuse the layers from the previous build (pushed to ACR), I could save a lot of time.
As shown in the Dockerfile, yarn install could be skipped, since new changes occur only in the COPY statement below it:
FROM node:16
# create dirs and chown
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock tsconfig.json ./
USER node
# install node modules
RUN yarn install --pure-lockfile
# ensure ownership
COPY --chown=node:node . .
# set default env
RUN mv .env.example .env
EXPOSE 3001
# entrypoint is node
# see https://github.com/nodejs/docker-node/blob/main/docker-entrypoint.sh
# default command: prod start
CMD ["yarn", "start"]
How can I download the previous image from ACR and use its layers? Simply downloading the previous image (with a different tag) does not seem to work.
You need to provide the --cache-from flag to the docker build command if you want to use the downloaded image as a cache source.
https://docs.docker.com/engine/reference/commandline/build/#options
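For example, a pull-and-build step could look roughly like this (the registry name and tag are placeholders; when building with BuildKit, the cached image must itself have been built with BUILDKIT_INLINE_CACHE=1 so it carries cache metadata):
docker pull myregistry.azurecr.io/myapp:latest || true
docker build \
  --cache-from myregistry.azurecr.io/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t myregistry.azurecr.io/myapp:latest .
docker push myregistry.azurecr.io/myapp:latest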

Use case of OpenShift + buildConfig + ConfigMaps

I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:-
I have multiple Spring Boot WebUI applications which I need to deploy on OpenShift.
Having a separate set of config yml files (image stream, buildconfig, deployconfig, service, routes)
for each and every application seems to be very inefficient.
Instead I would like to have a single set of parameterized yml files
to which I can pass custom parameters to set up each individual application.
Solution so far:-
Version One
Dockerfile-
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this to have greater flexibility, i.e. let's say a new variable is introduced in the Dockerfile; I need NOT change buildconfig.yml.
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I do next;
Version Two
Dockerfile
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
    - configMap:
        name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the ConfigMap file didn't get copied into the build context.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how Kubernetes/OpenShift works. If your resource files look the same, but only use a different Git source or image, for example, then you are probably looking for OpenShift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think OpenShift Templates are what you are looking for. If you upload your template to the service catalog, then whenever you have a new application to deploy, you can fill in some variables in a UI and click deploy.
An OpenShift Template is just a parameterised file for all of your OpenShift resources (ConfigMap, Service, BuildConfig, etc.).
If your application needs to be built from some Git repo, using some credentials, you can parameterise those variables.
But also take a look at OpenShift's Source-to-Image (S2I) solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own resource files.
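For illustration, a heavily trimmed Template could look like this (the names and parameters are made up for the example; a real template would also contain the BuildConfig, Service, and Route objects):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template
parameters:
  - name: APP_NAME
    required: true
  - name: ARTIFACTURL
    required: true
objects:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ${APP_NAME}-configmap
    data:
      ARTIFACTURL: ${ARTIFACTURL}
Each application is then instantiated with its own parameter values, for example:
oc process -f myapp-template.yaml -p APP_NAME=myapp -p ARTIFACTURL=https://myorg/1.2.3.4/myApp-1.2.3.4.jar | oc apply -f -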

Procedure to install an Ingress controller

Unable to install ingress-nginx for kubernetes on Docker desktop
I was using the following on the command line to install ingress-nginx so far:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
as shown in the web page: https://che.eclipse.org/running-eclipse-che-on-kubernetes-using-docker-desktop-for-mac-5d972ed511e1
It seems like the installation procedure has changed. Can anyone give me step-by-step instructions to install ingress-nginx? I couldn't install it by following the procedure described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Installation via Helm works perfectly for me. Assuming you have the kubectl binary installed and configured for your k8s cluster, you can follow the steps below one by one to install the nginx-ingress controller.
1. Install the helm binary (if it doesn't exist)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/get_helm.sh | bash
2. Install helm for your cluster (if not installed yet)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/install.sh | bash
You should see output like
...
Waiting for tiller install...
Helm install complete
3. Then install nginx-ingress via helm
helm install stable/nginx-ingress --name nginx-ingress
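Note that these steps target Helm 2 (Tiller, --name). If you are on Helm 3, a rough equivalent using the project's own chart repository would be:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx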

Docker: Building an image that depends on another image to be running

My objective is to build a Docker image that includes MySQL prefilled with the tables and data produced by an Alembic migration. Unfortunately, Alembic can't produce the necessary data without an active MySQL instance, nor can it independently create a SQL dump to be loaded by MySQL on first run.
I've attempted to use multi-stage builds to use both the mysql and python containers for this, but the MySQL daemon is brought down again as soon as the Python stage begins.
# start MySQL daemon
FROM mysql:5.6
RUN docker-entrypoint.sh
# install and run Alembic
FROM python:2.7-alpine
# [install Alembic]
COPY ./alembic-migrations /alembic-migrations
# [run migrations]
I'm not hung up on this particular solution, but it seemed like the simplest option. Is there a way to do what I'm attempting? Should I resign myself to installing Python and Alembic in the MySQL container?
It'll probably make some Docker evangelist's eyes bleed, but this is how I was able to accomplish the behaviour I was looking for. It was actually simpler and runs faster than I'd expected.
FROM python:2.7-alpine as python
FROM mysql:5.6
# set up a functional chroot of the Python image at /python
COPY --from=python / /python
RUN set -ex; \
    cp /etc/resolv.conf /python/etc/resolv.conf; \
    mknod -m 0644 /python/dev/random c 1 8; \
    mknod -m 0644 /python/dev/urandom c 1 9;
# install software dependencies in the chroot jail
RUN set -ex; \
    chroot /python apk --no-cache --virtual add [some dependencies]
# install Python libraries
COPY ./requirements.txt /python/tmp/requirements.txt
RUN chroot /python pip install -r /tmp/requirements.txt;
# apply Alembic migrations and remove the Python chroot jail
COPY ./usr/local/bin/build.sh /usr/local/bin/
COPY ./alembic /python/alembic
RUN build.sh && rm -rf /python;
ENTRYPOINT ["docker-entrypoint.sh", "--datadir=/var/lib/mysql-persist"]
EXPOSE 3306
CMD ["mysqld"]
The build.sh script simply forks the docker-entrypoint.sh script from the MySQL container, then invokes the Alembic-specific code within the Python chroot.
#!/bin/sh
docker-entrypoint.sh --datadir=/var/lib/mysql-persist 2>/dev/null &
chroot /python build.sh
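The inner /python/build.sh is not shown above; a minimal sketch of what it might contain (the readiness loop and paths are my assumptions, not the author's actual code):
#!/bin/sh
# wait until the forked mysqld accepts TCP connections (hypothetical readiness check)
until python -c "import socket; socket.create_connection(('127.0.0.1', 3306), 2)" 2>/dev/null; do
  sleep 1
done
# apply the migrations copied to /python/alembic (seen as /alembic inside the chroot)
cd /alembic
alembic upgrade head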
Note that I'm setting a custom data directory (/var/lib/mysql-persist) because the upstream mysql container defines VOLUME /var/lib/mysql, which I can't override.
The result is a built image that contains MySQL, complete with database, but does not contain any traces of the Python container or Alembic scripts. It can now be distributed via a registry and fetched by docker-compose, avoiding the need for all users to execute the Alembic migrations independently.

Where does mysql_ssl_rsa_setup get OpenSSL files?

Getting "openssl not installed on this system" when running mysql_ssl_rsa_setup.
I installed openssl and mysql from source, both times keeping the default paths for installation (/usr/local/openssl for openssl [I actually renamed it to openssl from ssl to see if that was the problem], /usr/local/mysql for mysql).
The docs say it gets the path from the PATH environment variable, but there's no option to specify it on the command line. What is the default? How do I change it? I have seen that you can modify /etc/environment to add PATH there, but the file is empty by default.
According to 4.4.5 mysql_ssl_rsa_setup — Create SSL/RSA Files, mysql_ssl_rsa_setup uses the openssl command line tool:
Note
mysql_ssl_rsa_setup uses the openssl command, so its use is contingent
on having OpenSSL installed on your machine.
What is the default?
OpenSSL's default installation location is /usr/local/ssl
How to change it?
Use --openssldir when you configure the library. Also see Compilation and Installation on the OpenSSL wiki.
You should not install OpenSSL in /usr/bin (and the libraries in /usr/lib). It creates too many problems.
Instead, let the library install itself in /usr/local/ssl. Then you should be able to create a shell script located at /usr/local/bin/openssl that performs the following:
$ cat /usr/local/bin/openssl
#!/usr/bin/env bash
LD_LIBRARY_PATH=/usr/local/ssl/lib:$LD_LIBRARY_PATH /usr/local/ssl/bin/openssl "$@"
Be sure to chmod a+x /usr/local/bin/openssl.
You can verify the OpenSSL tool being used with:
$ which openssl
/usr/local/bin/openssl
If needed, add /usr/local/bin to your PATH:
$ cat ~/.bash_profile
export PS1="\\h:\\W$ "
export UMASK=0022
export EDITOR=emacs
export PATH="/usr/local/bin:/usr/local/sbin:/bin:/sbin:/usr/bin:/usr/sbin"
...