I would like to pass in the git tag as an environment variable during the Docker build process using the VS Code Docker extension.
"dockerBuild": {
...
"buildArgs": {
"TAG": "$(git describe --tags)"
}
}
Which results in the following command being run:
docker build ... --build-arg "TAG=$(git describe --tags)" ...
But that doesn't run the git command because of the double quotes. If I manually type it in like this, it works:
docker build ... --build-arg TAG=$(git describe --tags) ...
How can I pass in the result of git describe --tags into the buildArgs?
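One possible workaround, sketched below and not necessarily the extension's supported mechanism (the task label and image name are illustrative), is to run the build through a plain shell task in tasks.json so that the shell itself evaluates the command substitution:
{
  "version": "2.0.0",
  "tasks": [
    {
      // hypothetical shell task: $(git describe --tags) is expanded by the shell, not quoted away
      "label": "docker-build-with-tag",
      "type": "shell",
      "command": "docker build --build-arg TAG=$(git describe --tags) -t my-image ."
    }
  ]
}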
I'm trying to trigger a workflow event in Github.
For some reason, I'm able to GET information about my organization's repository workflow but cannot use '/dispatches'.
Work is based on: https://docs.github.com/en/rest/actions/workflows#create-a-workflow-dispatch-event
Here is the curl code:
curl -X POST \
-H "Accept:application/vnd.github.v3+json" \
-H 'Authorization:token ${{ github.token }}' \
'https://api.github.com/repos/[owner/org]/[repo]/actions/workflows/9999999/dispatches' \
-d '{"event_type":"semantic-release"}'
Getting error:
422 Unprocessable Entity
"message": "Invalid request.\n\nFor 'links/0/schema', nil is not an object.",
"documentation_url": "https://docs.github.com/rest/reference/repos#create-a-repository-dispatch-event"
Am I missing some basic information for this to work and trigger an event?
Instead of trying to call the GitHub API directly, try using the GitHub CLI gh (which you can install first to test locally).
You can also use GitHub CLI in workflows.
GitHub CLI is preinstalled on all GitHub-hosted runners.
For each step that uses GitHub CLI, you must set an environment variable called GITHUB_TOKEN to a token with the required scopes.
It has a gh workflow run, which does create a workflow_dispatch event for a given workflow.
Authenticate first (gh auth login, if you are doing a local test):
# authenticate against github.com by reading the token from a file
$ gh auth login --with-token < mytoken.txt
Examples:
# Run the workflow file 'triage.yml' at the remote's default branch
$ gh workflow run triage.yml
# Run the workflow file 'triage.yml' at a specified ref
$ gh workflow run triage.yml --ref my-branch
# Run the workflow file 'triage.yml' with command line inputs
$ gh workflow run triage.yml -f name=scully -f greeting=hello
# Run the workflow file 'triage.yml' with JSON via standard input
$ echo '{"name":"scully", "greeting":"hello"}' | gh workflow run triage.yml --json
In your case (GitHub Action):
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
As explained by hanayama in the comments:
Found out that secrets.GITHUB_TOKEN doesn't work, even with permissions edited for the entire workflow.
Using a personal access token worked.
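A minimal sketch of that approach, assuming the personal access token has been stored as a repository secret (the name PERSONAL_TOKEN is illustrative):
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          # assumption: PERSONAL_TOKEN is a repository secret holding a PAT with the required scopes
          GITHUB_TOKEN: ${{ secrets.PERSONAL_TOKEN }}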
Using GitHub Actions, I'm trying to install j2:
jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - run: |
          sudo apt-get install -y jq
          pip3 install --user --upgrade j2cli
          j2 --version
This successfully installs j2cli, but the last j2 --version produces Error: Process completed with exit code 127. (logs).
Why is this happening?
When you execute your script using a run step, it runs in a bash shell by default. Exit code 127 is emitted by the shell when the given command is not found in your PATH environment variable and is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the j2 executable you're trying to call.
Knowing what the error means, we can fix it by adding the pip3 package installation directory to the PATH. We can do that manually by locating the path with pip3 show j2cli, or we can set up a Python environment that handles it automatically by using the dedicated setup-python action before calling the pip3 installer. With that in mind, the script should be adjusted:
jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-python@v2
        with:
          python-version: 3.x
      - run: |
          pip3 install --user --upgrade j2cli
          j2 --version
It should fix the error.
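As a rough sketch of the manual alternative mentioned above (assuming pip's --user install puts console scripts into ~/.local/bin on ubuntu-latest), you could extend the PATH yourself via the GITHUB_PATH file; note that additions to GITHUB_PATH only take effect in subsequent steps:
jobs:
  install-packages:
    runs-on: ubuntu-latest
    steps:
      - run: |
          pip3 install --user --upgrade j2cli
          # assumption: the --user scripts directory is ~/.local/bin on this runner
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"
      # the PATH change is only visible to later steps, so check the version separately
      - run: j2 --version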
Please note that we don't need to install the jq binary, as it comes pre-installed with the GitHub-hosted runner. That's why you don't need:
sudo apt-get install -y jq
If we look at the included log, we can see it clearly:
jq is already the newest version (1.5+dfsg-2).
You can find the software included with the GitHub-hosted runner here.
I have a CI/CD pipeline with a YAML file that holds secrets in memory. I don't want to store the file on disk, since I have no guarantee that the file will be cleaned up or that it is safe on the disk.
I would like to install a Helm chart using helm install. Normally I would just provide the file using -f filename.yaml, but as I said, I don't have the file stored on disk. Is there any alternative way to pass a whole YAML file as a string to a helm install command?
To inline values.yaml in your command line, you can use the following:
helm install <chart-name> -f - <<EOF
<your-inlined-values-yaml>
EOF
For example:
helm install --name my-release hazelcast/hazelcast -f - <<EOF
service:
  type: LoadBalancer
EOF
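The same trick works when the values never touch the disk at all, for example when the pipeline holds them in an environment variable (the name VALUES_YAML below is illustrative):
# assumption: VALUES_YAML contains the complete values file as a string in memory
echo "$VALUES_YAML" | helm install --name my-release hazelcast/hazelcast -f -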
I have a template that I have uploaded to openshift.
$ oc get templates | grep jenkins
jenkins-mycompany   Jenkins persistent image   9 (all set)   9
When I get the template, you can see the parameters that are set:
$ oc get template jenkins-mycompany -o json
...
{
  "description": "Name of the ImageStreamTag to be used for the Jenkins image.",
  "displayName": "Jenkins ImageStreamTag",
  "name": "JENKINS_IMAGE_STREAM_TAG",
  "value": "jenkins-mycompany:2.0.0-18"
}
I am creating a CI process to build a new Jenkins image and update the template that is uploaded into OpenShift.
I want all params set...
I have tried:
oc process -f deploy.yml --param-file=my-param-file | oc create -f-
cat mydeploy.json | oc create -f-
The only way I can get this to work is to do an oc delete templates jenkins-mycompany and then oc create -f deploy.yml.
I want to just patch the value of that one parameter so when I build 2.0.0-19, I just patch the template.
OpenShift CLI Reference
You want to use the patch command like so:
oc patch <object_type> <object_name> -p <changes>
For example,
oc patch template jenkins-mycompany -p '{"spec":{"unschedulable":true}}'
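Note that template parameters live in a top-level parameters list rather than under .spec, so one way to update just that value is a JSON-type patch against the list entry. A rough sketch, assuming JENKINS_IMAGE_STREAM_TAG sits at index 3 of the parameters list (check the index in the oc get template jenkins-mycompany -o json output first):
oc patch template jenkins-mycompany --type=json \
  -p '[{"op": "replace", "path": "/parameters/3/value", "value": "jenkins-mycompany:2.0.0-19"}]'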
I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
  contextDir: ...
  git:
    uri: ...
  sourceSecret:
    name: my_secret
  type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
--build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
  git:
    uri: https://github.com/openshift/nodejs-ex.git
  secrets:
  - destinationDir: .
    secret:
      name: secret-npmrc
  type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (as well as --push/--pull), but what about input secrets? I have not found out yet.
So I have these questions:
How can I add my_secret as input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (w/o path)
add the input secret to the build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc (see the sketch after this list)
sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret
access secret inside Dockerfile in a preliminary way: COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p')
rebuild and redeploy image
sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /)
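A sketch of the fragment that ends up at .spec.source.secrets after the oc edit in the second step above (destinationDir: secret matches the sanity-check output; adjust it to your needs, other source fields elided):
source:
  ...
  secrets:
  - destinationDir: secret
    secret:
      name: my_secret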
The comments on the question mention patching the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched