How to avoid setting ACTIONS_ALLOW_UNSECURE_COMMANDS for setup-python?

GitHub Actions' actions/setup-python step fails because it uses the deprecated set-env and add-path workflow commands, which are blocked as insecure (https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands).
It only succeeds when the ACTIONS_ALLOW_UNSECURE_COMMANDS=true environment variable is set for the step.
How can I run a Python build successfully without allowing unsecure commands?
Setup: a self-hosted runner in a custom-built Docker container with an Ubuntu 20.04 base image, linked to a GHE server v2.22.4.
Tried with both Python 3.9.0 and 3.9.1.
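For reference, the workaround step that does succeed looks like this (a sketch reconstructed from the description above; the step otherwise matches the workflow below):
- name: setup python
  uses: actions/setup-python@v2
  env:
    ACTIONS_ALLOW_UNSECURE_COMMANDS: true
  with:
    python-version: 3.9.0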
Workflow file:
name: CD testing
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    runs-on: [ Linux ]
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9.0
      - name: execute py script
        run: |
          python -V
Log output of the setup python step:
##[debug]Found tool in cache Python 3.9.1 x64
::set-env name=pythonLocation::/opt/hostedtoolcache/Python/3.9.1/x64
##[error]Unable to process command '::set-env name=pythonLocation::/opt/hostedtoolcache/Python/3.9.1/x64' successfully.
##[error]The `set-env` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `set-env` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.SetEnvCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::add-path::/opt/hostedtoolcache/Python/3.9.1/x64
##[error]Unable to process command '::add-path::/opt/hostedtoolcache/Python/3.9.1/x64' successfully.
##[error]The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.AddPathCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::add-path::/opt/hostedtoolcache/Python/3.9.1/x64/bin
##[error]Unable to process command '::add-path::/opt/hostedtoolcache/Python/3.9.1/x64/bin' successfully.
##[error]The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.AddPathCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::set-output name=python-version::3.9.1
##[debug]='3.9.1'
Successfully setup CPython (3.9.1)

Update actions/setup-python, which used set-env in previous versions, from
- uses: actions/setup-python@v2
to the latest version:
- uses: actions/setup-python@v2.2.1
There's a bug: the v2 tag does not resolve to the latest version.
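For background, the Environment Files mechanism that replaces the deprecated commands looks like this inside a run step (a minimal sketch; the paths mirror the log output above):
- name: set env and path via environment files
  run: |
    # replaces ::set-env name=pythonLocation::...
    echo "pythonLocation=/opt/hostedtoolcache/Python/3.9.1/x64" >> "$GITHUB_ENV"
    # replaces ::add-path::...
    echo "/opt/hostedtoolcache/Python/3.9.1/x64/bin" >> "$GITHUB_PATH"
Recent releases of setup-python write to these files internally, which is why upgrading the action tag removes the need for ACTIONS_ALLOW_UNSECURE_COMMANDS.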

Related

oc cluster up on Fedora not starting correctly

I am trying to run OpenShift on Fedora 36 using the Origin Client (oc).
I have updated Fedora to the latest version.
I have installed oc.
Whenever I try to run oc cluster up,
it shows the error below:
[root@fedora ridhoswasta]# oc cluster up
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I0825 12:11:14.411027 50887 flags.go:30] Running "create-kubelet-flags"
I0825 12:11:16.391985 50887 run_kubelet.go:49] Running "start-kubelet"
I0825 12:11:17.200056 50887 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
E0825 12:16:17.201364 50887 run_self_hosted.go:571] API server error: Get "https://127.0.0.1:8443/healthz?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused ()
Error: timed out waiting for the condition
Then I checked the logs for the kubelet container, which show:
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-cipher-suites has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-min-version has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --tls-private-key-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --file-check-frequency has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0825 05:13:19.249680 51788 server.go:417] Version: v1.11.0+d4cacc0
I0825 05:13:19.249928 51788 plugins.go:97] No cloud provider specified.
F0825 05:13:19.253892 51788 server.go:261] failed to run Kubelet: mountpoint for cpu not found
I have tried reinstalling Docker with the latest version, but I still face this issue.
Could someone suggest something else to try?
Thanks!
oc cluster up uses a deprecated version of OpenShift; it has been superseded by OpenShift Local: https://developers.redhat.com/products/openshift-local/overview. Note that OpenShift Local uses a good deal more resources than oc cluster up ever did. There's also a spiritual successor that might be worth checking out: MicroShift (https://microshift.io/).
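If you go the OpenShift Local route, the basic flow with its crc CLI looks roughly like this (a sketch; crc start prints the actual login credentials, and api.crc.testing is the tool's default API host):
crc setup            # prepare the host (virtualization, networking)
crc start            # create and start the single-node cluster
eval $(crc oc-env)   # put the bundled oc binary on PATH
oc login -u developer https://api.crc.testing:6443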

GitHub Actions: secret env is empty

I encountered an issue where an env variable defined from a secret is empty. I want to use the secret in run syntax. I defined the env from the secret like this:
- name: Deploy
  env:
    GCP_PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  run: |
    date_time=`date +%Y%m%d%H%M%S`
    IMAGE=gcr.io/$GCP_PROJECT_ID/web-api-server:$date_time
but $GCP_PROJECT_ID is empty.
invalid argument "gcr.io//web-api-server:20200718163842" for "-t, --tag" flag: invalid reference format
See 'docker build --help'
Of course, I confirmed that GCP_PROJECT_ID is defined as a secret.
The reason turned out to be this:
Organization secrets can only be used by public repositories on your plan.
If you would like to use organization secrets in a private repository, you will need to upgrade your plan.
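If upgrading the plan isn't an option, a workaround is to define the same secret at the repository level instead of the organization level, for example with the GitHub CLI (a sketch; OWNER/REPO and the value are placeholders):
gh secret set GCP_PROJECT_ID --repo OWNER/REPO --body "my-gcp-project-id"
Repository-level secrets are available to private repositories on every plan.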

Cloud function deployment issue

When I deploy a Cloud Function I get the following error.
I am using Go modules, and I am able to build and run all the integration tests from my sandbox.
One of the Cloud Function's dependencies uses a private GitHub repo.
When I deploy the Cloud Function:
go: github.com/myrepo/ptrie@v0.1.: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /builder/pkg/mod/cache/vcs/41e03711c0ecff6d0de8588fa6de21a2c351c59fd4b0a1b685eaaa5868c5892e: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
You might want to create a personal access token within GitHub and then configure git to use that token.
That command would look like this:
git config --global url."https://{YOUR TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
After that, git should be able to read from your private repo.
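As an aside, on newer Go toolchains (1.13+) you would typically also mark the module path as private so the module proxy and checksum database are bypassed (the go111 runtime used below predates this):
go env -w GOPRIVATE=github.com/myrepo/*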
Alternatively, how about using endly to automate your Cloud Function build? In this case you would
use go mod with vendoring, so your private repo is added to the vendor folder.
Make sure that you add a .gcloudignore so go.mod and go.sum are not included:
#.gcloudignore
go.mod
go.sum
The automation workflow with endly that uses a private repo with credentials may look like the following:
#deploy.yaml
init:
  appPath: $WorkingDirectory(.)
  target:
    URL: ssh://127.0.0.1/
    credentials: localhost
  myGitSecret: ${secrets.private-git}
pipeline:
  secretInfo:
    action: print
    comments: print git credentials (debugging only)
    message: $AsJSON($myGitSecret)
  package:
    action: exec:run
    comments: vendor build for deployment speedup
    target: $target
    checkError: true
    terminators:
      - Password
      - Username
    secrets:
      #secret var alias: secret file i.e. ~/.secret/private-git.json
      gitSecrets: private-git
    commands:
      - export GIT_TERMINAL_PROMPT=1
      - export GO111MODULE=on
      - unset GOPATH
      - cd ${appPath}/
      - go mod vendor
      - '${cmd[3].stdout}:/Username/? $gitSecrets.Username'
      - '${output}:/Password/? $gitSecrets.Password'
  deploy:
    action: gcp/cloudfunctions:deploy
    '@name': MyFn
    timeout: 540s
    availableMemoryMb: 2048
    entryPoint: MyFn
    runtime: go111
    eventTrigger:
      eventType: google.storage.object.finalize
      resource: projects/_/buckets/${matcherConfig.Bucket}
    source:
      URL: ${appPath}/
Finally, check out Cloud Function e2e testing and deployment automation.

Kubernetes Google Cloud Composer with GitLab CI yaml file

I am working on the deployment of a GitLab CI pipeline to trigger a Google Cloud Composer DAG.
Below is the .yaml I wrote:
stages:
  - deploy
deploy:
  stage: deploy
  image: google/cloud-sdk
  script:
    - apt-get update && apt-get --only-upgrade install kubectl google-cloud-sdk
    - gcloud config set project $GCP_PROJECT_ID
    - gsutil cp plugins/*.py ${PLUGINS_BUCKET}
    - gsutil cp dags/*.py ${DAGS_BUCKET}
    - kubectl get pods
    - gcloud composer environments run ${COMPOSER_ENVIRONMENT} --location ${ENVIRONMENT_LOCATION} trigger_dag -- ${DAG_NAME}
Unfortunately, the execution of the pipeline fails with the error below:
$ gcloud config set project $GCP_PROJECT_ID
Updated property [core/project].
$ gsutil cp plugins/*.py ${PLUGINS_BUCKET}
Copying file://plugins/dataproc_custom_operators.py [Content-Type=text/x-python]...
/ [0 files][ 0.0 B/ 2.3 KiB]
/ [1 files][ 2.3 KiB/ 2.3 KiB]
Operation completed over 1 objects/2.3 KiB.
$ gsutil cp dags/*.py ${DAGS_BUCKET}
copying file://dags/frrm_infdeos_workflow.py [Content-Type=text/x-python]...
/ [0 files][ 0.0 B/ 3.3 KiB]
/ [1 files][ 3.3 KiB/ 3.3 KiB]
Operation completed over 1 objects/3.3 KiB.
$ gcloud composer environments run ${COMPOSER_ENVIRONMENT} --location ${ENVIRONMENT_LOCATION} trigger_dag -- ${DAG_NAME}
kubeconfig entry generated for europe-west1-nameenvironment-a5456e0c-gke.
ERROR: (gcloud.composer.environments.run) No running GKE pods found. If the environment was recently started, please wait and retry.
ERROR: Job failed: command terminated with exit code 1
Do you have any idea how to fix this, please?
Best regards
I had the same problem as @scalacode. For me, the cause was that the gitlab-runner was running in a different GCP project than the Composer environment, so it failed without stating that as the error. Running a gitlab-runner in the same project as the Composer environment fixed the issue.
It seems Composer is unable to retrieve information about the pods/GKE cluster. This could be for a number of reasons ranging from the GKE cluster not creating the nodes to the pods being in a crash loop.
I notice the script does not run “get-credentials” to authenticate to the cluster. When running commands on a GKE cluster through the CLI, you would traditionally have to authenticate to the cluster first. To do this with Composer:
gcloud composer environments describe ${COMPOSER_ENVIRONMENT} --location ${ENVIRONMENT_LOCATION} --format="get(config.gkeCluster)"
This will return something of the form projects/PROJECT/zones/ZONE/clusters/CLUSTER. Then run:
gcloud container clusters get-credentials ${CLUSTER} --zone ${ZONE}
Once you have authenticated to the cluster in the script, see if it is now able to complete. If not, try running kubectl get pods to see what is happening with the pods/if they exist.
If you see many pods restarting or generally not in the “running/completed” state, the issue could be with the pod configuration.
If you don’t see pods at all, the deployment may have failed. Check the deployment with command kubectl get deployments.
The deployments airflow-scheduler, airflow-sqlproxy, & airflow-worker should be present. If those three deployments are not present, the environment was likely tampered with, & it would be easiest to make a new environment.
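Putting the two authentication steps together, a sketch of the extra script lines for the GitLab job (the cut field indexes assume the projects/PROJECT/zones/ZONE/clusters/CLUSTER form shown above):
- GKE_CLUSTER=$(gcloud composer environments describe ${COMPOSER_ENVIRONMENT} --location ${ENVIRONMENT_LOCATION} --format="get(config.gkeCluster)")
- gcloud container clusters get-credentials $(echo $GKE_CLUSTER | cut -d/ -f6) --zone $(echo $GKE_CLUSTER | cut -d/ -f4)
- kubectl get pods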

Using environment properties with files in Elastic Beanstalk config files

Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so requires me to manually set the owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, s3://myapp/config.
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading s3://myapp/config, for example:
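A minimal sketch of such a policy; the resource covers the whole example bucket, since the source URLs below fetch from the bucket root:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myapp/*"
    }
  ]
}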
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars":
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp
commands:
  # commands executes after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least in 2018 (and at least since 2016). You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON, or as YAML with --output YAML):
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
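Applied to the original question, a sketch that builds /etc/passwd-s3fs from environment properties via get-config (assuming AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are defined as environment properties in the console):
container_commands:
  01-write-passwd-s3fs:
    command: |
      # create the file with the desired owner/mode, then fill it in
      install -m 640 -o root -g root /dev/null /etc/passwd-s3fs
      echo "$(/opt/elasticbeanstalk/bin/get-config environment --key AWS_ACCESS_KEY_ID):$(/opt/elasticbeanstalk/bin/get-config environment --key AWS_SECRET_KEY)" > /etc/passwd-s3fs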
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance with your code on $ git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs":
    mode: "000640"
    owner: root
    group: root
container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat as sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.