When I deploy a cloud function I get the following error.
I am using go mod, and I am able to build and run all the integration tests from my sandbox.
One of the cloud function's dependencies uses a private GitHub repo.
When I deploy the cloud function:
go: github.com/myrepo/ptrie@v0.1.: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /builder/pkg/mod/cache/vcs/41e03711c0ecff6d0de8588fa6de21a2c351c59fd4b0a1b685eaaa5868c5892e: exit status 128:
fatal: could not read Username for 'https://github.com': terminal prompts disabled
You might want to create a personal access token within GitHub and then configure git to use that token.
That command would look like this:
git config --global url."https://{YOUR TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
After that, git should be able to read from your private repo
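On newer Go toolchains (1.13+) you may also want to mark the module path as private so the module proxy and checksum database are bypassed for it. A minimal sketch, assuming the private code lives under github.com/myrepo:
# Assumes Go 1.13+; adjust the module path pattern to your org/repo.
go env -w GOPRIVATE=github.com/myrepo/*
# Same token-based rewrite as above, scoped to that org only:
git config --global url."https://{YOUR TOKEN}:x-oauth-basic@github.com/myrepo/".insteadOf "https://github.com/myrepo/"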
How about using endly to automate your cloud function build? In this case you would
use go mod with vendoring, where your private repo would be added to the vendor folder.
Make sure that you add a .gcloudignore so that go.mod and go.sum are not included:
#.gcloudignore
go.mod
go.sum
The automation workflow with endly that uses a private repo with credentials may look like the following:
#deploy.yaml
init:
  appPath: $WorkingDirectory(.)
  target:
    URL: ssh://127.0.0.1/
    credentials: localhost
  myGitSecret: ${secrets.private-git}

pipeline:

  secretInfo:
    action: print
    comments: print git credentials (debugging only)
    message: $AsJSON($myGitSecret)

  package:
    action: exec:run
    comments: vendor build for deployment speedup
    target: $target
    checkError: true
    terminators:
      - Password
      - Username
    secrets:
      #secret var alias: secret file i.e. ~/.secret/private-git.json
      gitSecrets: private-git
    commands:
      - export GIT_TERMINAL_PROMPT=1
      - export GO111MODULE=on
      - unset GOPATH
      - cd ${appPath}/
      - go mod vendor
      - '${cmd[3].stdout}:/Username/? $gitSecrets.Username'
      - '${output}:/Password/? $gitSecrets.Password'

  deploy:
    action: gcp/cloudfunctions:deploy
    '@name': MyFn
    timeout: 540s
    availableMemoryMb: 2048
    entryPoint: MyFn
    runtime: go111
    eventTrigger:
      eventType: google.storage.object.finalize
      resource: projects/_/buckets/${matcherConfig.Bucket}
    source:
      URL: ${appPath}/
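With the workflow saved as deploy.yaml, it would then be run by pointing the endly runner at that file; the -r run-request flag shown here is the usual way, but check your endly version's help:
endly -r=deploy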
Finally, check out cloud function e2e testing and deployment automation.
I'm trying to deploy the GitLab Runner (15.7.1) onto an on-premise Kubernetes cluster and getting the following error:
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
This is occurring with both the 15.7.1 image (Ubuntu?) and the alpine3.13-v15.7.1 image. Looking at the deployment, it looks like it should be trying to use /home/gitlab-runner, but for some reason it is trying to use root (/), which is a protected directory.
Anyone else experience this issue or have a suggestion as to what to look at?
I am using the Helm chart (0.48.0) using a copy of the images from dockerhub (simply moved into a local repository as internet access is not available from the cluster). Connectivity to GitLab appears to be working, but the error causes the overall startup to fail. Full logs are:
Registration attempt 4 of 30
Runtime platform arch=amd64 os=linux pid=33 revision=6d480948 version=15.7.1
WARNING: Running in user-mode.
WARNING: The user-mode requires you to manually start builds processing:
WARNING: $ gitlab-runner run
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Created missing unique system ID system_id=r_Of5q3G0yFEVe
PANIC: loading system ID file: saving system ID state file: creating directory: mkdir /.gitlab-runner: permission denied
I have tried the 15.7.1 image, the alpine3.13-v15.7.1 image, and the gitlab-runner-ocp:amd64-v15.7.1 image and searched the values.yaml for anything relevant to the path. Looking at the deployment template, it appears that it ought to be using /home/gitlab-runner as the directory (instead of /) [though the docs suggested it was /home].
As for "what was I expecting", of course I was expecting that it would "just work" :)
So, resolved this (and other) issues with:
Updated helm deployment template to mount an empty volume at /.gitlab-runner
[separate issue] explicitly added builds_dir and environment [per gitlab-org/gitlab-runner#3511 (comment 114281106)].
These two steps appeared to be sufficient to get the Helm chart deployment working.
You can easily create and mount the emptyDir (in case you are creating gitlab-runner with kubernetes manifest *.yml file):
volumes:
  - emptyDir: {}
    name: gitlab-runner
volumeMounts:
  - name: gitlab-runner
    mountPath: /.gitlab-runner
-------------------- OR --------------------
volumeMounts:
  - name: root-gitlab-runner
    mountPath: /.gitlab-runner
volumes:
  - name: root-gitlab-runner
    emptyDir:
      medium: "Memory"
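If the runner was deployed with the Helm chart and you just want to try the fix on the live Deployment first, a kubectl patch along these lines may work; the namespace and deployment name are assumptions, and the patch appends to the existing volumes/volumeMounts arrays of the first container:
kubectl -n gitlab-runner patch deployment gitlab-runner --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/volumes/-", "value": {"name": "runner-home", "emptyDir": {}}},
  {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-", "value": {"name": "runner-home", "mountPath": "/.gitlab-runner"}}
]'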
Following this tutorial on installing Vault with Helm on OpenShift, I encountered the following error after executing the command:
helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f my_values.yaml
For the config:
values.yaml
echo '# Custom values for the Vault chart
global:
  # If deploying to OpenShift
  openshift: true
server:
  dev:
    enabled: true
  serviceAccount:
    create: true
    name: vault-sa
injector:
  enabled: true
  authDelegator:
    enabled: true' > my_values.yaml
The error:
$ helm install vault hashicorp/vault -n $VAULT_NAMESPACE -f values.yaml
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
What exactly is happening, or how can I reset this specific namespace to point to the right release namespace?
Have you by chance tried the exact same thing before? That is what the error is hinting at.
If we dissect the error, we get to the root of the problem:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists.
So something on the cluster already exists that you were trying to deploy via the helm chart.
Unable to continue with install:
Helm is aborting due to this failure
ClusterRole "vault-agent-injector-clusterrole" in namespace "" exists
So the cluster role vault-agent-injector-clusterrole that the helm chart is supposed to put onto the cluster already exists. ClusterRoles aren't namespace specific, hence the "namespace" is blank.
and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "my-vault-1": current value is "my-vault-0"
The default behavior is to try to import existing resources that this chart requires, but that is not possible here, because the existing ClusterRole is owned by a different release (my-vault-0) than the one you are installing (my-vault-1).
To fix this, you can remove the existing deployment of your chart and then give it another try, and it should work as expected.
Make sure all resources are gone. For this particular one you can check with kubectl get clusterroles
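A minimal cleanup sketch, assuming the old release is named vault and lives in the my-vault-0 namespace as the error suggests:
# Remove the old release first (adjust the release name/namespace to yours):
helm uninstall vault -n my-vault-0
# ClusterRoles are cluster-scoped, so check what is left without a namespace flag:
kubectl get clusterroles | grep vault
# If the old release is already gone but the role remains, delete it by hand:
kubectl delete clusterrole vault-agent-injector-clusterrole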
I am trying to implement automatic deployments for my Windows Kubernetes container app. I'm following instructions from the Google's windows-builder, but the trigger quickly fails with this error at about 1.5 minutes in:
2021/12/16 19:30:06 Set ingress firewall rule successfully
2021/12/16 19:30:06 Failed to get external IP address: Could not get external NAT IP from list
ERROR
ERROR: build step 0 "gcr.io/[my-project-id]/windows-builder" failed: step exited with non-zero status: 1
The container, gcr.io/[my-project-id]/windows-builder, definitely exists and it's located in the same GCP project as the Cloud Build trigger, just as the windows-builder documentation instructs.
I structured my code based off of Google's docker-windows example. Here is my repository file structure:
repository
  cloudbuild.yaml
  builder.ps1
  worker
    Dockerfile
Here is my cloudbuild.yaml:
steps:
  # WORKER
  - name: 'gcr.io/[my-project-id]/windows-builder'
    args: [ '--command', 'powershell.exe -file build.ps1' ]

# OPTIONS
options:
  logging: CLOUD_LOGGING_ONLY
Here is my builder.ps1:
docker build -t gcr.io/[my-project-id]/test-worker ./worker;
if ($?) {
docker push gcr.io/[my-project-id]/test-worker;
}
Here is my Dockerfile:
FROM gcr.io/[my-project-id]/test-windows-node-base:onbuild
Does anybody know what I'm doing wrong here? Any help would be appreciated.
Replicated the steps from GitHub and got the same error. It throws the Failed to get external IP address... error because the external IP address of the VM is disabled by default in the builder's source code. I was able to build it successfully by adding '--create-external-ip', 'true' in cloudbuild.yaml.
Here is my cloudbuild.yaml:
steps:
  - name: 'gcr.io/$PROJECT_ID/windows-builder'
    args: [ '--create-external-ip', 'true',
            '--command', 'powershell.exe -file build.ps1' ]
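To retest the build manually while iterating, rather than waiting on the trigger, something like this should work from the repository root, assuming gcloud is authenticated against the same project:
gcloud builds submit --config=cloudbuild.yaml .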
GitHub Actions' actions/setup-python step doesn't succeed due to its use of the ::set-env and ::add-path workflow commands, which are blocked because they are considered insecure (https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands).
It succeeds when the ACTIONS_ALLOW_UNSECURE_COMMANDS=true environment variable is set for this step.
How can I avoid allowing unsecure commands and still run a Python build successfully?
Self-hosted runner in a custom-built docker image with an Ubuntu 20.04 base image, linked to a GHE server v2.22.4.
Tried with both Python 3.9.0 and 3.9.1
Workflow file:
name: CD testing

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: [ Linux ]
    steps:
      - uses: actions/checkout@v2
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9.0
      - name: execute py script
        run: |
          python -V
##[debug]Found tool in cache Python 3.9.1 x64
::set-env name=pythonLocation::/opt/hostedtoolcache/Python/3.9.1/x64
##[error]Unable to process command '::set-env name=pythonLocation::/opt/hostedtoolcache/Python/3.9.1/x64' successfully.
##[error]The `set-env` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `set-env` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.SetEnvCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::add-path::/opt/hostedtoolcache/Python/3.9.1/x64
##[error]Unable to process command '::add-path::/opt/hostedtoolcache/Python/3.9.1/x64' successfully.
##[error]The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.AddPathCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::add-path::/opt/hostedtoolcache/Python/3.9.1/x64/bin
##[error]Unable to process command '::add-path::/opt/hostedtoolcache/Python/3.9.1/x64/bin' successfully.
##[error]The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug]System.Exception: The `add-path` command is disabled. Please upgrade to using Environment Files or opt into unsecure command execution by setting the `ACTIONS_ALLOW_UNSECURE_COMMANDS` environment variable to `true`. For more information see: https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/
##[debug] at GitHub.Runner.Worker.AddPathCommandExtension.ProcessCommand(IExecutionContext context, String line, ActionCommand command, ContainerInfo container)
##[debug] at GitHub.Runner.Worker.ActionCommandManager.TryProcessCommand(IExecutionContext context, String input, ContainerInfo container)
::set-output name=python-version::3.9.1
##[debug]='3.9.1'
Successfully setup CPython (3.9.1)
Update actions/setup-python, which uses set-env in previous versions, from
- uses: actions/setup-python@v2
to the latest version
- uses: actions/setup-python@v2.2.1
There's a bug - v2 does not use the latest version.
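For reference, the Environment Files mechanism that the error message asks you to upgrade to replaces set-env and add-path with writes to $GITHUB_ENV and $GITHUB_PATH inside a run step; the paths below are just the ones from the log above:
echo "pythonLocation=/opt/hostedtoolcache/Python/3.9.1/x64" >> "$GITHUB_ENV"
echo "/opt/hostedtoolcache/Python/3.9.1/x64/bin" >> "$GITHUB_PATH"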
Working with Elastic Beanstalk .config files is kinda... interesting. I'm trying to use environment properties with the files: configuration option in an Elastic Beanstalk .config file. What I'd like to do is something like:
files:
  "/etc/passwd-s3fs" :
    mode: "000640"
    owner: root
    group: root
    content: |
      ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY}
To create an /etc/passwd-s3fs file with content something like:
ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd
I.e. use the environment properties defined in the AWS Console (Elastic Beanstalk/Configuration/Software Configuration/Environment Properties) to initialize system configuration files and such.
I've found that it is possible to use environment properties in container_commands, like so:
container_commands:
  000-create-file:
    command: echo ${AWS_ACCESS_KEY_ID}:${AWS_SECRET_KEY} > /etc/passwd-s3fs
However, doing so will require me to manually set owner, group, file permissions, etc. It's also much more of a hassle than the files: configuration option when dealing with larger configuration files...
Anyone got any tips on this?
How about something like this? I will use the word "context" for dev vs. qa.
Create one file per context:
dev-envvars
export MYAPP_IP_ADDR=111.222.0.1
export MYAPP_BUCKET=dev
qa-envvars
export MYAPP_IP_ADDR=111.222.1.1
export MYAPP_BUCKET=qa
Upload those files to a private S3 folder, s3://myapp/config.
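Uploading them could look like this; the bucket and prefix are just the ones from this example, and the source: URLs in the config below need to point at the same keys:
aws s3 cp dev-envvars s3://myapp/config/dev-envvars
aws s3 cp qa-envvars s3://myapp/config/qa-envvars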
In IAM, add a policy to the aws-elasticbeanstalk-ec2-role role that allows reading S3://myapp/config.
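A sketch of that grant via the CLI; the policy name here is made up, and the resource should be scoped to wherever you actually uploaded the files:
aws iam put-role-policy --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name myapp-config-read \
  --policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::myapp/*"}]}'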
Add the following file to your .ebextensions directory:
envvars.config
files:
  "/opt/myapp_envvars" :
    mode: "000644"
    owner: root
    group: root
    # change the source when you need a different context
    #source: https://s3-us-west-2.amazonaws.com/myapp/dev-envvars
    source: https://s3-us-west-2.amazonaws.com/myapp/qa-envvars

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: myapp

commands:
  # commands executes after files per
  # http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
  10-load-env-vars:
    command: . /opt/myapp_envvars
Per the AWS Developer's Guide, commands "run before the application and web server are set up and the application version file is extracted," and before container-commands. I guess the question will be whether that is early enough in the boot process to make the environment variables available when you need them. I actually wound up writing an init.d script to start and stop things in my EC2 instance. I used the technique above to deploy the script.
Credit for the “Resources” section that allows downloading from secured S3 goes to the May 7, 2014 post that Joshua@AWS made to this thread.
I am gravedigging, but since I stumbled across this in the course of my travels: there is a "clever" way to do what you describe, at least as of 2018, and in fact since at least 2016. You can retrieve an environment variable by key with get-config:
/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY
And likewise all environment variables (as JSON, or as YAML with --output YAML) with:
/opt/elasticbeanstalk/bin/get-config environment
Example usage in a container command:
container_commands:
  00_store_env_var_in_file_and_chmod:
    command: "/opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_KEY | install -D /dev/stdin /etc/somefile && chmod 640 /etc/somefile"
Example usage in a file:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/00_do_stuff.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      YOUR_ENV_VAR=$(source /opt/elasticbeanstalk/bin/get-config environment --key YOUR_ENV_VAR_KEY)
      echo "Hello $YOUR_ENV_VAR"
I was introduced to get-config by Thomas Reggi in https://serverfault.com/a/771067.
I assume that AWS_ACCESS_KEY_ID and AWS_SECRET_KEY are known to you prior to the app deployment.
You can create the file on your workstation and submit it to the Elastic Beanstalk instance along with your code on $ git aws.push:
$ cd .ebextensions
$ echo 'ABAC73E92DEEWEDS3FG4E:aiDSuhr8eg4fHHGEMes44zdkIJD0wkmd' > passwd-s3fs
In .config:
files:
  "/etc/passwd-s3fs" :
    mode: "000640"
    owner: root
    group: root

container_commands:
  10-copy-passwords-file:
    command: "cat .ebextensions/passwd-s3fs > /etc/passwd-s3fs"
You might have to play with the permissions or execute cat as sudo. Also, I put the file into .ebextensions as an example; it can be anywhere in your project.
Hope it helps.