Google Apps Script and Cloudbuild CI Login - google-apps-script

I am very close to getting a Google Apps Script project to push and deploy with clasp through Google Cloud Build, but I'm stuck. The push and deploy commands come from Google's clasp CLI, which requires you to log in with your Google credentials via clasp login. Logging in creates a file in your home dir called ~/.clasprc.json with your credentials, and that file is needed to push and deploy. In the cloudbuild.yaml I created a substitution called _CLASPRC to hold the contents of this file and used my own custom builder image to write it into the container while the build runs.
Now for the issue: when the push command runs I get the error below, which is clasp's not-very-useful way of reporting that I'm not logged in, or any other problem with the .clasprc.json. Since this is the only error I ever get no matter what the underlying problem is, it is hard to debug.
Could not read API credentials. Are you logged in globally?
I have tried putting the .clasprc.json in both the home dir and the project dir, but I get the same error either way. I'm fairly sure the file is being written to the project dir, because when I run locally without the .clasp.json it complains that the .clasp.json is missing before complaining that I'm not logged in. When the .clasp.json is there it only complains that I'm not logged in.
The project is just a personal, open-source project on GitHub, so here are links if you want to reference the actual code: My Lil Admin, and the builder I used, My Builders. You don't really need the project, though; to reproduce, follow the steps below locally.
1. Make sure you have a GCP project created and the gcloud CLI installed, with the Apps Script API enabled.
2. Install the clasp CLI with npm install -g @google/clasp.
3. Run clasp login to get a .clasprc.json and authenticate with GCP.
4. Run clasp create --title "My Script" --type webapp and take note of the script ID.
5. Associate the Apps Script project with your GCP project.
The following steps are the files which lead to the problem. Simply add them to the clasp project created above.
6. Here is the entrypoint for my Clasp Builder Image:
builder/clasp_ci.sh
#!/bin/bash
# If there is a _CLASPRC var and no .clasprc.json file, write the credentials file
if [ ! -z "${_CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
  echo "${_CLASPRC}" > "${HOME}/.clasprc.json"
fi
# If there is a _SCRIPT_ID and a PROJECT_ID and no .clasp.json file, write the project file
if [ ! -z "${_SCRIPT_ID}" -a ! -z "${PROJECT_ID}" -a ! -f ".clasp.json" ]; then
  cat > '.clasp.json' << EOF
{"scriptId":"$_SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi
# Pass all arguments through to clasp
clasp "$@"
The builder's Dockerfile:
builder/Dockerfile
# use Node 8 LTS (Carbon)
FROM node:8.16.1
COPY clasp_ci.sh /usr/local/bin/clasp_ci
# install the clasp CLI and make the entrypoint executable
RUN npm install -g @google/clasp && \
    chmod +x /usr/local/bin/clasp_ci
ENTRYPOINT ["/usr/local/bin/clasp_ci"]
Now the Cloud Build config to build and push the clasp builder image:
builder/cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/$PROJECT_ID/clasp', '.' ]
images:
- 'gcr.io/$PROJECT_ID/clasp'
My Cloud Build CI config for the Apps Script project. If you're making a new project to follow along, you don't need the build steps or the dir key in the push and deploy steps; those parts are specific to the project linked above.
cloudbuild.yaml
steps:
- id: install
  name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- id: build-server
  name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'gas']
  env:
  - 'NODE_ENV=production'
- id: build-client
  name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'prod']
  env:
  - 'NODE_ENV=production'
- id: push
  name: 'gcr.io/$PROJECT_ID/clasp'
  dir: './dist/gas'
  args: ['push', '-f']
- id: deploy
  name: 'gcr.io/$PROJECT_ID/clasp'
  dir: './dist/gas'
  args: ['deploy', '$TAG_NAME']
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'
Here is the command to build and push the builder image. Make sure to replace yourproject with your actual project ID.
cd builder && gcloud builds submit --project yourproject --config=cloudbuild.yaml .
And the command that finally produces the error. Make sure to replace yourproject with your actual project ID and your_script_id with the script ID you took note of in step 4.
gcloud builds submit --project yourproject --config=cloudbuild.yaml . \
--substitutions=_CLASPRC="$(cat $HOME/.clasprc.json)" \
--substitutions=_SCRIPT_ID="your_script_id"
I have also tried using the credentials created by logging in with OAuth, but I got the exact same error. It may still be useful in solving the issue: Docs for Clasp Run with OAuth.
Hopefully someone can help me get this working. If so, this would be the first documentation online for a Cloud Build CI with Apps Script and clasp, since I can't find anyone doing this anywhere. I have found some posts using Travis and Jenkins, but what they are doing does not work for me. Does anyone see something that I'm not? What am I missing here?
Some other somewhat related or never solved issues:
https://github.com/google/clasp/issues/524
https://github.com/google/clasp/blob/master/tests/README.md
https://github.com/google/clasp/issues/225
https://github.com/gazf/google-apps-script-ci-starter

OK, so after a bunch of debugging I found out that Cloud Build substitution variables are not passed to the build step's container as environment variables. You have to map the substitution variables to environment variables explicitly (with the env key), and then the container gets the variables it needs.
Here is the updated CI entrypoint:
builder/clasp_ci.sh
#!/bin/bash
# If there is a CLASPRC var and no .clasprc.json file, write the credentials file
if [ ! -z "${CLASPRC}" -a ! -f "${HOME}/.clasprc.json" ]; then
  echo "${CLASPRC}" > "${HOME}/.clasprc.json"
fi
# If there is a SCRIPT_ID and a PROJECT_ID and no .clasp.json file, write the project file
if [ ! -z "${SCRIPT_ID}" -a ! -z "${PROJECT_ID}" -a ! -f ".clasp.json" ]; then
  cat > '.clasp.json' << EOF
{"scriptId":"$SCRIPT_ID","projectId": "$PROJECT_ID"}
EOF
fi
# Pass all arguments through to clasp
clasp "$@"
And then the updated Cloud Build config:
cloudbuild.yaml
steps:
- id: install
  name: 'gcr.io/cloud-builders/npm'
  args: ['install']
- id: build-server
  name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'gas']
  env:
  - 'NODE_ENV=production'
- id: build-client
  name: 'gcr.io/cloud-builders/npm'
  args: ['run', 'prod']
  env:
  - 'NODE_ENV=production'
- id: push
  name: 'gcr.io/$PROJECT_ID/clasp'
  dir: './dist/gas'
  args: ['push', '-f']
  env:
  - 'CLASPRC=$_CLASPRC'
  - 'SCRIPT_ID=$_SCRIPT_ID'
  - 'PROJECT_ID=$PROJECT_ID'
- id: deploy
  name: 'gcr.io/$PROJECT_ID/clasp'
  dir: './dist/gas'
  args: ['deploy', '$TAG_NAME']
  env:
  - 'CLASPRC=$_CLASPRC'
  - 'SCRIPT_ID=$_SCRIPT_ID'
  - 'PROJECT_ID=$PROJECT_ID'
substitutions:
  _CLASPRC: 'your clasp rc file in your home dir after logging in locally'
  _SCRIPT_ID: 'your script id of the apps script project to deploy to'
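One caveat worth adding: passing the raw .clasprc.json on the command line can run into shell-quoting trouble because of the spaces and quotes in the JSON. A possible workaround, not part of the setup above and only a sketch, is to base64-encode the file for the substitution and decode it in the entrypoint:

# encode the credentials when submitting the build
# (base64 -w0 is GNU coreutils; on macOS use: base64 < "$HOME/.clasprc.json" | tr -d '\n')
gcloud builds submit --project yourproject --config=cloudbuild.yaml . \
  --substitutions=_CLASPRC="$(base64 -w0 "$HOME/.clasprc.json")",_SCRIPT_ID="your_script_id"

# and in clasp_ci.sh, decode instead of echoing:
echo "${CLASPRC}" | base64 -d > "${HOME}/.clasprc.json"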

Related

Github dispatches workflow Invalid request

I'm trying to trigger a workflow event in GitHub.
For some reason, I'm able to GET information about my organization repository's workflow, but I cannot use '/dispatches'.
Work is based on: https://docs.github.com/en/rest/actions/workflows#create-a-workflow-dispatch-event
Here is the curl code:
curl -X POST \
-H "Accept:application/vnd.github.v3+json" \
-H 'Authorization:token ${{ github.token }}' \
'https://api.github.com/repos/[owner/org]/[repo]/actions/workflows/9999999/dispatches' \
-d '{"event_type":"semantic-release"}'
Getting error:
422 Unprocessable Entity
"message": "Invalid request.\n\nFor 'links/0/schema', nil is not an object.",
"documentation_url": "https://docs.github.com/rest/reference/repos#create-a-repository-dispatch-event"
Am I missing some basic information for this to work and trigger an event?
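For reference, the workflow-dispatch endpoint expects a JSON body with a ref (and optional inputs); event_type belongs to the separate repository-dispatch endpoint, which is also where the error's documentation_url points. A minimal corrected call could look like this (the branch name main is an assumption):

curl -X POST \
  -H "Accept: application/vnd.github.v3+json" \
  -H "Authorization: token ${{ github.token }}" \
  'https://api.github.com/repos/[owner/org]/[repo]/actions/workflows/9999999/dispatches' \
  -d '{"ref":"main"}'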
Instead of trying to call the GitHub API directly, try using the GitHub CLI gh (which you can install first to test locally).
You can also use GitHub CLI in workflows.
GitHub CLI is preinstalled on all GitHub-hosted runners.
For each step that uses GitHub CLI, you must set an environment variable called GITHUB_TOKEN to a token with the required scopes
It has a gh workflow run, which does create a workflow_dispatch event for a given workflow.
Authenticate first (gh auth login, if you are doing a local test):
# authenticate against github.com by reading the token from a file
$ gh auth login --with-token < mytoken.txt
Examples:
# Run the workflow file 'triage.yml' at the remote's default branch
$ gh workflow run triage.yml
# Run the workflow file 'triage.yml' at a specified ref
$ gh workflow run triage.yml --ref my-branch
# Run the workflow file 'triage.yml' with command line inputs
$ gh workflow run triage.yml -f name=scully -f greeting=hello
# Run the workflow file 'triage.yml' with JSON via standard input
$ echo '{"name":"scully", "greeting":"hello"}' | gh workflow run triage.yml --json
In your case (GitHub Action):
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
As explained by hanayama in the comments:
Found out that secrets.GITHUB_TOKEN doesn't work, even with permissions edited for the entire workflow.
Using a personal access token worked.
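In that case the step stays the same, but the env entry points at the personal access token instead. A sketch, assuming the PAT is stored as a repository secret named PERSONAL_ACCESS_TOKEN (that secret name is an assumption):

jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - run: gh workflow run triage.yml
        env:
          # a personal access token stored as a repository secret (hypothetical name)
          GITHUB_TOKEN: ${{ secrets.PERSONAL_ACCESS_TOKEN }}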

Use case of OpenShift + buildConfig + ConfigMaps

I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Spring Boot web UI applications which I need to deploy on OpenShift.
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes)
for each and every application seems very inefficient.
Instead, I would like to have a single set of parameterized YAML files
to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile:
FROM org/rhelImage
USER root
# Install yum packages
RUN yum -y install\
    net-tools\
 && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
      - name: GIT_SSL_NO_VERIFY
        value: "true"
      - name: ARTIFACTURL
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACTURL
      - name: ARTIFACT
        valueFrom:
          configMapKeyRef:
            name: "myapp-configmap"
            key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this for greater flexibility, i.e. if a new variable is introduced in the Dockerfile, I should NOT need to change the buildconfig.yml;
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I do next:
Version Two
Dockerfile
FROM org/rhelImage
USER root
# Install yum packages
RUN yum -y install\
    net-tools\
 && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps test-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
contextDir: "${param_source_contextdir}"
configMaps:
- configMap:
name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the config map file didn't get copied to the folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how Kubernetes/OpenShift works. If your resource files look the same, but only use a different git resource or image for example, then you are probably looking for OpenShift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think OpenShift Templates is what you are looking for. If you upload your template to the service catalog, then whenever you have a new application to deploy, you can fill in some variables in a UI and click deploy.
An OpenShift Template is just a parameterised file for all of your OpenShift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables.
But also take a look at OpenShift's Source-to-Image solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own resource files.
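To make that concrete, here is a minimal sketch of what such a Template could look like; every name and parameter below is hypothetical and only illustrates the parameterization, it is not a drop-in file for the setup above:

apiVersion: v1
kind: Template
metadata:
  name: myapp-template
parameters:
  - name: PARAM_APP_NAME
    description: Name for the BuildConfig and related objects
    required: true
  - name: PARAM_GIT_URI
    description: Git repository to build from
    required: true
  - name: PARAM_ARTIFACTURL
    description: URL of the application artifact
    required: true
objects:
  - apiVersion: v1
    kind: BuildConfig
    metadata:
      name: "${PARAM_APP_NAME}"
    spec:
      source:
        type: Git
        git:
          uri: "${PARAM_GIT_URI}"
      strategy:
        type: Docker
        dockerStrategy:
          env:
            - name: ARTIFACTURL
              value: "${PARAM_ARTIFACTURL}"
      output:
        to:
          kind: ImageStreamTag
          name: "${PARAM_APP_NAME}:latest"

You would then instantiate it once per application with something like oc process -f myapp-template.yaml -p PARAM_APP_NAME=myapp -p PARAM_GIT_URI=<repo> -p PARAM_ARTIFACTURL=<url> | oc apply -f -.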

How to add Chrome to a container to overcome the error 'Failed to launch chrome' in CircleCI

I'm trying to run CodeceptJS on CircleCI but I keep running into the same issue where it says Failed to launch chrome.
I believe it is a problem with puppeteer but I cannot find the issue online.
I've tried adding the following to my codecept.conf.js file.
helpers: {
  Puppeteer: {
    url: process.env.CODECEPT_URL || 'http://localhost:3030'
  },
  chrome: {
    args: ["--headless", "--no-sandbox"]
  }
},
I've tried to install Chrome in the container where I'm running the tests:
docker-compose exec aubisque npx codeceptjs run --steps
I thought the problem might be that Chrome didn't exist in the container, but I couldn't figure out how to install it. I have also read that Puppeteer ships with its own build of Chromium :S.
acceptance:
  working_directory: ~/aubisque-api
  docker:
    - image: circleci/node:latest-browsers
      environment:
        NODE_ENV: development
  steps:
    - checkout
    - setup_remote_docker
    - restore_cache:
        name: Restore NPM Cache
        keys:
          - package-lock-cache-{{ checksum "package-lock.json" }}
    - run:
        name: Install git-crypt
        command: |
          curl -L https://github.com/AGWA/git-crypt/archive/debian/0.6.0.tar.gz | tar zxv &&
          (cd git-crypt-debian && sudo make && sudo make install)
    - run:
        name: decrypt files
        command: |
          echo $DECRYPT_KEY | base64 -d >> keyfile
          git-crypt unlock keyfile
          rm keyfile
    - run:
        name: Build and run acceptance tests
        command: |
          docker-compose -f docker-compose-ci.yml build --no-cache
          docker-compose -f docker-compose-ci.yml up -d
          docker-compose exec aubisque npx codeceptjs run --steps
This is the acceptance job from my .circleci/config.yml where I run my acceptance tests. I am running the jobs in workflows, and before this job runs, another job installs the npm modules.
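Note that the tests are executed inside the aubisque container started by docker-compose, not directly on the CircleCI browsers image, so that container needs a Chrome/Chromium it can launch. One possible approach, only a sketch and not taken from the question, is to install a system Chromium in that image and point Puppeteer at it; the base image, package name, and paths are assumptions that may differ in your setup:

# Dockerfile of the container that runs codeceptjs (hypothetical base image)
FROM node:12-slim
# Install a system Chromium plus its runtime libraries
RUN apt-get update \
  && apt-get install -y --no-install-recommends chromium \
  && rm -rf /var/lib/apt/lists/*
# Tell Puppeteer to skip downloading its own Chromium (must be set before npm install)
# and to launch the system one; keep --no-sandbox in codecept.conf.js as already shown
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium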

Asp.net core + Aws Elastic Beanstalk + Bitbucket pipeline

How can we use Bitbucket Pipelines to update an ASP.NET Core website on AWS Elastic Beanstalk?
I know this is a late answer, but I did the same thing a few days ago, so here is an example of how I did it.
First you have to enable Pipelines in Bitbucket and choose .NET Core.
In bitbucket-pipelines.yml you need to write something like this:
image: microsoft/dotnet:sdk
pipelines:
  branches:
    staging:
      - step:
          name: build publish prepare and zip
          caches:
            - dotnetcore
          script:
            - apt-get update && apt-get install --yes zip
            - export PROJECT_NAME=<your-project-name>
            - dotnet restore
            - dotnet build $PROJECT_NAME
            - dotnet publish --self-contained --runtime win-x64 --configuration Release
            - zip -j site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/* -x aws-windows-deployment-manifest.json
            - zip -r -j application.zip site.zip /opt/atlassian/pipelines/agent/build/<your-project-name>/bin/Release/netcoreapp2.0/win-x64/publish/aws-windows-deployment-manifest.json
          artifacts:
            - application.zip
      - step:
          name: upload to elasticbeanstalk
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:0.5.0
              variables:
                APPLICATION_NAME: '<application-name>'
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                #COMMAND: 'upload-only'
                ZIP_FILE: 'application.zip'
                ENVIRONMENT_NAME: '<environment-name>'
                WAIT: 'true'
In Settings -> Pipelines -> Repository variables you have to set the AWS keys: access key, secret, and region, which are referenced with $ (e.g. $AWS_SECRET_ACCESS_KEY).
Additionally, you will have to create the S3 bucket "-elsticbeanstalk-deployments" (if you don't create it, the environment will show a "not found" error with the name of the bucket when it tries to upload your zip, so just copy that name and create the bucket in S3).
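If you prefer to create that bucket from the command line rather than the S3 console, something like this works; the bucket name and region are placeholders for the exact values from the error message and your pipeline variables:

# create the deployment bucket the pipe expects (name and region are placeholders)
aws s3 mb s3://<bucket-name-from-error> --region <your-region>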

Install input secret into OpenShift build configuration

I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
  contextDir: ...
  git:
    uri: ...
  sourceSecret:
    name: my_secret
  type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
--build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
  git:
    uri: https://github.com/openshift/nodejs-ex.git
  secrets:
  - destinationDir: .
    secret:
      name: secret-npmrc
  type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (as well as --push/--pull), but what about input secrets? I have not found out yet.
So I have these questions:
How can I add my_secret as an input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
1. Leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (w/o path).
2. Add the input secret to the build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc.
3. Sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret.
4. Access the secret inside the Dockerfile in a preliminary way: COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p').
5. Rebuild and redeploy the image.
6. Sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /).
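Spelled out as Dockerfile lines (the secret/ path matches the destinationDir used above; adjust the file names to your own layout):

# the input secret is injected into the build context under ./secret
COPY secret/ssh-privatekey secret/my_secret
RUN chmod 0640 secret/my_secret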
The comments on the question mention patching the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched