Google Cloud Build always deploys my Cloud Function with the previous commit

I have a problem with my automated Cloud Function deployment.
I have a Cloud Function whose source is stored in a Google Cloud Source Repository.
The repository includes a cloudbuild.yaml file with this content:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["functions", "deploy", "myfunction", "--region=europe-west1"]
timeout: "1600s"
I only have one branch, master.
When I push a commit, Cloud Build triggers and deploys the Cloud Function.
The problem is that it always deploys the previous commit, not the latest one.
For example:
At 2:23, I push my source code to the Google Source Repository.
At 2:23:33, Cloud Build triggers and successfully deploys the Cloud Function.
Here is the Cloud Build log:
starting build "e3a0e735-50fc-4315-bafd-03128156d69f"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/myproject/r/myrepo
* branch 1b67729b8498c35fc19a45b14b8d674635300594 -> FETCH_HEAD
HEAD is now at 1b67729 PrayingforCommit
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Deploying function (may take a while - up to 2 minutes)...
...............................................done.
availableMemoryMb: 256
entryPoint: process_gcs
eventTrigger:
eventType: google.storage.object.finalize
failurePolicy: {}
resource: projects/_/buckets/mybucket
service: storage.googleapis.com
ingressSettings: ALLOW_ALL
labels:
deployment-tool: cli-gcloud
name: projects/myproject/locations/europe-west1/functions/myfunction
runtime: python37
serviceAccountEmail: myproject@appspot.gserviceaccount.com
sourceRepository:
deployedUrl: https://source.developers.google.com/projects/myproject/repos/myrepo/revisions/2ed14c3225e7fcc089f2bc6a0ae29c7564ec12b9/paths/
url: https://source.developers.google.com/projects/myproject/repos/myrepo/moveable-aliases/master/paths/
status: ACTIVE
timeout: 60s
updateTime: '2020-04-15T00:24:55.184Z'
versionId: '2'
PUSH
DONE
As you can see, the commit that triggered the build is 1b67729, but the deployedUrl line shows 2ed14c3, which is the previous commit.
The operation ended at 2:24:55, and I see the same time in my Cloud Function's source tab.
If I just click the Edit button and then the Deploy button to manually force a rebuild of the Cloud Function, it deploys the correct commit (1b67729).
Where is my mistake with Cloud Build, and how can I always deploy the latest commit?
Thanks for your help.

I have run into this same issue (though I was using GitHub mirrors rather than native Cloud Source Repositories).
Cloud Functions does not check for updates to source repos when the --source flag is omitted
Your function was previously deployed directly from a source repository and you are not passing a --source flag to gcloud. Under these circumstances gcloud ignores the code in the local directory.
The easiest fix is to explicitly specify the source in your cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["functions", "deploy", "myfunction", "--region=europe-west1", "--source=."]
timeout: "1600s"
You would not hit this if the function had never been configured to fetch its source directly from the repository.
Your Cloud Function has previously been deployed from a source repository
When configuring a Cloud Function, there is an option to fetch source code from a Cloud Source Repository. You are prompted with this as an option when creating a function through the console, but could also have used gcloud:
gcloud functions deploy NAME \
    --source https://source.developers.google.com/projects/PROJECT_ID/repos/REPOSITORY_ID/moveable-aliases/master/paths/SOURCE \
    [... other gcloud options ...]
This is the 'sourceRepository' setting that you can see in the build output.
However, as the deployedUrl in your build output shows (it still points at the previous commit 2ed14c3, not 1b67729), Cloud Functions is not updating its code from there on these deploys.
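If you want to double-check which source a function is currently wired to, a quick way (a sketch, reusing the function name and region from your question) is to describe it:

gcloud functions describe myfunction --region=europe-west1
# A function linked to a repo shows the sourceRepository block (as in your build output);
# a function deployed from a local directory or a Cloud Storage zip shows an
# upload/archive source field instead.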
Omitting the --source flag to gcloud leads to potentially confusing behaviour
When you leave out the --source flag to gcloud, if the function was previously deployed directly from a repository, specific behaviour applies. From the documentation for gcloud functions deploy:
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
You are hitting the third option.
When deploying through Cloud Build, it is better not to link Cloud Functions directly to the source repository
Cloud Build is configured to run gcloud in a directory containing a copy of your source code. You can therefore deploy directly from the local filesystem - this packages up the function in a zip file, uploads it to Cloud Storage, and tells Cloud Functions to fetch it.
If you tell Cloud Functions to fetch its code from a repository, then a second checkout of the repo is made and a zip file created from there, which will be slightly slower (and potentially prone to a race condition if the branch has updated in the background).
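To make the local-filesystem route concrete, it is roughly equivalent to staging the zip yourself and deploying from Cloud Storage; here is a minimal sketch (the bucket name and paths are made up for illustration):

# Package the current checkout, stage it in Cloud Storage, and deploy from there.
zip -r /tmp/myfunction.zip . -x "*.git*"
gsutil cp /tmp/myfunction.zip gs://my-staging-bucket/myfunction.zip
gcloud functions deploy myfunction --region=europe-west1 \
    --source=gs://my-staging-bucket/myfunction.zip

With --source=. in the cloudbuild.yaml above, gcloud performs this packaging and upload for you from the Cloud Build workspace.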

Related

GitHub Workflow correctly loads env with manually triggered job, but not with automated job

My group has a project with a GitHub publishing workflow that has two versions: one which is automated as soon as the builds are finished compiling and one that can be manually triggered from the GitHub Actions interface. Aside from the triggering event, the jobs in the yaml files are identical.
The issue is that with the automated trigger the env is not completely loaded, whereas the manual trigger does correctly load the env.
We have tried multiple different "solutions" ranging from resetting our repository secrets to explicitly instantiating the env at the job level (in addition to the workflow level). The result has been the same every time.

Why does GitLab CI not find my JUnit report artifact?

I am trying to upload JUnit reports to GitLab CI (these are test results from my Cypress automation framework). I am using junit-merge. Due to the architecture of Cypress (each test runs in isolation), an extra 'merge' step is required to get the reports into one file. Locally everything works fine:
JUnit generates a single report per test, named with a hash.
After all reports have been generated, I run a script (shown below) that merges all the reports into one single .xml file and outputs it under the 'results' folder.
I tried to debug it locally, but locally everything just works fine. Possibilities I could think of: either the merge script is not handled properly, or GitLab does not accept the relative path to the .xml file.
{
  "baseUrl": "https://www-acc.anwb.nl/",
  "reporter": "mocha-junit-reporter",
  "reporterOptions": {
    "mochaFile": "results/resultsreport.[hash].xml",
    "testsuiteTitle": "true"
  }
}
This is the cypress.json file, where I configured the JUnit reporter and let it output the individual test result files into the results folder.
cypress-e2e:
  image: cypress/base:10
  stage: test
  script:
    - npm run cy:run:staging
    - npx junit-merge -d results -o results/results.xml
  artifacts:
    paths:
      - results/results.xml
    reports:
      junit: results/results.xml
    expire_in: 1 week
This is part of the yml file. The npx junit-merge command makes sure all .xml files in the results package are being merged into results.xml.
Again, locally everything works as expected. The error I get from GitLab CI is:
Uploading artifacts...
WARNING: results/results.xml: no matching files
ERROR: No files to upload
ERROR: Job failed: exit code 1
Artifacts can only exist in directories relative to the build directory, and specifying paths which don't comply with this rule triggers an unintuitive and illogical error message (an enhancement is discussed at gitlab-ce#15530). Artifacts need to be uploaded to the GitLab instance (not only to the GitLab runner) before the next stage's job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup (see https://gitlab.com/gitlab-org/gitlab-ee/tree/master/doc/ci/caching).
This means the following configuration should fix the problem:
artifacts:
  reports:
    junit: <testing-repo>/results/results.xml
  expire_in: 1 week

Commit SHA from Pipeline w/ OpenShift Plugin

I'm using the OpenShift plugin with Jenkins Pipelines to run builds in OpenShift when GitHub gets a new commit.
I'd also like to be able to report the status of the build back to GitHub.
However, in order to do this, I need to know which commit was just built. I'm using the following pipeline config:
node() {
    stage 'build'
    def builder = openshiftBuild(buildConfig: 'my-web', showBuildLogs: 'true')
    stage 'deploy'
    openshiftDeploy(deploymentConfig: 'my-web')
    openshiftScale(deploymentConfig: 'my-web', replicaCount: '3')
}
However, I have no idea how to get the commit SHA from the openshiftBuild step, since that step does the git pull.
According to https://wiki.jenkins.io/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables, you get it from the GIT_COMMIT environment variable.
If the checkout happens later, you can get it with the following code:
def gitCommitId = sh(returnStdout: true, script: 'git rev-parse HEAD').trim()  // trim the trailing newline from the shell output
It's hard to tell without seeing the rest of your pipeline, but it looks like you are just triggering an OpenShift S2I build, which is not what is recommended for Pipeline builds. You should have your pipeline build the artifact(s) for the application, then use an S2I binary build to have OpenShift put the artifacts into a runtime container. For an example, see HERE.

Azure AppService deploy.cmd using the wrong file

I am trying to configure continuous deployment to a test server on Azure. The app is an ASP.NET application, but in this case that shouldn't really matter.
My build process (TeamCity) produces a folder that has everything needed to deploy (minus some connection string info). If you point IIS at that directory it works great. If you FTP that directory up to Azure it also works.
I am tracking each of these builds in git and pushing them up to GitHub. So I am trying to use the Azure deployment option to deploy from GitHub. Everything is in git, the /bin folder included.
Kudu shouldn't need to do anything but a pull from git and copy all the files to wwwroot.
So I've set my .deployment file to be this:
[config]
project = .
Every time I do that, though, the deployment gives me the message:
Using cached version of deployment script (command: 'azure -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools" --aspWAP "D:\home\site\repository\MyProj.csproj" --no-solution').
And it runs some generic autogenerated deploy.cmd.
If I delete the deploy.cmd from the cache, it regenerates some generic one.
And, most importantly, in doing all this, the WRONG ASSEMBLY IS BEING DEPLOYED!!
My app depends on System.Web.Helpers.dll. The correct version of this DLL is in github. I've verified this multiple times.
Kudu, however, is grabbing an OLDER one from NuGet and deploying that. And, of course, I get the dreaded YSOD error about not being able to load that file.
What do I need to do to make Kudu just copy the files from my github repository to wwwroot and nothing else?
I wound up getting it to deploy by hand-editing the autogenerated deploy.cmd file that lives at \home\site\deployments\tools\deploy.cmd in Kudu.
I commented out the 2 autogenerated lines of:
:: 1. Restore NuGet packages
:: 2. Build to the temporary path
(commented out all the code underneath them, too)
And then hand-edited the 3rd section to run KuduSync from the DEPLOYMENT_SOURCE instead of the temp path, like this:
:: 3. KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_SOURCE%" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

Deploying multiple Google Cloud Functions from same repo

The documentation for Google Cloud Functions is a little vague - I understand how to deploy a single function that is contained within index.js - even in a specific directory, but how does one deploy multiple cloud functions which are located within the same repository?
AWS Lambda allows you to specify a specific file and function name:
/my/path/my-file.myHandler
Lambda also allows you to deploy a zip file containing only the files required to run, omitting all of the optional transitive npm dependencies and their resources. For some libraries (e.g. Oracle DB), including node_modules/** would significantly increase the deployment time, and possibly exceed storage limits (it does on AWS Lambda).
The best that I can manage with Google Cloud Function deployment is:
$ gcloud alpha functions deploy my-function \
    --trigger-http \
    --source-url https://github.com/user-name/my-repo.git \
    --source-branch master \
    --source-path lib/foo/bar \
    --entry-point myHandler
...but my understanding is that it deploys lib/foo/bar/index.js which contains function myHandler(req, res) {} ...and all dependencies concatenated in the same file? That doesn't make sense at all - like I said, the documentation is a little vague.
The current deployment tool takes a simple approach. It zips the directory and uploads it. This means you (currently) should move or delete node_modules before executing the command if you don't wish for them to be included in the deployment package. Note that, like lambda, GCF will resolve dependencies automatically.
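A minimal sketch of that workaround (the function name, entry point and backup path are made up; it assumes the command is run from the function's source directory):

mv node_modules /tmp/node_modules.bak    # keep the local install out of the uploaded zip
gcloud alpha functions deploy my-function --trigger-http --entry-point myHandler
mv /tmp/node_modules.bak node_modules    # restore it for local development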
As to deployment, please see: gcloud alpha functions deploy --help
Specifically:
--entry-point=ENTRY_POINT
   The name of the function (as defined in source code) that will be executed.
You might opt to use the --source flags to upload the file once, then deploy the functions sans upload. You can also instruct Google to pull functions from a repo in the same manner. I suggest you write a quick deployment script to help you deploy a list of functions in a single command.
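For example, a quick deployment script along those lines might look like the following sketch (the function names, entry points and source paths are made up; the flags mirror the ones from your question):

# Deploy several functions from the same repository, one gcloud call per function.
for fn in foo bar baz; do
  gcloud alpha functions deploy "my-${fn}-function" \
      --trigger-http \
      --source-url https://github.com/user-name/my-repo.git \
      --source-branch master \
      --source-path "lib/${fn}" \
      --entry-point "${fn}Handler"
done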