CloudBuild --cache-from for dependencies of CloudFunctions

Several Cloud Functions share the same requirements: a few libraries and a utility module in localpackage. All of those functions are built and deployed by Cloud Build.
Is there any way to use the Cloud Build --cache-from feature so that all of those Cloud Functions build from the same base?
Here are the steps in the YAML file that build and deploy one cloud function:
steps:
# create gcf source directory
- name: 'bash'
  args:
  - '-c'
  - |
    echo 'Creating gcf_source directory for ${_GCF_NAME}'
    mkdir _gcf_source
    cp -r cloudfuncs/${_GCF_NAME}/. _gcf_source
    rm -f _gcf_source/readme.md
    mkdir _gcf_source/localpackage
    touch _gcf_source/localpackage/__init__.py
    cp cloudfuncs/localpackage/gcf_utils.py _gcf_source/localpackage
    echo "" >> _gcf_source/requirements.txt
    cat cloudfuncs/localpackage/requirements.txt >> _gcf_source/requirements.txt
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - functions
  - deploy
  - ${_GCF_NAME}

Cloud Build is serverless: each build starts from a clean environment, so you can't keep data from one execution to the next.
However, you have two options:
Build a container image that contains your localpackage and use it as the builder image of the step in your subsequent deployments.
Store the files in Google Cloud Storage and load them into your Cloud Build workspace as the first step of your deployments; see the sketch below. The artifacts feature can help you save the files at the end of a build.
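A minimal sketch of the second option, assuming a bucket named gs://my-build-cache (a hypothetical name) that already holds the shared localpackage files:

steps:
# load the shared localpackage from Cloud Storage into the workspace
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', '-r', 'gs://my-build-cache/localpackage', '.']
# ... the existing build and deploy steps go here ...
# save the (possibly updated) files back for the next build
artifacts:
  objects:
    location: 'gs://my-build-cache'
    paths: ['localpackage/*']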

Related

Update Task Definition for ECS Fargate

I have an ECS Fargate cluster that is being deployed to through BitBucket Pipelines. I have my docker image being stored in ECR. Within BitBucket pipelines I am utilizing pipes in order to push my docker image to ECR and a second pipe to deploy to Fargate.
I'm facing a blocker when it comes to Fargate deploying the correct image on each deployment. The way the pipeline is set up is shown below. The Docker image gets tagged with the BitBucket build number for each deployment. Below is the pipe for the Docker image that gets built and pushed to ECR:
name: Push Docker Image to ECR
script:
  - ECR_PASSWORD=`aws ecr get-login-password --region $AWS_DEFAULT_REGION`
  - AWS_REGISTRY=$ACCT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  - docker login --username AWS --password $ECR_PASSWORD $AWS_REGISTRY
  - docker build -t $DOCKER_IMAGE .
  - pipe: atlassian/aws-ecr-push-image:1.6.2
    variables:
      IMAGE_NAME: $DOCKER_IMAGE
      TAGS: $BITBUCKET_BUILD_NUMBER
The next part of the pipeline deploys the image that was pushed to ECR to Fargate. The pipe associated with the deployment to Fargate is below:
name: Deploy to Fargate
script:
  - pipe: atlassian/aws-ecs-deploy:1.6.2
    variables:
      CLUSTER_NAME: $CLUSTER_NAME
      SERVICE_NAME: $SERVICE_NAME
      TASK_DEFINITION: $TASK_DEFINITION
      FORCE_NEW_DEPLOYMENT: 'true'
      DEBUG: 'true'
Within this pipe, the TASK_DEFINITION attribute specifies a file in the repo that ECS runs its tasks from. This file, which is a JSON file, has a key-value pair for the image ECS is to use. Below is an example of the key-value pair:
"image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:latest",
The problem with this line is that the tag of the image changes with each deployment.
What I would like to do is have this entire deployment process be fully automated, but this step is preventing that. I came across an article that shows how to change the tag in the task definition in the build environment of the pipeline; it uses envsubst. I've seen how envsubst works, but I'm not sure how to use it with a JSON file.
Any recommendations on how to change the tag in the task definition from latest to the Bitbucket build number using envsubst would be appreciated.
Thank you.
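For reference, envsubst is format-agnostic: it substitutes $VAR and ${VAR} references in any text stream, and a JSON file is no different. A minimal sketch, assuming the repo keeps the task definition as a template named task-definition.json.tpl (a hypothetical name) whose image line reads "image": "XXXXXXXXXXXX.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$DOCKER_IMAGE:$IMAGE_TAG":

script:
  - export IMAGE_TAG=$BITBUCKET_BUILD_NUMBER
  # render the template into the file the deploy pipe reads;
  # listing the variables restricts substitution to just those names
  - envsubst '$AWS_DEFAULT_REGION $DOCKER_IMAGE $IMAGE_TAG' < task-definition.json.tpl > task-definition.json

envsubst ships with gettext, so it may need to be installed in the build image first (on Debian-based images, apt-get install gettext-base).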

Generate a JSON file within a Bitbucket pipeline

Running the dbt docs generate command generates a catalog.json file in the target folder. The process works well locally.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
After generating the catalog.json file, I want to upload it to S3 in the next step. I copy it from the target folder to the root folder and then upload it, somewhat like this:
- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      - aws s3 cp catalog.json s3://testunzipping/
However, I get this error:
+ aws s3 cp catalog.json s3://testunzipping/
The user-provided path catalog.json does not exist.
Although the copy command works well locally, it seems the file is not generated properly within the Bitbucket pipeline. Is there any other way to save the content of catalog.json in some variable in the first step and then upload it to S3 later?
In Bitbucket Pipelines, each step runs in its own build environment. To share files between steps, you should use artifacts.
You may want to try the configuration below.
feature/dbt-docs:
  - step:
      name: 'setup dbt and generate docs'
      image: fishtownanalytics/dbt:1.0.0
      script:
        - cd dbt_folder
        - dbt docs generate
        - cp target/catalog.json ../catalog.json
      artifacts:
        - catalog.json
  - step:
      name: 'Upload to S3'
      image: python:3.7.2
      script:
        - aws s3 cp catalog.json s3://testunzipping/
Reference: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
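One caveat worth adding (an observation about the image, not part of the original answer): the stock python:3.7.2 image does not ship with the AWS CLI, so the upload step will fail with aws: command not found unless the CLI is installed first and credentials are provided via repository variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY):

- step:
    name: 'Upload to S3'
    image: python:3.7.2
    script:
      # install the AWS CLI before using it in this step
      - pip install awscli
      - aws s3 cp catalog.json s3://testunzipping/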

Google Cloud Build always deploys my cloud function with the previous commit

I have a problem with my automated cloud function deployment.
I have a cloud function stored in a Google Cloud Source Repository.
The repository includes a cloudbuild.yaml file with this content:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["functions", "deploy", "myfunction", "--region=europe-west1"]
timeout: "1600s"
I only have a master branch.
When I push a commit, Cloud Build triggers and deploys the cloud function.
The problem is that it always deploys the previous commit, not the latest one.
For example, at 2:23 I push my source code to the Google Source Repository. At 2:23:33, Cloud Build triggers and successfully deploys the cloud function.
Here is the Cloud Build log:
starting build "e3a0e735-50fc-4315-bafd-03128156d69f"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/myproject/r/myrepo
 * branch            1b67729b8498c35fc19a45b14b8d674635300594 -> FETCH_HEAD
HEAD is now at 1b67729 PrayingforCommit
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Deploying function (may take a while - up to 2 minutes)...
...............................................done.
availableMemoryMb: 256
entryPoint: process_gcs
eventTrigger:
  eventType: google.storage.object.finalize
  failurePolicy: {}
  resource: projects/_/buckets/mybucket
  service: storage.googleapis.com
ingressSettings: ALLOW_ALL
labels:
  deployment-tool: cli-gcloud
name: projects/myproject/locations/europe-west1/functions/myfunction
runtime: python37
serviceAccountEmail: myproject@appspot.gserviceaccount.com
sourceRepository:
  deployedUrl: https://source.developers.google.com/projects/myproject/repos/myrepo/revisions/2ed14c3225e7fcc089f2bc6a0ae29c7564ec12b9/paths/
  url: https://source.developers.google.com/projects/myproject/repos/myrepo/moveable-aliases/master/paths/
status: ACTIVE
timeout: 60s
updateTime: '2020-04-15T00:24:55.184Z'
versionId: '2'
PUSH
DONE
As you can see, the commit that triggered the build is 1b67729, but the deployedUrl line says 2ed14c3, which is the previous commit.
The operation ended at 2:24:55, and I see the same time in my cloud function's source tab.
If I just click the Edit button and then the Deploy button, to manually force the cloud function to rebuild, it deploys the correct commit (1b67729).
Where is my mistake with Cloud Build, and how can I always deploy the latest commit?
Thanks for your help.
I have run into this same issue (though I was using GitHub mirrors rather than native Cloud Source Repositories).
Cloud Functions does not check for updates to source repos when the --source flag is omitted
Your function was previously deployed directly from a source repository and you are not passing a --source flag to gcloud. Under these circumstances gcloud ignores the code in the local directory.
The easiest fix is to explicitly specify the source in your cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["functions", "deploy", "myfunction", "--region=europe-west1", "--source=."]
timeout: "1600s"
You would not hit this if the function had never been configured to fetch its source directly from the repository.
Your Cloud Function has previously been deployed from a source repository
When configuring a Cloud Function, there is an option to fetch source code from a Cloud Source Repository. You are prompted with this as an option when creating a function through the console, but could also have used gcloud:
gcloud functions deploy NAME \
  --source https://source.developers.google.com/projects/PROJECT_ID/repos/REPOSITORY_ID/moveable-aliases/master/paths/SOURCE \
  [... other gcloud options ...]
This is the 'sourceRepository' setting that you can see in the build output.
However, as you can see in the console (resolved to 1b67729b), Cloud Functions is not updating its code from there.
Omitting the --source flag to gcloud leads to potentially confusing behaviour
When you leave out the --source flag and the function was previously deployed directly from a repository, specific behaviour applies. From the documentation for gcloud functions deploy:
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
You are hitting the third option.
When deploying through Cloud Build, it is better not to link Cloud Functions directly to the source repository
Cloud Build is configured to run gcloud in a directory containing a copy of your source code. You can therefore deploy directly from the local filesystem - this packages up the function in a zip file, uploads it to Cloud Storage, and tells Cloud Functions to fetch it.
If you tell Cloud Functions to fetch its code from a repository, then a second checkout of the repo is made and a zip file created from there, which will be slightly slower (and potentially prone to a race condition if the branch has updated in the background).
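To check which mode a function is currently in, you can inspect its deployed configuration; depending on how it was last deployed, the output should contain either a sourceRepository block (the repository-linked case that bites here) or a Cloud Storage based source field:

gcloud functions describe myfunction --region=europe-west1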

Openshift 3 - Overriding .s2i/bin files - assemble & run scripts

I wanted clarification on the possible scripts that can be added in the .s2i/bin directory in my project repo.
The docs say that when you add these files, they will override the default files of the same name when the project is built. For example, if I place my own assemble file in the .s2i/bin directory, will the default assemble file also run, or will it be totally replaced by my script? What if I want some of the behavior of the default file? Do I have to copy the default assemble contents into my file so both are executed?
You will need to call the original assemble script from your own, similar to this:
#!/bin/bash -e
# The assemble script builds the application artifacts from a source and
# places them into appropriate directories inside the image.
# Execute the default S2I script
source ${STI_SCRIPTS_PATH}/assemble
# You can write S2I scripts in any programming language, as long as the
# scripts are executable inside the builder image.
Using OpenShift, I want to execute my own run script (run).
So, in the source of my application I added a file at ./s2i/run that slightly changes the default run file:
https://github.com/sclorg/nginx-container/blob/master/1.20/s2i/bin/run
Here is my run file:
#!/bin/bash
source /opt/app-root/etc/generate_container_user
set -e
source ${NGINX_CONTAINER_SCRIPTS_PATH}/common.sh
process_extending_files ${NGINX_APP_ROOT}/src/nginx-start ${NGINX_CONTAINER_SCRIPTS_PATH}/nginx-start
if [ ! -v NGINX_LOG_TO_VOLUME -a -v NGINX_LOG_PATH ]; then
  /bin/ln -sf /dev/stdout ${NGINX_LOG_PATH}/access.log
  /bin/ln -sf /dev/stderr ${NGINX_LOG_PATH}/error.log
fi
# nginx will start using the custom nginx.conf from the configmap
exec nginx -c /opt/mycompany/mycustomnginx/nginx-conf/nginx.conf -g "daemon off;"
Then I changed the Dockerfile to execute my run script as follows.
The CMD instruction can only take effect once, and it dictates which script is executed when the deployment pod starts.
FROM registry.access.redhat.com/rhscl/nginx-120
# Add application sources to the directory where the assemble script expects them
# and set permissions so that the container runs without root access
USER 0
COPY dist/my-portal /tmp/src
COPY --chmod=0755 s2i /tmp/
RUN ls -la /tmp
USER 1001
# Let the assemble script install the dependencies
RUN /usr/libexec/s2i/assemble
# The run script uses standard ways to run the application
#CMD /usr/libexec/s2i/run
# Here we override the script that is executed when the deployment pod starts
CMD /tmp/run
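A side note on the Dockerfile (a general Docker consideration, not something the original answer raised): the shell form of CMD runs the script as a child of /bin/sh, which can get in the way of signal handling, e.g. a clean shutdown on SIGTERM when the pod is deleted. The exec form avoids the intermediate shell:

# exec form: the run script becomes PID 1 and receives signals directly
CMD ["/tmp/run"]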

How to move from GitLab source-based to GitLab Omnibus?

I am trying to move from gitlab-ce 8.5 installed from source to gitlab-ce 8.15 Omnibus. We were using MySQL with the source install, but now we have to use psql with gitlab-ce Omnibus. When I tried to take a backup, it failed due to some empty repos.
Question: Is there any alternative way to move from a source install to Omnibus with a full backup?
I have moved GitLab from a source install to Omnibus. You can use the link below to convert the DB dump from MySQL to psql:
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/mysql_to_postgresql.md
I created a zip file of the repos manually, copied it to the GitLab Omnibus server, and restored it under /var/opt/gitlab/git-data/repositories/.
After these steps, copy the script below to /var/opt/gitlab/git-data/xyz.sh and execute it to update the hooks.
#!/bin/bash
# relink each repository's hooks to the Omnibus gitlab-shell hooks
for i in repositories/* ; do
  if [ -d "$i" ]; then
    for o in "$i"/* ; do
      if [ -d "$o" ]; then
        # hooks is typically a symlink on source installs
        rm "$o/hooks"
        # change the paths if required
        ln -s "/opt/gitlab/embedded/service/gitlab-shell/hooks" /var/opt/gitlab/git-data/"$o"/hooks
        echo "HOOKS CHANGED ($o)"
      fi
    done
  fi
done
Note: the repositories' ownership should be git:git.
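Assuming the default Omnibus layout, that can be set with:

sudo chown -R git:git /var/opt/gitlab/git-data/repositories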
Some useful commands during the migration:
sudo gitlab-ctl start postgres    # start the Postgres service only
sudo gitlab-psql                  # use the GitLab bundled Postgres
Feel free to comment if you face 5xx error codes on the GitLab page.