The documentation for Google Cloud Functions is a little vague. I understand how to deploy a single function contained within index.js, even in a specific directory, but how does one deploy multiple cloud functions that live in the same repository?
AWS Lambda allows you to specify a specific file and function name:
/my/path/my-file.myHandler
Lambda also allows you to deploy a zip file containing only the files required to run, omitting all of the optional transitive npm dependencies and their resources. For some libraries (e.g. Oracle DB), including node_modules/** would significantly increase the deployment time, and possibly exceed storage limits (it does on AWS Lambda).
The best that I can manage with Google Cloud Function deployment is:
$ gcloud alpha functions deploy my-function \
--trigger-http \
--source-url https://github.com/user-name/my-repo.git \
--source-branch master \
--source-path lib/foo/bar \
--entry-point myHandler
...but my understanding is that this deploys lib/foo/bar/index.js, which contains function myHandler(req, res) {}, with all dependencies concatenated into the same file? That doesn't make sense at all - like I said, the documentation is a little vague.
The current deployment tool takes a simple approach: it zips the directory and uploads it. This means you (currently) should move or delete node_modules before executing the command if you don't want it included in the deployment package. Note that, like Lambda, GCF will resolve dependencies automatically.
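For example, a minimal sketch (assuming your dependencies are all declared in package.json, so GCF can reinstall them on its side):

$ mv node_modules /tmp/node_modules    # keep node_modules out of the uploaded zip
$ gcloud alpha functions deploy my-function --trigger-http --entry-point myHandler
$ mv /tmp/node_modules node_modules    # restore them for local development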
As to deployment, please see: gcloud alpha functions deploy --help
Specifically:
--entry-point=ENTRY_POINT
The name of the function (as defined in source code) that will be
executed.
You might opt to use the --source flags to upload the file once, then deploy the functions sans upload. You can also instruct Google to pull functions from a repo in the same manner. I suggest you write a quick deployment script to help you deploy a list of functions in a single command, for example:
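A minimal sketch (the function names, paths, and entry points are hypothetical; the flags are the same ones used in the question):

#!/bin/bash
# deploy-functions.sh - deploy several functions from one repo in a single command
set -e
# each entry is "function-name:path-in-repo:entry-point" (hypothetical examples)
for spec in "my-function:lib/foo/bar:myHandler" "my-other-function:lib/baz:otherHandler"; do
  IFS=':' read -r name path entry <<< "$spec"
  gcloud alpha functions deploy "$name" \
    --trigger-http \
    --source-url https://github.com/user-name/my-repo.git \
    --source-branch master \
    --source-path "$path" \
    --entry-point "$entry"
done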
Related
I have a file I need to get into the Google Cloud Function's directory for a multi-step problem (Matplotlib: custom fonts in Cloud Functions using Python 3.9).
I'm not sure how to do it. Do I do it as a function in Cloud Functions, or use the console terminal for the project? I tried the latter and looked in the root directory, and there was nothing there. I can only change projects, not change into a specific function's directory.
Can someone please show me how to put this file https://www.1001freefonts.com/balthazar.font into the function's file system so it can be called during execution?
When you deploy a Cloud Function to GCP, you can supply a ZIP file or a directory that contains your source code and additional artifacts/files that you may need.
To perform the deployment of the ZIP or directory, you will want to use the gcloud command. A good article on this is Deploying from Your Local Machine.
The detailed documentation on the CLI can be found at gcloud functions deploy.
In your example, you could create a directory that contains your source and your font file and both will be present in the context of the Cloud Function. I believe that if you want to reference the files, you will want to use the local current directory in your code. For example, instead of coding /myfontfile.font you might code ./myfontfile.font.
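For instance, a minimal sketch (the directory name, runtime, and trigger are hypothetical; the point is that the font sits next to the source and gets uploaded with it):

$ ls my-function/
main.py  requirements.txt  balthazar.font
$ gcloud functions deploy my-function \
    --runtime python39 --trigger-http --source my-function

Inside main.py you would then reference ./balthazar.font relative to the function root rather than an absolute path.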
Here are some references to this technique:
Cloud Functions: how to upload additional file for use in code?
I have a problem with my automated cloud function deployment
I have a cloud function stored in a Google Cloud repository
My Git repository includes a cloudbuild.yaml file with this content:
steps:
- name: "gcr.io/cloud-builders/gcloud"
args: ["functions", "deploy", "myfunction", "--region=europe-west1"]
timeout: "1600s"
I only have one branch, master.
When I push my commit, Cloud Build triggers and deploys the cloud function.
The problem is that it always deploys the previous commit, not the latest one.
For example: at 2:23, I push my source code to the Google Source Repository. At 2:23:33, Cloud Build triggers and successfully deploys the cloud function.
Here is the Cloud Build log:
starting build "e3a0e735-50fc-4315-bafd-03128156d69f"
FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/myproject/r/myrepo
* branch 1b67729b8498c35fc19a45b14b8d674635300594 -> FETCH_HEAD
HEAD is now at 1b67729 PrayingforCommit
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Deploying function (may take a while - up to 2 minutes)...
...............................................done.
availableMemoryMb: 256
entryPoint: process_gcs
eventTrigger:
eventType: google.storage.object.finalize
failurePolicy: {}
resource: projects/_/buckets/mybucket
service: storage.googleapis.com
ingressSettings: ALLOW_ALL
labels:
deployment-tool: cli-gcloud
name: projects/myproject/locations/europe-west1/functions/myfunction
runtime: python37
serviceAccountEmail: myproject@appspot.gserviceaccount.com
sourceRepository:
deployedUrl: https://source.developers.google.com/projects/myproject/repos/myrepo/revisions/2ed14c3225e7fcc089f2bc6a0ae29c7564ec12b9/paths/
url: https://source.developers.google.com/projects/myproject/repos/myrepo/moveable-aliases/master/paths/
status: ACTIVE
timeout: 60s
updateTime: '2020-04-15T00:24:55.184Z'
versionId: '2'
PUSH
DONE
As you can see, the commit that triggered the build is 1b67729, but the deployedUrl line says 2ed14c3, which is the previous commit.
The operation ended at 2:24:55, and I see the same time in my cloud function's source tab.
If I just click the edit button, then the deploy button, to manually force the cloud function to rebuild, it deploys the correct commit (1b67729).
Where is my mistake with Cloud Build, and how can I always deploy the latest commit?
Thanks for your help
I have run into this same issue (though I was using GitHub mirrors rather than native Cloud Source Repositories).
Cloud Functions does not check for updates to source repos when the --source flag is omitted
Your function was previously deployed directly from a source repository and you are not passing a --source flag to gcloud. Under these circumstances gcloud ignores the code in the local directory.
The easiest fix is to explicitly specify the source in your cloudbuild.yaml:
steps:
- name: "gcr.io/cloud-builders/gcloud"
args: ["functions", "deploy", "myfunction", "--region=europe-west1", "--source=."]
timeout: "1600s"
You would not hit this if the function had never been configured to fetch its source directly from the repository.
Your Cloud Function has previously been deployed from a source repository
When configuring a Cloud Function, there is an option to fetch source code from a Cloud Source Repository. You are prompted with this as an option when creating a function through the console, but could also have used gcloud:
gcloud functions deploy NAME \
--source https://source.developers.google.com/projects/PROJECT_ID/repos/REPOSITORY_ID/moveable-aliases/master/paths/SOURCE \
[... other gcloud options ...]
This is the 'sourceRepository' setting that you can see in the build output.
However, as you can see from the build output (the branch resolved to 1b67729b, while deployedUrl still points at 2ed14c3), Cloud Functions is not updating its code from there.
Omitting the --source flag to gcloud leads to potentially confusing behaviour
When you leave out the --source flag to gcloud, if the function was previously deployed directly from a repository, specific behaviour applies. From the documentation for gcloud functions deploy:
If you do not specify the --source flag:
The current directory will be used for new function deployments.
If the function was previously deployed using a local filesystem path, then the function's source code will be updated using the current directory.
If the function was previously deployed using a Google Cloud Storage location or a source repository, then the function's source code will not be updated.
You are hitting the third option.
When deploying through Cloud Build, it is better not to link Cloud Functions directly to the source repository
Cloud Build is configured to run gcloud in a directory containing a copy of your source code. You can therefore deploy directly from the local filesystem - this packages up the function in a zip file, uploads it to Cloud Storage, and tells Cloud Functions to fetch it.
If you tell Cloud Functions to fetch its code from a repository, then a second checkout of the repo is made and a zip file created from there, which will be slightly slower (and potentially prone to a race condition if the branch has updated in the background).
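If you want to confirm which source a deployed function is linked to, a quick check (a sketch; the field names match the describe output quoted above):

$ gcloud functions describe myfunction --region=europe-west1 \
    --format="yaml(sourceRepository, sourceUploadUrl)"

A populated sourceRepository block means the function is still tied to the repository; after redeploying with --source=. you should see an uploaded-archive source instead.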
What I want to do is make a web app that lists, in one single view, the version of every application deployed in our OpenShift cluster (a fast view of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter. But if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option, as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the OpenShift API, and the only thing I've found that may help me is this GET endpoint. But if the :id parameter is what I think it is, it changes with every deploy, so I would need to keep modifying it constantly, which isn't practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning your builds and deployments if you don't have persistent app versioning information outside of OpenShift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within OpenShift, you can achieve full automation with webhook triggers in your BuildConfigs and image change triggers in your DeploymentConfigs.
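A minimal sketch of wiring those up from the CLI (resource and image names are hypothetical):

$ oc set triggers bc/myapp --from-github
$ oc set triggers dc/myapp --from-image=myproject/myapp:latest -c myapp

The first command enables a GitHub webhook trigger on the build; the second redeploys the DeploymentConfig whenever a new image is pushed to the image stream.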
Outside of OpenShift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply using 'oc get pods' or a second call to oc describe:
oc describe pods | grep "Name: "
(notice the 8 spaces needed to filter out other Names:)
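If you want the pod name and the variable's value in a single call (and a query you can replicate against the REST API), a jsonpath sketch, assuming ARTIFACT_URL is set on the first container of each pod:

$ oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'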
profile.properties file not found in Source code in repository?
Is it possible to use environment variables in OpenShift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in the OpenShift environment?
Environment variables are a first-class concept in OpenShift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they then won't change when you move the image through environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number Node.js uses to match the official Node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod launched by that deployment. This is a fairly common way of setting up application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place (see the sketch after this list).
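As a concrete sketch for the question above (resource names are hypothetical, and JAVA_OPTS_APPEND is honoured by the jboss/keycloak images; check your image's documentation):

$ oc set env dc/keycloak JAVA_OPTS_APPEND=-Dkeycloak.profile.feature.scripts=enabled
$ oc set env dc/myapp --from=configmap/shared-settings

The first command sets the variable directly on the DeploymentConfig; the second imports every key of a ConfigMap as environment variables.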
It's common to see devs use a .env file that is listed in .gitignore, so it isn't in git. In the past I have written scripts to load that into a Secret within OpenShift, then use envFrom to set that Secret on the deployment. We would then have an .env.staging and .env.live that we encrypt into git with git-secret.
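A sketch of that flow (names are hypothetical):

$ oc create secret generic app-env --from-env-file=.env
$ oc set env dc/myapp --from=secret/app-env

--from-env-file turns each KEY=value line of the .env file into a key in the Secret, and oc set env --from then exposes every key as an environment variable on the deployment's pods.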
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up: one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is YAML in a git repo (with the secret YAML encrypted). If you merge a change to the config git repo, a webhook handler decrypts the config and runs Helmfile to update the settings in OpenShift. I am in the process of open-sourcing everything, including (optionally) using a chatbot to manage releases, over on GitHub.
I should also say that OpenShift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
OpenShift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that OpenShift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by OpenShift). So our apps can find a redis service running in the same project using that technique.
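For instance, for a service named redis, every pod in the project gets variables along these lines (values here are illustrative):

$ oc rsh mypod env | grep ^REDIS
REDIS_SERVICE_HOST=172.30.12.34
REDIS_SERVICE_PORT=6379

so an app can reach it at $REDIS_SERVICE_HOST:$REDIS_SERVICE_PORT, or simply via the redis DNS name, without hand-written configuration.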
I have a Node server and, within that directory, multiple controllers that perform DB operations and helpers (for e-mail, for example).
I'd like to use source from that directory within my functions. Assuming the following directory structure:
src/
  server/
    app/controllers/email_helper.js
  fns/
    send-confirm/
What's the best way to use email_helper within the send-confirm function?
I've tried:
Symbolically linking the 'server' directory
Adding a local repo to send-confirm/package.json
Neither of the above work.
In principle, your Cloud Functions can use any other Node.js module, the same way any standard Node.js server would. However, since Cloud Functions needs to build your module in the cloud, it needs to be able to locate those dependency modules from the cloud. This is where the issue lies.
Cloud Functions can load modules from any one of these places:
Any public npm repository.
Any web-visible URL.
Anywhere in the functions/ directory that firebase init generates for you, and which gets uploaded on firebase deploy.
In your case, from the perspective of functions/package.json, the ../server/ directory doesn't fall under any of those categories, and so Cloud Functions can't use your module. Unfortunately, firebase deploy doesn't follow symlinks, which is why that solution doesn't work.
I see two possible immediate fixes:
Move your server/ directory to be under functions/. I realize this isn't the prettiest directory layout, but it's the easiest fix while hacking. In functions/package.json you can then have a local dependency on ./server (see the sketch after this list).
Expose your code behind a URL somewhere. For example, you could package up a .tar and put that on Google Drive, or on Firebase Cloud Storage. Alternatively, you can use a public git repository.
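A minimal sketch of the first fix (after moving server/ under functions/):

$ cd functions
$ npm install --save ./server

npm records this as a file:server entry in functions/package.json, and since the code now lives inside functions/, firebase deploy uploads it along with everything else.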
In the future, I'd love it if firebase deploy followed symlinks. I've filed a feature request for that in Firebase's internal bug tracker.