Is there an "official" solution for passing sensitive information, such as API keys, to Google Cloud Functions? In particular it would be nice to avoid passing this information as arguments to the function since it will be the same for every invocation. AWS Lambda has a built-in solution using encrypted environment variables for this. Is there some similar approach for Google Cloud Functions?
I could imagine using a cloud storage bucket or cloud datastore for this, but that feels very manual.
If you're using Cloud Functions with Firebase, you're looking for environment configuration.
With that, you deploy configuration data from the Firebase CLI:
firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID"
And then read it in your functions with:
functions.config().someservice.id
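For example, a minimal sketch of reading those values from inside an HTTPS function (the function name is hypothetical, and the someservice keys are assumed to have been set with the command above):

const functions = require('firebase-functions');

exports.callSomeService = functions.https.onRequest((req, res) => {
  // Values set with `firebase functions:config:set` are available here
  const apiKey = functions.config().someservice.key;
  const clientId = functions.config().someservice.id;
  // ... use apiKey / clientId to call the external service ...
  res.send(`Configured client: ${clientId}`);
});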
You can use Google Secret Manager.
https://cloud.google.com/secret-manager/docs
See this article for an example:
https://dev.to/googlecloud/using-secrets-in-google-cloud-functions-5aem
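For example, a minimal sketch of reading a secret from a Node.js Cloud Function with the @google-cloud/secret-manager client (the project ID, secret name, and function name below are hypothetical):

const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');

const client = new SecretManagerServiceClient();

exports.useSecret = async (req, res) => {
  // Access the latest version of a secret named "my-api-key"
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-project/secrets/my-api-key/versions/latest',
  });
  const apiKey = version.payload.data.toString('utf8');
  // ... use apiKey to call the external service ...
  res.send('Secret loaded');
};

Remember to grant the function's service account the Secret Manager Secret Accessor role on the secret.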
The other answers are outdated: since firebase-functions v3.18.0, the recommended way is to use secrets. They work like environment variables, but are passed explicitly to the specific functions that need them and live in remote configuration (Secret Manager) rather than in a .env file: https://firebase.google.com/docs/functions/config-env?hl=en#secret-manager
Before environment variable support was released in firebase-functions v3.18.0, using functions.config() was the recommended approach for environment configuration. This approach is still supported, but we recommend all new projects use environment variables instead, as they are simpler to use and improve the portability of your code.
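For plain, non-secret configuration, a minimal sketch of the environment-variable approach might look like this (the variable name and value are hypothetical); with firebase-functions v3.18.0+ a .env file placed in the functions directory is picked up at deploy time:

# functions/.env
SOMESERVICE_CLIENT_ID=THE CLIENT ID

// In your function code, after deploy:
const clientId = process.env.SOMESERVICE_CLIENT_ID;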
Secrets can be used like this:
firebase functions:secrets:set MY_SECRET
And enter the secret's value when the CLI prompts for it.
And then in your function:
exports.processPayment = functions
  // Make the secret available to this function
  .runWith({ secrets: ["MY_SECRET"] })
  .onCall((data, context) => {
    // Now you have access to process.env.MY_SECRET
  });
A few options you can consider:
GCP Runtime Configuration. Set up a schema that can only be accessed by your service account and put your secret there; you do this prior to deployment. Your app can then use the Runtime Configuration API to access those values. You can use this nifty library: https://www.npmjs.com/package/@google-cloud/rcloadenv
Generate a JS file on the fly containing that information as part of your Cloud Function build/deploy, and include that file as part of your Cloud Function.
Use Google KMS to store the keys and access them with KMS's API; a rough sketch follows below.
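As a rough Node.js sketch of the KMS option (project, key ring, and key names are hypothetical; the ciphertext is assumed to be stored base64-encoded alongside your function):

const {KeyManagementServiceClient} = require('@google-cloud/kms');

const client = new KeyManagementServiceClient();

async function getApiKey(encryptedBase64) {
  // Fully qualified name of the crypto key that encrypted the secret
  const keyName = client.cryptoKeyPath('my-project', 'global', 'my-keyring', 'my-key');
  const [result] = await client.decrypt({
    name: keyName,
    ciphertext: Buffer.from(encryptedBase64, 'base64'),
  });
  return result.plaintext.toString('utf8');
}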
As for now, there's no way to do this.
https://issuetracker.google.com/issues/35907643
Related
I'm looking to pass an Alchemy RPC endpoint URL that contains its API key and is stored as an environment variable into a Solidity test file executed with forge test. I want to do so in order to fork the Goerli testnet and potentially be able to manage different forks in the same test context. The vm cheatcode below creates a local fork of Goerli at block_number:
vm.createSelectFork($GOERLI_ALCHEMY_SECRET_URL, block_number);
How do I pass $GOERLI_ALCHEMY_SECRET_URL to it from the environment?
In order to pass an environment variable into a test file you can make use of vm.envString():
vm.createSelectFork(vm.envString("GOERLI_ALCHEMY_SECRET_URL"), block_number);
I'm running a plugin on the Design Automation platform on Forge; however, I also run it locally for testing. I'd like a way to check whether the code is running on Forge or not.
Searching I came across this example:
https://forge.autodesk.com/blog/how-generate-dynamic-number-output-design-automation-revit-v3
which uses if (RuntimeValue.RunOnCloud); however, I didn't manage to get it to work (nor to find any references to it in the Forge documentation).
How can I check whether I'm running on Forge?
The Design Automation service sets a special environment variable, DAS_WORKITEM_ID, for your appbundle code to make use of should you need it. Given that, you can check whether this variable is set to determine if your code is running in DA.
public static string GetWorkitemId()
{
return Environment.GetEnvironmentVariable("DAS_WORKITEM_ID");
}
public static bool IsRunningInDA()
{
return !String.IsNullOrEmpty(GetWorkitemId());
}
Please note that we recommend using the same code for your DA appbundle and your desktop Revit DB addin. Use such tactics with caution and try to minimize the differences between your DB addin and your DA appbundle.
The startup method of your application differs: OnApplicationInitialized versus OnDesignAutomationReadyEvent. You can set a flag in these and check it from your plugin code; cf., e.g., Preparing a Revit Add-in for Design Automation.
We're using Spring Cloud Contract for testing our services. I was wondering if there is some way to set the stubsMode at runtime, as opposed to being an option on the annotation:
@AutoConfigureStubRunner(ids = {...}, stubsMode = StubRunnerProperties.StubsMode.LOCAL)
If the annotation is the only way to set this option, we'll need to have two separate classes, one for local and one for remote.
You can use properties; we describe this in the docs. Just pass the system property stubrunner.stubs-mode=local or the environment variable STUBRUNNER_STUBS_MODE=LOCAL.
I want to create a Lambda function that runs through S3 files and if needed triggers other Lambda functions to parse the files in parallel.
Is this possible?
Yes it's possible. You would use the AWS SDK (which is included in the Lambda runtime environment for you) to invoke other Lambda functions, just like you would do in code running anywhere else.
You'll have to specify which language you are writing the Lambda function in if you want a more detailed answer.
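For example, in Node.js a minimal sketch of invoking a second function asynchronously looks like this (the function name and payload are hypothetical):

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

exports.handler = async (event) => {
  // Fire-and-forget invocation of a hypothetical parser function
  await lambda.invoke({
    FunctionName: 'file-parser',
    InvocationType: 'Event', // asynchronous; use 'RequestResponse' to wait for the result
    Payload: JSON.stringify({ bucket: 'my-bucket', key: 'path/to/file.csv' }),
  }).promise();
};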
If I understand your problem correctly, you want one lambda that goes through a list of files in an S3 bucket. Some condition decides whether a file should be parsed or not, and for the files that should be parsed you want another 'file-parsing' lambda to do the work.
To do this you will need two lambdas - one 'S3 reader' and one 'S3 file parser'.
For triggering the 'S3 file parser' lambda you have a few different options. Here are two:
Trigger it using an SNS topic. (Here is an article on how to do that.) If you have a very long list of files this might be an issue, as you will most likely surpass the number of lambda instances that can run in parallel.
Trigger it by invoking it with the AWS SDK. (See the article 'Leon' posted as a comment to see how to do that.) What you need to consider here is that a long list of files might cause the 'S3 reader' lambda that controls the invocations to time out, since there is a 5 minute runtime limit for a lambda.
Depending on the actual use case, another potential solution is to have just one lambda that gets triggered when a file is uploaded to the S3 bucket, decides whether the file should be parsed, and then parses it if needed. More info about how to do that can be found in this article and this tutorial.
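A minimal Node.js sketch of that last approach, assuming the bucket's object-created notification is wired to the function and using a hypothetical file-extension check as the parse condition:

exports.handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Hypothetical condition: only parse CSV files
    if (!key.endsWith('.csv')) continue;

    // ... fetch the object with the S3 SDK and parse it here ...
    console.log(`Parsing s3://${bucket}/${key}`);
  }
};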
I'd like to use the NativeJSON class from a Rhino shell script. The only things I can find about how to use it on the Web are from Java.
// Load the configuration file
load(arguments[0]);
// Extract the configuration for the target environment
print(NativeJSON.stringify(environments[arguments[1]]));
Any clue how I'd get at it from a Rhino shell script?
The NativeJSON class is an implementation of the JSON object from ECMAScript 5, so you shouldn't need to do anything special. You can access it by calling JSON.stringify(object) or JSON.parse(jsonString).
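So the original script should work with the global JSON object in place of NativeJSON (a minimal sketch, assuming the same arguments and a Rhino build with ES5 JSON support):

// Load the configuration file passed as the first argument
load(arguments[0]);

// Extract and print the configuration for the target environment
print(JSON.stringify(environments[arguments[1]]));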