I am trying to do some kind of parameterization inside one of the pkrvars.hcl files. I would like URLs pointing to some resource to be built from other variables, like:
Lib_url = "https://lib-name-${version}"
Where version comes from another Packer variables file. I can see that using variables this way is not possible.
The question is - is it possible to reference a variable or local value in the value of another variable in a Packer variables file?
What you can do is have a variable that is configurable at runtime (either with a var-file, with -var, or with an environment variable PKR_VAR_var, see https://www.packer.io/guides/hcl/variables) and have other "variables", named locals, which derive from this variable. See https://www.packer.io/docs/templates/hcl_templates/locals
An example:
variable "version" {
  type        = string
  description = "OS version"
  default     = "bullseye"
}

locals {
  apt_url = "http://domain.tld/${var.version}"
  apt_key = "http://domain.tld/${var.version}.key"
}
Then in your build you use those locals with ${local.apt_url}.
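For instance, a minimal sketch of a build consuming those locals; the docker source and the shell provisioner are assumptions for illustration, not part of the original answer:

source "docker" "debian" {
  image  = "debian:${var.version}"
  commit = true
}

build {
  sources = ["source.docker.debian"]

  provisioner "shell" {
    inline = [
      # both commands just demonstrate referencing the locals
      "echo 'deb ${local.apt_url} ${var.version} main' > /etc/apt/sources.list.d/custom.list",
      "wget -qO - ${local.apt_key} | apt-key add -",
    ]
  }
}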
Currently, I'm specifying lifecycle_rule under my s3 resource:
resource "aws_s3_bucket" "bucket-name" {
bucket = "bucket-name"
lifecycle_rule {
id = "Expiration Rule"
enabled = true
prefix = "reports/"
expiration {
days = 30
}
}
}
...but I imagine there must be a way to make this more modular, like putting the lifecycle rule into a separate JSON so I can reference it for multiple s3 buckets and reduce the need to edit each resource. I know how to do this in general and have done this with other resources as seen here:
resource "aws_iam_policy" "devops-admin-write" {
name = "devops-admin-s3"
description = "Devops-Admin group s3 policy."
policy = file("iam_policies/devops-admin-write.json")
}
...the difference is that "lifecycle_rule" is an argument and not an attribute - and it's not clear to me how to make it work. Google-Fu has not yielded any clear answers either.
You can use dynamic blocks driven by a generic local variable.
Then you only need to change the local variable, and the change is reflected in all places where the variable is used.
To make it more maintainable I would suggest building a module and reusing it, or using an existing module.
But the locals + dynamic implementation could look like this:
locals {
  lifecycle_rules = [
    {
      id      = "Expiration Rule"
      enabled = true
      prefix  = "reports/"
      expiration = {
        days = 30
      }
    }
  ]
}

resource "aws_s3_bucket" "bucket-name" {
  bucket = "bucket-name"
  dynamic "lifecycle_rule" {
    for_each = local.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      enabled = lifecycle_rule.value.enabled
      prefix  = lifecycle_rule.value.prefix
      expiration {
        days = lifecycle_rule.value.expiration.days
      }
    }
  }
}
This does not check for errors and is not complete of course - it just implements your example.
See a more complete generic example in our terraform s3-bucket module: find code here
I am dealing with creating AWS API Gateway. I am trying to create CloudWatch Log group and name it API-Gateway-Execution-Logs_${restApiId}/${stageName}. I have no problem in Rest API creation.
My issue is in converting restApi.id, which is of type pulumi.Output, to a string.
I have tried these two versions, which are proposed in their PR#2496:
const restApiId = apiGatewayToSqsQueueRestApi.id.apply((v) => `${v}`);
const restApiId = pulumi.interpolate `${apiGatewayToSqsQueueRestApi.id}`
Here is the code where it is used:
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    `API-Gateway-Execution-Logs_${restApiId}/${stageName}`,
    {},
);
stageName is just a string.
I have also tried to apply again, like:
const restApiIdString = restApiId.apply((v) => v);
I always got this error from pulumi up
aws:cloudwatch:LogGroup API-Gateway-Execution-Logs_Calling [toString] on an [Output<T>] is not supported.
Please help me convert Output to string
@Cameron answered the naming question; I want to answer the question in your title.
It's not possible to convert an Output<string> to string, or any Output<T> to T.
Output<T> is a container for a future value T which may not be resolved even after the program execution is over. Most likely, your restApiId is generated by AWS at deployment time, so if you run your program in preview, there's no value for restApiId yet.
Output<T> is like a Promise<T> which will be eventually resolved, potentially after some resources are created in the cloud.
Therefore, the only operations with Output<T> are:
Convert it to another Output<U> with apply(f), where f: T -> U
Assign it to an Input<T> to pass it to another resource constructor
Export it from the stack
Any value manipulation has to happen within an apply call.
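For example, a minimal sketch of deriving a new string inside apply; restApi and stageName are stand-ins for the names in your question:

// restApi.id is an Output<string>; apply returns a new Output<string>.
// The raw string never leaves the Output container.
const logGroupName = restApi.id.apply(id => `API-Gateway-Execution-Logs_${id}/${stageName}`);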
So long as the Output is resolvable while the Pulumi script is still running, you can use an approach like the below:
import { Output } from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as fs from "fs";

// create a GCP registry
const registry = new gcp.container.Registry("my-registry");
const registryUrl = registry.id.apply(_ => gcp.container.getRegistryRepository().then(reg => reg.repositoryUrl));

// create a GCP storage bucket
const bucket = new gcp.storage.Bucket("my-bucket");
const bucketURL = bucket.url;

// wrap an Output in a Promise that resolves when the Output does
function GetValue<T>(output: Output<T>) {
    return new Promise<T>((resolve, reject) => {
        output.apply(value => {
            resolve(value);
        });
    });
}

(async () => {
    fs.writeFileSync("./PulumiOutput_Public.json", JSON.stringify({
        registryURL: await GetValue(registryUrl),
        bucketURL: await GetValue(bucketURL),
    }, null, "\t"));
})();
To clarify, this approach only works when you're doing an actual deployment (i.e. pulumi up), not merely a preview (as explained here).
That's good enough for my use-case though, as I just want a way to store the registry-url and such after each deployment, for other scripts in my project to know where to find the latest version.
Short Answer
You can specify the physical name of your LogGroup by specifying the name input and you can construct this from the API Gateway id output using pulumi.interpolate. You must use a static string as the first argument to your resource. I would recommend using the same name you're providing to your API Gateway resource as the name for your Log Group. Here's an example:
const apiGatewayToSqsQueueRestApi = new aws.apigateway.RestApi("API-Gateway-Execution");
const cloudWatchLogGroup = new aws.cloudwatch.LogGroup(
    "API-Gateway-Execution", // this is the logical name and must be a static string
    {
        name: pulumi.interpolate`API-Gateway-Execution-Logs_${apiGatewayToSqsQueueRestApi.id}/${stageName}` // this is the physical name and can be constructed from other resource outputs
    },
);
Longer Answer
The first argument to every resource type in Pulumi is the logical name and is used for Pulumi to track the resource internally from one deployment to the next. By default, Pulumi auto-names the physical resources from this logical name. You can override this behavior by specifying your own physical name, typically via a name input to the resource. More information on resource names and auto-naming is here.
The specific issue here is that logical names cannot be constructed from other resource outputs. They must be static strings. Resource inputs (such as name) can be constructed from other resource outputs.
Encountered a similar issue recently. Adding this for anyone that comes looking.
For pulumi python, some policies require the input to be stringified JSON. Say you're writing an SQS queue and a DLQ for it; you may initially write something like this:
import json

import pulumi_aws

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    redrive_policy=json.dumps({
        "deadLetterTargetArn": dlq.arn,  # errors: dlq.arn is an Output, not a str
        "maxReceiveCount": "3"
    })
)
The issue we see here is that the json lib errors out, stating that type Output cannot be serialized. When you print() dlq.arn, you'd see its object representation, like <pulumi.output.Output object at 0x10e074b80>.
In order to work around this, we have to leverage the Output helpers and write a callback function:
import json

import pulumi_aws
from pulumi import Output

def render_redrive_policy(arn):
    return json.dumps({
        "deadLetterTargetArn": arn,
        "maxReceiveCount": "3"
    })

dlq = pulumi_aws.sqs.Queue("dlq")
queue = pulumi_aws.sqs.Queue(
    "queue",
    redrive_policy=Output.all(arn=dlq.arn).apply(
        lambda args: render_redrive_policy(args["arn"])
    )
)
I am creating functional tests dynamically using Intern v4 and dojo 1.7. To accomplish this I am assigning registerSuite to a variable and attaching each test to the Tests property in registerSuite:
var registerSuite = intern.getInterface('object').registerSuite;
var assert = intern.getPlugin('chai').assert;

// ...........a bunch more code .........

registerSuite.tests['test_name'] = function() {
    // READ JSON FILE HERE
    var JSON = 'filename.json';
    // ....... a bunch more code ........
}
That part is working great. The challenge I am having is that I need to read information from a different JSON file for each test I am dynamically creating. I cannot seem to find a way to read a JSON file while the dojo javascript is running (I want to call it in the registerSuite.tests function where it says // READ JSON FILE HERE). I have tried dojo's xhr.get, node's fs, intern's this.remote.get, nothing seems to work.
I can get a static JSON file with define(['dojo/text!./generated_tests.json']), but this does not help me because there are an unknown number of JSON files with unknown filenames, so I don't have the information I would need to reference them in the define block.
Please let me know if my description is unclear. Any help would be greatly appreciated!
Since you're creating functional tests, they'll always run in Node, so you have access to the Node environment. That means you could do something like:
var registerSuite = intern.getPlugin('interface.object').registerSuite;
var assert = intern.getPlugin('chai').assert;

var tests = {};

tests['test_name'] = function () {
    // named "data" rather than "JSON" so the global JSON object isn't shadowed
    var data = require('./filename.json');
    // or require.nodeRequire('filename.json')
    // or JSON.parse(require('fs').readFileSync('filename.json', {
    //     encoding: 'utf8'
    // }))
}

registerSuite('my suite', tests);
Another thing to keep in mind is that assigning values to registerSuite.tests won't (or shouldn't) actually do anything. You'll need to call registerSuite, passing it your suite name and tests object, to actually register the tests.
I am writing an Azure function in VS 2017. I need to set up a few custom configuration parameters. I added them in local.settings.json under Values.
{
  "IsEncrypted": false,
  "Values": {
    "CustomUrl": "www.google.com",
    "Keys": {
      "Value1": "1",
      "Value2": "2"
    }
  }
}
Now, ConfigurationManager.AppSettings["CustomUrl"] returns null.
I'm using:
.NET Framework 4.7
Microsoft.NET.Sdk.Functions 1.0.5
System.Configuration.ConfigurationManager 4.4.0
Azure.Functions.Cli 1.0.4
Am I missing something?
Environment.GetEnvironmentVariable("key")
I was able to read values from local.settings.json using the above line of code.
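A minimal sketch in context; "CustomUrl" is the setting from the question:

// Reads "CustomUrl" from the Values section of local.settings.json when
// running locally, or from App Settings when deployed to Azure.
string customUrl = Environment.GetEnvironmentVariable("CustomUrl");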
Firstly, I created a sample and tested with your local.settings.json data; as Mikhail and ahmelsayed said, it works fine.
Besides, as far as I know, the Values collection is expected to be a Dictionary<string, string>; if it contains any non-string values, the Azure Function can fail to read any values from local.settings.json.
My Test:
ConfigurationManager.AppSettings["CustomUrl"] returns null with the following local.settings.json.
{
  "IsEncrypted": false,
  "Values": {
    "CustomUrl": "www.google.com",
    "testkey": {
      "name": "kname1",
      "value": "kval1"
    }
  }
}
If you are using a TimerTrigger-based Azure Function, then you can access your key (created in local.settings.json) from the Azure Function as below.
[FunctionName("BackupTableStorageFunction")]
public static void Run([TimerTrigger("%BackUpTableStorageTriggerTime%")]TimerInfo myTimer, TraceWriter log, CancellationToken cancellationToken)
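For the %BackUpTableStorageTriggerTime% placeholder to resolve, a matching flat string entry must exist under Values; the CRON expression below is only an assumption for illustration:

{
  "IsEncrypted": false,
  "Values": {
    "BackUpTableStorageTriggerTime": "0 0 2 * * *"
  }
}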
Using .NET 6 (and probably some earlier versions), it is possible to inject IConfiguration into the constructor of the function.
private readonly IConfiguration _configuration;
public Function1(IConfiguration configuration)
{
    _configuration = configuration;
}
// then, inside a function method:
string setting = _configuration.GetValue<string>("MySetting");
MySetting must be in the Values section of local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MySetting": "value"
  }
}
It works with Application settings in Azure Function App as well.
Azure Functions copies the binaries to the bin folder and runs them using the Azure Functions CLI, which looks for local.settings.json there, so make sure you have set "Copy to Output Directory" to "Copy Always" for that file.
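In the .csproj this corresponds to an entry like the following sketch (Functions templates usually generate it with PreserveNewest; Always also works):

<ItemGroup>
  <None Update="local.settings.json">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>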
You might be able to read the properties while debugging, but once you deploy to Azure, those properties will no longer work. Azure Functions does not allow nested properties; you must keep all of them inline in the "Values" section or in "ConnectionStrings".
Look at this documentation as reference:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-how-to-use-azure-function-app-settings
var value = Environment.GetEnvironmentVariable("key", EnvironmentVariableTarget.Process); should be the more appropriate answer. Although EnvironmentVariableTarget.Process is the default value, it is more meaningful here.
Look at the EnvironmentVariableTarget declaration:
//
// Summary:
//     Specifies the location where an environment variable is stored or retrieved
//     in a set or get operation.
public enum EnvironmentVariableTarget
{
    //
    // Summary:
    //     The environment variable is stored or retrieved from the environment block
    //     associated with the current process.
    Process = 0,
    //
    // Summary:
    //     The environment variable is stored or retrieved from the
    //     HKEY_CURRENT_USER\Environment key in the Windows operating system registry.
    //     This value should be used on .NET implementations running on Windows systems only.
    User = 1,
    //
    // Summary:
    //     The environment variable is stored or retrieved from the
    //     HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment
    //     key in the Windows operating system registry. This value should be used on
    //     .NET implementations running on Windows systems only.
    Machine = 2
}
I have two environment variables. One is TF_VAR_UN and another is TF_VAR_PW. Then I have a terraform file that looks like this.
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
zone = "us-central1-a"
initial_node_count = 3
master_auth {
username = ${env.TF_VAR_UN}
password = ${env.TF_VAR_PW}
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
The two values I'd like to replace with the environment variables TF_VAR_UN and TF_VAR_PW are the values username and password. I tried what is shown above, with no success, and I've toyed around with a few other things but always get syntax issues.
I would try something more like this, which seems closer to the documentation.
variable "UN" {
type = string
}
variable "PW" {
type = string
}
resource "google_container_cluster" "primary" {
name = "marcellus-wallace"
zone = "us-central1-a"
initial_node_count = 3
master_auth {
username = var.UN
password = var.PW
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
With the CLI command being:
TF_VAR_UN=foo TF_VAR_PW=bar terraform apply
The use of interpolation syntax throws a warning with Terraform v0.12.18. You no longer need the interpolation syntax; you can just reference the variable as var.hello.
Caution:
One important thing to understand from a language standpoint is that you cannot declare variables using environment variables; you can only assign values to variables that are declared in the script. For example, say you have the following .tf script:
variable "hello" {
type=string
}
Now if the environment has a variable TF_VAR_hello="foobar", at runtime the variable hello will have the value "foobar". Setting the environment variable without declaring the variable has no effect.
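A minimal, self-contained sketch of the flow; the output block is just an assumption for illustration:

variable "hello" {
  type = string
}

# TF_VAR_hello="foobar" terraform apply
output "greeting" {
  value = var.hello
}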
You can do the following to get this working.
Declare the variable in the Terraform configuration that you want to set via an environment variable:
variable "db_password" { type = string }
In the resource section where you want to use this variable, reference it as:
"db_password":"${var.db_password}"
Export the environment variable.
export TF_VAR_db_password="##password##"
Run terraform plan or terraform apply.
Use a null_resource to execute a terminal command (read an environment variable), redirect the output to a file, then read the file content:
resource "null_resource" "read_environment_var_value_via_cli" {
triggers = { always_run = "${timestamp()}" }
provisioner "local-exec" {
command = "echo $TF_VAR_UN > TF_VAR_UN.txt" # add gitignore
}
}
data "local_file" "temp_file" {
depends_on = [ null_resource.read_environment_var_value_via_cli]
filename = "${path.module}/TF_VAR_UN.txt"
}
# use value as desired
resource "google_container_cluster" "primary" {
master_auth {
username = data.local_file.temp_file.content # value of $TF_VAR_UN
..
}
}
Most of the providers use a DefaultFunc to read an environment variable, along the lines of (the variable name here is a placeholder):
DefaultFunc: schema.EnvDefaultFunc("VAR_NAME", nil),
https://github.com/terraform-providers/terraform-provider-infoblox/blob/master/infoblox/provider.go
https://github.com/terraform-providers/terraform-provider-openstack/blob/master/openstack/provider.go
...
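For context, a sketch of what such a provider schema typically looks like with the terraform-plugin-sdk; the field and environment variable names are placeholders, not taken from the linked providers:

package provider

import (
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider declares a "username" argument that, when absent from the
// configuration, falls back to the PROVIDER_USERNAME environment variable.
func Provider() *schema.Provider {
    return &schema.Provider{
        Schema: map[string]*schema.Schema{
            "username": {
                Type:        schema.TypeString,
                Optional:    true,
                DefaultFunc: schema.EnvDefaultFunc("PROVIDER_USERNAME", nil),
            },
        },
    }
}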
Alternatively, you can substitute the variables in the file itself using the envsubst utility in bash. Use an intermediate file with the variables and write the final config to the output, since redirecting envsubst back into the very file it reads from would truncate it first:
$ envsubst < main.txt > main.tf
Note: variables for envsubst must be declared using export:
$ export MYVAR=1729
The variables in the source file must be of the form: $VARIABLE or ${VARIABLE}.
In order to use a variable it needs to be wrapped with "" (the pre-0.12 interpolation syntax mentioned above), for example:
username = "${var.UN}"