How to convert string to number in terraform template file - json

I have a Terraform template file source.tpl. It's JSON, and it has to stay JSON, because it's produced by the Python json library. The file has the following entry:
[
  {
    "data": {
      "address": "${NETWORK}",
      "netmask": "${NETMASK}"
    }
  }
]
In my tf module, I render this template:
data "template_file" "source" {
  template = "${file("${path.module}/source.tpl")}"

  vars = {
    NETWORK = element(split("/", "${var.cidr}"), 0)
    NETMASK = tonumber(element(split("/", "${var.cidr}"), 1))
  }
}
where cidr is a string - something like 10.1.1.0/24
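For reference, a quick sketch of what those expressions evaluate to for that example value:
locals {
  cidr    = "10.1.1.0/24"
  network = element(split("/", local.cidr), 0)            # "10.1.1.0"
  netmask = tonumber(element(split("/", local.cidr), 1))  # 24 as an HCL number, but template vars are passed as strings
}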
In the rendered output I need NETMASK to be a number and NETWORK to be a string. I.e. it has to be something like:
data = {
  address = "10.1.1.0"
  netmask = 24
}
But I'm getting:
data = {
  address = "10.1.1.0"
  netmask = "24"
}
I.e. netmask is a string. How can I get rid of those quotes in Terraform? The initial source.tpl still has to have the quotes, because if I remove them it becomes invalid JSON.

I understand the problem here: you're generating the template with a JSON library, which cannot produce something like the following since it's invalid JSON, even though this is what you want the template to be:
[
  {
    "data": {
      "address": "${NETWORK}",
      "netmask": ${NETMASK}
    }
  }
]
Might I recommend a little bit of preprocessing? For example
template = "${replace(file("${path.module}/source.tpl"), "\"$${NETMASK}\"", "$${NETMASK}")}"
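Put together with the data source from the question, a minimal sketch (same variables as above) would look like this; the replace() strips the quotes around the NETMASK placeholder before the template is rendered, so the substituted value ends up as a bare JSON number:
data "template_file" "source" {
  # Drop the quotes around the NETMASK placeholder before rendering, so the
  # substituted value is emitted as a bare number in the JSON output.
  template = replace(
    file("${path.module}/source.tpl"),
    "\"$${NETMASK}\"",
    "$${NETMASK}"
  )

  vars = {
    NETWORK = element(split("/", var.cidr), 0)
    NETMASK = element(split("/", var.cidr), 1)
  }
}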

Related

Is this iam json policy the same in terraform?

I have this JSON template file that I would like to convert into data "aws_iam_policy_document" "example", and I'm not sure if I'm converting the JSON correctly.
The JSON template I'm trying to convert (iam-file.json):
"Condition": {
  "ArnLike": {
    "kms:EncryptionContext:aws:logs:arn": "arn:aws:logs:${region}:${account_id}:*"
  }
}
What I've done so far in iam.tf:
condition {
  test = "ArnLike"
  values = [
    "arn:aws:logs:${var.region}:${data.aws_caller_identity.current.account_id}:*"
  ]
  variable = "kms:EncryptionContext:arn:aws:logs"
}
Are they both the same thing? I'm not sure how the variable works in iam.tf.
When I try to plan and apply these changes I get this:
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

      ~ Condition = {
          ~ ArnLike = {
              + kms:EncryptionContext:arn:aws:logs = [
                  + "arn:aws:logs:eu-west-1:123456789:*",
                ]
              - kms:EncryptionContext:aws:logs:arn = "arn:aws:logs:eu-west-1:123456789:*" -> null
            }
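They are not quite the same: the plan diff shows the key from the JSON policy (kms:EncryptionContext:aws:logs:arn) being removed and a differently ordered key being added, because the variable argument does not match it. A sketch of a closer translation, keeping the original key, would be:
condition {
  test     = "ArnLike"
  variable = "kms:EncryptionContext:aws:logs:arn"
  values = [
    "arn:aws:logs:${var.region}:${data.aws_caller_identity.current.account_id}:*"
  ]
}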

How to add Terraform variables in a JSON file

Hey team, I'm having trouble finding in the documentation how to add Terraform variables to a JSON file.
I need to inject a variable into a JSON file of this shape, but I can't get it to work.
I tried it with both var and locals, but it does not work.
You could use the templatefile function [1]:
locals {
  mystring = "Test"
}

resource "grafana_dashboard" "metrics" {
  config_json = templatefile("${path.root}/EC2.json.tpl", {
    mystring = local.mystring
  })
}
For this to work, you would have to change the JSON to be:
"datasource": {
  "type": "CloudWatch",
  "uid": "${mystring}"
}
The file with JSON data should also be renamed to EC2.json.tpl.
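The same pattern works with an input variable instead of a local; a minimal sketch (the variable name mystring is just for illustration):
variable "mystring" {
  type    = string
  default = "Test"
}

resource "grafana_dashboard" "metrics" {
  config_json = templatefile("${path.root}/EC2.json.tpl", {
    mystring = var.mystring
  })
}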
[1] https://www.terraform.io/language/functions/templatefile

Terraform 0.12 AWS resource containing JSON built from variable

To provision tag policies in an AWS organization, I need to build the JSON content from variables. Management of tag policies, SCPs, etc. should be centralized, so that changes (renaming, adding, or removing tags) can be applied everywhere.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "us-west-1"
}
The problem at hand I am facing is: How would I build the JSON object?
Example variable/ tag map:
# tag_policies.tf
variable "resource_tags" {
  description = "Central resource tags"
  type = list(object({
    name = string
    tags = map(string)
  }))
  default = [
    {
      name = "Environment"
      tags = {
        prod = "crn::env:prod"
        lab  = "crn::env:lab"
        dev  = "crn::env:dev"
      }
    }
  ]
}
What I have tried so far is to use HCL template tags, but I end up with one comma too many when iterating through the map of tag names. This works fine for the join() with the sub-map of tag names, but does not work out when I wrap it in the template markup. Why did I try this? Because I ran out of ideas.
# vars.tf
resource "aws_organizations_policy" "root-tag-policy" {
  name    = "RootTagPolicy"
  type    = "TAG_POLICY"
  content = <<CONTENT
{
  "tags": {
    %{ for tag in var.resource_tags_env ~}
    "${tag.name}": {
      "tag_key": {
        "##assign": "${tag.name}",
        "##operators_allowed_for_child_policies": [ "##none" ]
      },
      "tag_value": { "##assign": [ "${join( ", ", values( tag.tags ) )}" ] }
    },
    %{ endfor ~}
  }
}
CONTENT
}
The solution actually was quite simple: iterate over the tags using a for expression and enclose it in curly braces { … } so it returns an object (square brackets return tuples).
Finally, jsonencode() takes care of converting the HCL key = value syntax into proper JSON.
resource "aws_organizations_policy" "root-tag-policy" {
  name    = "RootTagPolicy"
  type    = "TAG_POLICY"
  content = jsonencode([ for key, tag in var.resource_tags : {
    "${tag.name}" = {
      "tag_key" = {
        "##assign" = tag.name,
        "##operators_allowed_for_child_policies" = [ "##none" ]
      },
      "tag_value" = { "##assign" = [ join( ", ", values( tag.tags ) ) ] }
    }
  } ])
}
EDIT: This still does not work, as I forgot that the whole JSON object needs to be wrapped inside a tags: {} object.
kaiser's answer shows a good general approach: build a suitable data structure and then pass it to jsonencode to get a valid JSON string from it.
Here's an example that I think matches what the string template in the original question would've produced:
content = jsonencode({
  tags = {
    for tag in var.resource_tags_env : tag.name => {
      tag_key = {
        "##assign" = tag.name
        "##operators_allowed_for_child_policies" = ["##none"]
      }
      tag_value = {
        "##assign" = values(tag.tags)
      }
    }
  }
})
I'm not familiar with the aws_organizations_policy resource type so I'm sorry if I got some details wrong here, but hopefully you can adapt the above example to generate the JSON data structure you need.
After reading #martin-atkins' answer, I finally understood how for works for objects and maps. The expression before the => arrow actually becomes part of the resulting object. (This highly confused me because I compared it to arrow functions and their arguments in other languages.)
The first part of the process is to build a map of maps. The main reason is that I don't want a convention of a name key inside a map of variables. That could require handling such conventions later on, which should be avoided at all costs, as it is a possible trap if one does not pay close attention or is not aware of it. So the key itself is now the name.
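A minimal illustration of that point (hypothetical values): whatever stands before the => becomes the key of the resulting object.
locals {
  # Result: { Environment = ["dev", "prod"] }; the input map's keys become
  # the result's keys because they stand before the => arrow.
  example = { for name, vals in { Environment = { dev = "x", prod = "y" } } : name => keys(vals) }
}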
Data Structure
variable "resource_tags" {
  description = "Central resource tags"
  type        = map(map(string))
  default = {
    Environment = {
      common = "grn::env:common"
      prod   = "grn::env:prod"
      stage  = "grn::env:stage"
      dev    = "grn::env:dev"
      demo   = "grn::env:demo"
      lab    = "grn::env:lab"
    },
    Foo = {
      bar = "baz"
    }
  }
}
The content as JSON
After understanding that the key in { "tags": { … } } is just the part before the =>, I could reduce the final resource to the following block.
resource "aws_organizations_policy" "root-tag-policy" {
  name        = "RootTagPolicy"
  description = "Tag policies, assigned to the root org."
  type        = "TAG_POLICY"
  content = jsonencode({
    tags = {
      for key, tags in var.resource_tags : key => {
        tag_key = {
          "##assign" = key
          "##operators_allowed_for_child_policies" = ["##none"]
        }
        tag_value = {
          "##assign" = values(tags)
        }
      }
    }
  })
}
Quick test:
Add the following output statement after the resource block:
output "debug" {
  value = aws_organizations_policy.root-tag-policy.content
}
Now apply (or plan or refresh) just this resource; it's faster this way. Then print the debug value built during the apply or refresh run.
$ terraform apply -target=aws_organizations_policy.root-tag-policy
…things happening…
$ terraform output debug | json_pp
ProTips:
Pipe the output of terraform output directly into json_pp or jq so you can read it.
Use jq . if you want validation on top: if jq prints the document, the JSON is valid; otherwise it reports a parse error and exits with a non-zero status.

Create Terraform resources out of JSON values

I am looking for a way to generate Terraform code based on JSON values.
Imagine I have a JSON file with the following structure:
{
  "settings": [
    {
      "conf": [
        {
          "setting": "DeploymentPolicy",
          "namespace": "aws:elasticbeanstalk:command",
          "value": "AllAtOnce"
        },
        {
          "setting": "BatchSize",
          "namespace": "aws:elasticbeanstalk:command",
          "value": "30"
        },
        {
          "setting": "BatchSizeType",
          "namespace": "aws:elasticbeanstalk:command",
          "value": "Percentage"
        }
      ]
    }
  ]
}
What I want to do is the following:
Creating a working Terraform resource based on the JSON file values, e.g. a beanstalk environment like this:
resource "aws_elastic_beanstalk_environment" "app_prod" {
  name                   = "${aws_elastic_beanstalk_application_version.app.name}-prod"
  application            = aws_elastic_beanstalk_application.app.name
  solution_stack_name    = data.aws_elastic_beanstalk_solution_stack.latest_linux_java.name
  wait_for_ready_timeout = "10m"
  version_label          = aws_elastic_beanstalk_application_version.app.name

  # Elastic beanstalk configuration
  setting {
    name      = "DeploymentPolicy"
    namespace = "aws:elasticbeanstalk:command"
    value     = "AllAtOnce"
  }
  setting {
    name      = "BatchSize"
    namespace = "aws:elasticbeanstalk:command"
    value     = "30"
  }
  ...
}
Therefore I have to create the settings block in HCL (Terraform configuration) based on the JSON values.
This means the JSON file above should result in:
setting {
  name      = "DeploymentPolicy"
  namespace = "aws:elasticbeanstalk:command"
  value     = "AllAtOnce"
}
setting {
  name      = "BatchSize"
  namespace = "aws:elasticbeanstalk:command"
  value     = "30"
}
setting {
  name      = "BatchSizeType"
  namespace = "aws:elasticbeanstalk:command"
  value     = "Percentage"
}
As you can see, the structure of JSON and HCL is very similar, but not identical. See e.g. settings, conf, or setting instead of name in the JSON.
A possible approach would be to read the JSON values and store them in an array or a map. But I have no idea how I could generate valid HCL and inject it into the desired part of the resource. Furthermore, I tried to use a template, but Terraform does not support the looping functionality I would need to iterate over the settings.
To sum up:
Input is a JSON file that must be read
JSON contains settings (besides other information)
The number of settings can differ
Somehow I have to generate a settings block
Somehow I have to inject this settings block into the resource
Does anyone have an idea how to do that? Any other approaches?
Thanks a lot!
Assuming that your JSON object were in a file called settings.json inside your module directory, you could do something like this:
locals {
  environment_settings = jsondecode(file("${path.module}/settings.json")).settings[0].conf
}
resource "aws_elastic_beanstalk_environment" "app_prod" {
  name                   = "${aws_elastic_beanstalk_application_version.app.name}-prod"
  application            = aws_elastic_beanstalk_application.app.name
  solution_stack_name    = data.aws_elastic_beanstalk_solution_stack.latest_linux_java.name
  wait_for_ready_timeout = "10m"
  version_label          = aws_elastic_beanstalk_application_version.app.name

  dynamic "setting" {
    for_each = local.environment_settings
    content {
      namespace = setting.value.namespace
      name      = setting.value.setting
      value     = setting.value.value
    }
  }
}
This special dynamic block is a sort of macro to create repeated setting blocks, each one correlating with one element of the collection given in for_each.
You can do whatever transformations of the input you need using Terraform's expression language in the locals block to ensure that the local.environment_settings value contains one element for each setting block you will generate, and then in the content nested block tell Terraform how to populate the setting arguments based on those element values.
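If the JSON could ever contain more than one settings or conf entry, the locals block can be extended (a sketch against the same settings.json) to flatten them all into one collection for the dynamic block:
locals {
  # Collect every "conf" list under every "settings" entry into a single flat
  # list, so the dynamic "setting" block iterates over all of them.
  environment_settings = flatten([
    for s in jsondecode(file("${path.module}/settings.json")).settings : s.conf
  ])
}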

Parsing JSON objects with arbitrary keys in Logstash

Consider a subset of a sample output from http://demo.nginx.com/status:
{
"timestamp": 1516053885198,
"server_zones": {
"hg.nginx.org": {
... // Data for "hg.nginx.org"
},
"trac.nginx.org": {
... // Data for "trac.nginx.org"
}
}
}
The keys "hg.nginx.org" and "trac.nginx.org" are quite arbitrary, and I would like to parse them into something meaningful for Elasticsearch. In other words, each key under "server_zones" should be transformed into a separate event. Logstash should thus emit the following events:
[
  {
    "timestamp": 1516053885198,
    "server_zone": "hg.nginx.org",
    ... // Data for "hg.nginx.org"
  },
  {
    "timestamp": 1516053885198,
    "server_zone": "trac.nginx.org",
    ... // Data for "trac.nginx.org"
  }
]
What is the best way to go about doing this?
You can try using the ruby filter. Get the server zones and create a new object from the key-value pairs you want to include. Off the top of my head, something like the snippet below should work. You will then need to map the object to your field in the index, and adjust the snippet to your custom format, i.e. build the array or object as you want.
filter {
  ruby {
    code => "
      time = event.get('timestamp')
      myArr = []
      event.to_hash.select { |k, v| ['server_zones'].include?(k) }.each do |key, value|
        myCustomObject = {}
        # map the key/value pairs into myCustomObject
        myCustomObject['timestamp'] = time
        myCustomObject[key] = value
        myArr.push(myCustomObject) # you'd probably move this out based on nesting level
      end
      event.set('my_indexed_field', myArr)
    "
  }
}
In the output section use rubydebug for error debugging
output {
  stdout { codec => rubydebug }
}