Here's my first bottle thrown into the sea.
I want to create one single Secrets Manager secret that contains a map of 3 passwords, using Terraform IaC.
To do that, I have tried to create an aws_secretsmanager_secret_version with:
resource "aws_secretsmanager_secret" "secret_master" {
name = "secret-master"
}
resource "aws_secretsmanager_secret_version" "sversion" {
secret_id = aws_secretsmanager_secret.secret_master.id
secret_string = <<EOF
{
"dbPassword": "${random_password.db_password.result}",
"awsSecretAccess": "${random_password.aws_access_key_id.result}",
"secretAccessKey": "${random_password.sec_access_key.result}"
}
EOF
}
data "aws_secretsmanager_secret" "secret_master" {
arn = aws_secretsmanager_secret.secret_master.arn
}
data "aws_secretsmanager_secret_version" "secrets" {
secret_id = data.aws_secretsmanager_secret.secret_master.id
}
locals {
secrets = jsondecode(data.aws_secretsmanager_secret_version.secrets.secret_string)
}
In fact, I followed this tutorial to understand the pattern: https://automateinfra.com/2021/03/24/how-to-create-secrets-in-aws-secrets-manager-using-terraform-in-amazon-account/
The problem is the resulting error:
│
│ on sm.tf line 60, in locals:
│ 60: secrets = jsondecode(data.aws_secretsmanager_secret_version.secrets.secret_string)
│ ├────────────────
│ │ data.aws_secretsmanager_secret_version.secrets.secret_string has a sensitive value
│
│ Call to function "jsondecode" failed: invalid character '"' after object key:value pair.
I replaced the random_password references with literal strings like "eeeee" and verified the JSON syntax several times. Nothing changed.
Could you help me understand this error?
This error is saying that your string in data.aws_secretsmanager_secret_version.secrets.secret_string does not have valid JSON syntax.
I have to assume that data.aws_secretsmanager_secret_version.secrets.secret_string is the same as aws_secretsmanager_secret_version.sversion.secret_string here, but I'm not 100% sure since I'm not an expert on AWS secrets manager.
If that is true then I expect what's happening is that one of the strings that you interpolated into the JSON string contains a " character which is therefore causing the resulting string to not be valid JSON.
To guarantee that your result will be valid JSON, you should use jsonencode to produce that string instead of a string template, because then Terraform will guarantee to generate valid JSON escaping for you when needed:
resource "aws_secretsmanager_secret_version" "sversion" {
secret_id = aws_secretsmanager_secret.secret_master.id
secret_string = jsonencode({
dbPassword = random_password.db_password.result
awsSecretAccess = random_password.aws_access_key_id.result
secretAccessKey = random_password.sec_access_key.result
})
}
The above secret_string expression first constructs a Terraform object value, and then uses jsonencode to translate that into a string containing an equivalent JSON object, using the translation rules shown in the documentation for jsonencode.
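For example, if one of the generated passwords happened to contain a quote character, jsonencode would escape it automatically, which the string template cannot do (illustrative value):

jsonencode({ dbPassword = "p\"w" })
# produces the string: {"dbPassword":"p\"w"}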
This is tangential to your question, but note also that it's unnecessary and potentially problematic to read back, using data blocks, the same object that this module is already managing with a resource block.
The example you showed is relatively harmless because Terraform can clearly see the dependency relationship between the data blocks and the resource blocks -- but unless these resource types are designed in a very unusual way, all this achieves is telling Terraform to read the same object it already had in memory anyway, which may cause you to hit rate limits faster or make your terraform apply slower.
You can refer directly to jsondecode(aws_secretsmanager_secret_version.sversion.secret_string) to use the value you assigned to that argument in other parts of the configuration, and so I would recommend doing that and not using the data blocks, unless you know that the data sources are doing some kind of transformation on that value that is important to your downstream use of it.
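For example, a minimal sketch of using the value directly elsewhere in the module (the output name here is just for illustration):

# Reuse the value Terraform already has in memory, with no data blocks.
output "db_password" {
  value     = jsondecode(aws_secretsmanager_secret_version.sversion.secret_string)["dbPassword"]
  sensitive = true
}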
Related
I’m loading a JSON file with jsondecode() in Terraform, and I need to dynamically look up a path in the JSON tree. E.g. say I have the following JSON in file.json:
{
  "some1": {
    "path1": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
If I load this into a local called myjson then I could write local.myjson.some1.path1.key1 to get "value1".
But I need the path to be an input. The following does not work:
locals {
  tree  = jsondecode(file("file.json"))
  path  = ["some1", "path1", "key1"]
  value = local.tree[local.path]
}
I looked at all the built-in functions in Terraform, such as lookup, flatten, etc., and could not see any combination that would allow me to loop over the elements of local.path to extract successively deeper elements of local.tree. The exception is try, which works nicely, but the maximum depth is hardcoded:
locals {
  level1 = try(local.tree[local.path[0]], null)
  level2 = try(local.level1[local.path[1]], local.level1)
  level3 = try(local.level2[local.path[2]], local.level2)
  level4 = try(local.level3[local.path[3]], local.level3)
  ...
  result = try(local.levelN[local.path[N]], local.levelN)
}
So regardless of how many levels there actually are in local.tree, result will contain the value.
I can live with hardcoded N, but is there a better way, that does not have that limitation? (short of creating a custom provider that defines a data source that does this)
The Terraform language has no built-in functionality for this sort of arbitrary dynamic traversal.
As you noted in your question, it is possible in principle for a provider to offer this functionality. It wasn't clear to me whether you didn't want to use a provider at all or just didn't want to be the one to write it, so in case it was the latter I can at least offer a provider I already wrote and published which can potentially address this need. It's called apparentlymart/javascript, and it exposes a JavaScript interpreter into the Terraform language which you can use for arbitrarily complex data manipulation:
terraform {
  required_providers {
    javascript = {
      source  = "apparentlymart/javascript"
      version = "0.0.1"
    }
  }
}

variable "traversal_path" {
  type = list(string)
}

data "javascript" "example" {
  source = <<-EOT
    for (var i = 0; i < path.length; i++) {
      data = data[path[i]]
    }
    data
  EOT
  vars = {
    data = jsondecode(file("${path.module}/file.json"))
    path = var.traversal_path
  }
}

output "result" {
  value = data.javascript.example.result
}
I can run this with different values of var.traversal_path to select different parts of the data structure in the JSON file:
$ terraform apply -var='traversal_path=["some1", "path1", "key1"]' -auto-approve
data.javascript.example: Reading...
data.javascript.example: Read complete after 0s
Changes to Outputs:
+ result = "value1"
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
result = "value1"
$ terraform apply -var='traversal_path=["some1", "path1", "key2"]' -auto-approve
data.javascript.example: Reading...
data.javascript.example: Read complete after 0s
Changes to Outputs:
~ result = "value1" -> "value2"
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
result = "value2"
$ terraform apply -var='traversal_path=["some1", "path1", "key3"]' -auto-approve
data.javascript.example: Reading...
data.javascript.example: Read complete after 0s
Changes to Outputs:
- result = "value2" -> null
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
I included the final example above to be explicit that escaping into JavaScript for this problem means adopting some of JavaScript's behaviors rather than Terraform's. JavaScript handles looking up a non-existing object property by returning undefined rather than raising an error as Terraform would, and the javascript data source translates that undefined into a Terraform null. If you want to treat that as an error as Terraform would, you'd need to write some logic into the loop to test whether data is defined after each step, as in the sketch below. You can use the JavaScript throw statement to raise an error from inside the given script.
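For example, a guarded version of the loop might look like this (a sketch, untested, assuming the provider reports a thrown value as an error):

source = <<-EOT
  for (var i = 0; i < path.length; i++) {
    data = data[path[i]]
    // Fail like Terraform would on a missing attribute,
    // instead of letting undefined become null at the end.
    if (data === undefined) {
      throw "no value at path step " + path[i]
    }
  }
  data
EOT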
Of course it's not ideal to embed one language inside another like this, but since the Terraform language is intended for relatively straightforward declarations rather than general computation I think it's reasonable to use an escape-hatch like this if the overall problem fits within the Terraform language but there is one small part of it that would benefit from the generality of a general-purpose language.
Bonus chatter: if you prefer a more functional style to the for loop I used above then you can alternatively make use of the copy of Underscore.js that's embedded inside the provider, using _.propertyOf to handle the traversal in a single statement:
source = <<-EOT
  _.propertyOf(data)(path)
EOT
While struggling to write a Terraform module to deploy a Helm chart, I was getting:
│ Error: YAML parse error on external-dns/templates/serviceaccount.yaml: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal object into Go struct field .metadata.annotations of type string
with a resource definition like this one:
resource "helm_release" "external_dns" {
name = "externaldns"
namespace = var.external_dns_namespace
repository = "https://charts.bitnami.com/bitnami"
chart = "external-dns"
version = "5.3.0"
set {
name = "serviceAccount.annotations.eks.amazonaws.com/role-arn"
value = resource.aws_iam_role.external_dns_role.arn
}
}
Then I found a public repository with a similar module: https://github.com/lablabs/terraform-aws-eks-external-dns/blob/master/main.tf and saw that it has the last parameter defined as:
set {
  name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
  value = aws_iam_role.external_dns[0].arn
}
I tried adding those double backslashes (\\) and everything works! Now I would like to understand... why are these double backslashes required before the last two dots but not before the other two?
I understand that, in Terraform, a double backslash means a literal backslash... but I cannot understand why one would be required there.
This is what I am trying to put into the Terraform module.
Any help with an explanation for this issue will be appreciated :)
In name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn" you want to define three groups that are separated by dots:
serviceAccount -> annotations -> eks.amazonaws.com/role-arn
Since your third group happens to contain dots, you correctly found out that you must escape those dot characters in order to preserve the intended structure. Two layers of escaping are at work here: in the HCL string, \\. produces the two literal characters \., and Helm's set parser then reads \. as a literal dot inside a key rather than as a separator between keys.
Without escaping, the string would instead mean
serviceAccount -> annotations -> eks -> amazonaws -> com/role-arn, which makes no sense here.
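Applied to the resource from your question, the corrected block keeps your own attribute values and only changes the name (same fix as in the public module):

set {
  name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
  value = resource.aws_iam_role.external_dns_role.arn
}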
I'm storing a config file in version control (GitLab) which contains information to be read by my ruby app. This info is stored as an object containing objects containing objects.
(Update adding more detail and examples for clarity as requested...)
From within my app I can successfully GET the file, which returns the following object (some bits trimmed with ... for readability):
{"file_name"=>"approval_config.json", "file_path"=>"approval_config.json", "size"=>1331, "encoding"=>"base64", "content_sha256"=>"1c21cbb...fa453fe", "ref"=>"master", "blob_id"=>"de...915", "commit_id"=>"07e...4ff", "last_commit_id"=>"07e...942f", "content"=>"ogICAg...AgICB"}
I can JSON parse the above object and access its content property. The value of the content property is a Base64-encoded string containing the actual contents of my file in GitLab. I can successfully decode this and see the JSON string stored in GitLab:
"{"G000":{"1":{"max":"4000","name":"Matthew Lewis","id":"ord-matthewl","email":"matthew.lewis#companyx.com"},"2":{"max":"4000","name":"Brendan Jones","id":"ord-brendanj","email":"brendan.jones#companyx.com"},"3":{"max":"20000","name":"Henry Orson","id":"ord-henryo","email":"henry.orson#companyx.com"},"4":{"max":"10000000","name":"Chris Adams","id":"ord-chrisa","email":"chris.adams#companyx.com"}},"G15":{"1":{"max":"4000","name":"Mike Butak","id":"ord-mikebu","email":"mike.butak#companyx.com"},"2":{"max":"4000","name":"Joseph Lister","id":"ord-josephl","email":"joseph.lister#companyx.com"},"3":{"max":"20000","name":"Mike Geisler","id":"ord-mikeg","email":"mike.geisler#companyx.com"},"4":{"max":"10000000","name":"Samuel Ahn","id":"ord-samuela","email":"samuel.ahn#companyx.com"}}}"
THIS string (above), I cannot JSON parse. I get an "unexpected token at '{ (JSON::ParserError)" error.
While writing this update it occurs to me that this "un-parsable" string is simply what I put in the file to begin with. Perhaps the method I used to stringify the file's contents in the first place is the issue. I simply pasted a valid JavaScript object in my browser's console, JSON.stringify'd it, copied the result from the console, and pasted it into my file in GitLab. Perhaps I need to generate the string with Ruby's JSON library instead?
Based on feedback from @ToddA.Jacobs, I tried the following in my Ruby script:
require 'rest-client'
require 'json'
require 'base64'
data = RestClient.get 'https://gitlab.companyx.net/api/v4/projects/3895/repository/files/approval_config.json?ref=master', {'PRIVATE-TOKEN':'*********'}
# get the encoded data stored on the 'content' key:
content = JSON.parse(data)['content']
# decode it:
config = Base64.decode64(content)
# print some logs
$evm.log(:info, config)
$evm.log(:info, "config is a Hash? :" + config.is_a?(Hash).to_s) #prints false
$evm.log(:info, "config is a string? :" + config.is_a?(String).to_s) #prints true
hash = JSON.parse(config)
example = hash.dig "G000" "4" "id"
$evm.log(:info, "print exmaple on next line")
$evm.log(:info, example)
That last line prints:
The following error occurred during method evaluation: NoMethodError: undefined method 'gsub' for nil:NilClass (drbunix:///tmp/automation_engine20200903-3826-1nbuvl) /usr/local/ lib/ruby/gems/2.5.0/gems/manageiq-password-0.3.0/lib/manageiq/password.rb:89:in 'sanitize_string'
Remove Outer Quotes
Your input format is invalid: you're nesting unescaped double quotes, and somehow expecting that to work. Just leave off the outer quotes. For example:
require 'json'
json = <<~'EOF'
{"G000":{"1":{"max":"4000","name":"Matthew Lewis","id":"ord-matthewl","email":"matthew.lewis#companyx.com"},"2":{"max":"4000","name":"Brendan Jones","id":"ord-brendanj","email":"brendan.jones#companyx.com"},"3":{"max":"20000","name":"Henry Orson","id":"ord-henryo","email":"henry.orson#companyx.com"},"4":{"max":"10000000","name":"Chris Adams","id":"ord-chrisa","email":"chris.adams#companyx.com"}},"G15":{"1":{"max":"4000","name":"Mike Butak","id":"ord-mikebu","email":"mike.butak#companyx.com"},"2":{"max":"4000","name":"Joseph Lister","id":"ord-josephl","email":"joseph.lister#companyx.com"},"3":{"max":"20000","name":"Mike Geisler","id":"ord-mikeg","email":"mike.geisler#companyx.com"},"4":{"max":"10000000","name":"Samuel Ahn","id":"ord-samuela","email":"samuel.ahn#companyx.com"}}}
EOF
hash = JSON.parse(json)
hash.dig "G000", "4", "id"
#=> "ord-chrisa"
hash.dig "G15", "4", "id"
#=> "ord-samuela"
This question was answered by users on another post I opened: Why can Ruby not parse local JSON file?
Ultimately the issue was not Ruby failing to parse my JSON. Rather it was the logging function being unable to log the hash.
We are building a service. It has to read config from a file. We are currently using YAML and Jackson for deserializing the YAML. We have a situation where our YAML file needs to inherit/extend another YAML file(s). E.g., something like:
extends: base.yaml
appName: my-awesome-app
...
thus part of the config is stored in base.yaml. Is there any library that supports this? Bonus points if it allows inheriting from more than one file. We could change to using JSON instead of YAML.
Neither JSON nor YAML has the ability to include files. Whatever you do will be a pre-processing step where you put base.yaml and your actual file together.
A crude way of doing this would be:
#include base.yaml
appName: my-awesome-app
Let this be your file. Upon loading, you first read the first line, and if it starts with #include, you replace it with the content of the included file. You need to do this recursively. This is basically what the C preprocessor does with C files and includes.
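For concreteness, here is a minimal sketch of that line-based approach in Java (the class and method names are illustrative, not from any library; untested):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class IncludePreprocessor {
    // Recursively expands lines of the form "#include <file>" by splicing
    // in the referenced file's contents before the YAML parser ever runs.
    public static String expand(final Path file) throws IOException {
        final StringBuilder out = new StringBuilder();
        for (final String line : Files.readAllLines(file)) {
            if (line.startsWith("#include ")) {
                final String included = line.substring("#include ".length()).trim();
                out.append(expand(file.resolveSibling(included)));
            } else {
                out.append(line).append('\n');
            }
        }
        return out.toString();
    }
}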
Drawbacks are:
even if both files are valid YAML, the result may not be.
if either file includes a directive end or document end marker (--- or ...), you will end up with two separate documents in one file.
you cannot replace any values from base.yaml inside your file.
So an alternative would be to actually operate on the YAML structure. For this, you need the API of the YAML parser (SnakeYAML in your case) and parse your file with that. You should use the compose API:
private Node preprocess(final Reader myInput) throws IOException {
    final Yaml yaml = new Yaml();
    final Node node = yaml.compose(myInput);
    processIncludes(node);
    return node;
}

private void processIncludes(final Node node) throws IOException {
    if (node instanceof MappingNode) {
        final List<NodeTuple> values = ((MappingNode) node).getValue();
        for (final NodeTuple tuple : values) {
            if ("!include".equals(tuple.getKeyNode().getTag().getValue())) {
                final String includedFilePath =
                        ((ScalarNode) tuple.getValueNode()).getValue();
                final Node content = preprocess(new FileReader(includedFilePath));
                // now merge the content in your preferred way into the values list.
                // that will change the content of the node.
            }
        }
    }
}

public String executePreprocessor(final Reader source) throws IOException {
    final Node node = preprocess(source);
    final StringWriter writer = new StringWriter();
    final DumperOptions dOptions = new DumperOptions();
    final Serializer ser = new Serializer(new Emitter(writer, dOptions),
            new Resolver(), dOptions, null);
    ser.open();
    ser.serialize(node);
    ser.close();
    return writer.toString();
}
This code would parse includes like this:
!include : base.yaml
appName: my-awesome-app
I used the private tag !include so that there will not be name clashes with any normal mapping key. Mind the space behind !include. I didn't give code to merge the included file because I did not know how you want to handle duplicate mapping keys. It should not be hard to implement though. Be aware of bugs, I have not tested this code.
The resulting String can be the input to Jackson.
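For example, a sketch of handing the merged result to Jackson (assuming the jackson-dataformat-yaml module; MyConfig is a hypothetical config class):

// Pre-process the includes, then let Jackson bind the merged YAML.
final String merged = executePreprocessor(new FileReader("config.yaml"));
final ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
final MyConfig config = mapper.readValue(merged, MyConfig.class);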
Probably out of the same desire, I have created this tool: jq-front.
You can do it with the following syntax, combined with the yq command:
extends: [ base.yaml ]
appName: my-awesome-app
...
$ yq -j . your.yaml | jq-front | yq -y .
Note that you need to place the file names to be extended in an array, since the tool supports multiple inheritance.
Points you potentially won't like:
It's quite a bit slow. (But for configuration information it might be OK, since you can convert it to an expanded file once and never need the original one after that.)
Objects inside an array cannot behave as expected, since the tool relies on the * operator of jq.
I am using Lua in Asterisk PBX. I encountered the following problem while processing a JSON string.
A JSON "null" value is converted to a function type in Lua. Why?
And how do I handle this scenario? I am expecting nil, because no value means null in JSON and nil means nothing in Lua.
local json = require( "json" )
local inspect = require("inspect")
local myjson_str='{"Sms":{"key":"xxxxxxxxxxxxxxxxxxxxx","to":"{caller}","senderid":null,"type":"Simple","content":"Your request has been accepted in Previous Miss call. We get back to you very soon."}}'
local myjson_table = json.decode(myjson_str)
print(type(myjson_table["Sms"]["senderid"]))
print(myjson_table)
print(inspect(myjson_table))
print(json.encode(myjson_table))
The output for the above is:
function
table: 0xf5e770
{
Sms = {
content = "Your request has been accepted in Previous Miss call. We get back to you very soon.",
key = "xxxxxxxxxxxxxxxxxxxxx",
senderid = <function 1>,
to = "{caller}",
type = "Simple"
}
}
{"Sms":{"type":"Simple","key":"xxxxxxxxxxxxxxxxxxxxx","senderid":null,"content":"Your request has been accepted in Previous Miss call. We get back to you very soon.","to":"{caller}"}}
It is up to the specific library to decide how to represent a null value.
Using nil has its own problem, because then it is not possible to find out whether the original JSON had a key with a null value or no such key at all.
So some libraries just return some unique sentinel value, and some provide a way to pass this value in, like json.decode(str, NULL_VALUE).
So the answer is: just read the doc/source of the library you use. Most likely it provides something like a json.null value to check whether a value is null, as in the sketch below. But a function is a really strange choice of sentinel, because functions have somewhat unpredictable uniqueness rules.
Or try another library.
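For example, if your json module is luajson (a common library loaded with require("json")), decoded nulls are represented by the sentinel json.util.null, which is a function -- that would explain the type you saw. A sketch under that assumption:

local json = require("json")

local decoded = json.decode('{"senderid": null}')

-- Compare against the library's sentinel rather than testing for nil.
if decoded.senderid == json.util.null then
  print("senderid is JSON null")
end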
First of all, @moteus is right:
It is up to specific library to decide how to represent null value
If you're using the JSON library by Jeffrey Friedl, the solution is to use a placeholder instead of null and to serialize the table structure to a JSON string using designated encode options:
-- define a placeholder
NullPlaceholder = "\0"
-- use it in an internal table
tableStructure = {}
tableStructure['someNullValue'] = NullPlaceholder
-- pass the placeholder to the encode method
encode_options = { null = NullPlaceholder }
jsonString = JSON:encode(tableStructure, nil, encode_options)
which leads to
{"someNullValue": null}