I have two template files in Terraform.
The first template file looks like this:
script.sh.tpl
------------------
echo "some_content" > config.json
consul --config config.json
The second template file needs to embed the rendered content of the first template file. Here is my second template file:
task-definition.json.tpl
----------------------
[
...
"command":[${consul_script}]
"image": "some_docker_image:latest",
"name": "test-app"
}
]
Here is what main.tf looks like:
main.tf
-----------------------
data "template_file" "task_definition_template" {
template = file("task-definition.json.tpl")
vars = {
consul_script = data.template_file.consul_script.rendered
}
}
data "template_file" "consul_script" {
template = file("script.sh.tpl")
vars = {
var1 = "test"
}
}
I tried this, but it's giving me an error like this:
ECS Task Definition is invalid: Error decoding JSON: invalid character '\n' in string literal
How can I get rid of this issue and successfully pass the first .tpl into the second template file?
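One common way around this (a sketch, not the only fix): the rendered script contains literal newlines, which are invalid inside a JSON string literal, so wrap the rendered text in Terraform's jsonencode() before splicing it in. It escapes the newlines and adds the surrounding quotes, so ${consul_script} can be used bare inside the array:

```hcl
data "template_file" "task_definition_template" {
  template = file("task-definition.json.tpl")
  vars = {
    # jsonencode turns the multi-line script into a single valid JSON
    # string literal ("echo ...\nconsul ..."), quotes included, so the
    # template can keep using "command": [${consul_script}] as-is.
    consul_script = jsonencode(data.template_file.consul_script.rendered)
  }
}
```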
I have a key:value JSON object that is used in my JavaScript project. The values are strings and the object looks like this:
{
key1:{
someKey: "Some text",
someKey2: "Some text2"
},
key2:{
someKey3:{
someKey4: "Some text3",
someKey5: "Some text4"
}
}
}
I use it in the project like this: key1.someKey and key2.someKey3.someKey4. Do you have an idea how to delete unused properties? Let's say we don't use key2.someKey3.someKey5 in any file in the project; I want it to be deleted from the JSON file.
To the people in the comments: I didn't say I want to use JavaScript for this, and I don't want to run it in a browser or on a server. I just want a script that can do this on my local computer.
If you live within JavaScript and Node, you can use something like this to get all the paths:
Using some modified code from here: https://stackoverflow.com/a/70763473/999943
var lodash = require('lodash') // use this if calling from the node REPL
// import lodash from 'lodash'; // use this if calling from a script

const allPaths = (o, prefix = '', out = []) => {
  if (lodash.isObject(o) || lodash.isArray(o)) {
    Object.entries(o).forEach(([k, v]) =>
      allPaths(v, prefix === '' ? k : `${prefix}.${k}`, out));
  } else {
    out.push(prefix);
  }
  return out;
};
let j = {
key1: { someKey: 'Some text', someKey2: 'Some text2' },
key2: { someKey3: { someKey4: 'Some text3', someKey5: 'Some text4' } }
}
allPaths(j)
[
'key1.someKey',
'key1.someKey2',
'key2.someKey3.someKey4',
'key2.someKey3.someKey5'
]
That's all well and good, but now you want to take that list and look through your codebase for usage.
The main choices are text searching with grep, awk, or ag, or parsing the language and walking its symbolic representation after it's loaded into your project. Tree-shaking can do this for libraries; I haven't looked into how to do tree-shaking for dictionary keys, or some other undefined-reference check like a linter might do for a language.
Once you have found all the instances, you either manually modify your list or use a JSON library to modify it.
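For that last step, here is a minimal pure-JavaScript sketch (the deletePath helper is hypothetical, not part of the original answer) that removes one dotted path from the object:

```javascript
// Hypothetical helper: delete a dotted path such as
// "key2.someKey3.someKey5" from a plain object.
const deletePath = (obj, path) => {
  const parts = path.split('.');
  const last = parts.pop();
  // Walk down to the parent object of the final key.
  const parent = parts.reduce((o, k) => (o == null ? undefined : o[k]), obj);
  if (parent != null) delete parent[last];
};

const j = {
  key1: { someKey: 'Some text', someKey2: 'Some text2' },
  key2: { someKey3: { someKey4: 'Some text3', someKey5: 'Some text4' } }
};

deletePath(j, 'key2.someKey3.someKey5');
console.log(JSON.stringify(j.key2));
// → {"someKey3":{"someKey4":"Some text3"}}
```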
My weapons of choice in this instance are:
jq and bash and grep
It's not infallible. But it's a start. (use with caution).
setup_test.sh
#!/usr/bin/env bash
mkdir src
echo "key2.someKey3.someKey4" > src/a.js
echo "key1.someKey2" > src/b.js
echo "key3.otherKey" > src/c.js
test.json
{
"key1":{
"someKey": "Some text",
"someKey2": "Some text2"
},
"key2":{
"someKey3":{
"someKey4": "Some text3",
"someKey5": "Some text4"
}
}
}
check_for_dict_references.sh
#!/usr/bin/env bash
json_input=$1
code_path=$2
cat << HEREDOC
json_input=$json_input
code_path=$code_path
HEREDOC
echo "Paths found in json"
paths="$(cat "$json_input" | jq -r 'paths | join(".")')"
no_refs=
for path in $paths; do
escaped_path=$(echo "$path" | sed -e "s|\.|\\\\.|g")
if ! grep -r "$escaped_path" "$code_path" ; then
no_refs="$no_refs $path"
fi
done
echo "Missing paths..."
echo "$no_refs"
echo "Creating a new json file without the unused paths"
del_paths_list=
for path in $no_refs; do
del_paths_list+=".$path, "
done
del_paths_list=${del_paths_list:0:-2} # remove trailing comma space
cat "$json_input" | jq -r 'del('$del_paths_list')' > ${json_input}.new.json
After running setup_test.sh, we can test the jq + grep solution:
$ ./check_for_dict_references.sh test.json src
json_input=test.json
code_path=src
Paths found in json
src/b.js:key1.someKey2
src/b.js:key1.someKey2
src/b.js:key1.someKey2
src/a.js:key2.someKey3.someKey4
src/a.js:key2.someKey3.someKey4
src/a.js:key2.someKey3.someKey4
Missing paths...
key2.someKey3.someKey5
Creating a new json file without the unused paths
If you look closely, you would expect it to also print key1.someKey, but that path got "found" in the middle of the name key1.someKey2 (a prefix match). There are fancier regex anchors you could add, but for the purposes of this script it may be enough.
Now look in your directory for the new json file:
$ cat test.json.new.json
{
"key1": {
"someKey": "Some text",
"someKey2": "Some text2"
},
"key2": {
"someKey3": {
"someKey4": "Some text3"
}
}
}
Hope that helps.
I need to replace the value of "JaegerAgentHost" with a variable that I already have.
Each app's JSON file has a different structure:
APP1 JSON file:
{
"Settings": {
"JaegerServiceSettings": {
"JaegerAgentHost": "jaeger.apps.internal",
"JaegerAgentPort": "6831"
} } }
APP2 JSON file:
{
"JaegerServiceSettings": {
"JaegerAgentHost": "jaeger.apps.internal",
"JaegerAgentPort": "6831",
} }
App3 JSON file:
{
"JaegerAgentHost": "jaeger.apps.internal",
"JaegerAgentPort": "6831"
}
Irrespective of the path to the JaegerAgentHost key, I should be able to replace its value with my variable, so that the files ultimately become as below.
Expected output:
APP1 JSON file:
{
"Settings": {
"JaegerServiceSettings": {
"JaegerAgentHost": "jaeger.app",
"JaegerAgentPort": "6831"
} } }
APP2 JSON file:
{
"JaegerServiceSettings": {
"JaegerAgentHost": "jaeger.app",
"JaegerAgentPort": "6831",
}}
App3 JSON file:
{
"JaegerAgentHost": "jaeger.app",
"JaegerAgentPort": "6831"
}
Please advise how to find and replace this particular key's value with jq and bash when I have multiple JSON files like these.
As of now I have a separate command for each JSON file, which is not best practice and is blocking me from making a common script for all of them.
I could use sed, but I'm worried about structural changes that may happen to any of the JSON files, as they are not uniform, so I would prefer jq.
One way would be to use walk.
Assuming $host holds the desired value, the jq filter would be:
walk(if type == "object" and has("JaegerAgentHost")
then .JaegerAgentHost = $host else . end)
An alternative would be to use .. and |=:
(..|objects|select(.JaegerAgentHost).JaegerAgentHost) |= $host
You could pass in the value using the --arg command-line option, e.g.
jq --arg host jaeger.app .....
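Putting it together as a runnable sketch (the file name and sample input are hypothetical):

```shell
host="jaeger.app"

# Sample input in APP3's flat shape; the same command handles the
# nested shapes too, because walk() visits every sub-object.
printf '%s\n' '{"JaegerAgentHost":"jaeger.apps.internal","JaegerAgentPort":"6831"}' > app3.json

jq --arg host "$host" \
   'walk(if type == "object" and has("JaegerAgentHost")
         then .JaegerAgentHost = $host else . end)' \
   app3.json > app3.json.new
cat app3.json.new
```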
I am currently using Terraform to provision infrastructure in the cloud. On top of Terraform I run GitHub Actions to automate even more steps.
In this case, after provisioning the infrastructure I generate an inventory file (for Ansible) with a bash script, using the generated cluster.tfstate to parse names and IPs.
However, the script can't run, as it throws the following error:
Run bash ./generate-inventory.sh cluster.tfstate > ../hosts.ini
parse error: Invalid numeric literal at line 1, column 9
Error: Process completed with exit code 4.
Running it locally works, however. When I cat the cluster.tfstate inside the workflow, I see the following:
Run cat cluster.tfstate
***
"version": 4,
"terraform_version": "1.0.1",
"serial": 386,
"lineage": "3d16a659-b093-551c-b3ab-a1cf8aa5031c",
"outputs": ***
"master_ip_addresses": ***
"value": ***
Does GitHub Actions modify the JSON that is evaluated by my script because of secrets I have created? Or are the stars only in the shell output?
The Code of the workflow can be seen here https://github.com/eco-bench/eco-bench/blob/main/.github/workflows/terraform.yml
Thanks!
The following did the trick:
source "local_file" "AnsibleInventory" {
content = templatefile("inventory.tmpl",
{
worker = {
for key, instance in google_compute_instance.worker :
instance.name => instance.network_interface.0.access_config.0.nat_ip
}
master = {
for key, instance in google_compute_instance.master :
instance.name => instance.network_interface.0.access_config.0.nat_ip
}
}
)
filename = "./inventory.ini"
}
The template looks like this:
[all:vars]
ansible_connection=ssh
ansible_user=lucas
[cloud]
%{ for name, ip in master ~}
${name} ${ip}
%{ endfor ~}
[edge]
%{ for ip in worker ~}
${ip}
%{ endfor ~}
[cloud:vars]
kubernetes_role=master
[edge:vars]
kubernetes_role=edge
I'm starting to learn PowerShell and I'm currently trying to read in a JSON file.
Here is my JSON file (named 'versions.json'):
{
"versions": {
"1.0.0": {
"Component1": "1.0.0",
"Component2": "1.0.0",
"Component3": "1.0.0",
},
"2.0.0": {
"Component1": "2.0.0",
"Component2": "2.0.0",
"Component3": "2.0.0"
}
}
}
I would like to read in this JSON file and print out the versions and what they consist of. For example, 1.0.0 consists of Component 1 at 1.0.0, Component 2 at 1.0.0, and Component 3 at 1.0.0.
I'm currently reading in the JSON file with this PowerShell line:
$json = (Get-Content "versions.json" -Raw) | ConvertFrom-Json
Now, I want to iterate through $json and print out its data. I'm currently using this:
foreach($v in $json.versions) {
echo "Data: $v"
}
But, when I run my PowerShell script, it prints:
Data: #{1.0.0=; 2.0.0=}
Is this the proper output? I was expecting to see at least two entries, for 1.0.0 and 2.0.0. This feels like it may be a syntax issue, but I'm unsure. I am using PowerShell version 5.
After using ConvertFrom-Json you have a PowerShell object: a single item with a versions property, which in turn has two sub-properties, 1.0.0 and 2.0.0. Your ForEach is attempting to iterate it like a collection, but it's just a single object.
However you can iterate over the properties as follows to get the result I think you wanted:
($Json.versions.psobject.properties) | foreach-object { "Data: $($_.name)" }
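To also print what each version consists of, the same properties trick can be nested (a sketch along the same lines, not tested against the original setup):

```powershell
foreach ($version in $json.versions.PSObject.Properties) {
    Write-Host "Version: $($version.Name)"
    # Each version's value is itself an object whose properties are the components
    foreach ($component in $version.Value.PSObject.Properties) {
        Write-Host "  $($component.Name) at $($component.Value)"
    }
}
```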
I have a small bash script that scours through a directory and its subs (media/) and adds the output to a json file.
The line that outputs the json is as follows:
printf '{"id":%s,"foldername":"%s","path":"%s","date":"%s","filename":"%s"},\n' $num "$folder" "/media/$file2" "$fullDate" "$filename" >> /media/files.json
The json file looks like this:
{"id":1,"foldername":"5101","path":"/media/5101/Musicali10.mp3","date":"2015-08-09:13:16","filename":"Musicali10"},
{"id":2,"foldername":"5101","path":"/media/5101/RumCollora.mp3","date":"2015-08-09:13:16","filename":"RumCollora"}
I would like it to group all the files in a folder and output something like this:
[ {
"id":1,
"foldername":"5101",
"files":[
{
"path":"/media/5101/Musicali10.mp3",
"date":"2015-08-09:13:16",
"filename":"Musicali10"
},
{
"path":"/media/5101/RumCollora.mp3",
"date":"2015-08-09:13:16",
"filename":"RumCollora"
}
] },
{
"id":2,
"foldername":"3120",
"files":[
{
"path":"/media/3120/Marimba4.mp3",
"date":"2015-08-04:10:15",
"filename":"Marimba4"
},
{
"path":"/media/3120/Rumbidzaishe6.mp3",
"date":"2015-08-04:09:10",
"filename":"Rumbidzaishe6"
}
]
}
]
My question is: how do I create a JSON file that has nested "files" objects? I want each "foldername" to have a nested list of files. So far I am only able to output each file as a flat object using the printf statement above.
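Not part of the original script, but one way to sketch the grouping with jq, assuming the flat records are written one JSON object per line without the trailing comma (drop the `,` from the printf format):

```shell
# Flat records, one JSON object per line (no trailing commas).
cat > files.flat.json <<'EOF'
{"id":1,"foldername":"5101","path":"/media/5101/Musicali10.mp3","date":"2015-08-09:13:16","filename":"Musicali10"}
{"id":2,"foldername":"5101","path":"/media/5101/RumCollora.mp3","date":"2015-08-09:13:16","filename":"RumCollora"}
{"id":3,"foldername":"3120","path":"/media/3120/Marimba4.mp3","date":"2015-08-04:10:15","filename":"Marimba4"}
EOF

# -s slurps the stream into one array; group_by buckets the records per
# folder; each bucket then becomes one object with a nested "files" array
# and a regenerated per-folder id.
jq -s 'group_by(.foldername)
       | to_entries
       | map({id: (.key + 1),
              foldername: (.value[0].foldername),
              files: (.value | map({path, date, filename}))})' \
   files.flat.json > files.json
cat files.json
```

Note that group_by sorts the buckets by folder name, so the regenerated ids follow that order rather than the order the files were scanned.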