GitHub Actions Terraform state file can't be parsed - json

I am currently using Terraform to provision infrastructure in the cloud. On top of Terraform I run GitHub Actions to automate further steps.
In this case, after provisioning the infrastructure I generate an inventory file (for Ansible) with a bash script that parses names and IPs out of the generated cluster.tfstate.
However, the script can't run, as it throws the following error:
Run bash ./generate-inventory.sh cluster.tfstate > ../hosts.ini
parse error: Invalid numeric literal at line 1, column 9
Error: Process completed with exit code 4.
Running it locally, however, works. When I cat the cluster.tfstate inside the workflow, I see the following:
Run cat cluster.tfstate
***
"version": 4,
"terraform_version": "1.0.1",
"serial": 386,
"lineage": "3d16a659-b093-551c-b3ab-a1cf8aa5031c",
"outputs": ***
"master_ip_addresses": ***
"value": ***
Does GitHub Actions modify the JSON that is evaluated by my script because of secrets I have created? Or do the stars only appear in the shell output?
The code of the workflow can be seen here: https://github.com/eco-bench/eco-bench/blob/main/.github/workflows/terraform.yml
Thanks!
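The stars are GitHub Actions masking values that match registered secrets; the masking affects only the rendered log, not the file on disk. For context, a generate-inventory.sh typically pulls the IPs out of the state's outputs with jq; this is only a minimal sketch, assuming the output name shown above (master_ip_addresses) and jq being available on the runner:
#!/usr/bin/env bash
# Hypothetical sketch of generate-inventory.sh: print the master IPs from the
# state's outputs as an inventory section. Assumes outputs.master_ip_addresses
# holds a list (or map) of IP addresses.
set -euo pipefail
STATE_FILE="$1"

echo "[cloud]"
jq -r '.outputs.master_ip_addresses.value[]' "$STATE_FILE"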

The following did the trick:
source "local_file" "AnsibleInventory" {
content = templatefile("inventory.tmpl",
{
worker = {
for key, instance in google_compute_instance.worker :
instance.name => instance.network_interface.0.access_config.0.nat_ip
}
master = {
for key, instance in google_compute_instance.master :
instance.name => instance.network_interface.0.access_config.0.nat_ip
}
}
)
filename = "./inventory.ini"
}
The template looks like this:
[all:vars]
ansible_connection=ssh
ansible_user=lucas
[cloud]
%{ for name, ip in master ~}
${name} ${ip}
%{ endfor ~}
[edge]
%{ for ip in worker ~}
${ip}
%{ endfor ~}
[cloud:vars]
kubernetes_role=master
[edge:vars]
kubernetes_role=edge
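With that resource in place, the rendered inventory can be sanity-checked before Ansible uses it; a minimal check, assuming Ansible is installed where Terraform runs:
# Render the inventory via Terraform, then let Ansible parse it back.
terraform apply -auto-approve
ansible-inventory -i inventory.ini --list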

Related

Is there a way to use AWS Step Functions input to assemble a command string in a Systems Manager block?

I am creating a Step Functions state machine so that every time an instance starts, it copies a file from S3 to a specific folder inside that instance. The origin S3 bucket has a folder named after the instance ID. I am passing the instance ID as input to the Systems Manager block, but I need to use it to build the command string that will be run inside the EC2 instance.
For example:
My input is: $.detail.instance-id (let's assume the following ID: i-11223344556677889)
The Systems Manager API parameters are:
"CloudWatchOutputConfig": {
"CloudWatchLogGroupName": "myloggroup",
"CloudWatchOutputEnabled": true
},
"DocumentName": "AWS-RunShellScript",
"DocumentVersion": "$DEFAULT",
"InstanceIds.$": "States.Array($)",
"MaxConcurrency": "50",
"MaxErrors": "10",
"Parameters": {
"commands": [
{
"runuser -l ec2-user -c \"aws s3 cp s3://my-bucket/**MY_INSTANCEID**/myfile.xyz /home/ec2-user/myfolder/myfile.xyz\""
}
},
"TimeoutSeconds": 6000
}```
Summing up, I want to build that command line by replacing MY_INSTANCEID with my input $.detail.instance-id, and run the following command:
"runuser -l ec2-user -c "aws s3 cp s3://my-bucket/i-11223344556677889/myfile.xyz /home/ec2-user/myfolder/myfile.xyz""
Is there a way? I already tried to use Fn::Join without success.
Thank you in advance,
kind regards,
Winner
It was necessary to use States.Format inside States.Array for it to work, and the States.Format inside States.Array cannot have extra quotes around it:
"CloudWatchOutputConfig": {
"CloudWatchLogGroupName": "myloggroup",
"CloudWatchOutputEnabled": true
},
"DocumentName": "AWS-RunShellScript",
"DocumentVersion": "$DEFAULT",
"InstanceIds.$": "States.Array($)",
"MaxConcurrency": "50",
"MaxErrors": "10",
"Parameters": {
"commands.$": "States.Array(States.Format('runuser -l ec2-user -c \"aws s3 cp s3://my-bucket/**MY_INSTANCEID**/myfile.xyz /home/ec2-user/myfolder/myfile.xyz\"', $))"
},
"TimeoutSeconds": 6000
}```
It was also necessary to append .$ to the commands key.
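For reference, the task ends up asking SSM to run the same thing you could send by hand with the CLI; a rough equivalent for the example instance ID (bucket and paths taken from the question):
# Roughly what the state machine task submits to SSM for i-11223344556677889.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-11223344556677889" \
  --parameters '{"commands":["runuser -l ec2-user -c \"aws s3 cp s3://my-bucket/i-11223344556677889/myfile.xyz /home/ec2-user/myfolder/myfile.xyz\""]}'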

Nested template_file in terraform

I have two template files in Terraform.
The first template file looks like this:
script.sh.tpl
------------------
echo "some_content" > config.json
consul --config config.json
The second template file needs to take the content of the first template file. Here is my second template file:
task-definition.json.tpl
----------------------
[
  {
    ...
    "command": [${consul_script}],
    "image": "some_docker_image:latest",
    "name": "test-app"
  }
]
Here is what main.tf looks like:
main.tf
-----------------------
data "template_file" "task_definition_template" {
template = file("task-definition.json.tpl")
vars = {
consul_script = data.template_file.consul_script.rendered
}
}
data "template_file" "consul_script" {
template = file("script.sh.tpl")
vars = {
var1 = "test"
}
}
I tried using this, but it's giving me an error like this:
ECS Task Definition is invalid: Error decoding JSON: invalid character '\n' in string literal
How can I get rid of this issue and successfully pass the first .tpl into the second template file?
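No answer is attached here, but the error message points at the root cause: the rendered script contains raw newlines and quotes, which are not valid inside a JSON string, so the rendered value needs to be JSON-encoded (for example by wrapping the variable in jsonencode() in main.tf) before it is interpolated into the task definition. Outside Terraform, jq's raw-input mode shows what a correctly encoded value looks like:
# JSON-encode a multi-line script into a single JSON string: newlines become \n
# and double quotes are escaped, which is the form the "command" entry needs.
# rendered-script.sh is a hypothetical file holding the rendered template.
jq -Rs . rendered-script.sh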

aws cli lambda - Could not parse request body into json

I have created an AWS Lambda function in .NET Core and deployed it.
I have tried executing the function in the AWS console with a test case and it works, but I am not able to achieve the same with the CLI command:
aws lambda invoke --function-name "mylambda" --log-type Tail --payload file://D:/Files/lamdainputfile.json file://D:/Files/response.txt
I get the following error with the CLI command:
An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unexpected character ((CTRL-CHAR, code 138)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (byte[])"�zn�]t�zn�m�"; line: 1, column: 2]
I tried passing the JSON inline:
aws lambda invoke --function-name "mylambda" --log-type Tail --payload "{'input1':'100', 'input2':'200'}" file://D:/Files/response.txt
but it's not working.
This Lambda function executes in the AWS console with a test case and gives the correct result. I have put the same input into a local JSON file and tried it with the CLI command.
JSON input:
{
  "input1": "100",
  "input2": "200"
}
EDIT:
After correcting the inline JSON I am getting an error for the output file:
Unknown options: file://D:/Files/response.txt
Is there any way to print the output in the CLI only?
The documentation has not been updated since CLI version 1. For AWS CLI version 2 we need to base64-encode the payload.
Mac:
payload=`echo '{"input1": 100, "input2": 200 }' | openssl base64`
aws lambda invoke --function-name myfunction --payload "$payload" SomeOutFile &
Adding the option --cli-binary-format raw-in-base64-out will allow you to pass raw JSON in the invoke command.
aws lambda invoke \
--cli-binary-format raw-in-base64-out \
--function-name "mylambda" \
--payload '{"input1": "100", "input2": "200"}' \
D:/Files/response.txt
Based on the AWS CLI invoke command options, --payload only accepts inline blob arguments (i.e. JSON). In other words, the --payload parameter cannot be used to read input from a file, so --payload file://D:/Files/lamdainputfile.json will not work.
In the example provided, what probably happens is that --payload is ignored, file://D:/Files/lamdainputfile.json is treated as <outfile>, and an error is raised for file://D:/Files/response.txt as it is an unexpected positional argument.
What is required is reading the contents of D:/Files/lamdainputfile.json with a separate command. How this is done differs based on the type of shell used. Bash example:
aws lambda invoke --payload "$(cat /path/to/input.json)" ...
Original answer:
I don't know about the first case (--payload file://...), however the second case is not valid JSON, as JSON requires strings to be double-quoted. Try the following JSON:
{
  "input1": "100",
  "input2": "200"
}
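As for the question in the edit (printing the output in the CLI only): the trailing argument is just a plain file path, so on a POSIX shell it can simply point at /dev/stdout; a sketch assuming AWS CLI v2:
# Print the Lambda response straight to the terminal (AWS CLI v2, POSIX shell).
aws lambda invoke \
  --cli-binary-format raw-in-base64-out \
  --function-name "mylambda" \
  --payload '{"input1": "100", "input2": "200"}' \
  /dev/stdout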

How to pull the right keys and values from jq into an array in bash shell script

I have a JSON file that is formatted like so:
{
  "ServerName1": {
    "localip": "192.168.1.1",
    "hostname": "server1"
  },
  "ServerName2": {
    "localip": "192.168.1.2",
    "hostname": "server2"
  },
  "ServerName3": {
    "localip": "192.168.1.3",
    "hostname": "server3"
  }
}
And I am trying to write a shell script that uses dialog to create a menu for running an SSH connection command. I'm parsing with jq, but can't get past the first object level. We have a lot of servers and this will make connecting to them a lot easier. I have the dialog statement working fine with static data, but we are trying to populate it from a JSON file with the rest of the data. I am killing myself trying to figure out how to get just the localip and hostname into an array I can loop into the dialog command (or something that will effectively do the same thing), and all I can get it to do so far is spit out
Servername1 = {"localip":"192.168.1.1","hostname":"server1"}
on each line. I'm a shell script newbie, but this is messing with my sanity now.
This is the jq command that I've been working with so far:
jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" config.json
This is the Dialog command that works well with static data:
callssh(){
clear
ssh $1@$2
}
## Display Menu ##
dialog --clear --title "SSH Relayer"\
--menu "Please choose which server \n\
with which you would like to connect" 15 50 4 \
"Server 1" "192.168.1.1"\
"Server 2" "192.168.1.2"\
"Server 3" "192.168.1.3"\
Exit "Exit to shell" 2>"${INPUT}"
menuitem=$(<"${INPUT}")
case $menuitem in
"Server 1") callssh $sshuser 192.168.1.1;;
"Server 2") callssh $sshuser 192.168.1.2;;
"Server 3") callssh $sshuser 192.168.1.3;;
Exit) clear
echo "Bye!";;
esac
Thanks for any help or pointing in the right direction.
To create a bash array mapping hostnames to ip addresses based on config.json:
declare -A ip_of
# Emit lines of the form:
# hostname localip (without quotation marks)
function hostname_ip {
local json="$1"
jq -r '.[] | "\(.hostname) \(.localip)"' "$json"
}
while read -r hostname ip ; do
ip_of["$hostname"]="$ip"
done < <(hostname_ip config.json)
You can loop through this bash array like so:
for hostname in "${!ip_of[@]}" ; do
echo hostname=$hostname "=>" ${ip_of[$hostname]}
done
For example, assuming the "dialog" presents the hostnames,
you can replace the case statement by:
callssh "$sshuser" "${ip_of[$menuitem]}"

Is there any way to import a JSON file (containing 100 documents) into an Elasticsearch server?

Is there any way to import a JSON file (containing 100 documents) into an Elasticsearch server? I want to import a big JSON file into the ES server.
As dadoonet already mentioned, the bulk API is probably the way to go. To transform your file for the bulk protocol, you can use jq.
Assuming the file contains just the documents itself:
$ echo '{"foo":"bar"}{"baz":"qux"}' |
jq -c '
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
And if the file contains the documents in a top level list they have to be unwrapped first:
$ echo '[{"foo":"bar"},{"baz":"qux"}]' |
jq -c '
.[] |
{ index: { _index: "myindex", _type: "mytype" } },
. '
{"index":{"_index":"myindex","_type":"mytype"}}
{"foo":"bar"}
{"index":{"_index":"myindex","_type":"mytype"}}
{"baz":"qux"}
jq's -c flag makes sure that each document is on a line by itself.
If you want to pipe straight to curl, you'll want to use --data-binary @-, and not just -d, otherwise curl will strip the newlines again.
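Putting those two steps together, the whole pipeline looks roughly like this (index, type and file name are placeholders; newer Elasticsearch versions also want the Content-Type header):
# Convert plain JSON documents to bulk format with jq and POST them in one request.
jq -c '{ index: { _index: "myindex", _type: "mytype" } }, .' docs.json |
  curl -s -H 'Content-Type: application/x-ndjson' \
       -XPOST localhost:9200/_bulk --data-binary @-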
You should use the bulk API. Note that you will need to add a header line before each JSON document.
$ cat requests
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -XPOST localhost:9200/_bulk --data-binary @requests; echo
{"took":7,"items":[{"create":{"_index":"test","_type":"type1","_id":"1","_version":1,"ok":true}}]}
I'm sure someone wants this so I'll make it easy to find.
FYI - This is using Node.js (essentially as a batch script) on the same server as the brand new ES instance. Ran it on 2 files with 4000 items each and it only took about 12 seconds on my shared virtual server. YMMV
var elasticsearch = require('elasticsearch'),
fs = require('fs'),
pubs = JSON.parse(fs.readFileSync(__dirname + '/pubs.json')), // name of my first file to parse
forms = JSON.parse(fs.readFileSync(__dirname + '/forms.json')); // and the second set
var client = new elasticsearch.Client({ // default is fine for me, change as you see fit
host: 'localhost:9200',
log: 'trace'
});
for (var i = 0; i < pubs.length; i++ ) {
client.create({
index: "epubs", // name your index
type: "pub", // describe the data thats getting created
id: i, // increment ID every iteration - I already sorted mine but not a requirement
body: pubs[i] // *** THIS ASSUMES YOUR DATA FILE IS FORMATTED LIKE SO: [{prop: val, prop2: val2}, {prop:...}, {prop:...}] - I converted mine from a CSV so pubs[i] is the current object {prop:..., prop2:...}
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response); // I don't recommend this but I like having my console flooded with stuff. It looks cool. Like I'm compiling a kernel really fast.
}
});
}
for (var a = 0; a < forms.length; a++ ) { // Same stuff here, just slight changes in type and variables
client.create({
index: "epubs",
type: "form",
id: a,
body: forms[a]
}, function(error, response) {
if (error) {
console.error(error);
return;
}
else {
console.log(response);
}
});
}
Hope I can help more than just myself with this. Not rocket science but may save someone 10 minutes.
Cheers
jq is a lightweight and flexible command-line JSON processor.
Usage:
cat file.json | jq -c '.[] | {"index": {"_index": "bookmarks", "_type": "bookmark", "_id": .id}}, .' | curl -XPOST localhost:9200/_bulk --data-binary @-
We’re taking the file file.json and piping its contents to jq first with the -c flag to construct compact output. Here’s the nugget: We’re taking advantage of the fact that jq can construct not only one but multiple objects per line of input. For each line, we’re creating the control JSON Elasticsearch needs (with the ID from our original object) and creating a second line that is just our original JSON object (.).
At this point we have our JSON formatted the way Elasticsearch’s bulk API expects it, so we just pipe it to curl which POSTs it to Elasticsearch!
Credit goes to Kevin Marsh
Import, no, but you can index the documents by using the ES API.
You can use the index API to load each line (using some kind of code to read the file and make the curl calls) or the bulk index API to load them all, assuming your data file can be formatted to work with it.
Read more here: ES API
A simple shell script would do the trick if you're comfortable with shell, something like this maybe (not tested):
while read line
do
curl -XPOST 'http://localhost:9200/<indexname>/<typeofdoc>/' -d "$line"
done <myfile.json
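As written, that loop sends one document per request; recent Elasticsearch versions also reject requests that lack an explicit Content-Type header, so a slightly more defensive variant of the same idea would be:
# One document per request, with the Content-Type header current ES versions require.
while read -r line
do
    curl -s -XPOST 'http://localhost:9200/<indexname>/<typeofdoc>/' \
        -H 'Content-Type: application/json' -d "$line"
done <myfile.json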
Personally, I would probably use Python, either pyes or the elasticsearch client.
pyes on github
elastic search python client
Stream2es is also very useful for quickly loading data into ES and may have a way to simply stream a file in. (I have not tested a file, but have used it to load Wikipedia docs for ES perf testing.)
Stream2es is the easiest way IMO.
e.g. assuming a file "some.json" containing a list of JSON documents, one per line:
curl -O download.elasticsearch.org/stream2es/stream2es; chmod +x stream2es
cat some.json | ./stream2es stdin --target "http://localhost:9200/my_index/my_type"
You can use esbulk, a fast and simple bulk indexer:
$ esbulk -index myindex file.ldj
Here's an asciicast showing it loading Project Gutenberg data into Elasticsearch in about 11s.
Disclaimer: I'm the author.
You can use the Elasticsearch Gatherer plugin.
The gatherer plugin for Elasticsearch is a framework for scalable data fetching and indexing. Content adapters are implemented in gatherer zip archives which are a special kind of plugins distributable over Elasticsearch nodes. They can receive job requests and execute them in local queues. Job states are maintained in a special index.
This plugin is under development.
Milestone 1 - deploy gatherer zips to nodes
Milestone 2 - job specification and execution
Milestone 3 - porting JDBC river to JDBC gatherer
Milestone 4 - gatherer job distribution by load/queue length/node name, cron jobs
Milestone 5 - more gatherers, more content adapters
reference https://github.com/jprante/elasticsearch-gatherer
One way is to create a bash script that does a bulk insert:
curl -XPOST http://127.0.0.1:9200/myindexname/type/_bulk?pretty=true --data-binary @myjsonfile.json
After you run the insert, run this command to get the count:
curl http://127.0.0.1:9200/myindexname/type/_count