How to get public dns name from aws ec2 describe-instances - json

I'm trying to make a simple script which queries my EC2 instances and gets the public DNS name of the instances that match my filter. Here is my first shot:
#!/bin/bash
aws ec2 describe-instances \
--filters "Name=tag:app,Values=swarm-cluster" \
"Name=tag:role,Values=manager" \
--query "Reservations[*].Instances[*].PublicDnsName"
It almost works but I get something ugly:
[
    [
        ""
    ],
    [
        "ec2-xxx-xxx-xxx-xxx.venus-central-1.compute.amazonaws.com"
    ]
]
I want just a list of FQDNs, one per line. How do I format the output?
I know I can do it with tr, sed and so on, but I'd like to use a more sophisticated way. :)

You can just append --output text to your CLI call to get a text output.
Ref - https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output.html
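Applied to the query from the question, that would look roughly like this (text output is tab/newline separated, so instances without a public DNS name may still show up as blank fields you want to skip):
#!/bin/bash
# Same query as in the question, but with --output text so the names print as plain text
aws ec2 describe-instances \
  --filters "Name=tag:app,Values=swarm-cluster" \
            "Name=tag:role,Values=manager" \
  --query "Reservations[*].Instances[*].PublicDnsName" \
  --output text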

By using jq you can parse the JSON response to get what you want.
Example:
#!/bin/bash
aws ec2 describe-instances \
--filters "Name=tag:app,Values=swarm-cluster" \
"Name=tag:role,Values=manager" \
--query "Reservations[*].Instances[*].PublicDnsName" | jq ".[0][1]"
That will give you:
"ec2-xxx-xxx-xxx-xxx.venus-central-1.compute.amazonaws.com"

for i in $(cat kafka_instance_names)
do
  aws ec2 describe-instances --filters "Name=tag:Name,Values=$i" \
    --query "Reservations[*].Instances[*].PublicDnsName" | grep ec
done
"ec2-54-166-168-168.compute-1.amazonaws.com"
"ec2-52-72-30-88.compute-1.amazonaws.com"

Related

formatting json tags in ansible between aws commands

I need to replace my EBS volumes but need to keep the tags. I have to use the AWS CLI. I basically have a problem feeding the tag information from one aws command's output into the other aws command's input, due to differences in the expected format.
I first loop through the volumes with the describe-volumes command and collect the tags for each volume. Something like this:
- name: Tags of my EBS volumes
  become: yes
  shell: |
    aws ec2 describe-volumes --volume-ids {{ item.stdout }} --query "Volumes[*].Tags" --output json
  with_items: "{{ ebsvolumeids.results }}"
  register: ebsvolumetags
This will give output formatted like this:
"stdout": "[\n [\n {\n \"Key\": \"cost-center\",\n \"Value\": \"22222223222\"\n },\n {\n \"Key\": \"LastBackup\",\n \"Value\": \"2022.01.01\"\n }\n ]\n]",
When I want to create a new replacement volume from a snapshot and apply the tags, the command would look like this:
shell: |
  aws ec2 create-volume --snapshot-id <snap-xxxxxxxx> \
    --volume-type gp2 --tag-specifications \
    'ResourceType=volume,Tags={{ item.stdout }}'
with_items: "{{ ebsvolumetags.results }}"
where I would loop through the output of the previous command. However, the create-volume command expects a format for Tags like this:
[{Key=LastBackup,Value=2022.01.01},{Key=cost-center,Value=22222223222}]
So for example the correct syntax would be:
aws ec2 create-volume --snapshot-id <snap-xxxxxxxx> --volume-type gp2 --tag-specifications \
'ResourceType=volume,Tags=[{Key=LastBackup,Value=2022.01.01},{Key=cost-center,Value=22222223222}]'
No double quotes. No colons, just equal signs. One level less of nesting, because the output had too many [] brackets.
I tried to shape the output of the first command in different ways so that the second one would accept it, but had no luck:
a chain of replace filters
using from_json on the stdout, but it still didn't like it
having the output as text and replacing \n and \t
Does anybody have an idea how to achieve this?
Thanks
The Ansible shell module should only be used as a last resort. That being said, it appears this is a data formatting issue, as "Tags" cannot be passed as JSON to the "aws ec2 create-volume" command.
A workaround would be to convert the tags from JSON to the correct format (which follows the pattern [{Key=key1,Value=value1},{Key=key2,Value=value2},{Key=key3,Value=value3},...]) and pass it as a string literal to the aws command. Please refer to the playbook below:
---
- hosts: localhost
  gather_facts: no
  vars:
    tags: ''
  tasks:
    - name: Tags of my EBS volumes
      become: yes
      shell: |
        aws ec2 describe-volumes --volume-ids {{ item.stdout }} --query "Volumes[*].Tags" \
          --output json
      register: ebsvolumetags
    - name: Create tags variable
      set_fact:
        tags: "{{ tags + '{Key=' + item.Key + ',' + 'Value=' + item.Value + '},' }}"
      loop: "{{ ebsvolumetags.stdout | from_json | first }}"
    - name: Create EBS volume with the same tags
      become: yes
      shell: |
        aws ec2 create-volume --snapshot-id <snap-xxxxxxx> --volume-type gp2 \
          --tag-specifications 'ResourceType=volume,Tags=[{{ tags[:-1] }}]' \
          --availability-zone <us-east-xx>
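If you would rather build the Tags string outside of Ansible, a plain bash + jq sketch of the same conversion could look like the snippet below (vol-xxxxxxxx is a placeholder, and the filter assumes the same Volumes[*].Tags output shape shown in the question):
#!/bin/bash
# Turn [[{"Key":"k","Value":"v"}, ...]] into {Key=k,Value=v},{...} for --tag-specifications
tags=$(aws ec2 describe-volumes --volume-ids vol-xxxxxxxx \
  --query "Volumes[*].Tags" --output json \
  | jq -r '.[0] | map("{Key=\(.Key),Value=\(.Value)}") | join(",")')

aws ec2 create-volume --snapshot-id <snap-xxxxxxxx> --volume-type gp2 \
  --tag-specifications "ResourceType=volume,Tags=[${tags}]" \
  --availability-zone <us-east-xx>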
Using shell and the AWS CLI is a poor practice in Ansible, as there are already modules available to execute those operations:
the ec2_vol_info module will allow you to retrieve the information of the volumes attached to EC2 instances; this is similar to aws ec2 describe-volumes
the ec2_vol module will allow you to create and configure volumes; this is similar to aws ec2 create-volume
additionally, there is the ec2_tag module, which allows you to add, update or delete tags on EC2 resources.
Modules have better error and idempotency handling, as they already contain the logic to verify whether a change is needed before applying it; this lets you run the same playbook several times if needed.

Retrieve secrets from AWS Secrets Manager

I have a bunch of secrets (key/value pairs) stored in AWS Secrets Manager. I tried to parse the secrets using jq:
aws secretsmanager get-secret-value --secret-id <secret_bucket_name> | jq --raw-output '.SecretString' | jq -r .PASSWORD
It retrieves the value stored in .PASSWORD, but the problem is that I don't just want the value for a single key; I want to retrieve all the key/value pairs in the following form:
KEY_1="1234"
KEY_2="0000"
.
.
.
so on...
With the above command I am not able to get this format, and for every key/value pair I would have to run the command again, which is tedious. Am I doing something wrong, or is there a better way of doing this?
This isn't related to Python, but rather to the behaviour of the AWS CLI and jq. I came up with something like this:
aws secretsmanager get-secret-value --secret-id <secret_name> --output text --query SecretString | jq ".[]"
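With a made-up secret body of {"KEY_1":"1234","KEY_2":"0000"}, that prints just the values, one per line:
$ aws secretsmanager get-secret-value --secret-id <secret_name> --output text --query SecretString | jq ".[]"
"1234"
"0000"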
There are literally a hundred different ways to format something like this.
The AWS CLI itself has a lot of options to filter output using the --query option: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-output.html
The exact conversion you are looking for would require something like this:
aws secretsmanager get-secret-value --secret-id <secret_name> --output text --query SecretString \
| jq -r 'to_entries[] | [.key, "=", "\"", .value, "\"" ] | @tsv' \
| tr -d "\t"
There has to be some better way of doing this!!
Try the snippet below. I tend to put these little helper filters into their own shell function <3
tokv() {
  jq -r 'to_entries|map("\(.key|ascii_upcase)=\"\(.value|tostring)\"")|.[]'
}
$ echo '{"foo":"bar","baz":"fee"}' | tokv
FOO="bar"
BAZ="fee"

Parsing json output using jq -jr

I am running a Puppet Bolt command to query certain information from a set of servers in JSON format. I am piping it to jq. Below is what I get:
$ bolt command run "cat /blah/blah" -n @hname.txt -u uid --no-host-key-check --format json | jq -jr '.items[]|[.node],[.result.stdout]'
[
"node-name"
][
"stdout data\n"
]
What do I need to do to make it appear like below?
["nodename":"stdout data"]
If you really want output that is not valid JSON, you will have to construct the output string, which can easily be done using string interpolation, e.g.:
jq -r '.items[] | "[\"\(.node)\",\"\(.result.stdout)\"]"'
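If output that is valid JSON would also be acceptable, jq's object construction pairs each node with its stdout without any string building:
jq '.items[] | {(.node): .result.stdout}'
which, for the sample data above, would print something like:
{
  "node-name": "stdout data\n"
}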
@peak thank you.. that helped. Below is how it looks:
$ bolt command run "cat /blah/blah" -n @hname.txt -u UID --no-host-key-check --format json | jq -r '.items[] | "[\"\(.node)\",\"\(.result.stdout)\"]"'
["node name","stdout data
"]
I used a workaround to get the data I needed by using the @csv format string in the jq filter itself. Sharing below what worked:
$ bolt command run "cat /blah/blah" -n @hname.txt -u uid --no-host-key-check --format json | jq -jr '.items[]|[.node],[.result.stdout]|@csv'
""node-name""stdout.data
"

Script to disassociate and release all Elastic IP addresses from all regions using bash and AWS CLI

I'm learning how to use AWS infrastructure and CLI tools. I had many EC2 instances with public IPs, which I have terminated using another CLI script authored by Russell Jurney (source).
I tried to modify this to release all public IPs, but as I'm very new to scripting and JSON I can't get my head around this one. How do I address all public IPs in this script and write the loops correctly so that each IP is released?
for region in `aws ec2 describe-regions | jq -r .Regions[].RegionName`
do
  echo "Releasing Elastic IPs in region $region..."
  for address in 'aws ecs describe-regions | jq -r .Regions[].RegionName[]'
  do
    aws ec2 disassociate-address --region $region | \
      jq -r .Reservations[].Instances[].PrivateIpAddress | \
      xargs -L 1 -I {} aws ec2 modify-instance-attribute \
        --region $region \
        --allocation-id {} \
        --public-ip {}
    aws ec2 release-address --region $region | \
      jq -r .Reservations[].Instances[].PrivateIpAddress | \
      xargs -L 1 -I {} aws ec2 terminate-instances \
        --region $region \
        --allocation-id {} \
        --instance-id {}
  done
done
So, your objective is to:
Disassociate all the public IPs in all the regions, and
Release all of them back to the AWS pool.
Try the script below. I could not try this since I do not have an environment handy; however, it should work. In case you face any error messages, please update the question (with the error message mentioned clearly).
for region in $(aws ec2 describe-regions --profile default --output text | cut -f4)
do
  for address in $(aws ec2 describe-addresses --region $region --profile default --query 'Addresses[].AssociationId' --output text)
  do
    echo -e "Disassociating $address from $region now..."
    aws ec2 disassociate-address --association-id $address --region $region --profile default
    for pubip in $(aws ec2 describe-addresses --region $region --profile default --query 'Addresses[].PublicIp' --output text)
    do
      echo -e "Now Releasing the PublicIP $pubip from region $region"
      aws ec2 release-address --public-ip $pubip --region $region --profile default
    done
  done
done
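One caveat worth checking: for addresses allocated in a VPC, release-address is normally keyed on the allocation ID rather than the public IP, so the innermost loop may need a variant along these lines (same structure, AllocationId instead of PublicIp):
for allocid in $(aws ec2 describe-addresses --region $region --profile default --query 'Addresses[].AllocationId' --output text)
do
  echo -e "Now releasing allocation $allocid from region $region"
  aws ec2 release-address --allocation-id $allocid --region $region --profile default
done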

jq filter for AWS for description without calling jq twice?

I have this to filter snapshots with Jenkins in the description. Is there a more efficient way to do the same thing?
aws --region eu-west-1 ec2 describe-snapshots | jq '.Snapshots[] | select(.Description | contains("Jenkins"))' | jq -r '.SnapshotId'
Maybe something like this. You can use a JMESPath query inside your CLI statement:
aws --region eu-west-1 ec2 describe-snapshots --query 'Snapshots[?contains(Description, `Jenkins`)].SnapshotId'
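If you would rather stay with jq, a single invocation also avoids calling it twice and pulls the SnapshotId in the same filter:
aws --region eu-west-1 ec2 describe-snapshots \
  | jq -r '.Snapshots[] | select(.Description | contains("Jenkins")) | .SnapshotId'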