Extracting volume_id from an EC2 creation register (JSON)

I need to extract the EBS volume IDs from the register returned by an EC2 creation call. I've already got it down to a chunk which holds the data I want, but the last step eludes me.
I've tried to do it with:
- set_fact:
    volume_id_list: "{{ devices | json_query('[*].volume_id') }}"
- debug: var=volume_id_list
And it returns an empty string.
"devices": {
    "/dev/sdf": {
        "delete_on_termination": true,
        "status": "attached",
        "volume_id": "vol-0b2c92cdcblah"
    },
    "/dev/xvda": {
        "delete_on_termination": true,
        "status": "attached",
        "volume_id": "vol-086a722c4blah"
    }
}
What I wanted to see was something like:
"vol-0b2c92cdcblah"
"vol-086a722c4blah"

Your JMESPath expression in json_query does not match anything in your data structure, so the empty string is a correct result :)
The [*] projection applies to lists, but devices is an object. To get what you want from your current data structure, you need to change your query to use the object projection: json_query('*.volume_id')
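To see why the two projections behave differently, here is a rough Python equivalent of the working query, using plain dict operations rather than JMESPath itself (data taken from the question above):

```python
devices = {
    "/dev/sdf": {"delete_on_termination": True, "status": "attached",
                 "volume_id": "vol-0b2c92cdcblah"},
    "/dev/xvda": {"delete_on_termination": True, "status": "attached",
                  "volume_id": "vol-086a722c4blah"},
}

# '*.volume_id' projects over the object's *values* and then picks volume_id:
volume_id_list = [d["volume_id"] for d in devices.values()]

# '[*].volume_id' would expect a *list* at the top level, which is why it
# matched nothing against this dict.
print(volume_id_list)
```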

Related

AWS CLI events put-targets EcsParameters (structure)

I am trying to get the current task definition of an ECS cluster and then update the revision in the CloudWatch Events (EventBridge) target.
This is what I have so far:
james#LAPTOP:/mnt/c/Users/james$ target_id="xxx_hourly_cron"
james#LAPTOP:/mnt/c/Users/james$ aws events list-targets-by-rule --rule `echo $target_id` > rule-target.json
james#LAPTOP:/mnt/c/Users/james$ cat rule-target.json
{
    "Targets": [
        {
            "Id": "xxx_hourly_cron",
            "Arn": "arn:aws:ecs:eu-west-2:000000000000:cluster/xxx-cluster",
            "RoleArn": "arn:aws:iam::000000000000:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:eu-west-2:000000000000:task-definition/xxx-cron:55",
                "TaskCount": 1,
                "EnableECSManagedTags": false,
                "EnableExecuteCommand": false,
                "PropagateTags": "TASK_DEFINITION"
            }
        }
    ]
}
james#LAPTOP:/mnt/c/Users/james$ aws events put-targets --rule `echo $target_id` --targets --EcsParameters jsonfile?
The last command is where I am struggling: within the AWS docs I am not too sure what it means by "structure". I have tried JSON and I have tried to escape it.
Here is the docs I am looking at:
https://docs.aws.amazon.com/cli/latest/reference/events/put-targets.html
With AWS CLI commands, you can often replace the majority of the arguments supplied to a command with a single JSON file using the --cli-input-json argument. This can make it far easier to work with complex structures as input arguments to CLI commands.
In the above example, you would modify the rule-target.json output to become an input file (rule-target-input.json) for the next command, something like the following:
{
    "Rule": "xxx_hourly_cron",
    "Targets": [
        {
            "Id": "xxx_hourly_cron",
            "Arn": "arn:aws:ecs:eu-west-2:000000000000:cluster/xxx-cluster",
            "RoleArn": "arn:aws:iam::000000000000:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:eu-west-2:000000000000:task-definition/xxx-cron:55",
                "TaskCount": 1,
                "EnableECSManagedTags": false,
                "EnableExecuteCommand": false,
                "PropagateTags": "TASK_DEFINITION"
            }
        }
    ]
}
And then feed that in using something like the following:
aws events put-targets --cli-input-json file://rule-target-input.json
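The reshaping from the list-targets-by-rule output to put-targets input can also be scripted. A minimal Python sketch of that step, with the output trimmed to a few fields and the revision bump shown as an assumption (the original question did not say which revision to move to):

```python
import json

# The saved `list-targets-by-rule` output, trimmed to the relevant fields:
rule_target = {
    "Targets": [{
        "Id": "xxx_hourly_cron",
        "Arn": "arn:aws:ecs:eu-west-2:000000000000:cluster/xxx-cluster",
        "EcsParameters": {
            "TaskDefinitionArn":
                "arn:aws:ecs:eu-west-2:000000000000:task-definition/xxx-cron:55",
            "TaskCount": 1,
        },
    }]
}

# put-targets wants the same Targets plus the rule name:
rule_target_input = {"Rule": "xxx_hourly_cron", **rule_target}

# Bump the task-definition revision (the :55 suffix) to the next one:
ecs = rule_target_input["Targets"][0]["EcsParameters"]
base, _, rev = ecs["TaskDefinitionArn"].rpartition(":")
ecs["TaskDefinitionArn"] = f"{base}:{int(rev) + 1}"

# Write this out as rule-target-input.json for --cli-input-json:
print(json.dumps(rule_target_input, indent=2))
```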

Combining JSON items using JMESPath and/or Ansible

I have an Ansible playbook that queries a device inventory API and gets back a JSON result that contains a lot of records following this format:
{
    "service_level": "Test",
    "tags": [
        "Application:MyApp1"
    ],
    "fqdn": "matestsvcapp1.vipcustomers.com",
    "ip": "172.20.11.237",
    "name": "matestsvcapp1.vipcustomers.com"
}
I then loop through these ansible tasks to query the JSON result for each of the IP addresses I care about:
- name: Set JMESQuery
  set_fact:
    jmesquery: "Devices[?ip_addresses[?ip.contains(@, '{{ ip_to_query }}')]].{ip: '{{ ip_to_query }}', tags: tags[], service_level: service_level}"

- name: Store values
  set_fact:
    inven_results: "{{ (inven_results | default([])) + (existing_device_info.json | to_json | from_json | json_query(jmesquery)) }}"
I then go on to do other tasks in ansible, pushing this data into other systems, and everything works fine.
However, I just got a request from management that they would like to see the 'service level' represented as a tag in some of the systems I push this data into. Therefore I need to combine the 'tags' and 'service_level' items resulting in something that looks like this:
{
    "tags": [
        "Application:MyApp1",
        "service_level:Test"
    ],
    "fqdn": "matestsvcapp1.vipcustomers.com",
    "ip": "172.20.11.237",
    "name": "matestsvcapp1.vipcustomers.com"
}
I've tried modifying the JMESPath query to join the results together using the join function, and I've tried doing it the "Ansible" way using combine or map, but I couldn't get either of those to work.
Any thoughts on the correct way to handle this? Thanks in advance!
Note: 'tags' is a list of strings, and even though it's written in key:value format, it's really just a string.
To add two arrays, use the + operator like this:
ansible localhost -m debug -a 'msg="{{ b + ["String3"] }}"' -e '{"b":["String1", "String2"]}'
result:
localhost | SUCCESS => {
    "msg": [
        "String1",
        "String2",
        "String3"
    ]
}
So if I take your JSON code as test.json, you could run
ansible localhost -m debug -a 'msg="{{ tags + ["service_level:" ~ service_level ] }}"' -e @test.json
Result:
localhost | SUCCESS => {
    "msg": [
        "Application:MyApp1",
        "service_level:Test"
    ]
}
With this knowledge you can use set_fact to put this new array in a variable for later use.
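The whole record transformation the question asks for (fold service_level into tags, drop the original field) comes down to the same + idea. A small Python sketch over the sample record from the question:

```python
record = {
    "service_level": "Test",
    "tags": ["Application:MyApp1"],
    "fqdn": "matestsvcapp1.vipcustomers.com",
    "ip": "172.20.11.237",
    "name": "matestsvcapp1.vipcustomers.com",
}

# Append service_level as a "key:value" string and remove the original
# field -- this yields the desired output shape from the question:
record["tags"] = record["tags"] + [f"service_level:{record.pop('service_level')}"]
print(record["tags"])
```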

Ansible "win_lineinfile" two changes to make on lines where originals are identical

We have a third party product that we deploy using Ansible. We use the "win_lineinfile" module to make necessary configuration changes to suit our environments.
My issue is that there are two lines in a config file which are exactly the same ("EntityId"), and they need different config put into them. This is the section of the file as we receive it, which needs transforming:
"ServiceProviderOptions": {
    "EntityId": "http://localhost:50000/saml"
},
"IdentityProviderOptions": {
    "EntityId": "",
    "SingleSignOnEndpoint": {
        "url": ""
    },
This needs to look like this (with our business-sensitive text obfuscated!):
"ServiceProviderOptions": {
    "EntityId": "https://our_application_server/saml"
},
"IdentityProviderOptions": {
    "EntityId": "http://our_adfs_server/services/trust",
    "SingleSignOnEndpoint": {
        "url": "https://our_adfs_server/"
    },
What I actually end up with is this:
"ServiceProviderOptions": {
    "EntityId": "http://localhost:50000/saml"
},
"IdentityProviderOptions": {
    "EntityId": "http://our_adfs_server/services/trust",
    "EntityId": "https://our_application_server/saml"
    "SingleSignOnEndpoint": {
        "url": "https://our_adfs_server/"
    },
So rather than amending the first line, it has dropped both lines under the second location, and I have no idea why! I have tried a combination of "insertafter" with "regexp" to define what I want where, but it simply isn't working how I intend.
Here is the section of my code dealing with this;
- name: Alter lines without unique keys (use insertafter)
  win_lineinfile:
    path: C:\{{ item.file }}
    insertafter: "{{ item.beforeLine }}"
    regexp: "{{ item.regExp }}"
    line: "{{ item.line }}"
  with_items:
    - { file: 'config_file.json', beforeLine: 'ServiceProviderOptions', regExp: 'EntityId.*', line: '{{ ServiceProviderEntityId }}' }
    - { file: 'config_file.json', beforeLine: 'IdentityProviderOptions', regExp: 'EntityId.*,', line: '{{ IdentityProviderEntityId }}' }
Again, with the juicy bits taken out that could get me sacked!
If anyone has any suggestions how I should go about this, I would be extremely grateful.
Thanks!
Just in case anyone else has stumbled on this and is looking for an answer to the same!
I have got round this issue by removing the lines first and then adding them back in again using the "insertafter" argument in the "win_lineinfile" module.
role/vars/main.yml -
MyVarArray:
  - { file: 'path\file.json', beforeLine: 'ServiceProviderOptions', regExp: 'EntityId.*', line: '{{ Line_to_go_under_SvcPrvdOpts }}' }
  - { file: 'path\file.json', beforeLine: 'IdentityProviderOptions', regExp: 'EntityId.*,', line: '{{ Line_to_go_under_IdtyPrvdOpts }}' }
role/tasks/main.yml -
- name: Remove non-unique lines ready to be recreated
  win_lineinfile:
    path: C:\{{ item.file }}
    regexp: "{{ item.regExp }}"
    state: absent
  with_items: "{{ MyVarArray }}"

- name: Alter lines without unique keys (use insertafter)
  win_lineinfile:
    path: C:\{{ item.file }}
    insertafter: "{{ item.beforeLine }}"
    line: "{{ item.line }}"
  with_items: "{{ MyVarArray }}"
This works, and maybe proves a point that it's not worth trying to be too clever all of the time! Especially with automation, sometimes it's worth taking things one step at a time.
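The remove-then-insert approach can be sketched in a few lines of Python. This is a simplified model of what the two plays above do (drop every matching line, then re-add each line under its own unique anchor); the helper names are mine and win_lineinfile's real matching semantics are richer:

```python
import re

def remove_matching(lines, pattern):
    """First pass: drop every line matching pattern (state: absent)."""
    return [line for line in lines if not re.search(pattern, line)]

def insert_after(lines, anchor, new_line):
    """Second pass: insert new_line after the first line matching anchor."""
    out, done = [], False
    for line in lines:
        out.append(line)
        if not done and re.search(anchor, line):
            out.append(new_line)
            done = True
    return out

config = [
    '"ServiceProviderOptions": {',
    '    "EntityId": "http://localhost:50000/saml"',
    '},',
    '"IdentityProviderOptions": {',
    '    "EntityId": "",',
    '},',
]

# Remove both ambiguous EntityId lines, then re-add each under its anchor:
config = remove_matching(config, r'"EntityId"')
config = insert_after(config, r'ServiceProviderOptions',
                      '    "EntityId": "https://our_application_server/saml"')
config = insert_after(config, r'IdentityProviderOptions',
                      '    "EntityId": "http://our_adfs_server/services/trust",')
```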
You have multiple problems in your code. Please also refer to the docs.
a) The correct parameter is regex, not regexp. You should use one that matches when the value is unset but does not match when the value is set; that way the line is only changed when necessary.
b) For insertafter you probably need to supply a regex that matches the whole line, e.g. .*ServiceProviderOptions.*$.
I could not try this, as I don't have a Windows machine at hand. You might need to tweak the regexes. See the documentation for .NET regexes.
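The "match only when unset" idea from a) can be sanity-checked outside Ansible. A quick Python experiment (win_lineinfile itself uses .NET regexes, but this simple pattern behaves the same in both engines; the pattern is my suggestion, not from the question):

```python
import re

# Matches an EntityId whose value is still the empty string, but not one
# that has already been set -- so a re-run of the play changes nothing:
pattern = re.compile(r'"EntityId":\s*""')

unset_line = '    "EntityId": "",'
set_line = '    "EntityId": "http://our_adfs_server/services/trust",'

print(bool(pattern.search(unset_line)))  # -> True
print(bool(pattern.search(set_line)))    # -> False
```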

How to extract values from MySQL query in Ansible play

In an Ansible play, I'm running a successful SQL query on a MySQL database which returns:
"result": [
    {
        "account_profile": "sbx"
    },
    {
        "account_profile": "dev"
    }
]
That result is saved into a variable called query_output. I know that I can display the results array in Ansible via
- debug:
    var: query_output.result
But for the life of me I cannot figure out how to extract the two account_profile values.
My end goal is to extract them into a fact which is an array. Something like:
"aws_account_profiles": [ "sbx", "dev" ]
I know that I'm missing something really obvious.
Suggestions?
The thing you want is the map filter's attribute= usage:
{{ query_output.result | map(attribute="account_profile") | list }}
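What map(attribute=...) does here can be written as a plain list comprehension. The equivalent in Python, over the same result rows:

```python
# The query result from the question:
result = [{"account_profile": "sbx"}, {"account_profile": "dev"}]

# map(attribute="account_profile") | list picks one attribute per row:
aws_account_profiles = [row["account_profile"] for row in result]
print(aws_account_profiles)
```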

Conditionally changing JSON values in jq with sub() function

I need to alter some values in JSON data, and would like to include this in an already existing shell script. I'm trying to do so using jq, and will need the sub() function to cut off a piece of a string value.
Using this command line:
jq '._meta[][].ansible_ssh_pass | sub(" .*" ; "")'
with the data below correctly replaces the value (cutting off everything from the first space onwards), but it only prints out the value, not the complete JSON structure.
Here's sample JSON data:
{"_meta": {
    "hostvars": {
        "10.1.1.3": {
            "hostname": "core-gw1",
            "ansible_user": "",
            "ansible_ssh_pass": "test123 / ena: test2",
            "configsicherung": "true",
            "os": "ios",
            "managementpaket": ""
        }
    }
}}
Output should be something like this:
{"_meta": {
    "hostvars": {
        "10.1.1.3": {
            "hostname": "core-gw1",
            "ansible_user": "",
            "ansible_ssh_pass": "test123",
            "configsicherung": "true",
            "os": "ios",
            "managementpaket": ""
        }
    }
}}
I assume I have to add some sort of "if ... then" based arguments, but I haven't been able to get jq to understand me ;) The manual is a bit sketchy and I haven't been able to find any example I could match up with what I need to do...
OK, as usual ... once you post a public question, you then manage to find a solution yourself ... ;)
This jq-call does what I need:
jq '._meta.hostvars[].ansible_ssh_pass |= sub(" .*"; "")'
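The difference is jq's update-assignment operator: |= rewrites the matched value in place and emits the whole document, while the plain | pipeline only emits the substituted value. For comparison, a Python sketch of the same in-place transformation:

```python
import re

data = {"_meta": {"hostvars": {"10.1.1.3": {
    "hostname": "core-gw1",
    "ansible_ssh_pass": "test123 / ena: test2",
}}}}

# Like jq's `|= sub(" .*"; "")`: trim each password at the first space,
# keeping the surrounding structure intact:
for host in data["_meta"]["hostvars"].values():
    host["ansible_ssh_pass"] = re.sub(r" .*", "", host["ansible_ssh_pass"])

print(data["_meta"]["hostvars"]["10.1.1.3"]["ansible_ssh_pass"])
```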