How to extract values from MySQL query in Ansible play

In an Ansible play, I'm running a successful SQL query on a MySQL database which returns:
"result": [
{
"account_profile": "sbx"
},
{
"account_profile": "dev"
}
]
That result is saved into a variable called query_output. I know that I can display the results array in Ansible via
- debug:
    var: query_output.result
But for the life of me I cannot figure out how to extract the 2 account_profile values.
My end goal is to extract them into a fact which is an array. Something like:
"aws_account_profiles": [ "sbx", "dev" ]
I know that I'm missing something really obvious.
Suggestions?

The thing you want is the map filter's attribute= usage:
{{ query_output.result | map(attribute="account_profile") | list }}
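Wired into a task, that might look like the following (a sketch using the variable and fact names from the question):

```yaml
- name: Build list of account profiles
  set_fact:
    aws_account_profiles: "{{ query_output.result | map(attribute='account_profile') | list }}"

- debug:
    var: aws_account_profiles   # expect ["sbx", "dev"]
```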

Related

jq query to find nested value and return parent values

Having trouble finding this; maybe it's just my search terms, or who knows.
Basically, I have a series of arrays mapping keyspaces to destination DBs for a large NoSQL migration, so we can more easily script data movement. I'll include sample JSON below.
It's nested basically like: environment >> { [ target DB ] >> [ list of keyspaces ] }, { [ target DB ] >> [ list of keyspaces ] }
My intent was to update my migration script to more intelligently determine where things go based on which environment is specified, etc., and require less user input or "figuring things out".
here's sample JSON:
{
  "Prod": [
    {
      "prod1": [
        "prod_db1",
        "prod_db2",
        "prod_db3",
        "prod_db4"
      ]
    },
    {
      "prod2": [
        "prod_db5",
        "prod_db6",
        "prod_db7",
        "prod_db8"
      ]
    }
  ]
}
Assuming I'm able to provide the keyspace and environment to the script, and use those as variables in my jq query, is there a way to search for the keyspace and return the value one level up? I.e., I know I can do something like:
#!/bin/bash
ENV="Prod"
jq --arg env "$ENV" '.[$env][][]' env.json
to just get the DBs in the Prod environment. But if I'm searching for prod_db6, how can I return the value prod2?
Use to_entries to decompose an object into an array of key-value pairs, then IN to search in the value's array, and finally return the key:
jq -r --arg env "Prod" --arg ksp "prod_db6" '
.[$env][] | to_entries[] | select(IN(.value[]; $ksp)).key
' env.json
prod2
Demo
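If jq isn't at hand, the same parent-key lookup can be sketched in plain Python (the function name is my own; the data mirrors the sample JSON above):

```python
import json

def find_target_db(data, env, keyspace):
    """Return the key (target DB) whose keyspace list contains `keyspace`."""
    for entry in data[env]:                      # each entry is {target_db: [keyspaces]}
        for target_db, keyspaces in entry.items():
            if keyspace in keyspaces:
                return target_db
    return None

doc = json.loads('''
{
  "Prod": [
    {"prod1": ["prod_db1", "prod_db2", "prod_db3", "prod_db4"]},
    {"prod2": ["prod_db5", "prod_db6", "prod_db7", "prod_db8"]}
  ]
}
''')

print(find_target_db(doc, "Prod", "prod_db6"))   # prod2
```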

Non-JSON output from gcloud ai-platform predict. Parsing non-JSON outputs

I am using gcloud ai-platform predict to call an endpoint and get predictions as below, sending a JSON request but not getting a JSON response:
gcloud ai-platform predict --json-request instances.json
The response, however, is not JSON and hence cannot be parsed further, causing other complications. Below is the response.
VAL HS
0.5 {'hs_1': [[-0.134501, -0.307326, -0.151994, -0.065352, -0.14138]], 'hs_2' : [[-0.134501, -0.307326, -0.151994, -0.065352, 0.020759]]}
Can gcloud ai-platform predict return JSON instead, or can the output be parsed differently?
Thanks for your help.
Apparently, your output is a table with a header row and two columns: a score and the (alleged) JSON content. You should extract the second column of the preferred data row (your example has only one, but in general you might receive several score-JSON pairs). Maybe your API already offers functionality to extract a certain 'state', e.g. the one with the highest score. If not, a simple awk or sed script can get this job done easily.
Then, the only remaining issue before having proper JSON (which can then be queried by jq) is the quoting style. Your output encloses field names with ' instead of " ('lstm_1' instead of "lstm_1"). Correcting this is, unfortunately, not so easy if you can expect to receive arbitrarily complex JSON data (such as strings containing quotation marks etc.). However, if your JSON will always look as simple as in the example provided, simply substituting the wrong quote character for the right one again becomes an easy task for tools like awk or sed.
For instance, using sed on your example output to select the second line (which is the first data row), drop everything from the beginning until but not including the first opening curly brace (which marks the beginning of the second column), make said substitutions and pipe the result into jq:
... | sed -n "2{s/^[^{]\+//;s/'/\"/g;p;q}" | jq .
{
  "lstm_1": [
    [
      -0.13450142741203308,
      -0.3073260486125946,
      -0.15199440717697144,
      -0.06535257399082184,
      -0.1413831114768982
    ]
  ],
  "lstm_2": [
    [
      -0.13450142741203308,
      -0.3073260486125946,
      -0.15199440717697144,
      -0.06535257399082184,
      0.02075939252972603
    ]
  ]
}
[Edited to reflect upon a comment]
If you want to utilize the score as well, let jq handle it. For instance:
... | sed -n "2{s/'/\"/g;p;q}" | jq -s '{score:first,status:last}'
{
  "score": 0.548,
  "status": {
    "lstm_1": [
      [
        -0.13450142741203308,
        -0.3073260486125946,
        -0.15199440717697144,
        -0.06535257399082184,
        -0.1413831114768982
      ]
    ],
    "lstm_2": [
      [
        -0.13450142741203308,
        -0.3073260486125946,
        -0.15199440717697144,
        -0.06535257399082184,
        0.02075939252972603
      ]
    ]
  }
}
[Edited to reflect upon changes in the OP]
As changes affected only names and values but no structure, the hitherto valid approach still holds:
... | sed -n "2{s/'/\"/g;p;q}" | jq -s '{val:first,hs:last}'
{
  "val": 0.5,
  "hs": {
    "hs_1": [
      [
        -0.134501,
        -0.307326,
        -0.151994,
        -0.065352,
        -0.14138
      ]
    ],
    "hs_2": [
      [
        -0.134501,
        -0.307326,
        -0.151994,
        -0.065352,
        0.020759
      ]
    ]
  }
}
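As an alternative to fixing the quoting with sed: Python's ast.literal_eval accepts the single-quoted dict syntax directly, so a short script can split off the score column and emit real JSON. This is a sketch, assuming the output always has the simple two-column shape shown above:

```python
import ast
import json

# One data row of the table, as produced by `gcloud ai-platform predict`.
line = "0.5 {'hs_1': [[-0.134501, -0.307326, -0.151994, -0.065352, -0.14138]], 'hs_2' : [[-0.134501, -0.307326, -0.151994, -0.065352, 0.020759]]}"

# Split at the first '{': everything before it is the score, the rest the dict.
brace = line.index("{")
score = float(line[:brace].strip())
status = ast.literal_eval(line[brace:])   # tolerates single-quoted keys

print(json.dumps({"val": score, "hs": status}, indent=2))
```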

Combining JSON items using JMESPath and/or Ansible

I have an Ansible playbook that queries a device inventory API and gets back a JSON result that contains a lot of records following this format:
{
  "service_level": "Test",
  "tags": [
    "Application:MyApp1"
  ],
  "fqdn": "matestsvcapp1.vipcustomers.com",
  "ip": "172.20.11.237",
  "name": "matestsvcapp1.vipcustomers.com"
}
I then loop through these Ansible tasks to query the JSON result for each of the IP addresses I care about:
- name: Set JMESQuery
  set_fact:
    jmesquery: "Devices[?ip_addresses[?ip.contains(#,'{{ ip_to_query }}' )]].{ip: '{{ ip_to_query }}', tags: tags[], service_level: service_level }"

- name: Store values
  set_fact:
    inven_results: "{{ (inven_results | default([])) + (existing_device_info.json | to_json | from_json | json_query(jmesquery)) }}"
I then go on to do other tasks in ansible, pushing this data into other systems, and everything works fine.
However, I just got a request from management that they would like to see the 'service level' represented as a tag in some of the systems I push this data into. Therefore I need to combine the 'tags' and 'service_level' items resulting in something that looks like this:
{
  "tags": [
    "Application:MyApp1",
    "service_level:Test"
  ],
  "fqdn": "matestsvcapp1.vipcustomers.com",
  "ip": "172.20.11.237",
  "name": "matestsvcapp1.vipcustomers.com"
}
I've tried modifying the JMESPath query to join the results together using the join function, and also tried doing it the 'Ansible' way using combine or map, but I couldn't get any of those to work.
Any thoughts on the correct way to handle this? Thanks in advance!
Note: 'tags' is a list of strings, and even though it's written in key:value format, it's really just a string.
To add two arrays, use the + operator like this:
ansible localhost -m debug -a 'msg="{{ b + ["String3"] }}"' -e '{"b":["String1", "String2"]}'
result:
localhost | SUCCESS => {
    "msg": [
        "String1",
        "String2",
        "String3"
    ]
}
So if I save your JSON as test.json, you could run:
ansible localhost -m debug -a 'msg="{{ tags + ["service_level:" ~ service_level ] }}"' -e @test.json
Result:
localhost | SUCCESS => {
    "msg": [
        "Application:MyApp1",
        "service_level:Test"
    ]
}
With this knowledge you can use set_fact to put this new array in a variable for later use.
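Applied to the records from the question, a task along these lines could fold the service level into each record's tags (a sketch; `devices` is a placeholder for whatever variable holds the list of records):

```yaml
- name: Fold service_level into tags
  set_fact:
    inven_results: "{{ (inven_results | default([]))
                       + [ item | combine({'tags': item.tags + ['service_level:' ~ item.service_level]}) ] }}"
  loop: "{{ devices }}"
```

Each item keeps its other keys (fqdn, ip, name) unchanged; combine only overwrites the tags list.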

Retrieve one (last) value from influxdb

I'm trying to retrieve the last value inserted into a table in influxdb. What I need to do is then post it to another system via HTTP.
I'd like to do all this in a bash script, but I'm open to Python also.
$ curl -sG 'https://influx.server:8086/query' --data-urlencode "db=iotaWatt" --data-urlencode "q=SELECT LAST(\"value\") FROM \"grid\" ORDER BY time DESC" | jq -r
{
  "results": [
    {
      "statement_id": 0,
      "series": [
        {
          "name": "grid",
          "columns": [
            "time",
            "last"
          ],
          "values": [
            [
              "2018-01-17T04:15:30Z",
              690.1
            ]
          ]
        }
      ]
    }
  ]
}
What I'm struggling with is getting this value into a clean format I can use. I don't really want to use sed, and I've tried jq, but it complains the data is a string and not an index:
jq: error (at <stdin>:1): Cannot index array with string "series"
Anyone have a good suggestion?
Pipe that curl output to the jq program below:
$ your_curl_stuff_here | jq '.results[].series[]|.name,.values[0][]'
"grid"
"2018-01-17T04:15:30Z"
690.1
The results could be stored into a bash array and used later.
$ results=( $(your_curl_stuff_here | jq '.results[].series[]|.name,.values[0][]') )
$ echo "${results[@]}"
"grid" "2018-01-17T04:15:30Z" 690.1
# Individual values can be accessed using "${results[0]}" and so on; mind the quotes
All good :-)
Given the JSON shown, the jq query:
.results[].series[].values[]
produces:
[
"2018-01-17T04:15:30Z",
690.1
]
This seems to be the output you want, but from the point of view of someone who is not familiar with influxdb, the requirements seem very opaque, so you might want to consider a variant, such as:
.results[-1].series[-1].values[-1]
which in this case produces the same result, as it happens.
If you just want the atomic values, you could simply append [] to either of the queries above.
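Since the question is also open to Python, pulling the timestamp and value out of the response shown above can be sketched like this (the forwarding step is only illustrated in comments; the target URL is a placeholder):

```python
import json

response = json.loads('''
{"results": [{"statement_id": 0, "series": [{"name": "grid",
  "columns": ["time", "last"],
  "values": [["2018-01-17T04:15:30Z", 690.1]]}]}]}
''')

# Navigate: results -> first statement -> first series -> first row of values.
time_str, last_value = response["results"][0]["series"][0]["values"][0]
print(time_str, last_value)

# Forwarding it via HTTP could then use the stdlib, e.g.:
# import urllib.request
# req = urllib.request.Request("https://other.system/ingest",
#                              data=json.dumps({"grid": last_value}).encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```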

Searching for an object with jq on the command line

I have a JSON file of AWS security groups list. I am trying to fetch the Group Id using the Group Name. The object looks like the following:
{
  "SecurityGroups": [{
    "IpPermissionsEgress": [
      {
        "IpProtocol": "-1",
      }
    ],
    "Description": "launch-wizard-2 created 2017-10-21T09:19:40.007-04:00",
    "GroupName": "MY1SG-PUBLIC-80",
    "VpcId": "vpc-ceed12b7",
    "OwnerId": "712503525534",
    "GroupId": "sg-ee0c979c"
  }]
}
With jq, my attempt is as follows:
aws ec2 describe-security-groups | jq '.GroupId' ["GroupName": "MY1SG-PUBLIC-80"]
Error:
jq: error: Could not open file [GroupName:: No such file or directory
jq: error: Could not open file MY1SG-PUBLIC-80]: No such file or directory
Issue 1: Format
https://shapeshed.com/jq-json/
The second input to jq is the file you wish to read from. If this value is -, the program will read from standard input.
Issue 2: Selection
https://stedolan.github.io/jq/manual/#select(boolean_expression)
To select an element by value, you can use a select statement:
select(.GroupName == "MY1SG-PUBLIC-80")
jq 'SCOPE | select(.GroupName == "MY1SG-PUBLIC-80") | .GroupId' -
where SCOPE is the group you wish to look in. If SCOPE is .[], it will scan every JSON entry. Each group is piped into the select filter, which keeps only those whose GroupName matches the given value. That result set is then piped into a key filter, which returns only the matching GroupIds.
I am trying to fetch the Group Id using the Group Name.
Assuming the input has been tweaked to make it valid JSON (*), the filter:
.SecurityGroups[] | select(.GroupName=="MY1SG-PUBLIC-80") | .GroupId
produces:
"sg-ee0c979c"
It might be worthwhile considering this alternative filter:
.[][]|select(.GroupName=="MY1SG-PUBLIC-80")|.GroupId
(*) The input as originally shown has an extraneous comma.
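For comparison, the same lookup sketched in Python (with the extraneous comma removed so the sample parses as strict JSON):

```python
import json

doc = json.loads('''
{
  "SecurityGroups": [{
    "IpPermissionsEgress": [{"IpProtocol": "-1"}],
    "Description": "launch-wizard-2 created 2017-10-21T09:19:40.007-04:00",
    "GroupName": "MY1SG-PUBLIC-80",
    "VpcId": "vpc-ceed12b7",
    "OwnerId": "712503525534",
    "GroupId": "sg-ee0c979c"
  }]
}
''')

# Collect the GroupId of every security group with the wanted name.
group_ids = [g["GroupId"] for g in doc["SecurityGroups"]
             if g["GroupName"] == "MY1SG-PUBLIC-80"]
print(group_ids)
```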