Unable to Create a CloudWatch Healthcheck via Ansible - boto

I have an inventory file with an RDS endpoint:
[ems_db]
syd01-devops.ce4l9ofvbl4z.ap-southeast-2.rds.amazonaws.com
I wrote the following playbook to create a CloudWatch alarm:
---
- name: Get instance ec2 facts
  debug: var=groups.ems_db[0].split('.')[0]
  register: ems_db_name

- name: Display
  debug: var=ems_db_name

- name: Create CPU utilization metric alarm
  ec2_metric_alarm:
    state: present
    region: "{{ aws_region }}"
    name: "{{ ems_db_name }}-cpu-util"
    metric: "CPUUtilization"
    namespace: "AWS/RDS"
    statistic: Average
    comparison: ">="
    unit: "Percent"
    period: 300
    description: "It will be triggered when CPU utilization is more than 80% for 5 minutes"
    dimensions: { 'DBInstanceIdentifier': ems_db_name }
    alarm_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
    ok_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
But this results in
TASK: [cloudwatch | Get instance ec2 facts] ***********************************
ok: [127.0.0.1] => {
    "var": {
        "groups.ems_db[0].split('.')[0]": "syd01-devops"
    }
}

TASK: [cloudwatch | Display] **************************************************
ok: [127.0.0.1] => {
    "var": {
        "ems_db_name": {
            "invocation": {
                "module_args": "var=groups.ems_db[0].split('.')[0]",
                "module_complex_args": {},
                "module_name": "debug"
            },
            "var": {
                "groups.ems_db[0].split('.')[0]": "syd01-devops"
            },
            "verbose_always": true
        }
    }
}
TASK: [cloudwatch | Create CPU utilization metric alarm] **********************
failed: [127.0.0.1] => {"failed": true}
msg: BotoServerError: 400 Bad Request
<ErrorResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
  <Error>
    <Type>Sender</Type>
    <Code>MalformedInput</Code>
  </Error>
  <RequestId>f30470a3-2d65-11e6-b7cb-cdbbbb30b60b</RequestId>
</ErrorResponse>

FATAL: all hosts have already failed -- aborting
What is wrong here, and what can I do to solve it? I am new to this, but it certainly looks like a syntax issue, either in the task itself or in the way I am splitting the inventory endpoint.

The variable isn't being assigned in the first debug task. You may be able to make it work by changing it to a message enclosed in quotes and double braces (untested):
- name: Get instance ec2 facts
  debug: msg="{{ groups.ems_db[0].split('.')[0] }}"
  register: ems_db_name
However, I would use the set_fact module in that task (instead of debug) and assign the value to it. That way, you can reuse it in this and subsequent plays:
- name: Get instance ec2 facts
  set_fact: ems_db_name="{{ groups.ems_db[0].split('.')[0] }}"
UPDATE: add threshold: 80.0 to the last task, and the dimensions value needs to reference the instance id wrapped in double braces.
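Putting the update together, the last task would look something like this (an untested sketch; threshold and evaluation_periods are assumptions based on the alarm description, and the actions are written as lists):

```yaml
- name: Get instance ec2 facts
  set_fact:
    ems_db_name: "{{ groups.ems_db[0].split('.')[0] }}"

- name: Create CPU utilization metric alarm
  ec2_metric_alarm:
    state: present
    region: "{{ aws_region }}"
    name: "{{ ems_db_name }}-cpu-util"
    metric: "CPUUtilization"
    namespace: "AWS/RDS"
    statistic: Average
    comparison: ">="
    threshold: 80.0                       # the missing field behind MalformedInput
    unit: "Percent"
    period: 300
    evaluation_periods: 1                 # assumed: a single 5-minute period
    description: "Triggered when CPU utilization is more than 80% for 5 minutes"
    dimensions: { 'DBInstanceIdentifier': "{{ ems_db_name }}" }   # templated, not a bare name
    alarm_actions:
      - arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
    ok_actions:
      - arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
```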

Related

Is the Ansible Inventory compatible with OpenAPI standards?

I'm trying to specify the API used by Ansible's dynamic inventory.
Does anyone have experience with solving the incompatibility?
For a single host it might work as follows:
The Ansible json output will be like this:
{
    "ansible_host": "172.16.19.123",
    "proxy": "somehost.domain.fake"
}
The openapi.yml
paths:
  /api/inventory/host/:
    get:
      summary: Gets One Ansible Host
      parameters:
        - in: query
          name: hostname
          schema:
            type: string
          required: true
      responses:
        '200':
          description: OK
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/SingleHost'
components:
  schemas:
    SingleHost: # A Single Host for ansible-inventory --host [hostname] requests
      type: object
      properties:
        ansible_host:
          type: string
          description: IP-Address of the host
        ansible_port:
          type: integer
          description: ssh-port number of host
But when specifying the endpoint for ansible-inventory --list, it gets tricky.
Example of an Ansible inventory list (JSON):
{
    "HostGroupName-A": {
        "hosts": [
            "Host-A"
        ]
    },
    "HostGroupName-B": {
        "hosts": [
            "Host-A",
            "Host-B"
        ]
    }
}
Should I just avoid using openapi to specify this?
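Not necessarily; OpenAPI can describe an object with dynamic keys via additionalProperties, so any group name resolves to the same group schema. A sketch under that assumption (the schema names HostGroups/HostGroup are made up):

```yaml
components:
  schemas:
    HostGroups:               # response of ansible-inventory --list
      type: object
      additionalProperties:   # arbitrary group names as keys
        $ref: '#/components/schemas/HostGroup'
    HostGroup:
      type: object
      properties:
        hosts:
          type: array
          items:
            type: string      # hostnames
```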

Ansible: Invalid JSON when using --extra-vars

Hi community,
I have been struggling with an Ansible issue for days now.
Everything is executed within a Jenkins pipeline.
The Ansible command looks like:
sh """
    ansible-playbook ${env.WORKSPACE}/cost-optimization/ansible/manage_dynamo_db.yml \
        --extra-vars '{"projectNameDeployConfig":${projectNameDeployConfig},"numberOfReplicas":${numberOfReplicas},"dynamodbtask":${dynamodbtask}}'
"""
And the playbook is:
playbook.yml
---
- hosts: localhost
  vars:
    numberOfReplicas: "{{ numberOfReplicas }}"
    dynamodbtask: "{{ dynamodbtask }}"
    namespace: "{{ projectNameDeployConfig }}"
    status: "{{ status }}"
  tasks:
    - name: "Get replica number for the pods"
      command: aws dynamodb put-item --table-name pods_replicas
      register: getResult
      when: dynamodbtask == "get"
    - name: "Update replica number for specified pods"
      command: |
        aws dynamodb put-item
        --table-name pods_replicas
        --item '{"ProjectNameDeployConfig":{"S":{{namespace}}},"NumberReplicas":{"N":{{numberOfReplicas}}}}'
      register: updatePayload
      when: dynamodbtask == "put" and getResult is skipped
However, there is always the following error:
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["aws", "dynamodb", "put-item", "--table-name",
"pods_replicas", "--item", "{\"ProjectNameDeployConfig\":{\"S\":LERN-PolicyCenterV10},\"NumberReplicas\":
{\"N\":0}}"], "delta": "0:00:01.702107", "end": "2020-02-09 16:58:26.055579",
"msg": "non-zero return code", "rc": 255, "start": "2020-02-09 16:58:24.353472", "stderr": "\nError parsing parameter '--item': Invalid JSON: No JSON object could be decoded\nJSON received: {\"ProjectNameDeployConfig\":{\"S\":LERN-PolicyCenterV10},\"NumberReplicas\":{\"N\":0}}", "stderr_lines": ["", "Error parsing parameter '--item': Invalid JSON: No JSON object could be decoded", "JSON received: {\"ProjectNameDeployConfig\":{\"S\":LERN-PolicyCenterV10},\"NumberReplicas\":{\"N\":0}}"], "stdout": "", "stdout_lines": []}
There are two answers to your question: the simple one and the correct one.
The simple one is that had you actually fed the JSON into jq, or python -m json.tool, you would have observed that namespace is unquoted:
"{\"ProjectNameDeployConfig\":{\"S\": LERN-PolicyCenterV10 },\"NumberReplicas\": {\"N\":0}}"
where I added a huge amount of space, but didn't otherwise alter the quotes
The correct answer is that you should never use jinja2 to try and assemble structured text when there are filters that do so for you.
What you actually want is to use the to_json filter:
- name: "Update replica number for specified pods"
  command: |
    aws dynamodb put-item
    --table-name pods_replicas
    --item {{ dynamodb_item | to_json | quote }}
  vars:
    dynamodb_item:
      "ProjectNameDeployConfig":
        "S": '{{ projectNameDeployConfig }}'
      "NumberReplicas":
        "N": 0
  register: updatePayload
  when: dynamodbtask == "put" and getResult is skipped
although you'll notice that I changed your variable name, because namespace is the name of a type in jinja2. You can either call it ns, or do as I did and reuse the interpolated value from the vars: block at the top of the playbook, since it doesn't appear to have changed by that point.
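To see what the filter actually changes, a throwaway debug task can render the item before it ever reaches the AWS CLI (a sketch, not part of the original answer; the task name and hard-coded value are for illustration only):

```yaml
- name: Show the rendered item (hypothetical check task)
  debug:
    msg: "{{ dynamodb_item | to_json }}"
  vars:
    dynamodb_item:
      "ProjectNameDeployConfig":
        "S": "LERN-PolicyCenterV10"
      "NumberReplicas":
        "N": 0
```

The msg is now valid JSON, with the LERN-PolicyCenterV10 string properly quoted, which is exactly what the raw jinja2 interpolation failed to do.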

How to call the ansible play recursively based on until condition

I'm trying to execute a playbook recursively until a condition is satisfied, but somehow I couldn't achieve it. Can anyone suggest a solution?
Ansible version: 2.2.1.0
Here are my test plays.
main_play.yml:
---
- hosts: localhost
  tasks:
    - name: Wait till you get the needed thing in the get call
      include: loop.yml
Here is the loop.yml
- name: Wait until migration jobs reach DbcAllJobxxxxx
  uri:
    url: "http://<url->/jobs"
    method: GET
    headers:
      Content-Type: "application/json"
      Accept: "application/json"
      Postman-Token: "31d6"
      cache-control: "no-cache"
    return_content: yes
  register: migration_status
  ignore_errors: yes

- debug: msg="{{ migration_status }}"

# write mig-status to file
- copy: content="{{ migration_status.content }}" dest=/path/to/dest/migration_status.json

- name: Get the DbcAllJobxxxxx status from py script
  shell: python jsonrc.py /path/to/dest/migration_status.json
  register: pyout

- debug: msg="{{ pyout.stdout }}"

- include: loop.yml
  when: pyout.stdout != '1'
  ignore_errors: yes

- debug: msg="{{ pyout.stdout }}"
Requirement: the GET call returns JSON whose contents change over time, since it reports a dynamic status. I want to poll it continuously for the value of one key, which is the signal to trigger another event, so I need to wait until that key-value pair appears (it may disappear within the time frame, so I need to catch it at that point). To achieve this I parse the JSON with a Python script, capture the script's return value, check it, and call the same play again if the condition isn't satisfied.
Executing: ansible-playbook main_play.yml
Even when pyout.stdout == '1', it still throws ERROR! Unexpected Exception: maximum recursion depth exceeded. Did I miss anything? Help me in this regard.
BTW, I tried to achieve this with until using json_query, but the parsing became difficult, so I avoided that approach.
From Ansible 2.4 onwards there is the include_tasks builtin, which does work recursively.
main_play.yml:
---
- hosts: localhost
  tasks:
    - set_fact:
        counter: 1
    - include_tasks: loop.yml
loop.yml:
- set_fact:
    counter: "{{ counter | int + 1 }}"
- debug: msg="{{ counter }}"
- include_tasks: loop.yml
  when: counter | int < 5
Result:
PLAY [localhost] ****************************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [set_fact] *****************************************************************************************************************************************************************************
ok: [localhost]
TASK [include_tasks] ***********************************************************************************************************************************
included: loop.yml for localhost
TASK [set_fact] *****************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "2"
}
TASK [include_tasks] ************************************************************************************************************************************************************************
included: loop.yml for localhost
TASK [set_fact] *****************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "3"
}
TASK [include_tasks] ************************************************************************************************************************************************************************
included: loop.yml for localhost
TASK [set_fact] *****************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "4"
}
TASK [include_tasks] ************************************************************************************************************************************************************************
included: loop.yml for localhost
TASK [set_fact] *****************************************************************************************************************************************************************************
ok: [localhost]
TASK [debug] ********************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "5"
}
TASK [include_tasks] ************************************************************************************************************************************************************************
skipping: [localhost]
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=14 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
This code loops recursively until it gets the right message or reaches the maximum number of retries (in this case just set very high):
- name: Group of tasks that are tightly coupled
  block:
    - name: Increment the retry count
      set_fact:
        retry_count: "{{ 0 if retry_count is undefined else retry_count | int + 1 }}"
    - name: get state file {{ retry_count }}
      command:
        cmd: cat /tmp/test.txt
      register: out
    - debug:
        msg: "Found END message"
      when: "'END' in out.stdout_lines"
    - fail:
        msg: "END not reached"
      when: "not 'END' in out.stdout_lines"
  rescue:
    - fail:
        msg: Maximum retries of grouped tasks reached
      when: retry_count | int == 15
    - name: pause a moment
      pause:
        seconds: 1
      when: "not 'END' in out.stdout_lines"
    - debug:
        msg: "hasn't finished yet, retry"
      when: "not 'END' in out.stdout_lines"
    - include_tasks: loop_control.yml
      when: "not 'END' in out.stdout_lines"
It's obvious, I think: loop.yml includes itself.
"Here is the loop.yml"
...
- include: loop.yml
ERROR! Unexpected Exception: maximum recursion depth exceeded
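For completeness, on Ansible versions that support it, the polling in the original question can often be done without any recursion by putting retries/until directly on the uri task. An untested sketch; the response key and target value are assumptions, since the real JSON wasn't shown:

```yaml
- name: Wait until migration jobs reach DbcAllJobxxxxx
  uri:
    url: "http://<url->/jobs"   # same placeholder endpoint as in the question
    method: GET
    return_content: yes
  register: migration_status
  retries: 60                   # fixed upper bound instead of unbounded recursion
  delay: 10                     # seconds between attempts
  until: migration_status.json is defined and
         migration_status.json.status == "DbcAllJobDone"   # hypothetical key and value
```

This also removes the need for the intermediate file and the external Python script, since the uri module exposes the parsed response as migration_status.json.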

Ansible "set_fact" repository url from json file using filters like "from_json"

Using the Ansible set_fact module, I need to get a repository url from a JSON file using filters like from_json. I tried a couple of ways and still don't get how it should work.
- name: initial validation
  tags: bundle
  hosts: localhost
  connection: local
  tasks:
    - name: register bundle version_file
      include_vars:
        file: '/ansible/playbook/workbench-bundle/bundle.json'
      register: bundle
    - name: debug registered bundle file
      debug:
        msg: '{{ bundle }}'
I get the JSON that I wanted:
TASK [debug registered bundle file] ************************************************
ok: [127.0.0.1] => {
    "msg": {
        "ansible_facts": {
            "engine-config": "git#bitbucket.org/engine-config.git",
            "engine-monitor": "git#bitbucket.org/engine-monitor.git",
            "engine-server": "git#bitbucket.org/engine-server.git",
            "engine-worker": "git#bitbucket.org/engine-worker.git"
        },
        "changed": false
    }
}
And then I'm trying to select each value by key name, to use the value as a URL to npm install each package on separate instances.
- name: set_fact some paramater
  set_fact:
    engine_url: "{{ bundle.('engine-server') | from_json }}"
And then I get error:
fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "template error
while templating string: expected name or number. String: {{
bundle.('engine-server') }}"}
I tried many other ways, like this lookup, and it still fails with other errors. Can someone help me understand how I can pick each parameter and store it with set_fact? Thanks.
Here is a sample working code to set a variable like in the question (although I don't see much sense in it):
- name: initial validation
  tags: bundle
  hosts: localhost
  connection: local
  tasks:
    - name: register bundle version_file
      include_vars:
        file: '/ansible/playbook/workbench-bundle/bundle.json'
        name: bundle
    - debug:
        var: bundle
    - debug:
        var: bundle['engine-server']
    - name: set_fact some paramater
      set_fact:
        engine_url: "{{ bundle['engine-server'] }}"
The above assumes your input data (which you did not include) is:
{
    "engine-config": "git#bitbucket.org/engine-config.git",
    "engine-monitor": "git#bitbucket.org/engine-monitor.git",
    "engine-server": "git#bitbucket.org/engine-server.git",
    "engine-worker": "git#bitbucket.org/engine-worker.git"
}
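If the end goal is to act on every entry rather than a single key, the same include_vars result can be iterated with the dict2items filter (Ansible 2.6+; an untested sketch, not part of the original answer):

```yaml
- name: Show every repository url in the bundle
  debug:
    msg: "{{ item.key }} -> {{ item.value }}"
  loop: "{{ bundle | dict2items }}"
```

Each iteration then has item.key (e.g. engine-server) and item.value (the repository url), which could feed a hypothetical npm install task per package.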

How to use return values of one task in another task for a different host in ansible

I was trying to set up MySQL master-slave replication with Ansible for a host group consisting of 2 MySQL hosts.
Here is my scenario:
I run one task on the 1st host (skipping the 2nd host); this task (i.e. master replication status) returns some values like Position, File etc.
Then I run another task on the 2nd host (skipping the 1st host); this task uses the return values of the 1st task, like master.Position, master.File etc.
Now, when I run the playbook, the variables from the 1st task do not seem to be available in the 2nd task.
Inventory File
[mysql]
stagmysql01 ansible_host=1.1.1.1 ansible_ssh_user=ansible ansible_connection=ssh
stagmysql02 ansible_host=1.1.1.2 ansible_ssh_user=ansible ansible_connection=ssh
Tasks on Master
- name: Mysql - Check master replication status.
  mysql_replication: mode=getmaster
  register: master

- debug: var=master
Tasks on Slave
- name: Mysql - Configure replication on the slave.
  mysql_replication:
    mode: changemaster
    master_host: "{{ replication_master }}"
    master_user: "{{ replication_user }}"
    master_password: "{{ replication_pass }}"
    master_log_file: "{{ master.File }}"
    master_log_pos: "{{ master.Position }}"
  ignore_errors: True
Master Output
TASK [Mysql_Base : Mysql - Check master replication status.] ****************
skipping: [stagmysql02]
ok: [stagmysql01]

TASK [Mysql_Base : debug] ***************************************************
ok: [stagmysql01] => {
    "master": {
        "Binlog_Do_DB": "",
        "Binlog_Ignore_DB": "mysql,performance_schema",
        "Executed_Gtid_Set": "",
        "File": "mysql-bin.000003",
        "Is_Master": true,
        "Position": 64687163,
        "changed": false,
        "failed": false
    }
}
ok: [stagmysql02] => {
    "master": {
        "changed": false,
        "skip_reason": "Conditional result was False",
        "skipped": true
    }
}
Slave Output
TASK [Mysql_Base : Mysql - Configure replication on the slave.] *************
skipping: [stagmysql01]
fatal: [stagmysql02]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'File'\n\nThe error appears to have been in '/root/ansible/roles/Mysql_Base/tasks/replication.yml': line 30, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Mysql - Configure replication on the slave.\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'File'"}
...ignoring
As you can see above, the 2nd task failed on the 2nd host because of undefined variables, even though the required values are present in the 1st task's result on the 1st host.
How do I use the variables returned on the 1st host in another task on the 2nd host?
P.S: I have seen the approach of using {{ hostvars['inventory_hostname']['variable'] }}. However, I'm quite confused by it, as the inventory hostname or IP address needs to be hard-coded. I was looking for a common template that can be used with different inventory files and playbooks.
I was able to solve my problem by assigning the variables to a new dummy host and then using them across the playbook with hostvars.
A similar solution was already mentioned in one of the answers to How do I set register a variable to persist between plays in ansible? However, I did not notice it until I posted this question.
Here is what I did in the Ansible tasks:
1. Created a dummy host master_value_holder and defined the required variables (here I needed master_log_file and master_log_pos).
2. Accessed the variables with hostvars['master_value_holder']['master_log_file'].
Tasks on Master
- name: Mysql - Check master replication status.
  mysql_replication: mode=getmaster
  register: master

- name: "Add master return values to a dummy host"
  add_host:
    name: "master_value_holder"
    master_log_file: "{{ master.File }}"
    master_log_pos: "{{ master.Position }}"
Tasks for Slave
- name: Mysql - Displaying master replication status
  debug: msg="Master Bin Log File is {{ hostvars['master_value_holder']['master_log_file'] }} and Master Bin Log Position is {{ hostvars['master_value_holder']['master_log_pos'] }}"

- name: Mysql - Configure replication on the slave.
  mysql_replication:
    mode: changemaster
    master_host: "{{ replication_master }}"
    master_user: "{{ replication_user }}"
    master_password: "{{ replication_pass }}"
    master_log_file: "{{ hostvars['master_value_holder']['master_log_file'] }}"
    master_log_pos: "{{ hostvars['master_value_holder']['master_log_pos'] }}"
  when: ansible_eth0.ipv4.address != replication_master and not slave.Slave_SQL_Running
Output
TASK [Mysql_Base : Mysql - Check master replication status.] ****************
skipping: [stagmysql02]
ok: [stagmysql01]
TASK [AZ-Mysql_Base : Add master return values to a dummy host] ****************
changed: [stagmysql01]
TASK [AZ-Mysql_Base : Mysql - Displaying master replication status] ************
ok: [stagmysql01] => {
"msg": "Master Bin Log File is mysql-bin.000001 and Master Bin Log Position is 154"
}
ok: [stagmysql02] => {
"msg": "Master Bin Log File is mysql-bin.000001 and Master Bin Log Position is 154"
}
TASK [AZ-Mysql_Base : Mysql - Configure replication on the slave.] *************
skipping: [stagmysql01]
skipping: [stagmysql02]
As you can see from the output above, the master replication status is now available on both hosts.
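As a side note, the dummy host isn't strictly required: registered results are stored per host, so the slave can also read the master's facts directly through hostvars, keying on the group instead of a hard-coded hostname. An untested sketch of that alternative, assuming the [mysql] group from the inventory above with the master listed first:

```yaml
- name: Mysql - Configure replication on the slave (hostvars alternative)
  mysql_replication:
    mode: changemaster
    master_host: "{{ replication_master }}"
    master_user: "{{ replication_user }}"
    master_password: "{{ replication_pass }}"
    # the first host of the [mysql] group holds the registered 'master' result
    master_log_file: "{{ hostvars[groups['mysql'][0]]['master']['File'] }}"
    master_log_pos: "{{ hostvars[groups['mysql'][0]]['master']['Position'] }}"
  when: inventory_hostname != groups['mysql'][0]
```

This keeps the playbook portable across inventories, since only the group name appears in the tasks.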