Using Artillery.io 1.6.0-10, I call an API that returns JSON and try to capture one of the values for later use in the flow; however, the capture doesn't seem to be working. Here is the simplified code:
get_ddg.yml
config:
  target: "https://api.duckduckgo.com"
  phases:
    - duration: 3
      arrivalCount: 1
scenarios:
  - name: "Get search"
    flow:
      - get:
          url: "/?q=DuckDuckGo&format=json"
          capture:
            json: "$.Abstract"
            as: "abstract"
      - log: "Abstract: {{ $abstract }}"
When I run Artillery, the value is empty:
$ artillery run get_ddg.yml
Started phase 0, duration: 3s @ 10:28:34(+0200) 2017-10-25
⠋ Abstract:  <----- EMPTY! NO VALUE FOR $abstract
Report @ 10:28:37(+0200) 2017-10-25
  Scenarios launched:  1
  Scenarios completed: 1
  Requests completed:  1
  Concurrent users:    1
  RPS sent: 2.08
  Request latency:
    min: 311.9
    max: 311.9
    median: 311.9
    p95: NaN
    p99: NaN
  Scenario duration:
    min: 349.5
    max: 349.5
    median: 349.5
    p95: NaN
    p99: NaN
  Codes:
    200: 1
Any help is much appreciated.
Found the solution. The problem was how the variable is referenced after capture. The correct way to reference it is without the '$':
- log: "Abstract: {{ abstract }}"
I am using Ansible and community.mysql.mysql_query to perform some sanity checks on my database.
I have already figured out that I need to register the output, and that the output holds a parameter named query_result that contains the returned data.
My problem is that all the examples are for a standard SELECT, where you use param.query_result['column'], but mine has a COUNT(*).
My output for this debug:

- name: debug in db role
  debug:
    msg: |
      result : {{ first_query.query_result }}

is:

ok: [localhost] => {
    "msg": "result : [[{u'COUNT(*)': 16}]]\n"
}
Since the column name contains a *, I cannot access it in the playbook.
Any thoughts on how I can accomplish this and actually use that count value of 16?
Thanks
That was fast on my part ...
- name: debug in db role
  debug:
    msg: |
      result : {{ first_query.query_result[0][0]['COUNT(*)'] }}
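A variant that avoids the awkward key entirely (my suggestion, not part of the original answer): alias the aggregate in SQL so the result column has a plain name. A sketch, assuming a hypothetical database mydb and table my_table:

- name: Count rows
  community.mysql.mysql_query:
    login_db: mydb                # hypothetical database name
    query: SELECT COUNT(*) AS row_count FROM my_table
  register: first_query

- name: Use the count
  debug:
    msg: "count is {{ first_query.query_result[0][0]['row_count'] }}"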
When using Ansible I am able to execute the requests when they are passed one by one, like this:
---
- name: Using a REST API
  become: false
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Let's get list of Interfaces"
    - name: Adding a Bridge-Interface
      uri:
        url: https://router/rest/interface/bridge
        method: PUT
        validate_certs: false
        url_username: ansible
        url_password: ansible
        force_basic_auth: yes
        body_format: json
        status_code: 201
        body: '{"name":"bridge_ansible"}'
      register: results
    - debug:
        var: results
I want to iterate through a set of commands, so I thought of looping, but that does not work for me. I am using this code:
---
- name: Using a REST API
  become: false
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Let's get list of Interfaces"
    - name: Adding a Bridge-Interface
      uri:
        url: "{{item.url}}"
        method: PUT
        validate_certs: false
        url_username: ansible
        url_password: ansible
        force_basic_auth: yes
        body_format: json
        status_code: 201
        body: "{{item.body}}"
      register: results
      loop:
        - {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}
        - {body:'{"address":"6.6.6.6", "interface":"bridge_ansible"}', url:'https://router/rest/ip/address'}
    - debug:
        var: results
I get an error for this JSON object in the loop: {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}. I think my syntax is not correct, but I cannot work out the right form. Can someone please help?
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
did not find expected ',' or '}'
The error appears to be in '/ansible-playbook/1-demo.yaml': line 23, column 19, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
loop:
- {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}
^ here
This one looks easy to fix. It seems that there is a value started
with a quote, and the YAML parser is expecting to see the line ended
with the same kind of quote. For instance:
when: "ok" in result.stdout
Could be written as:
when: '"ok" in result.stdout'
Or equivalently:
when: "'ok' in result.stdout"
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
Thanks
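For what it's worth, a minimal sketch of a loop that parses cleanly: in YAML flow mappings, each plain-scalar key needs a space after the colon, and the JSON bodies stay single-quoted:

      loop:
        - { body: '{"name":"bridge_ansible"}', url: 'https://router/rest/interface/bridge' }
        - { body: '{"address":"6.6.6.6", "interface":"bridge_ansible"}', url: 'https://router/rest/ip/address' }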
I am running the JavaScript GitHub Actions example, and it works just fine when I have
on: [push]
but not when I have
on:
  schedule:
    - cron: '*/5 * * * *'
I expect the github action to run every 5 minutes but it doesn't seem to run at all.
Here is the rest of my code for reference
.github/workflows/main.yml
on:
  schedule:
    - cron: '*/5 * * * *'
jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: A job to say hello
    steps:
      - name: Hello world action step
        id: hello
        uses: StephenVNelson/website/@3-experiment-with-actions
        with:
          who-to-greet: 'Mona the Octocat'
      # Use the output from the `hello` step
      - name: Get the output time
        run: echo "The time was ${{ steps.hello.outputs.time }}"
./action.yml
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: 'World'
outputs:
  time: # id of output
    description: 'The time we greeted you'
runs:
  using: 'node12'
  main: './github-actions/main.js'
./github-actions/main.js
const core = require('@actions/core');
const github = require('@actions/github');

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput('who-to-greet');
  console.log(`Hello ${nameToGreet}!`);
  const time = (new Date()).toTimeString();
  core.setOutput("time", time);
  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2)
  console.log(`The event payload: ${payload}`);
} catch (error) {
  core.setFailed(error.message);
}
You won't be able to schedule it for every 5 minutes as the "shortest interval you can run scheduled workflows is once every 15 minutes":
https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule
Change it to '*/15 * * * *' and you'll be fine.
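For example, the trigger block becomes:

on:
  schedule:
    - cron: '*/15 * * * *'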
As mentioned in the GitHub documentation about Scheduled events
The schedule event can be delayed during periods of high loads of GitHub Actions workflow runs. High load times include the start of every hour. To decrease the chance of delay, schedule your workflow to run at a different time of the hour.
Read further: No assurance on scheduled jobs?
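For instance, to keep a 15-minute cadence while avoiding the high-load start of the hour, you could offset the minute values (the exact offsets here are arbitrary):

on:
  schedule:
    - cron: '7,22,37,52 * * * *'  # every 15 minutes, offset from :00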
I am trying to configure Envoy as a load balancer and am currently stuck on fallbacks. In my playground cluster I have 3 backend servers and Envoy as a front proxy. I generate some traffic on Envoy using siege and watch the responses. While doing this I stop one of the backends.
What I want: Envoy should resend failed requests from the stopped backend to a healthy one, so I get no 5xx responses.
What I get: when stopping a backend I get some 503 responses, and then everything becomes normal again.
What am I doing wrong? I think fallback_policy should provide this functionality, but it doesn't work.
Here is my config file:
node:
  id: LoadBalancer_01
  cluster: HighloadCluster
admin:
  access_log_path: /var/log/envoy/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: http_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: request_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              require_tls: NONE
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: backend_service
                  timeout: 1.5s
                  retry_policy:
                    retry_on: 5xx
                    num_retries: 3
                    per_try_timeout: 3s
          http_filters:
          - name: envoy.router
            typed_config:
              "@type": type.googleapis.com/envoy.config.filter.http.router.v2.Router
          access_log:
          - name: envoy.file_access_log
            typed_config:
              "@type": type.googleapis.com/envoy.config.accesslog.v2.FileAccessLog
              path: /var/log/envoy/access.log
  clusters:
  - name: backend_service
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    lb_subset_config:
      fallback_policy: ANY_ENDPOINT
    outlier_detection:
      consecutive_5xx: 1
      interval: 10s
    load_assignment:
      cluster_name: backend_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 1.1.1.1
                port_value: 10000
        - endpoint:
            address:
              socket_address:
                address: 2.2.2.2
                port_value: 10000
        - endpoint:
            address:
              socket_address:
                address: 3.3.3.3
                port_value: 10000
    health_checks:
    - http_health_check:
        path: /api/liveness-probe
      timeout: 1s
      interval: 30s
      unhealthy_interval: 10s
      unhealthy_threshold: 2
      healthy_threshold: 1
      always_log_health_check_failures: true
      event_log_path: /var/log/envoy/health_check.log
TL;DR
You can use a circuit breaker (see config example below), alongside your retry_policy and outlier_detection.
Explanation
Context
I have successfully reproduced your issue with your config (except the health_checks part, which was not needed to reproduce the problem).
I ran Envoy and my backend (2 load-balanced apps) and generated some traffic with hey (50 workers making requests concurrently for 10 seconds):
hey -c 50 -z 10s http://envoy:8080
Then I stopped one backend app around 5s after the command started.
Result
When digging into Envoy's admin /stats endpoint, I noticed some interesting stuff:
cluster.backend_service.upstream_rq_200: 17899
cluster.backend_service.upstream_rq_503: 28
cluster.backend_service.upstream_rq_retry_overflow: 28
cluster.backend_service.upstream_rq_retry_success: 3
cluster.backend_service.upstream_rq_total: 17930
There were indeed 28 responses with a 503 status when I stopped one backend app. But the retry policy partly worked: 3 retries were successful (upstream_rq_retry_success), while 28 other retries failed (upstream_rq_retry_overflow), resulting in 503 errors. Why?
From the cluster stats docs:
upstream_rq_retry_overflow : Total requests not retried due to circuit breaking or exceeding the retry budget
Fix
To solve this, we can add a circuit breaker to the cluster (I have been generous with the max_requests, max_pending_requests and max_retries parameters for the example). The interesting part is the retry_budget.budget_percent value:
clusters:
- name: backend_service
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  outlier_detection:
    consecutive_5xx: 1
    interval: 10s
  circuit_breakers:
    thresholds:
    - priority: "DEFAULT"
      max_requests: 0xffffffff
      max_pending_requests: 0xffffffff
      max_retries: 0xffffffff
      retry_budget:
        budget_percent:
          value: 100.0
From the retry_budget docs:
budget_percent: Specifies the limit on concurrent retries as a percentage of the sum of active requests and active pending requests. For example, if there are 100 active requests and the budget_percent is set to 25, there may be 25 active retries.
This parameter is optional. Defaults to 20%.
I set it to 100.0 to allow 100% of active retries.
When running the example again with this new config, there is no more upstream_rq_retry_overflow, and thus no more 503 errors:
cluster.backend_service.upstream_rq_200: 17051
cluster.backend_service.upstream_rq_retry_overflow: 0
cluster.backend_service.upstream_rq_retry_success: 5
cluster.backend_service.upstream_rq_total: 17056
Note that if you experience upstream_rq_retry_limit_exceeded, you can try to set and increase retry_budget.min_retry_concurrency (default when not set is 3):
retry_budget:
  budget_percent:
    value: 100.0
  min_retry_concurrency: 10
I have an inventory file which has an RDS endpoint:
[ems_db]
syd01-devops.ce4l9ofvbl4z.ap-southeast-2.rds.amazonaws.com
I wrote the following playbook to create a CloudWatch alarm:
---
- name: Get instance ec2 facts
  debug: var=groups.ems_db[0].split('.')[0]
  register: ems_db_name

- name: Display
  debug: var=ems_db_name

- name: Create CPU utilization metric alarm
  ec2_metric_alarm:
    state: present
    region: "{{aws_region}}"
    name: "{{ems_db_name}}-cpu-util"
    metric: "CPUUtilization"
    namespace: "AWS/RDS"
    statistic: Average
    comparison: ">="
    unit: "Percent"
    period: 300
    description: "It will be triggered when CPU utilization is more than 80% for 5 minutes"
    dimensions: { 'DBInstanceIdentifier' : ems_db_name }
    alarm_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
    ok_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
But this results in
TASK: [cloudwatch | Get instance ec2 facts] ***********************************
ok: [127.0.0.1] => {
    "var": {
        "groups.ems_db[0].split('.')[0]": "syd01-devops"
    }
}

TASK: [cloudwatch | Display] **************************************************
ok: [127.0.0.1] => {
    "var": {
        "ems_db_name": {
            "invocation": {
                "module_args": "var=groups.ems_db[0].split('.')[0]",
                "module_complex_args": {},
                "module_name": "debug"
            },
            "var": {
                "groups.ems_db[0].split('.')[0]": "syd01-devops"
            },
            "verbose_always": true
        }
    }
}

TASK: [cloudwatch | Create CPU utilization metric alarm] **********************
failed: [127.0.0.1] => {"failed": true}
msg: BotoServerError: 400 Bad Request
<ErrorResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
  <Error>
    <Type>Sender</Type>
    <Code>MalformedInput</Code>
  </Error>
  <RequestId>f30470a3-2d65-11e6-b7cb-cdbbbb30b60b</RequestId>
</ErrorResponse>

FATAL: all hosts have already failed -- aborting
What is wrong here? What can I do to solve this? I am new to this, but it surely seems like a syntax issue on my part, or in the way I am picking up and splitting the inventory endpoint.
The first debug task doesn't assign the plain value to the registered variable, though you may be able to get it if you change the task to a message and enclose the expression in quotes and double braces (untested):
- name: Get instance ec2 facts
  debug: msg="{{groups.ems_db[0].split('.')[0]}}"
  register: ems_db_name
However, I would use the set_fact module in that task (instead of debug) and assign the value to it. That way, you can reuse it in this and subsequent parts of the play.
- name: Get instance ec2 facts
  set_fact: ems_db_name="{{groups.ems_db[0].split('.')[0]}}"
UPDATE: Add threshold: 80.0 to the last task, and the dimensions value needs to reference the instance id wrapped in double braces.
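Putting both fixes together, a sketch of the revised tasks (untested; evaluation_periods is an illustrative assumption, not from the original):

- name: Get instance ec2 facts
  set_fact: ems_db_name="{{ groups.ems_db[0].split('.')[0] }}"

- name: Create CPU utilization metric alarm
  ec2_metric_alarm:
    state: present
    region: "{{ aws_region }}"
    name: "{{ ems_db_name }}-cpu-util"
    metric: "CPUUtilization"
    namespace: "AWS/RDS"
    statistic: Average
    comparison: ">="
    threshold: 80.0              # the value the UPDATE says was missing
    evaluation_periods: 1        # illustrative assumption
    unit: "Percent"
    period: 300
    description: "It will be triggered when CPU utilization is more than 80% for 5 minutes"
    dimensions: { 'DBInstanceIdentifier': "{{ ems_db_name }}" }
    alarm_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test
    ok_actions: arn:aws:sns:ap-southeast-2:493552970418:cloudwatch_test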