I am trying to clean up the JMeter Docker + CI pipeline of our functional tests. I see Taurus has a clean way to run JMeter scripts in a container, and it does the heavy lifting of downloading the version of JMeter I want and installing the plugins my scripts use - excellent.
Now I need to generate the reports as junit.xml so I can keep the reporting consistent. Up until now I was using a modified fork of https://github.com/tguzik/m2u to convert JTL reports to junit.xml.
I'd appreciate any help with how I can get the request and response (code & body) for all samples into the junit.xml, at least for the failed samples.
I tried a few variations of the Taurus YAML:
reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
- module: junit-xml
  filename: report/report.xml
  data-source: sample-labels

reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
- module: passfail
- module: junit-xml
  filename: report/report.xml
  data-source: pass-fail
I also added a few passfail criteria variations to the passfail module; that did not help.
After fiddling with this for a few hours, I believe there is no clean way to get anything meaningful into the junit.xml report from the junit-xml module in Taurus. It appears barebones. I also noticed that it can mess up the default Jenkins JUnit plugin test result summary.
So I settled on the following YAML settings and continued to use m2u.jar to convert the JTL to junit.xml:
modules:
  jmeter:
    path: ~/.bzt/jmeter-taurus/bin/jmeter
    download-link: https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-{version}.zip
    version: 5.3
    force-ctg: true
    detect-plugins: true
    plugins:
    - jpgc-json=2.2
    - jmeter-ftp
    - jpgc-casutg
    xml-jtl-flags:
      xml: true
      fieldNames: true
      time: true
      timestamp: true
      latency: true
      connectTime: false
      success: true
      label: true
      code: true
      message: true
      threadName: true
      dataType: false
      encoding: false
      assertions: true
      subresults: true
      responseData: false
      samplerData: false
      responseHeaders: false
      requestHeaders: true
      responseDataOnError: true
      saveAssertionResultsFailureMessage: true
      bytes: true
      threadCounts: false
      url: true

execution:
- write-xml-jtl: full
  scenario:
    script: v_jmxfilename
    properties:
      environment: v_env

reporting:
- module: console
- module: final_stats
  summary: true
  percentiles: true
  test-duration: true
# - module: junit-xml
#   filename: report/junit-report.xml
#   data-source: sample-labels
As per the JUnit-XML-Reporter documentation, this is currently not possible:
This reporter provides test results in JUnit XML format parseable by Jenkins JUnit Plugin. Reporter has two options:
filename (full path to report file, optional. By default xunit.xml in artifacts dir)
data-source (which data source to use: sample-labels or pass-fail)
If sample-labels used as source data, report will contain urls with test errors. If pass-fail used as source data, report will contain Pass/Fail criteria information. Please note that you have to place pass-fail module in reporters list, before junit-xml module.
Taurus is not only for JMeter; it supports many more tools, and not all of them provide the possibility of storing request and response data, so the options I can think of are:
Add a Listener to your Test Plan and choose which metrics you need to store in a separate file; the easiest one to use is the Flexible File Writer.
Use the ShellExec Service to run your m2u.jar from the Taurus config YAML (see the sketch below).
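For the second option, here is a minimal shellexec sketch. The m2u.jar location, its --input/--output flags, and the trace.jtl filename are assumptions, so adjust them to your fork's CLI and to whichever XML JTL your run actually produces:
services:
- module: shellexec
  post-process:
  # assumptions: the XML JTL ends up as trace.jtl in the artifacts dir and the
  # m2u fork accepts --input/--output; adjust both to your setup
  - java -jar /opt/m2u/m2u.jar --input "${TAURUS_ARTIFACTS_DIR}/trace.jtl" --output report/junit-report.xml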
Related
I have a lot of JSON files in my bucket in GCS and I need to create a table for each one.
Normally, I do it manually in BigQuery: selecting the format (JSON), giving it a name, and using the automatically detected schema.
Is there any way of creating multiple tables at once using data from GCS?
Disclaimer: I have authored a blog post on this topic at https://medium.com/p/54228d166a7d
Essentially you can leverage Cloud Workflows to automate this process.
A sample workflow would be:
ProcessItem:
  params: [project, gcsPath]
  steps:
    - initialize:
        assign:
          - dataset: wf_samples
          - input: ${gcsPath}
    # omitted parts for simplicity
    - runLoadJob:
        call: BQJobsInsertLoadJob_FromGCS
        args:
          project: ${project}
          configuration:
            jobType: LOAD
            load:
              sourceUris: ${gcsPath}
              schema:
                fields:
                  - name: "mydate"
                    type: "TIMESTAMP"
                  - name: "col1"
                    type: "FLOAT"
                  - name: "col2"
                    type: "FLOAT"
              destinationTable:
                projectId: ${project}
                datasetId: ${dataset}
                tableId: ${"table_"+output.index}
        result: loadJobResult
    - final:
        return: ${loadJobResult}

BQJobsInsertLoadJob_FromGCS:
  params: [project, configuration]
  steps:
    - runJob:
        call: http.post
        args:
          url: ${"https://bigquery.googleapis.com/bigquery/v2/projects/"+project+"/jobs"}
          auth:
            type: OAuth2
          body:
            configuration: ${configuration}
        result: queryResult
        next: queryCompleted
    - queryCompleted:
        return: ${queryResult.body}
In this answer you have a solution to recursively go through your bucket and load CSV files into BQ. You can adapt this code with, for instance:
gsutil ls gs://mybucket/**.json | \
xargs -I{} echo {} | \
awk '{n=split($1,A,"/"); q=split(A[n],B,"."); print "mydataset."B[1]" "$0}' | \
xargs -I{} sh -c 'bq --location=YOUR_LOCATION load --replace=false --autodetect --source_format=NEWLINE_DELIMITED_JSON {}'
This is if you want to run the load jobs in parallel manually.
If you want to add automation, you can use Workflows as @Pentium10 recommends, or plug the Bash command into a Cloud Run instance coupled with a Scheduler, for instance (you can look at this repo for inspiration).
I'm working with Filebeat 7.9.3 as a DaemonSet on k8s.
I'm not able to parse the Docker container logs of a Spring Boot app that writes logs to stdout in JSON.
The fact is that every row of the Spring Boot app logs is written in this way:
{ "#timestamp": "2020-11-16T13:39:57.760Z", "log.level": "INFO", "message": "Checking comment 'se' done = true", "service.name": "conduit-be-moderator", "event.dataset": "conduit-be-moderator.log", "process.thread.name": "http-nio-8081-exec-2", "log.logger": "it.koopa.app.ModeratorController", "transaction.id": "1ed5c62964ff0cc2", "trace.id": "20b4b28a3817c9494a91de8720522972"}
But the corresponding Docker log file under /var/log/containers/ writes the log in this way:
{
  "log": "{\"#timestamp\":\"2020-11-16T11:27:32.273Z\", \"log.level\": \"INFO\", \"message\":\"Checking comment 'a'\", \"service.name\":\"conduit-be-moderator\",\"event.dataset\":\"conduit-be-moderator.log\",\"process.thread.name\":\"http-nio-8081-exec-4\",\"log.logger\":\"it.koopa.app.ModeratorController\",\"transaction.id\":\"9d3ad972dba65117\",\"trace.id\":\"8373edba92808d5e838e07c7f34af6c7\"}\n",
  "stream": "stdout",
  "time": "2020-11-16T11:27:32.274816903Z"
}
I always receive this in the Filebeat logs:
Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
This is my Filebeat config, which tries to parse the JSON log message from the Docker logs; I'm using decode_json_fields to try to catch the Elasticsearch standard fields (I'm using co.elastic.logging.logback.EcsEncoder):
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    #json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: log
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ["log"]
          overwrite_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
How can I do this?
As processors are applied before the JSON parser of the input, you will need to first configure the decode_json_fields processor, which will allow you to decode your JSON log field. You will then be able to apply the json configuration of the input to the message field. Something like:
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: message
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ['log']
          expand_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
This configuration assumes that all your logs use the JSON format. Otherwise, you will probably need to add an exclude or include regex pattern; see the sketch below.
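For instance, something along these lines on the container input. The patterns below are illustrative only and need to be adjusted to whatever distinguishes the JSON lines in your logs:
- type: container
  paths:
    - /var/log/containers/*.log
  # illustrative patterns only: keep the Spring Boot app's lines and drop blank lines
  include_lines: ['conduit-be-moderator']
  exclude_lines: ['^\s*$']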
I've set up a sample Kubernetes cluster using minikube, with Elasticsearch and Kibana 6.8.6 and Filebeat 7.5.1.
My application generates log messages in JSON format:
{"#timestamp":"2019-12-30T21:59:48+0000","message":"example","data":"data-462"}
I can see the log message in Kibana, but my JSON log is embedded inside the "message" attribute as a string.
I configured json.keys_under_root: true to no effect (as stated in the documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-config-json).
My configuration:
filebeat.yml: |-
  migration.6_to_7.enabled: true
  filebeat.config:
    modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        hints.enabled: true
        hints.default_config.enabled: false
        json.keys_under_root: true
        json.add_error_key: true
  output.elasticsearch:
    hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    username: ${ELASTICSEARCH_USERNAME}
    password: ${ELASTICSEARCH_PASSWORD}
kubernetes.yml: |-
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
I need the "message" and "data" fields as separate fields in Kibana.
What am I missing?
Try adding json.message_key: message to your Filebeat configuration; see the sketch below.
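A minimal sketch of where that could go, assuming the docker input in your kubernetes.yml is the configuration that actually gets applied to these containers (adjust to wherever your json.* settings really live):
kubernetes.yml: |-
  - type: docker
    containers.ids:
      - "*"
    # assumption: the json.* options have to sit on the input that is actually used
    json.message_key: message
    json.keys_under_root: true
    json.add_error_key: true
    processors:
      - add_kubernetes_metadata:
          in_cluster: true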
This is my JSON log file. I'm trying to store the file in Elasticsearch through Logstash.
{"message":"IM: Orchestration","level":"info"}
{"message":"Investment Management","level":"info"}
Here is my filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - D:/Development_Avecto/test-log/tn-logs/im.log
    json.keys_under_root: true
    json.add_error_key: true
    processors:
      - decode_json_fields:
          fields: ["message"]

output.logstash:
  hosts: ["localhost:5044"]
And this is my Logstash pipeline config:
input {
  beats {
    port => "5044"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "data"
  }
}
I am not able to view the output in Elasticsearch, and I am not able to find what the error is.
Filebeat log:
2019-06-18T11:30:03.448+0530 INFO registrar/registrar.go:134 Loading registrar data from D:\Development_Avecto\filebeat-6.6.2-windows-x86_64\data\registry
2019-06-18T11:30:03.448+0530 INFO registrar/registrar.go:141 States Loaded from registrar: 10
2019-06-18T11:30:03.448+0530 WARN beater/filebeat.go:367 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-06-18T11:30:03.448+0530 INFO crawler/crawler.go:72 Loading Inputs: 1
2019-06-18T11:30:03.448+0530 INFO log/input.go:138 Configured paths: [D:\Development_Avecto\test-log\tn-logs\im.log]
2019-06-18T11:30:03.448+0530 INFO input/input.go:114 Starting input of type: log; ID: 16965758110699470044
2019-06-18T11:30:03.449+0530 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-06-18T11:30:34.842+0530 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":312,"time":{"ms":312}},"total":{"ticks":390,"time":{"ms":390},"value":390},"user":{"ticks":78,"time":{"ms":78}}},"handles":{"open":213},"info":{"ephemeral_id":"66983518-39e6-461c-886d-a1f99da6631d","uptime":{"ms":30522}},"memstats":{"gc_next":4194304,"memory_alloc":2963720,"memory_total":4359488,"rss":22421504}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":10,"update":1},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":4}}}}}
https://www.elastic.co/guide/en/ecs-logging/dotnet/master/setup.html
Check step 3 at the bottom of the page for the config you need to put in your filebeat.yaml file:
filebeat.inputs:
  - type: log
    paths: /path/to/logs.json
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.expand_keys: true
I have a service call that returns the system status in JSON format. I want to use the Ansible uri module to make the call and then inspect the response to decide whether the system is up or down.
{"id":"20161024140306","version":"5.6.1","status":"UP"}
This is the JSON that is returned.
This is the Ansible task that makes the call:
- name: check sonar web is up
  uri:
    url: http://sonarhost:9000/sonar/api/system/status
    method: GET
    return_content: yes
    status_code: 200
    body_format: json
  register: data
The question is how I can access data and inspect it; as per the Ansible documentation, this is how we store the results of a call. I am not sure of the final step, which is checking the status.
This works for me.
- name: check sonar web is up
  uri:
    url: http://sonarhost:9000/sonar/api/system/status
    method: GET
    return_content: yes
    status_code: 200
    body_format: json
  register: result
  until: result.json.status == "UP"
  retries: 10
  delay: 30
Notice that result is an Ansible dictionary, and when you set return_content: yes the response is added to this dictionary and is accessible using the json key.
Also ensure you have indented the task properly as shown above.
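If you want a hard failure instead of retries, a small follow-up task along these lines should also work (a sketch reusing the same result variable; fail_msg needs a reasonably recent Ansible):
# sketch: fail explicitly when the API does not report UP
- name: assert sonar web reports UP
  assert:
    that:
      - result.json.status == "UP"
    fail_msg: "SonarQube status is {{ result.json.status }}"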
You've made the right first step by saving the output into a variable.
The next step is to use either a when: or failed_when: statement in your next task, which will then switch based on the contents of the variable. There is a whole powerful set of statements for use in these, the Jinja2 built-in filters, but they are not really linked well into the Ansible documentation or summarised nicely.
I use super explicitly named output variables, so they make sense to me later in the playbook :) I would probably write yours something like:
- name: check sonar web is up
  uri:
    url: http://sonarhost:9000/sonar/api/system/status
    method: GET
    return_content: yes
    status_code: 200
    body_format: json
  register: sonar_web_api_status_output

- name: do this thing if it is NOT up
  shell: echo "OMG it's not working!"
  when: sonar_web_api_status_output.content.find('UP') == -1
That is, the text "UP" is not found in the variable's content (with return_content: yes, the uri module puts the response body in the content key rather than stdout).
Other Jinja2 builtin filters I've used are:
changed_when: "'<some text>' not in your_variable_name.stderr"
when: some_number_of_files_changed.stdout|int > 0
The Ansible "Conditionals" docs page has some of this info. This blog post was also very informative.
As per documentation at https://docs.ansible.com/ansible/latest/modules/uri_module.html
Whether or not to return the body of the response as a "content" key in the dictionary result. Independently of this option, if the reported Content-type is "application/json", then the JSON is always loaded into a key called json in the dictionary results.
---
- name: Example of JSON body parsing with uri module
  connection: local
  gather_facts: true
  hosts: localhost
  tasks:
    - name: Example of JSON body parsing with uri module
      uri:
        url: https://jsonplaceholder.typicode.com/users
        method: GET
        return_content: yes
        status_code: 200
        body_format: json
      register: data
      # failed_when: <optional condition based on JSON returned content>

    - name: Print returned json dictionary
      debug:
        var: data.json

    - name: Print certain element
      debug:
        var: data.json[0].address.city
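Applied to the SonarQube endpoint from the question above, the commented failed_when placeholder could become something like the following sketch (not a tested playbook):
- name: check sonar web is up
  uri:
    url: http://sonarhost:9000/sonar/api/system/status
    method: GET
    return_content: yes
    status_code: 200
    body_format: json
  register: data
  # sketch: fail the task when the JSON body does not report UP
  failed_when: data.json.status != "UP"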