How to expand output in a GitLab CI/CD job? - gitlab-ci-runner

I've set up a job that runs some PowerShell commands. One of them returns a JSON object.
However, when I open the job log I see only part of the object. How can I see the full object?
{#{productNo=1; onTarget=f944fb79-b39f-4936-b0b6-8eef3c802014; name=asdffgh-as…

Write the output to a file, then store the file as an artifact:
script:
  - your_command | Out-File -FilePath output.json
artifacts:
  paths:
    - output.json
See Using Out-File and Job artifacts.
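For completeness, a fuller sketch of the job definition; the job name, the Get-MyObject command, and the -Depth value are illustrative placeholders rather than anything from the question. Piping through ConvertTo-Json also sidesteps PowerShell's truncated default console formatting for nested objects:
dump-object:
  script:
    # -Depth controls how deeply nested properties are serialized (default is 2)
    - Get-MyObject | ConvertTo-Json -Depth 10 | Out-File -FilePath output.json
  artifacts:
    paths:
      - output.json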

Related

I want to save a CSV with Aggregate Report in JMeter. I need to save this file in my local repository where the JMX file is

I want to save a CSV with the Aggregate Report in JMeter. I need to save this file in my local repository where the JMX file is. How can I indicate the path? Please help.
You can generate a CSV file with Aggregate Report data out of the .jtl file containing your test results using the JMeter Plugins Command Line Graph Plotting Tool:
Install the JMeter Plugins Command Line Tool using the JMeter Plugins Manager
(you may also need to install the Synthesis Report plugin if you don't have it)
Run your JMeter test in command-line non-GUI mode like:
jmeter -n -t test.jmx -l result.jtl
Once your test is complete you can generate Aggregate Report CSV representation as follows:
JMeterPluginsCMD --generate-csv aggregate-report.csv --input-jtl result.jtl --plugin-type AggregateReport
This aggregate-report.csv is a path relative to the current folder; you can make it absolute like:
JMeterPluginsCMD --generate-csv c:/somefolder/someotherfolder/aggregate-report.csv --input-jtl result.jtl --plugin-type AggregateReport
More information: How to Use the JMeterPluginsCMD Command Line

How to download logs created by an ECS container and make them look the old-fashioned way (remove the JSON)?

My good old application writes logs that are captured by AWS CloudWatch Logs.
However, they are ugly to read trapped inside JSON. Can I get them in raw form?
Install jq (a C application without dependency hell) from your favourite package repository (or from GitHub).
Download the logs and parse them with:
#profile=...
#lgn=...   # log group name
#lsn=...   # log stream name
aws --profile "$profile" logs get-log-events \
  --log-stream-name "$lsn" --log-group-name "/$lgn" \
  | jq --raw-output '.events[] | .message'
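If you don't know the stream name, you can list the streams in the group first; a minimal sketch, assuming the same $profile and $lgn values as above:
aws --profile "$profile" logs describe-log-streams \
  --log-group-name "/$lgn" \
  --query 'logStreams[].logStreamName' --output text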

Filtering out jenkins console output of a job

I'm quite new to Jenkins, and I would like to extract from the Jenkins console output only the JSON output of my Unix script, which runs via a Jenkins job.
To simplify my scenario, I have a MyScript Unix script that returns JSON output. A Jenkins job wraps the MyScript execution using an "Execute shell" build step.
When I run the Jenkins job, MyScript is executed and the Jenkins console shows the output below:
Started by remote host ...
Building remotely on ... in workspace ...
Set build name.
New build name is '#11-/products/software/myScript.py'
[ScriptWrapper] $ /bin/sh -xe /tmp/hudson9139846468482145951.sh
+ /products/software/myScript.py -t ...
{'ip': '...', 'host': '...'}
Set build name.
New build name is '#11-/products/software/myScript.py'
Variable with name 'BUILD_DISPLAY_NAME' already exists, ...
Finished: SUCCESS
From the above output I would like to extract only the JSON output of my Unix script, that is "{'ip': '...', 'host': '...'}".
This is needed because we call the Jenkins job via the REST API and need to get back only the JSON output of the called Unix script:
curl -s -k -u ... --request GET "https://<jenkins uri>/jenkins/view/ScriptWrapper/job/ScriptWrapper/19/consoleText"
We tried defining a parsing rules file, but that way we are only able to highlight some lines of the console output in the "Parsed Console Output" Jenkins view.
In addition, it seems that this "Parsed Console Output" is not accessible via the REST API:
curl -s -k -u ... --request GET "https://<jenkins uri>/jenkins/view/ScriptWrapper/job/ScriptWrapper/19/parsed_console"
-> it doesn't work
Is there any way to extract only that part of the Jenkins console output?
We are also evaluating the possibility of using the Jenkins Groovy Postbuild Plugin. Do you think it can help?
I thank you in advance for any suggestions.
If I understand the question correctly, you wish to generate clean output containing only the text you want?
If so, then I'd suggest you modify your shell script to write the desired text to a file, and then use either the "Archive the artifacts" function in Jenkins to make the file content available, or the "HTML Publisher" plugin to publish that file.
https://wiki.jenkins-ci.org/display/JENKINS/HTML+Publisher+Plugin
A third option could be to modify your shell script to print "magic cookie" delimiters around the string you want.
That way you can fetch the entire console output using the REST API, and then easily extract the text you want with a simple regex.
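A minimal sketch of that delimiter approach; the cookie strings are made up for illustration. In the "Execute shell" build step:
set +x   # stop the shell's -x tracing so '+' lines don't land between the markers
echo "===JSON-BEGIN==="
/products/software/myScript.py -t ...
echo "===JSON-END==="
The caller can then cut the JSON out of the console text:
curl -s -k -u ... "https://<jenkins uri>/jenkins/view/ScriptWrapper/job/ScriptWrapper/19/consoleText" \
  | sed -n '/===JSON-BEGIN===/,/===JSON-END===/p' | sed '1d;$d'
The first sed prints the lines between the markers inclusive; the second drops the marker lines themselves.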

Validate OpenShift objects defined in yaml before actually applying or executing it

I have an OpenShift template in a template.yaml file which includes the following objects: deployment-config, pod, service and route. I am using the following command to execute the YAML:
oc process -f template.yaml | oc apply -f -
I want to perform the following validations before I actually apply/execute the YAML:
YAML syntax validation - whether there are any issues with the YAML syntax.
OpenShift schema validation - to check whether the object definitions abide by the OpenShift object schemas.
It seems that the 'oc process' command already performs the following checks:
Basic YAML syntax validation
Template object schema validation
How can I perform schema validation of the other objects (e.g. deployment-config, service, pod, etc.) that are defined in template.yaml?
This is now possible with the OpenShift client (and on Kubernetes in general), e.g.
$ oc login
Username: john.doe
Password:
Login successful.
$ oc apply -f openshift/template-app.yaml --dry-run
template "foobar-app" created (dry run)
It's also possible to process the template locally, so you can avoid sending it to the server first, e.g.
$ oc process -f openshift/template-app.yaml --local -p APP_NAME=foo | oc apply --dry-run --validate -f -
deploymentconfig "foo" created (dry run)
service "foo" created (dry run)
Also note the --validate option I'm using for schema validation. Unfortunately, you still have to log in for the apply command to work (there's no --local option for apply).
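To wire both checks into a script or CI step, a minimal sketch (template path and parameter borrowed from the example above); writing the processed output to a file first means a failure of oc process is not silently masked by the pipe's exit status:
set -e
oc process -f openshift/template-app.yaml --local -p APP_NAME=foo > processed.yaml
oc apply --dry-run --validate -f processed.yaml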
Oddly, this feature is not described in the CLI documentation; however, it is mentioned on the help screen:
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
JSON and YAML formats are accepted.
Usage:
oc apply -f FILENAME [options]
...
Options:
...
--dry-run=false: If true, only print the object that would be sent, without sending it.
...
--validate=false: If true, use a schema to validate the input before sending it
Use "oc <command> --help" for more information about a given command.
Use "oc options" for a list of global command-line options (applies to all commands).
I'm having the same issue with cryptic errors coming back from the oc process command.
However, if you go into the OpenShift Console, use the "Add to Project" link at the top of the console, and choose the "Import YAML / JSON" option to import your YAML/JSON that way, you get slightly more useful errors.

AWS CLI Command

I'm trying to execute the following command using the AWS CLI -
aws s3 cp s3://my_bucket/folder/file_1234.txt - | <sed command> | <jq command> | aws s3 cp - s3://my_bucket/new_folder/final_file.txt
The above code is working fine - basically pulling data from s3, doing some operations and pushing it back to s3.
Now, I have some files in s3 that have a pattern - for instance - file_771.txt, file_772.txt, file_773.txt and so on.
Now, in order to get all the files that match the pattern, I'm doing the following operation, which is not working as expected: it's generating an empty output file in S3.
aws s3 cp --include file_77* s3://my_bucket/folder/ - | <sed command> | <jq command> | aws s3 cp - s3://my_bucket/new_folder/final_file.txt
This code is generating an empty final_file.txt. Any reason? Am I missing something in the code?
To copy multiple files at once, you would have to use --recursive, in your case with --exclude "*" --include "file_77*", but the aws s3 cp documentation notes:
Downloading as a stream is not currently compatible with the --recursive parameter
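A possible workaround is to list the matching keys and stream each object through the pipeline yourself; a sketch, where <sed command> and <jq command> stand for the same stages as in the question:
# Concatenate every matching object into one stream, transform it,
# and upload the result as a single file.
aws s3 ls s3://my_bucket/folder/ | awk '{print $4}' | grep '^file_77' |
while read -r key; do
  aws s3 cp "s3://my_bucket/folder/$key" -
done | <sed command> | <jq command> | aws s3 cp - s3://my_bucket/new_folder/final_file.txt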