How to set an environment variable programmatically in Jenkins/Hudson?

I have two scripts in the pre-build step of a Jenkins job: the first is a Perl script, the second a system Groovy script using the Groovy plugin. I need information from the first Perl script in the second Groovy script, and I think the best way would be to set some environment variable. I was wondering how that can be done.
Or is there any better way?
Thanks for your time.

The way to propagate environment variables between build steps is via the EnvInject plugin.
Here are some previous answers that show how to do it:
How to set environment variables in Jenkins?
Jenkins : Report results of intermediate [windows batch] build steps in email body
In your case, however, it may be simpler just to write to a file in one build step and read that file in another. To make sure you do not accidentally read from a previous version of the file you can incorporate BUILD_ID in the file name.
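For example (a minimal sketch; the file name is arbitrary, and it assumes the workspace is reachable from where the system Groovy script runs, i.e. the master):
# first build step (shell or Perl): write the value, keyed by BUILD_ID
echo "some_value" > "$WORKSPACE/handoff-$BUILD_ID.txt"
// second build step (system Groovy script): read the value back
def env = build.getEnvironment(listener)
def value = new File("${env['WORKSPACE']}/handoff-${env['BUILD_ID']}.txt").text.trim()
println "value from previous step: ${value}"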

Using the EnvInject plugin, in the job configuration you should use Inject environment variables to the build process / Evaluated Groovy script.
Depending on your setup you can execute a Groovy or shell command and save its result in a map of environment variables:
Example
By either getting the result of a command with the execute method (trim() strips the trailing newline):
return [DATE: 'date'.execute().text.trim()]
or with a Groovy equivalent, if one exists:
return [DATE: new Date().toString()]
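A later build step then sees the injected variable like any other, so a shell step could, for example, refer to $DATE.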

Which is the best way of parsing CSV-data in a logic app without using a custom connector?

I have an SFTP trigger in a logic app which fires when a file is added to a certain file area. It is a CSV-formatted file and I want the rows to be parsed and converted into JSON. What is the best way to convert CSV data into JSON without using any custom connectors?
I cannot find any built-in connectors doing this job, and as far as I know there are no Logic Apps functions doing the job either.
Right now there is no connector/action in Logic Apps that provides an out-of-the-box solution for this requirement. You could loop through the array and do the conversion yourself, but I would not suggest leveraging the loop and variable actions, as they take time and will cost you more.
The alternative would be leveraging the inline code (JavaScript) action to do the conversion. Please note that you will need an Integration Account to run inline code.
Please refer to the JavaScript code below and modify it as needed; I have used '_' to differentiate the nested objects. For more details you can refer to the previous discussion here.
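Since the original snippet is not reproduced here, the following is a rough sketch of such an inline code action (the workflowContext path and the naive comma split are assumptions; it handles neither quoted fields nor the '_' nesting mentioned above):
// Inline code (JavaScript) action: read the raw CSV text from the trigger body
const csv = workflowContext.trigger.outputs.body;
const lines = csv.split(/\r?\n/).filter(line => line.length > 0);
const headers = lines[0].split(',');
// Build one JSON object per data row, keyed by the header names
return lines.slice(1).map(line => {
  const values = line.split(',');
  return headers.reduce((row, header, i) => {
    row[header.trim()] = (values[i] || '').trim();
    return row;
  }, {});
});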
For complex calculations you can offload this functionality to an Azure Function, write your code in one of the supported languages, and call the function from the logic app.
1. Created the logic app as shown below:
2. Created a container in a storage account and uploaded a CSV file to the container.
3. Next, used a Compose action to split the contents of the CSV file on every new line into an array.
a. Here is the expression used in the SplitLines compose action:
split(body('Get_blob_content_(V2)'),decodeUriComponent('%0D%0A'))
b. Follow the MS doc below to learn how to write expressions:
4. Removed the last (empty) line from the previous output using another Compose action, as shown below:
take(outputs('SplitLines'),add(length(outputs('SplitLines')),-1))
5. Separated the field names using a Compose action:
split(first(outputs('SplitLines')), ',')
6. Formed the JSON using a Select action, as shown below:
From: skip(outputs('RemoveLastLine'), 1)
Map:
outputs('SplitFieldName')[0] → split(item(), ',')?[0]
outputs('SplitFieldName')[1] → split(item(), ',')?[1]
7. Tested the logic app and it runs successfully.
The content of the CSV file is shown below:
The CSV data formatted as JSON:
Reference: Use data operations in Power Automate (contains video) — Power Automate | Microsoft Docs
Credit: Iason Koulas

PyCharm Tests Add Shell Command to Additional Arguments

I'm still pretty new to running anything in PyCharm more advanced than just a simple script. I'm writing a test in pytest right now and I want the test results output to a JUnit XML file. I'm thinking the best naming convention will be based on the current date/time, so I am trying to pipe in the current date, using the date shell command, as an environment variable, as seen below:
Current Configuration:
However, when I run the configuration as-is, it just names the .xml file based on the command without actually executing it. Any ideas what I'm missing, or if this is even possible?
Thanks!
Yes, it is possible with a workaround, but I don't think what you are trying to achieve is possible using a single configuration. The value you set in Environment variables is substituted as-is and is not executed in bash first.
The workaround would be to use multiple configurations.
Store the following line in a bash file.
export PYTEST_EXEC_TIME=$(date '+%Y-%m-%d_%H:%M:%S')
Add a bash configuration which executes this file.
Add that configuration to the pytest configuration as a "Before Launch" task and use $PYTEST_EXEC_TIME in the additional arguments.
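For example, the additional arguments could then look something like this (the report path is only an illustration):
--junitxml=reports/pytest_$PYTEST_EXEC_TIME.xml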
Note: Here is a detailed answer showing step by step process of setting up a "Before Launch" configuration.

Ansible to give MySQL table output

I am trying to execute an Ansible one-liner from my local machine which calls a bash script on a remote server. The bash script fetches data from a database.
Is it possible for Ansible to give a Table formatted output?
I am pasting just the column headers below.
Thanks
Aravind
author_name | scheduled_start_time | scheduled_end_time | comment_data | name
If you want to parse Ansible output, there are only two ways, both of which are hard and somewhat hacky. One is to use callback plugins, the other is to parse with sed/awk/perl/python/whatever you like. See Ansible output formatting options for reference.
I think there is a cleaner solution: execute your script on the remote machine, save its output to a file on the remote machine, and then copy it back with the fetch module. After that you can process the resulting file locally using a local action.
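A minimal sketch of that approach as a playbook (the host group, paths and script name are placeholders; column -t is just one way to align the output into a table):
- hosts: dbservers
  tasks:
    - name: Run the report script on the remote machine
      shell: /opt/scripts/db_report.sh > /tmp/db_report.out

    - name: Copy the output back to the control machine
      fetch:
        src: /tmp/db_report.out
        dest: reports/

    # fetch stores files under dest/<hostname>/<src path>
    - name: Format the result as a table locally
      local_action: shell column -t reports/{{ inventory_hostname }}/tmp/db_report.out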

Custom dynamic inventory scripts/plugins in Ansible

Ansible allows devs to write programs (in any language) that will return JSON describing the dynamic “snapshot” of current hosts. I’m using vSphere, which is currently not supported by Ansible OSS, and so I need to write such a "custom inventory plugin".
I can handle the querying of vSphere for a list of hosts, as well as constructing the JSON that is compatible with what Ansible is expecting.
Where the documentation completely (seemingly) falls flat is:
(1) How do I “connect” Ansible with my inventory app? That is, say my inventory app is a simple bash script (inventory.sh)... how do I configure Ansible to call bash inventory.sh and obtain JSON from it? In reality the app will likely be a Java executable (inventory.jar), but I figure that if I can get it working with bash, I can extrapolate to Java; and
(2) How does Ansible actually capture/fetch the JSON back from the app? STDOUT? Is this all supposed to happen over an HTTP connection? Examples? How does inventory.sh or inventory.jar communicate that JSON back to Ansible?
The inventory script has to be located on the same machine where Ansible runs. It does not communicate over HTTP; Ansible simply parses the STDOUT of your program. The location does not matter at all; you pass the path to Ansible when you invoke it:
ansible-playbook ... -i /path/to/your/inventory.sh
To avoid passing the inventory location every time, you could add this to the [defaults] section of your ansible.cfg:
[defaults]
inventory = /path/to/your/inventory.sh
You could also copy the script to /etc/ansible/hosts, which is the default location where Ansible looks for inventory files/scripts, but I prefer to keep things together, so I suggest placing it close to your playbooks/roles etc.
And (3) Is any of this documented, anywhere? Don't see anything in the Ansible docs...
It is not mentioned on the page Developing Dynamic Inventory Sources, but it can be seen in some examples on the page Dynamic Inventory. The docs are community managed and at times a little unstructured and lacking important information.
BTW, there is a VMware inventory script included. Looking at the source, I saw that it imports some vSphere modules. I have little experience with VMware, so I can't judge whether it is actually what you need; if it is, you would not have to write your own.
This is completely user defined. Typically you would write your dynamic inventory in Python and use a json dump of the output to create the inventory.
Here is an example for the use case you mentioned (vSphere): https://github.com/RaymiiOrg/ansible-vmware/blob/master/query.py
In a nutshell, you create it like a normal Python file, define the options (as the author does in main), and selectively execute functions based on which options are passed. These make REST calls and print the output as a JSON dump, which Ansible can parse for use as inventory.
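A minimal sketch of such a script (the group and host names are hard-coded stand-ins; a real script would query the vSphere API instead):
#!/usr/bin/env python
# Minimal dynamic inventory: Ansible invokes this with --list (all groups/hosts)
# or --host <name> (per-host variables) and parses the JSON printed to STDOUT.
import json
import sys

def list_inventory():
    # A real script would build this dict from vSphere API calls.
    return {
        "vsphere_vms": {
            "hosts": ["vm1.example.com", "vm2.example.com"],
            "vars": {"ansible_user": "admin"},
        }
    }

def host_vars(name):
    # Per-host variables; an empty dict if there are none.
    return {}

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(list_inventory()))
    elif len(sys.argv) > 2 and sys.argv[1] == "--host":
        print(json.dumps(host_vars(sys.argv[2])))
    else:
        sys.exit("usage: inventory.py --list | --host <hostname>")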

How to append the results of JMeter tests to one HTML output file using Ant

I have the following issue.
I have 100+ JMeter tests as separate files, with the tendency to add more. Using Ant I have configured the results to go into a separate output HTML file for each test. So now, with 100+ tests, I get 100+ resulting HTML files, and I need to check every single one to see if the tests ran OK.
My question is how to make Ant append the results into one HTML file for all 100+ tests, so I can see at a single glance that the tests ran OK.
I guess I either need to modify the ..extras/build.xml file in Jmeter or modify the command line where I invoke my tests via Ant.
Thank you in advance.
If you are using the JMeter Ant Task, try this; it uses a FileSet for the test plans:
<jmeter
jmeterhome="c:\jakarta-jmeter-1.8.1"
resultlog="${basedir}/loadtests/JMeterResults.jtl">
<testplans dir="${basedir}/loadtests" includes="*.jmx"/>
</jmeter>
So only one result file will be generated, which can then be transformed into HTML.
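For instance, a transform step along these lines could follow the <jmeter> task (it assumes the .jtl is written in XML format and uses the report stylesheet shipped in JMeter's extras directory; names and paths may differ between versions):
<xslt in="${basedir}/loadtests/JMeterResults.jtl"
      out="${basedir}/loadtests/JMeterResults.html"
      style="c:\jakarta-jmeter-1.8.1\extras\jmeter-results-report.xsl"/>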