How to get a response from a script back to Hudson and into the fail/success emails?

I'm starting a Python script from a Hudson job. The script is started through 'Execute Windows batch command' in the build section as 'python my_script.py'.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes the data to stderr, which the batch file redirects to a temp file and then reads into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it is no longer visible. The email sending is probably done in a different process, so the variables are not visible there anymore. I'm accessing the env vars in the email as ${ENV, varname} (or actually, in debug mode, as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone suggest a better solution for getting data from the Python script back to Hudson?
All the related parts (Hudson, batch and Python script) are under my control and can be modified as needed.
Thanks.

Every build step gets its own shell. This implies that your environment variables are only valid within that build step.
You can just write the data in a nice format to the std output (use a header that is easy to identify) and if the job fails, the data output gets attached in the email.
If you want to include only the data itself, you can use the following token with the Editable Email Notification post-build action (Email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
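For illustration, the script side could print the data with an easily matchable marker (the BUILD-DATA: prefix and the values here are made up):

# my_script.py -- print result data with a marker line that is
# easy to match from the email template's regex
results = {"tests_passed": 42, "tests_failed": 1}  # hypothetical data
for key, value in results.items():
    print("BUILD-DATA: %s=%s" % (key, value))

The token in the email template could then look something like this (named arguments as supported by the Email-ext plugin; adjust to your needs):

${BUILD_LOG_REGEX, regex="^BUILD-DATA:.*", maxMatches=0, showTruncatedLines=false}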

Related

Securing a Script Task in SSIS

I have a Script Task with C# code written inside. The code is supposed to make several REST calls to get some data. The credentials (username, password) are hard-coded within the script. What should I do to make sure that my package is secure, and what is the best practice in similar scenarios, keeping in mind that there is no possibility of using third-party API connectors and a Script Task is my only option?
The best approach would be to move the login and password from the Script Task into package parameters and declare the password parameter as sensitive. The login and password can then be specified later at package start or stored in environment variables. A sensitive password parameter means that it will be stored encrypted and cannot be dumped to a file, for example.
The following code sample shows how to read the sensitive password in your Script Task:

// Read the decrypted value of the sensitive package parameter
Dts.Variables["$Package::YourPassword"].GetSensitiveValue().ToString()
If you need to distribute your package without disclosing the login and password, switch to another authentication method, perhaps one based on certificates. Script Task source code cannot be obfuscated, so everyone who can download the package from the server can inspect your Script Task.

Edit and checkin versions.txt file in GitHub Actions using powershell

I see documentation in GitHub Actions for running a PowerShell script in a step. I need to increment a version number and check in the versions.txt file in GitHub Actions using PowerShell. Can I accomplish this by running a PowerShell script from my folder and by using git.exe and PowerShell commands?
How do I set a variable from powershell script to use the version number in subsequent steps?
There are actually two questions here, so I will answer them in reverse order.
To have one step (for example, a shell script) set a value for subsequent steps, you can use the special workflow commands, which work by outputting specially formatted strings to STDOUT.
In your case, you are looking for setting an output parameter:
echo "::set-output name=version::1.2.3"
Which can then be read by subsequent steps, like this (for example):
env:
VERSION: ${{ steps.the-other-step-id.outputs.version }}
As for having GitHub Actions modify code and then check it in, I would strongly advise against it, as it will a) complicate your workflow and b) possibly create "workflow loops", where checkins originating from the workflow trigger this or other workflows.
For version tagging, I suggest you use tags, as they are the almost universally accepted way to mark versions in source control. You might want to have a local script that both tags and updates your versions.txt (see the sketch below), and then your workflow just reads the file or the tag.
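A minimal sketch of such a local script in Python (the version value, file name, and tag format are assumptions; adapt them to your scheme):

import subprocess

version = "1.2.3"  # hypothetical new version number

# Update the version file, commit it, and tag the commit
with open("versions.txt", "w") as f:
    f.write(version + "\n")
subprocess.run(["git", "add", "versions.txt"], check=True)
subprocess.run(["git", "commit", "-m", "Bump version to " + version], check=True)
subprocess.run(["git", "tag", "v" + version], check=True)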

FitNesse: How to send tests execution reports from Jenkins to an endpoint in JSON format?

I have a task to send reports of periodic executions of FitNesse tests to a specific endpoint in a specific JSON format.
I have set up periodic execution of the tests in the Jenkins job properties, with the results saved as XML, but now I need to parse the information about the results.
It probably cannot be just a post-build step in Jenkins (or it can, but I don't know of a plugin for it), so what would it be and how can I do this?
In particular, I don't need detailed information about each test, only general data such as the date of the run, pass rate, status, name of the project, etc.
I think the best way to solve this is to write a script that parses the XML file and creates the required JSON file. We normally use Python scripts for this.
If you need certain generic information about the build in the script, like the build number, you can pass it to your script using the Jenkins environment variables.
To call the script, just add a batch or shell step and place it below your FitNesse build step, to make sure the XML is generated before the script is called.
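A minimal sketch of such a script (the result file path is an assumption, and the element names follow the usual FitNesse XML result layout with a finalCounts element; verify them against your own output):

# parse_results.py -- convert a FitNesse XML result file to JSON
import json
import os
import xml.etree.ElementTree as ET

tree = ET.parse("fitnesse-results.xml")  # hypothetical path
counts = tree.getroot().find("finalCounts")

report = {
    "project": os.environ.get("JOB_NAME"),    # set by Jenkins
    "build": os.environ.get("BUILD_NUMBER"),  # set by Jenkins
    "right": int(counts.findtext("right", "0")),
    "wrong": int(counts.findtext("wrong", "0")),
    "exceptions": int(counts.findtext("exceptions", "0")),
}
report["status"] = "pass" if report["wrong"] == 0 and report["exceptions"] == 0 else "fail"

with open("report.json", "w") as f:
    json.dump(report, f, indent=2)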
FitNesse comes with a JUnit runner which allows you to execute a test/suite. If you create a test class annotated with @RunWith(FitNesseRunner.class) and include its execution in a Jenkins Maven job (where the JUnit class is executed by either the surefire or failsafe plugin), the outcome of the executed tests will be picked up automatically by Jenkins, just like it picks up other/regular JUnit tests (as surefire or failsafe will include them in their XML reports and Jenkins will pick those up).
You can find a sample Maven FitNesse project using (a slightly customised version of) this approach at https://github.com/fhoeben/sample-fitnesse-project. How to run the tests on Jenkins is described at https://github.com/fhoeben/hsac-fitnesse-fixtures#to-run-the-tests-on-a-build-server:
Have the build server check out the project and execute mvn clean test-compile failsafe:integration-test. The results, in JUnit XML format, can be found in target/failsafe-reports (Jenkins will pick these up automatically for a Maven job).
You indicate you don't need the HTML results, but they will be made available anyway. They can be found in target/fitnesse-results/index.html, and you could use the 'HTML Publisher' Jenkins plugin to link to them from each build.

Passing execution time parameters to TCL scripts from non interactive shell

I am able to run Tcl scripts on a Linux server from the non-interactive shell created by the JSch library, which is used in a Java program on Windows. The problem is that some scripts need certain parameters to be passed in during execution, based on intermediate output of the script; after the parameters are entered, the script execution continues. But as it is a non-interactive shell, I am not able to pass these parameters during execution. Is there any way to make this work? One option I thought of is passing the parameters as command-line arguments, but I wanted to know of any other way.
When you say "parameters", do you mean anything that a user would have entered in an interactive session as an input to prompts presented by the script?
If yes, there are two possibilities:
If the script does not expect the session to be interactive, and just reads its input from its standard input stream (using gets for instance), then just feed this input to the standard input of the tclsh process which interprets your script.
If the script does expect the session to be interactive (and refuses to just accept the data from its input stream), you will have to allocate a pseudo-TTY for the target process.
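To illustrate the first case: if the Tcl script reads its answers with gets from stdin, you can simply pipe them in, one per line, in the order they are prompted for. A sketch in Python (the script name and answer values are made up; with JSch the equivalent would be writing the same lines to the channel's output stream):

import subprocess

answers = "first-param\nsecond-param\n"  # hypothetical answers, one per prompt
result = subprocess.run(
    ["tclsh", "my_script.tcl"],  # hypothetical script name
    input=answers,
    capture_output=True,
    text=True,
)
print(result.stdout)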
I'm not familiar with JSch, but this appears to be a question (and an answer) dealing with making JSch allocate a PTY.

How can I get a Windows batch or Perl script to run when a file is added to a directory?

I am trying to write a script that will parse a local file and upload its contents to a MySQL database. Right now, I am thinking that a batch script that runs a Perl script would work, but am not sure if this is the best method of accomplishing this.
In addition, I would like this script to run immediately when the data file is added to a certain directory. Is this possible in Windows?
Thoughts? Feedback? I'm fairly new to Perl and Windows batch scripts, so any guidance would be appreciated.
You can use Win32::ChangeNotify. Your script will be notified when a file is added to the target directory.
Checking a folder for newly created files can be implemented using the WMI functionality. Namely, you can create a Perl script that subscribes to the __InstanceCreationEvent WMI event that traces the creation of the CIM_DirectoryContainsFile class instances. Once that kind of event is fired, you know a new file has been added to the folder and can process it as you need.
These articles provide more information on the subject and contain VBScript code samples (hope it won't be hard for you to convert them to Perl):
How Can I Automatically Run a Script Any Time a File is Added to a Folder?
WMI and File System Monitoring
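For illustration, here is roughly what that subscription looks like in Python (using the third-party wmi package rather than Perl; the folder path C:\incoming is an assumption, and note the doubled backslashes that WQL requires):

import wmi

# Wait for creation events of files inside C:\incoming (polled every 2 s)
watcher = wmi.WMI().watch_for(
    raw_wql="SELECT * FROM __InstanceCreationEvent WITHIN 2 "
            "WHERE TargetInstance ISA 'CIM_DirectoryContainsFile' "
            "AND TargetInstance.GroupComponent="
            "'Win32_Directory.Name=\"C:\\\\incoming\"'"
)
while True:
    event = watcher()  # blocks until a new file appears
    print("New file:", event.PartComponent)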
The function you want is ReadDirectoryChangesW. A quick search for a Perl wrapper yields the Win32::ReadDirectoryChanges module.
Your script would look something like this:
use Win32::ReadDirectoryChanges;

# Watch $path (and, with subtree => 1, its subdirectories)
# for changes matching $filter
my $rdc = Win32::ReadDirectoryChanges->new(
    path    => $path,
    subtree => 1,
    filter  => $filter,
);
while (1) {
    # read_changes returns a flat list of (action, filename) pairs
    my @results = $rdc->read_changes;
    while (scalar @results) {
        my ($action, $filename) = splice(@results, 0, 2);
        # ... run script ...
    }
}
You can easily achieve this in Perl using File::ChangeNotify. This module is to be found on CPAN: http://search.cpan.org/dist/File-ChangeNotify/lib/File/ChangeNotify.pm
You can run the code as a daemon or as a service, make it watch one or more directories and then automatically execute some code (or start up a script) if some condition matches.
Best of all, it's cross-platform, so should you want to switch to a Linux machine or a Mac, it would still work.
It wouldn't be too hard to put together a small C# application that uses the FileSystemWatcher class to detect files being added to a folder and then spawn the required script. It would certainly use less CPU / system resources / hard disk bandwidth than polling the folder at regular intervals.
You need to consider what is a sufficient heuristic for determining "modified".
In increasing order of cost and accuracy:
file size (file content can still be changed as long as size is maintained)
file timestamp (if you aren't running ntpd, time is not monotonic)
file sha1sum (bulletproof but expensive)
I would run ntpd, then loop over the timestamps and compare the checksum only when a timestamp changes. This can cover a lot of ground in little time.
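A sketch of that loop in Python (the watch list and poll interval are placeholder choices):

import hashlib
import os
import time

def sha1_of(path):
    # Hash the file in chunks so large files don't exhaust memory
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

known = {}  # path -> (mtime, sha1)
while True:
    for path in ["data1.txt", "data2.txt"]:  # hypothetical watch list
        mtime = os.stat(path).st_mtime
        old = known.get(path)
        if old is None or old[0] != mtime:
            digest = sha1_of(path)  # only hash when the timestamp changed
            if old is None or old[1] != digest:
                print("modified:", path)
            known[path] = (mtime, digest)
    time.sleep(5)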
These methods are not appropriate for a computer security application, they are for file management on a sane system.