Jenkins - Storing curl output into a file for future use (JSON)

I'm currently trying to make a Jenkins job that runs only if the version number differs between two runs.
What I'm currently thinking of doing is running curl against my webapp endpoint to get the webapp version, and storing that version information in a file.
The next time the Jenkins job runs, it will curl my webapp endpoint again and compare the current curl output with the version information saved from the last run.
However, as I'm still fairly new to Jenkins, I have no idea where to start with creating the file to store the information I want. Does anyone have a recommendation or advice on how to solve this problem?
Thanks

You can write the version number to any file (JSON, txt, etc.) and read it back in your workspace by using the pipeline-utility-steps plugin and specifying the path in your Jenkinsfile.
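A minimal sketch of that idea using a plain text file (the endpoint URL and file name are placeholders; for JSON you would use the readJSON/writeJSON steps from pipeline-utility-steps in the same spots):
pipeline {
    agent any
    stages {
        stage('Compare versions') {
            steps {
                script {
                    // hypothetical endpoint that returns the version as plain text
                    def current = sh(script: 'curl -s http://my-webapp/version', returnStdout: true).trim()
                    def last = fileExists('version.txt') ? readFile('version.txt').trim() : ''
                    if (current != last) {
                        echo "version changed: '${last}' -> '${current}'"
                        writeFile file: 'version.txt', text: current
                        // ...react to the new version here...
                    }
                }
            }
        }
    }
}
Note that a workspace file only survives between runs if the job keeps using the same workspace on the same node; otherwise archive it, or keep it in a shared location as the next answer does.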

You can write a file representing the state to a shared location and check it on the next run (no plugins needed):
# this will create the initial file in the Jenkins home folder
curl http://version-check > ~/version
When the job runs, you can use a simple sh step (note the single-quoted Groovy string, so the $ signs reach the shell instead of being interpolated by Groovy):
stage('Check version') {
    sh '''
        LAST_VER=$(cat ~/version)
        CURRENT_VER=$(curl http://version-check)
        if [ "$LAST_VER" = "$CURRENT_VER" ]; then
            echo "versions are equal"
        else
            echo "versions are not equal"
            echo "$CURRENT_VER" > ~/version # update the stored state
        fi
    '''
}

Related

Writing SSL client and server keys

I am trying to export/write the SSL master secret and keys to a file from the Chromium browser. I would appreciate it if someone could advise me how to do this.
To write the premaster secret, we can simply export the SSLKEYLOGFILE variable in the environment.
The premaster secrets can be used by Wireshark to decrypt an HTTPS session.
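For example, in a bash-style shell (the log path is just an illustration, and the browser binary name varies by platform):
$ export SSLKEYLOGFILE=$HOME/sslkeylog.txt
$ chromium-browser
The browser has to be started from the same shell so that it inherits the variable.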
The premaster secret is used to compute the master secret, which is further used to create 6 keys:
CLIENT_WRITE_MAC
CLIENT_IV
CLIENT_WRITE
and 3 more for the server.
I want to output these keys to a file instead of the premaster secret.
I figured I could reuse the Wireshark code to simply output them, but this is more complex.
I believe the Wireshark code that handles SSL packets and uses the premaster secret is here:
github.com/boundary/wireshark/blob/master/epan/dissectors/packet-ssl-utils.c
Another way to proceed is to make changes to the Chromium browser and compile it. I think the changes need to be made here:
https://code.google.com/p/chromium/codesearch#chromium/src/net/third_party/nss/ssl/derive.c&q=client_write_mac_secret&sq=package:chromium&type=cs&l=214
Going through more source code, I found this file to be related:
https://code.google.com/p/chromium/codesearch#chromium/src/net/third_party/nss/ssl/sslsock.c&q=SSLKEYLOG&sq=package:chromium&dr=C&l=3569
Looking at the code above, I notice that there are more environment variables that can be set. Does anybody know if the SSLDEBUG environment variable can be set in the same way as the SSLKEYLOG variable? Any other way or technique to do this would help as well.
I have not been able to successfully export the keys so far.
I figured it out.
To do this, you need to download the latest version of the Wireshark source code. I ran my test on Wireshark 2.0.1.
You need to make changes to the file /epan/dissectors/packet-ssl-utils.c in the Wireshark source folder.
Print the variables to a file from lines 3179-3194.
There you can find the client write key, server write key, client MAC key, server MAC key, client IV and server IV.
To write to a file in C, use:
FILE *fptr;
fptr = fopen("path of the file to append to", "a+");
fprintf(fptr, "data"); // this will write data to the file
fclose(fptr);
Note: for a cleaner approach, create the following functions and use them instead:
void custom_ssl_print_data(const gchar* name, const guchar* data, size_t len) {
    // Open the output file for appending
    FILE *ssl_debug_file;
    ssl_debug_file = fopen("path of the file to append to", "a+");
    // Copy the original functionality from line 4927
}

void custom_ssl_print_string(const gchar* name, const StringInfo* data) {
    // Copy the original functionality from line 4953
}
Now use these functions to export your keys to a file.
Go to the main Wireshark source folder and run:
./autogen.sh
./configure
sudo make
sudo make install
Then run wireshark in the terminal. (You still need to feed Wireshark the premaster secret file by exporting the SSLKEYLOGFILE environment variable.)

Bitbake append file to reconfigure kernel

I'm trying to reconfigure some .config variables to generate a modified kernel with wifi support enabled. The native layer/recipe for the kernel is located here:
meta-layer/recipes-kernel/linux/linux-yocto_3.19.bb
First I reconfigure the native kernel to add wifi support (for example, adding CONFIG_WLAN=y):
$ bitbake linux-yocto -c menuconfig
After that, I generate a "fragment.cfg" file:
$ bitbake linux-yocto -c diffconfig
I have created this directory into my custom-layer:
custom-layer/recipes-kernel/linux/linux-yocto/
I have copied the "fragment.cfg file into this directory:
$ cp fragment.cfg custom-layer/recipes-kernel/linux/linux-yocto/
I have created an append file to customize the native kernel recipe:
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
This is the content of this append file:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://fragment.cfg"
After that I execute the kernel compilation:
$ bitbake linux-yocto -c compile -f
After this command, "fragment.cfg" file can be found into this working directory:
tmp/work/platform/linux-yocto/3.19-r0
However, none of the expected variables is active in the .config file (for example, CONFIG_WLAN is not set).
How can I debug this issue? What am I doing wrong?
When adding this configuration, you want to use an append in your statement, with a leading space so the new entry doesn't run into the previous one:
SRC_URI_append = " file://fragment.cfg"
After analyzing different links and solutions proposed on different resources, I finally found the link https://community.freescale.com/thread/376369, pointing to a nasty but working patch: adding this function at the end of the append file:
do_configure_append() {
    cat ${WORKDIR}/*.cfg >> ${B}/.config
}
It works, but I expected Yocto to manage all this stuff. It would be nice to know what is wrong with the proposed solution. Thank you in advance!
If your recipe is based on kernel.bbclass, then fragments will not work; you need to inherit kernel-yocto.bbclass instead.
You can also use the merge_config.sh script that is present in the kernel sources (-m merges the fragments without invoking a full make; -O sets the output directory). I did something like this:
do_configure_append () {
    ${S}/scripts/kconfig/merge_config.sh -m -O ${WORKDIR}/build ${WORKDIR}/build/.config ${WORKDIR}/*.cfg
}
Well, unfortunately, this is not a real answer, as I haven't been digging deep enough.
This was working all right for me on a Daisy-based build; however, when updating the build system to Jethro or Krogoth, I get the same issue as you.
Issue:
When adding a fragment like
custom-layer/recipes-kernel/linux/linux-yocto/cdc-ether.cfg
The configure step of the linux-yocto build won't find it. However, if you move it to:
custom-layer/recipes-kernel/linux/linux-yocto/${MACHINE}/cdc-ether.cfg
it'll work as expected. And it's a slightly less hackish way of getting it to work.
If anyone comes by, this is working on jethro and sumo:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " \
file://fragment.cfg \
"
The FILESEXTRAPATHS documentation says:
Extends the search path the OpenEmbedded build system uses when looking for files and patches as it processes recipes and append files. The directories BitBake uses when it processes recipes are defined by the FILESPATH variable, and can be extended using FILESEXTRAPATHS.

How to get a response from a script back to Hudson and to fail/success emails?

I'm starting a Python script from a Hudson job. The script is started through 'Execute Windows batch command' in the build section, as 'python my_script.py'.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes data to stderr, which the batch reads into a temp file and then into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it is not visible any more. The email sending is probably done in a different process, so the variables are no longer visible. I'm accessing the env vars in the email as ${ENV, varname} (or actually, in debug mode, as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone provide a better solution for getting data back from a Python script to Hudson?
All the related parts (Hudson, batch and Python script) are under my control and can be modified as needed.
Thanks.
Every build step gets its own shell. This implies that your environment variables are only valid within that build step.
You can just write the data in a nice format to the standard output (use a header that is easy to identify), and if the job fails, the data output gets attached in the email as part of the build log.
If you insist on including only the data, you can use the following token with the Editable Email Notification post-build action (Email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
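For example, if the Python script prints each interesting line with an easy-to-spot marker (the DATA: prefix here is just an illustration), the email can include only those lines:
${BUILD_LOG_REGEX, regex="^DATA:", maxMatches=20, showTruncatedLines=false}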

Configure or Create hudson job automatically

Is there any way to create a new Hudson job from another Hudson job, based on previous jobs?
For example, if I need to create a new bunch of jobs one by one, automatically create 4 jobs with similar configuration but a different parameter.
Basically, the steps are like this:
create an SVN branch (I can call the svn cp command and make it parametrized using a script)
create some build based on the new SVN branch name
later, tag it
In other words, I need to clone the previous job and give the new branch name wherever $Branch appears in the new job.
Thanks
You can try the Hudson Remote API for this kind of task (setting up a Hudson project).
See this tutorial for instance, and remember you can display the help quite easily:
java -jar hudson-cli.jar -s http://your_Hudson_server/ help
So, to copy a job:
java -jar hudson-cli.jar -s http://your_Hudson_server/ copy-job myjob copy-myjob
You could use a Groovy system script like this:
def jenkins = hudson.model.Hudson.instance
def template = jenkins.getItem("MyTemplate")
def job = jenkins.copy(template,"MyNewJob")
job.scm = new hudson.scm.SubversionSCM("http://base/branches/mybranche")
job.save()
Kind of already covered in the other answers, but for an easy way to copy the config.xml over:
curl --user USER:PASS -H "Content-Type: text/xml" -s \
     --data-binary "@config.xml" "http://hudsonserver:8080/createItem?name=newjobname"
There seems to be a plugin for Jenkins:
https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin
I have not tested the plug-in yet, but if it works, it should alleviate some of the human error involved in straight copying a job and modifying variables/values.
def jenkins = hudson.model.Hudson.instance
def template = jenkins.getItem("MyTemplate")
def job = jenkins.copy(template,"MyNewJob")
job.save()
I used this; now I have to change the parameter values of MyNewJob using Groovy. How will I do that?
E.g. I have a parameter called "Build_BranchName" whose default is //perforce/mybranch, and I have to change it to //perforce/mynewbranch.
You have the option that VonC just gave you (which is probably the safest way), but you can also go a different route by just creating a new directory in {Hudson_Home}\jobs (the directory name will be the job name) and copying a modified config.xml in there. The modification will basically just be the SVN URL. You should check the XML of the job that you are copying. You will need to find out how to change the XML file via script, but this is a secondary problem.
Unfortunately, you then have to either restart Hudson or force a reload of the configuration (visit http://<server>:<port>/reload to reload the config).
In case you're willing to use Git (like I do, mirroring the main SVN repo onto the Hudson/Jenkins server, and it works great), you could try Stephen Haberman's post-receive-hudson hook:
This hook creates new jobs for each branch in the Hudson continuous integration tool. Besides creating the job if needed, the user who pushed is added to the job's email list if they were not already there.
In any case, that script can give you new hints on how to remote-control Jenkins (Hudson).

How can I get a Windows batch or Perl script to run when a file is added to a directory?

I am trying to write a script that will parse a local file and upload its contents to a MySQL database. Right now, I am thinking that a batch script that runs a Perl script would work, but I am not sure if this is the best method of accomplishing this.
In addition, I would like this script to run immediately when the data file is added to a certain directory. Is this possible in Windows?
Thoughts? Feedback? I'm fairly new to Perl and Windows batch scripts, so any guidance would be appreciated.
You can use Win32::ChangeNotify. Your script will be notified when a file is added to the target directory.
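A minimal sketch of that approach (the directory path and handler script are hypothetical; see the module's documentation for the available filter flags):
use strict;
use warnings;
use Win32::ChangeNotify;

# Watch C:\incoming (not its subtree) for file-name changes.
my $notify = Win32::ChangeNotify->new('C:\incoming', 0, 'FILE_NAME');
while (1) {
    $notify->wait or next;   # blocks until the directory changes
    $notify->reset;          # re-arm the notification
    system('perl', 'process_upload.pl');   # hypothetical handler script
}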
Checking a folder for newly created files can be implemented using the WMI functionality. Namely, you can create a Perl script that subscribes to the __InstanceCreationEvent WMI event that traces the creation of the CIM_DirectoryContainsFile class instances. Once that kind of event is fired, you know a new file has been added to the folder and can process it as you need.
These articles provide more information on the subject and contain VBScript code samples (hope it won't be hard for you to convert them to Perl):
How Can I Automatically Run a Script Any Time a File is Added to a Folder?
WMI and File System Monitoring
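If you'd rather stay in Perl than VBScript, a rough translation of those samples might look like this (the c:\scripts folder is hypothetical, and WQL's backslash escaping is notoriously picky):
use strict;
use warnings;
use Win32::OLE;

# Connect to WMI on the local machine.
my $wmi = Win32::OLE->GetObject(
    'winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2')
    or die "WMI connection failed\n";

# Poll every 10 seconds for files created in c:\scripts.
my $query = <<'WQL';
SELECT * FROM __InstanceCreationEvent WITHIN 10
WHERE TargetInstance ISA 'CIM_DirectoryContainsFile'
AND TargetInstance.GroupComponent = 'Win32_Directory.Name="c:\\\\scripts"'
WQL

my $events = $wmi->ExecNotificationQuery($query);
while (my $event = $events->NextEvent) {
    # PartComponent is an object path naming the new CIM_DataFile
    print "New file: ", $event->{TargetInstance}{PartComponent}, "\n";
}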
The function you want is ReadDirectoryChangesW. A quick search for a Perl wrapper yields the Win32::ReadDirectoryChanges module.
Your script would look something like this:
use Win32::ReadDirectoryChanges;

$rdc = new Win32::ReadDirectoryChanges(path    => $path,
                                       subtree => 1,
                                       filter  => $filter);
while (1) {
    @results = $rdc->read_changes;
    while (scalar @results) {
        my ($action, $filename) = splice(@results, 0, 2);
        # ... run script ...
    }
}
You can easily achieve this in Perl using File::ChangeNotify. This module is to be found on CPAN: http://search.cpan.org/dist/File-ChangeNotify/lib/File/ChangeNotify.pm
You can run the code as a daemon or as a service, make it watch one or more directories and then automatically execute some code (or start up a script) if some condition matches.
Best of all, it's cross-platform, so should you want to switch to a Linux machine or a Mac, it would still work.
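A minimal sketch under those assumptions (the watch directory, file pattern, and loader script are placeholders):
use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['c:/incoming'],   # hypothetical watch directory
    filter      => qr/\.csv$/,        # only react to matching files
);

while ( my @events = $watcher->wait_for_events ) {
    for my $event (@events) {
        next unless $event->type eq 'create';
        # hypothetical script that parses the file and loads it into MySQL
        system( 'perl', 'upload_to_mysql.pl', $event->path );
    }
}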
It wouldn't be too hard to put together a small C# application that uses the FileSystemWatcher class to detect files being added to a folder and then spawn the required script. It would certainly use less CPU / system resources / hard disk bandwidth than polling the folder at regular intervals.
You need to consider what is a sufficient heuristic for determining "modified".
In increasing order of cost and accuracy:
file size (file content can still be changed as long as size is maintained)
file timestamp (if you aren't running ntpd, time is not monotonic)
file sha1sum (bulletproof but expensive)
I would run ntpd, loop over the timestamps, and compare the checksum if a timestamp changes. This can cover a lot of ground in little time.
These methods are not appropriate for a computer security application; they are for file management on a sane system.