Does Jenkins/Hudson have a plugin which invokes something like a Velocity template engine to allow the interpolation of variables into a set of templates in order to generate files?
I have an HTML page which needs to have the ${BUILD_NUMBER} inserted into it at the proper place each time I do a build.
You can try the Groovy Plugin and exploit Groovy's template engine feature. Add a Groovy build step and pass ${BUILD_ID} and the path to your HTML template file as parameters. In the build step itself, write code that reads the parameters via args[0] and args[1], and then employs SimpleTemplateEngine to render the template.
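A minimal sketch of such a build step (it assumes the template path and the build number arrive as the script parameters, and that the template references ${BUILD_NUMBER}):

import groovy.text.SimpleTemplateEngine

def templateText = new File(args[0]).text        // e.g. index.html.template
def binding      = [BUILD_NUMBER: args[1]]       // names used in the template

def engine = new SimpleTemplateEngine()
def html   = engine.createTemplate(templateText).make(binding).toString()

new File('index.html').write(html)               // output path assumed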
I was going to go the Groovy route as suggested (which is a good idea), however instead I took advantage of the fact my build server is on a *Nix OS and instead wrote a line of sed to do the job using the Shell build step.
sed -e "s/BUILD_NUMBER/${BUILD_NUMBER}/g" "${WORKSPACE}/index.html.template" > "${WORKSPACE}/index.html"
It just replaces every occurrence of the text BUILD_NUMBER inside my template file with the Jenkins/Hudson build number. Quick and dirty, but it works.
I see the documentation in GitHub Actions for running a PowerShell script in a step. I need to increment a version number and check in a versions.txt file in GitHub Actions using PowerShell. Can I accomplish this by running a PowerShell script from my folder and by using git.exe and PowerShell commands?
How do I set a variable from the PowerShell script so that the version number can be used in subsequent steps?
There are two questions in your question, so I will answer them in reverse order.
To have one step (for example, a shell script) set a value for subsequent steps, you can use the special workflow commands, which work by writing specially formatted strings to STDOUT.
In your case, you are looking for Setting an Output Parameter:
echo "::set-output name=version::1.2.3"
This can then be read by subsequent steps, for example:
env:
  VERSION: ${{ steps.the-other-step-id.outputs.version }}
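Put together, a minimal sketch of the two steps might look like this (the step id, the pwsh shell, and the hard-coded version are illustrative):

steps:
  - id: bump
    shell: pwsh
    run: |
      $version = "1.2.3"   # compute your real version here
      Write-Output "::set-output name=version::$version"
  - shell: pwsh
    env:
      VERSION: ${{ steps.bump.outputs.version }}
    run: Write-Output "Version is $env:VERSION"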
As for having GitHub Actions modify code and then check it in, I would strongly advise against it, as it will (a) complicate your workflow and (b) risk creating "workflow loops", where action-originated check-ins trigger this or other workflows.
For version tagging, I suggest you use tags, as that is the almost universally accepted way to mark versions in source control. You might want a local script that both tags and updates your versions.txt, so that your workflow just reads the file or the tag.
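For example, a hypothetical local release helper in PowerShell (file name, tag format, and remote are assumptions) could look like:

# release.ps1 -- run locally, e.g.: .\release.ps1 1.2.3
param([Parameter(Mandatory)][string]$Version)

Set-Content -Path versions.txt -Value $Version   # update the version file
git add versions.txt
git commit -m "Release $Version"
git tag "v$Version"                              # the tag marks the version
git push origin HEAD "v$Version"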
Is it possible to get serialized output of some sort from the /dashboard/projects screen in GitLab?
(I want to track differences and alert myself when someone assigns me a new project. One option is of course to build a script that iterates through the HTML pages, but if there's a way to get all projects at once -- preferably in a machine-friendly format -- that's even better.)
I think this kind of alert is usually not strictly needed, because the assignment workflow is usually about issue/MR assignment (which typically ends up as an email in your inbox). Anyway:
You should take a look at the GitLab API or, even better, use an already existing project like Python GitLab.
It is a Python client implementation of the GitLab API and also has a handy gitlab command-line tool that can give you the required data in a human/machine-readable format.
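For instance, a minimal sketch that lists every project you are a member of (the URL and token are placeholders):

import gitlab

# Point these at your instance and a personal access token.
gl = gitlab.Gitlab("https://gitlab.example.com", private_token="YOUR_TOKEN")

# membership=True restricts the list to projects you belong to;
# all=True pages through the complete result set.
for project in gl.projects.list(membership=True, all=True):
    print(project.id, project.path_with_namespace)

Diffing that output between runs would reveal newly assigned projects.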
I'm not entirely sure how to define an Erlang function within an Erlang module. I'm getting the following error:
11> invoke_record:invoke().
** exception error: undefined function erlang:rr/1
The error comes from this simple code, which tries to invoke rr(?MODULE). from within the compiled module in order to "initialize" records, so that it doesn't have to be called from the shell every time.
-module(invoke_record).
-export([invoke/0]).
-record(process, {pid,
                  reference = "",
                  lifetime = 0}).

invoke() ->
    erlang:rr(?MODULE).
The command rr("file.hrl"). is meant to be used only in the shell, for debugging purposes.
As other users highlighted in their answers, the correct way to make a record (or a function) contained in a .hrl file available within your Erlang code is to use the directive -include("file.hrl").
Once you have included the .hrl file in your code (in a module based on OTP behaviours this is usually done after the -export(...) part), you can refer to the Erlang record (or function) without any problem.
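For example, a minimal sketch (the header file name is an assumption) moves the record into a .hrl and includes it:

%% process.hrl
-record(process, {pid, reference = "", lifetime = 0}).

%% invoke_record.erl
-module(invoke_record).
-export([invoke/0]).
-include("process.hrl").

invoke() ->
    %% The record is available at compile time; no rr/1 call is needed.
    #process{pid = self()}.

The shell can still load the same definitions for debugging with rr("process.hrl").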
rr is a shell command. You cannot use it in compiled code.
http://www.erlang.org/doc/man/shell.html
If your intent is to read many record definitions in the shell, in order to facilitate debugging, you can write a file containing all the needed include statements and simply invoke rr once in the shell.
in rec.hrl:
-include("include/bank.hrl").
-include("include/reply.hrl").
and in the shell:
1> rr("rec.hrl").
[account,reply]
2>
I didn't find any way to execute this automatically when starting the VM.
When working on a project, you can gather all the necessary includes and other command-line arguments for that particular project in a plain text file. Once you have created it, you can start your shell with:
erl -args_file FileName
where FileName is the name of the plain text file. Note that all command-line arguments accepted by erl are allowed; see also erl Flags in the ERTS Reference Manual.
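For example, a hypothetical dev.args file (every entry is an ordinary erl flag) might contain:

-pa ebin
-pa deps/somelib/ebin
-sname devshell
-env ERL_LIBS deps

and erl -args_file dev.args then applies all of them at once.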
I'm starting a Python script from a Hudson job. The script is started through 'Execute Windows batch command' in the build section as 'python my_script.py'.
Now I need to get some data created by the script back to Hudson and add it to the fail/success emails. My current approach is that the Python script writes data to stderr, which the batch reads into a temp file and then into an environment variable. I can see the environment variable correctly right after the script execution (using the set command), but in the post-build actions it is not visible any more. The email sending is probably done in a different process, so the variables are no longer visible. I'm accessing the env vars in the email as ${ENV, varname} (or actually, in debug mode, as $ENV to print them all).
Is there a way to make the environment variable global inside Hudson?
Or can someone provide a better solution for getting data back from Python script to Hudson.
All the related parts (Hudson, batch and Python script) are under my control and can be modified as needed.
Thanks.
Every build step gets its own shell. This implies that your environment variables are only valid within that build step.
You can just write the data in a nice format to the standard output (use a header that is easy to identify), and if the job fails, the data output gets attached to the email.
If you want to include only the data, you can use the following token with the Editable Email Notification post-build action (Email-ext plugin):
${BUILD_LOG_REGEX, regex, linesBefore, linesAfter, maxMatches, showTruncatedLines, substText}
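For example, if my_script.py prints its results with a recognizable prefix (the DATA: convention here is just an assumption), a token instantiation along these lines would copy only those lines into the email:

${BUILD_LOG_REGEX, regex="^DATA: .*$", maxMatches=0, showTruncatedLines=false}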
I am trying to write a script that will parse a local file and upload its contents to a MySQL database. Right now, I am thinking that a batch script that runs a Perl script would work, but am not sure if this is the best method of accomplishing this.
In addition, I would like this script to run immediately when the data file is added to a certain directory. Is this possible in Windows?
Thoughts? Feedback? I'm fairly new to Perl and Windows batch scripts, so any guidance would be appreciated.
You can use Win32::ChangeNotify. Your script will be notified when a file is added to the target directory.
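A rough sketch (the directory and the launched script are placeholders):

use strict;
use warnings;
use Win32::ChangeNotify;

# Watch C:\incoming and its subtree (second argument = 1) for
# file-name changes such as newly created files.
my $notify = Win32::ChangeNotify->new('C:\\incoming', 1, 'FILE_NAME')
    or die "Cannot watch directory: $^E";

while (1) {
    $notify->wait;                          # blocks until something changes
    $notify->reset;                         # re-arm before handling the change
    system('perl', 'upload_to_mysql.pl');   # rescan the folder / load the file
}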
Checking a folder for newly created files can be implemented using the WMI functionality. Namely, you can create a Perl script that subscribes to the __InstanceCreationEvent WMI event that traces the creation of the CIM_DirectoryContainsFile class instances. Once that kind of event is fired, you know a new file has been added to the folder and can process it as you need.
These articles provide more information on the subject and contain VBScript code samples (a rough Perl conversion is sketched after the links):
How Can I Automatically Run a Script Any Time a File is Added to a Folder?
WMI and File System Monitoring
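An untested Perl sketch of the same pattern via Win32::OLE (the folder path is a placeholder; WMI object paths need their backslashes escaped twice, hence the pile-up):

use strict;
use warnings;
use Win32::OLE;

# Connect to the local WMI service.
my $wmi = Win32::OLE->GetObject(
    'winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\cimv2')
    or die 'WMI connection failed: ' . Win32::OLE->LastError;

# Poll every 5 seconds for files created in c:\incoming.
my $events = $wmi->ExecNotificationQuery(
      q{SELECT * FROM __InstanceCreationEvent WITHIN 5 WHERE }
    . q{TargetInstance ISA 'CIM_DirectoryContainsFile' AND }
    . q{TargetInstance.GroupComponent = }
    . q{'Win32_Directory.Name="c:\\\\\\\\incoming"'});

while (1) {
    my $event = $events->NextEvent;   # blocks until a matching file appears
    print 'New file: ', $event->{TargetInstance}{PartComponent}, "\n";
}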
The function you want is ReadDirectoryChangesW. A quick search for a Perl wrapper yields the Win32::ReadDirectoryChanges module.
Your script would look something like this:
use Win32::ReadDirectoryChanges;

# $path and $filter are assumed to be set up beforehand.
my $rdc = Win32::ReadDirectoryChanges->new(
    path    => $path,
    subtree => 1,
    filter  => $filter,
);

while (1) {
    # read_changes blocks, then returns (action, filename) pairs.
    my @results = $rdc->read_changes;
    while (scalar @results) {
        my ($action, $filename) = splice(@results, 0, 2);
        # ... run script ...
    }
}
You can easily achieve this in Perl using File::ChangeNotify. This module can be found on CPAN: http://search.cpan.org/dist/File-ChangeNotify/lib/File/ChangeNotify.pm
You can run the code as a daemon or as a service, make it watch one or more directories and then automatically execute some code (or start up a script) if some condition matches.
Best of all, it's cross-platform, so should you want to switch to a Linux machine or a Mac, it would still work.
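A minimal sketch (the directory, filter, and the invoked loader script are placeholders):

use strict;
use warnings;
use File::ChangeNotify;

# Watch the drop directory for new .txt data files.
my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['C:/incoming'],
    filter      => qr/\.txt$/,
);

# wait_for_events blocks until something happens, then returns events.
while ( my @events = $watcher->wait_for_events ) {
    for my $event (@events) {
        next unless $event->type eq 'create';
        system( 'perl', 'upload_to_mysql.pl', $event->path );
    }
}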
It wouldn't be too hard to put together a small C# application that uses the FileSystemWatcher class to detect files being added to a folder and then spawn the required script. It would certainly use less CPU / system resources / hard disk bandwidth than polling the folder at regular intervals.
You need to consider what is a sufficient heuristic for determining "modified".
In increasing order of cost and accuracy:
file size (file content can still be changed as long as size is maintained)
file timestamp (if you aren't running ntpd, time is not monotonic)
file sha1sum (bulletproof but expensive)
I would run ntpd, loop over the timestamps, and compare the checksum only when a timestamp changes. This covers a lot of ground in little time.
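A sketch of that strategy in Perl (the in-memory cache and names are illustrative):

use strict;
use warnings;
use Digest::SHA;

my %seen;   # path => [ mtime, sha1 ] from the previous pass

sub file_changed {
    my ($path) = @_;
    my $mtime = ( stat $path )[9];
    my $prev  = $seen{$path};

    # Cheap test first: an unchanged timestamp is assumed unchanged.
    return 0 if $prev && $prev->[0] == $mtime;

    # Timestamp moved (or file is new): confirm with a checksum.
    my $sha1 = Digest::SHA->new('sha1')->addfile($path)->hexdigest;
    my $changed = !$prev || $prev->[1] ne $sha1;
    $seen{$path} = [ $mtime, $sha1 ];
    return $changed;
}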
These methods are not appropriate for a computer-security application; they are for file management on a sane system.