How to trigger manual clean of Hudson workspaces

We have a Hudson cluster with eight nodes. When a particular code branch is no longer in active use, we disable the build job, but the workspaces for that job still hang around taking up space on all the nodes.
I am looking for a way to trigger workspace cleanup across all nodes. Note that I am not looking for a "clean workspace before build" solution.

You do not need to write a plugin. You can create a job that uses the Groovy plugin to run a system Groovy script. The job would run, say, nightly; it would identify disabled projects and erase their workspaces. Your script taps into the Hudson Model API. There is also a Groovy script console at http://<hudson-server>/script that is very useful for debugging.
Here is a code snippet that should be of direct benefit to you. Run it in the script console and examine the output:
def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each { job ->
    println(job.displayName)
    println(job.isDisabled())
    println(job.workspace)
}
You may also find the code snippets in this answer useful. They refer to the Jenkins API, but at this level I do not think there is a difference between Jenkins and Hudson.
Update:
Here's how you can do it on multiple slaves: create a multi-configuration job (also called a "matrix job") that runs on all the slaves. On each slave, the following system Groovy script will print, for every job, its workspace on that slave (as well as the enabled/disabled flag):
def hi = hudson.model.Hudson.instance
def thr = Thread.currentThread()
def build = thr?.executable
def node = build.executor.owner.node
hi.getItems(hudson.model.Job).each { job ->
    println("---------")
    println(job.displayName)
    println(job.isDisabled())
    println(node.getWorkspaceFor(job))
}
As the script runs on the slave itself, you can wipe out the workspace directly from it. Of course, the workspace may not exist, but that's not a problem. Note that you write the script only once - Jenkins will run it on all the slaves you specify in the matrix job automatically.
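For example, here is a minimal sketch of that deletion step as a system Groovy build step inside the matrix job. It assumes the default workspace layout (no custom workspaces) and is meant as a starting point, not a finished solution:
def hi = hudson.model.Hudson.instance
def thr = Thread.currentThread()
def build = thr?.executable
def node = build.executor.owner.node
hi.getItems(hudson.model.Job).each { job ->
    if (job.isDisabled()) {
        def ws = node.getWorkspaceFor(job)
        // The workspace may simply not exist on this slave
        if (ws != null && ws.exists()) {
            ws.deleteRecursive()
            println("Wiped " + ws.getRemote() + " for " + job.displayName)
        }
    }
}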

I have tried the following script and it works for a single node:
def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each { job ->
    if (job.isDisabled()) {
        println(job.displayName)
        job.doDoWipeOutWorkspace()
    }
}
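A possible extension of the same loop to all slaves (a sketch, assuming the slaves are online so their file systems are reachable over remoting) is to iterate the nodes explicitly from the master's script console instead of calling doDoWipeOutWorkspace():
def hi = hudson.model.Hudson.instance
hi.getItems(hudson.model.Job).each { job ->
    if (job.isDisabled()) {
        // getNodes() returns the slaves; the master is not included
        for (node in hi.nodes) {
            def ws = node.getWorkspaceFor(job)
            // getWorkspaceFor() returns null if the node is offline
            if (ws != null && ws.exists()) {
                ws.deleteRecursive()
                println("Wiped " + ws.getRemote() + " on " + node.displayName)
            }
        }
    }
}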

The following Groovy script wipes the workspaces of certain jobs on all nodes. Execute it from the master's script console at "Jenkins host"/computer/(master)/script.
In the TODO part, change the job name to the one that you need.
import hudson.model.*
// For each job
for (item in Hudson.instance.items)
{
    jobName = item.getFullDisplayName()
    // Check that the job is not building
    if (!item.isBuilding())
    {
        // TODO: Modify the following condition to select which jobs to affect
        if (jobName == "MyJob")
        {
            println("Wiping out workspaces of job " + jobName)
            customWorkspace = item.getCustomWorkspace()
            println("Custom workspace = " + customWorkspace)
            for (node in Hudson.getInstance().getNodes())
            {
                println("  Node: " + node.getDisplayName())
                workspacePath = node.getWorkspaceFor(item)
                if (workspacePath == null)
                {
                    println("  Could not get workspace path")
                }
                else
                {
                    if (customWorkspace != null)
                    {
                        workspacePath = node.getRootPath().child(customWorkspace)
                    }
                    pathAsString = workspacePath.getRemote()
                    if (workspacePath.exists())
                    {
                        workspacePath.deleteRecursive()
                        println("  Deleted from location " + pathAsString)
                    }
                    else
                    {
                        println("  Nothing to delete at " + pathAsString)
                    }
                }
            }
        }
    }
    else
    {
        println("Skipping job " + jobName + ", currently building")
    }
}

It's a bit late, but I ran into the same problem. My script checks whether at least 2 GB of space is available; if not, all workspaces on the node are cleared to free space.
import hudson.FilePath.FileCallable
import hudson.slaves.OfflineCause

for (node in Jenkins.instance.nodes) {
    computer = node.toComputer()
    if (computer.getChannel() == null) continue
    rootPath = node.getRootPath()
    size = rootPath.asCallableWith({f, c -> f.getUsableSpace()} as FileCallable).call()
    roundedSize = size / (1024 * 1024 * 1024) as int
    println("node: " + node.getDisplayName() + ", free space: " + roundedSize + "GB")
    if (roundedSize < 2) {
        computer.setTemporarilyOffline(true, [toString: {"disk cleanup"}] as OfflineCause)
        for (item in Jenkins.instance.items) {
            jobName = item.getFullDisplayName()
            if (item.isBuilding()) {
                println(".. job " + jobName + " is currently running, skipped")
                continue
            }
            println(".. wiping out workspaces of job " + jobName)
            workspacePath = node.getWorkspaceFor(item)
            if (workspacePath == null) {
                println(".... could not get workspace path")
                continue
            }
            println(".... workspace = " + workspacePath)
            customWorkspace = item.getCustomWorkspace()
            if (customWorkspace != null) {
                workspacePath = node.getRootPath().child(customWorkspace)
                println(".... custom workspace = " + workspacePath)
            }
            pathAsString = workspacePath.getRemote()
            if (workspacePath.exists()) {
                workspacePath.deleteRecursive()
                println(".... deleted from location " + pathAsString)
            } else {
                println(".... nothing to delete at " + pathAsString)
            }
        }
        computer.setTemporarilyOffline(false, null)
    }
}

I was recently also looking to clean up my Jenkins workspaces, but with a little twist: I wanted to remove only the workspaces of jobs that no longer exist. This is because Jenkins does not get rid of a workspace when you delete the job, which is pretty annoying.
And we only use a master at the moment, no separate nodes.
I found a script somewhere (can't find the link anymore) and tweaked it a bit for our usage, putting it in a Jenkins job with an 'Execute system Groovy script' build step, running daily:
import hudson.FilePath
import jenkins.model.Jenkins
import hudson.model.Job

def deleteUnusedWorkspace(FilePath root, String path) {
    root.list().sort{child -> child.name}.each { child ->
        String fullName = path + child.name
        def item = Jenkins.instance.getItemByFullName(fullName);
        println "Checking '$fullName'"
        try {
            if (item.class.canonicalName == 'com.cloudbees.hudson.plugins.folder.Folder') {
                println "-> going deeper into the folder"
                deleteUnusedWorkspace(root.child(child.name), "$fullName/")
            } else if (item == null) {
                // This code is never reached; non-existing projects generate an exception
                println "Deleting (no such job): '$fullName'"
                child.deleteRecursive()
            } else if (item instanceof Job && !item.isBuildable()) {
                // Don't remove the workspace for disabled jobs!
                //println "Deleting (job disabled): '$fullName'"
                //child.deleteRecursive()
            }
        } catch (Exception exc) {
            println "  Exception happened: " + exc.message
            println "  So we delete '" + child + "'!"
            child.deleteRecursive()
        }
    }
}

println "Beginning of cleanup script."

// Loop over possible slaves
for (node in Jenkins.instance.nodes) {
    println "Processing $node.displayName"
    def workspaceRoot = node.rootPath.child("workspace");
    deleteUnusedWorkspace(workspaceRoot, "")
}

// Do the master itself
deleteUnusedWorkspace(Jenkins.instance.rootPath.child("workspace"), "")

println "Script has completed."
Might need some individual tweaking though.
Obviously you should run this script with all delete statements commented out first, and make sure you have a backup before doing an actual run.

It sounds like you are looking for a "delete workspace when disabling the build" solution. You could write a Hudson plugin to do this, but that is probably overkill.
If I had to do this (which I wouldn't, as we don't have a disk space shortage), I would write a Unix script to find all disabled jobs under the Hudson home directory - a job is represented by an XML file. The script would then delete the workspace of any match. And I'd probably set it up in cron so it runs nightly or weekly or whatever is appropriate in the environment.
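A rough sketch of that idea as a standalone Groovy script (so it can run from cron outside Hudson): it assumes the classic layout where each job lives under $HUDSON_HOME/jobs/<name>/ with its config.xml and workspace directory side by side, and it only covers workspaces on the master; the fallback path below is purely illustrative:
// Point this at your actual Hudson home; '/var/lib/hudson' is just an example
def hudsonHome = new File(System.getenv('HUDSON_HOME') ?: '/var/lib/hudson')
new File(hudsonHome, 'jobs').eachDir { jobDir ->
    def config = new File(jobDir, 'config.xml')
    if (!config.exists()) return
    // A disabled job has <disabled>true</disabled> in its config.xml
    if (new XmlSlurper().parse(config).disabled.text() == 'true') {
        def workspace = new File(jobDir, 'workspace')
        if (workspace.exists()) {
            println('Deleting workspace of disabled job ' + jobDir.name)
            workspace.deleteDir()
        }
    }
}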

Related

Google Apps Script: get the name of the function that is currently running

I'm trying to get Google Apps Script to log the current function that is running. I have something similar in Python, like so:
code_obj = sys._getframe(1).f_code
file = os.path.basename(code_obj.co_filename).split('.')[0]
Def = code_obj.co_name
line = str(sys._getframe(1).f_lineno)
try:
    Class = get_class_from_frame(sys._getframe(1)).__name__
except:
    Class = ''
return "--> " + file + "." + Class + "()." + Def + "()." + line + " --> " + now() + '\n'
Anybody know how to get the name of the current running function?
As in plain JavaScript, you can retrieve the current function via arguments.callee and then read the function's name property:
callee is a property of the arguments object. It can be used to refer to the currently executing function inside the function body of that function.
function myFunctionName() {
    const functionName = arguments.callee.name;
    // ...
}
Important note:
Since ES5, arguments.callee is forbidden in strict mode. This can cause an error in cases where the script is using strict mode behind the scenes (see, for example, this):
TypeError: 'caller', 'callee', and 'arguments' properties may not be accessed on strict mode functions or the arguments objects for calls to them
I've noticed this happening, for example, when using default parameters or rest parameters.
Unfortunately, there's not an easy alternative to arguments.callee:
How to get function name in strict mode [proper way]
Get current function name in strict mode
There's a nice way to get all function names and lines called in Google Apps Script using:
(new Error()).stack
Please try:
function test() {
    console.log(f_(100));
}

function f_(n) {
    console.log(stack2lines_((new Error()).stack))
    return n + 1;
}

function stack2lines_(stack) {
    var re1 = new RegExp('at ([^ ]+) \\(Code\\:(\\d+).*', 'gm');
    var re2 = new RegExp('\\{.*\\}', 'gm');
    var result = stack
        .replace(re1, '{"$1": $2}')
        .match(re2);
    return result;
}
The result:
[ '{"f_": 6}', '{"test": 2}' ]
Source:
https://script.google.com/u/0/home/projects/1BbaDeja4ObWMwbVnhCJ29FHsNfQm-SfAPYvpqobh2B96TA5449GnGnHc/edit

Intermittently: Couchbase Save Not Happening

I am using the Couchbase Java SDK client 2.7.11 with Couchbase 6.0 Community Edition. When performing an upsert I get a success response, but when I fetch the document or look through the Couchbase UI, it's not available.
// getClient() returns a "com.couchbase.client.java.Bucket" instance
private static final RetryWhenFunction RETRY_POLICY =
    RetryBuilder.anyOf(TimeoutException.class,
            TemporaryFailureException.class,
            RequestCancelledException.class,
            BackpressureException.class,
            CASMismatchException.class)
        .delay(Delay.exponential(TimeUnit.MILLISECONDS, 50))
        .max(3)
        .build();

int expiryTime = (int) (Instant.now().getEpochSecond() + (10 * 60));
StringDocument document = StringDocument.create("ABC_Test", expiryTime, "SomeValue");
StringDocument savedDocument = getClient().async().upsert(document).retryWhen(RETRY_POLICY)
    .doOnError(exception -> {
        String msg = "Unable to update a document = " + exception.getMessage();
        LOGGER.error(() -> msg);
    })
    .doOnCompleted(() -> LOGGER.debug(() -> "Successfully saved document with key \"" + key))
    .doAfterTerminate(() -> LOGGER.debug(() -> "Processing save document with key \"" + key + "\" completed."))
    .toBlocking()
    .singleOrDefault(null);
if (savedDocument == null) {
    LOGGER.error(() -> "Document with id couldn't be saved: " + key);
} else {
    LOGGER.debug(() -> "Saved document: \n" + savedDocument);
}
I faced a similar issue when trying to use queuePush: the queue push gave me a success response, but the queue pop says the queue itself doesn't exist. I intend to use both within, say, the next 5 seconds. I do not have any load test running that could point towards async delay behavior.
// expirationTime is quite far in the future
getClient().async()
    .queuePush(queueName, queueElement, MutationOptionBuilder.builder().createDocument(true).expiry(expirationTime))
    .retryWhen(RETRY_POLICY)
    .doOnError(exception -> LOGGER.error(() -> "Unable to add element '" + queueElement + "' in queue '" + queueName +
        "' Exception = " + exception.getMessage()))
    .doOnCompleted(() -> LOGGER.debug(() -> "Successfully saved document in queue " + queueName))
    .doAfterTerminate(latch::countDown).subscribe();
Both of the above scenarios have been noticed intermittently. Could you please suggest how to diagnose this? Does the Community Edition have a way to enable document-level auditing?
I have posted a similar question on the Couchbase forum too, to bring it to a bigger audience and get the right direction: https://forums.couchbase.com/t/intermittently-couchbase-save-not-happening/28006
Thank you in advance.
Regards

ffmpeg Azure Function consumption plan: low CPU availability for high volume requests

I am running an Azure queue function on a consumption plan; my function starts an FFmpeg process and accordingly is very CPU intensive. When I run the function with fewer than 100 items in the queue at once it works perfectly: Azure scales up, gives me plenty of servers, and all of the tasks complete very quickly. My problem is that once I start doing more than 300 or 400 items at once, it starts fine, but after a while CPU utilisation slowly drops from around 80% to only around 10% - my functions can't finish in time with only 10% CPU.
Does anyone know why the CPU usage goes lower the more instances my function creates? Thanks in advance, Cuan
Edit: the function is set to run only one execution at a time per instance, but the problem persists when set to 2 or 3 concurrent processes per instance in host.json.
Edit: the CPU drops become noticeable at 15-20 servers and start causing failures at around 60. After that the CPU bottoms out at an average of 8-10%, with individual instances reaching 0-3%, and the server count seems to increase without limit (which would be more helpful if I got some CPU with the servers).
Thanks again, Cuan.
I've also added the function code to the bottom of this post in case it helps.
using System.Net;
using System;
using System.Diagnostics;
using System.ComponentModel;

public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed a request: {myQueueItem}");
    // Basic parameters
    string ffmpegFile = @"D:\home\site\wwwroot\CommonResources\ffmpeg.exe";
    string outputpath = @"D:\home\site\wwwroot\queue-ffmpeg-test\output\";
    string reloutputpath = "output/";
    string relinputpath = "input/";
    string outputfile = "video2.mp4";
    string dir = @"D:\home\site\wwwroot\queue-ffmpeg-test\";
    // Special parameters
    string videoFile = "1 minute basic.mp4";
    string sub = "1 minute sub.ass";
    // guid tmp files
    // Guid g1 = Guid.NewGuid();
    // Guid g2 = Guid.NewGuid();
    // string f1 = g1 + ".mp4";
    // string f2 = g2 + ".ass";
    string f1 = videoFile;
    string f2 = sub;
    // guid output - we will now do this at the caller level
    string g3 = myQueueItem;
    string outputGuid = g3 + ".mp4";
    // get input files
    // argument
    string tmp = subArg(f1, f2, outputGuid);
    // String.Format("-i \"" + @"input/tmp.mp4" + "\" -vf \"ass = '" + sub + "'\" \"" + reloutputpath + outputfile + "\" -y");
    log.Info("ffmpeg argument is: " + tmp);
    // process start parameters
    Process process = new Process();
    process.StartInfo.FileName = ffmpegFile;
    process.StartInfo.Arguments = tmp;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.WorkingDirectory = dir;
    // output handler
    process.OutputDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("O: " + e.Data);
        }
    );
    process.ErrorDataReceived += new DataReceivedEventHandler(
        (s, e) =>
        {
            log.Info("E: " + e.Data);
        }
    );
    // start process
    process.Start();
    log.Info("process started");
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();
}

public static void getFile(string link, string fileName, string dir, string relInputPath)
{
    using (var client = new WebClient())
    {
        client.DownloadFile(link, dir + relInputPath + fileName);
    }
}

public static string subArg(string input1, string input2, string output1)
{
    return String.Format("-i \"" + @"input/" + input1 + "\" -vf \"ass = '" + @"input/" + input2 + "'\" \"" + @"output/" + output1 + "\" -y");
}
When you use the D:\home directory you are writing to the file share that all instances of the function app have mounted, which means every instance is continually trying to write to the same spot as the functions run, causing a massive I/O block. Writing to D:\local and then sending the finished file somewhere else solves that issue: rather than each instance constantly writing to a shared location, they only write when completed, and they write to a location designed to handle high throughput.
The easiest way I could find to manage the input and output after writing to D:\local was to hook the function up to an Azure storage container and handle the ins and outs that way. Doing so kept the average CPU at 90-100% for upwards of 70 concurrent instances.

[Qt][QMYSQL] Deployed app - Driver not loaded

First of all, many thanks to those who will take the time to help me on this topic. I've searched a lot on many different forums before posting here, but it seems I'm missing something.
I'm working on Windows 7 (64-bit) with Qt 5.5 / MySQL Server 5.6, and I use MinGW 5.5.0 32-bit in Qt Creator (auto-detected).
It's not a matter of building the drivers - that's done and it works perfectly in development! :-)
I can reach my DB, run any query that I want, and retrieve/insert all the data.
My problem is deploying the application on other computers.
I know that I have to put qsqlmysql.dll in a folder "sqldrivers" placed in my app directory, as well as placing libmysql.dll in the app directory itself.
So I have something like the following:
App directory
    App.exe
    libmysql.dll
    Qt5Core.dll
    Qt5Gui.dll
    Qt5Sql.dll
    Qt5Widgets.dll
    libwinpthread-1.dll
    libstdc++-6.dll
    libgcc_s_dw2-1.dll
    platforms
        qwindows.dll
    sqldrivers
        qsqlmysql.dll
BUT when I release the application and try to run it on another computer than the one I developed on, I get a "Driver not loaded" error...
So far, I really have no idea what I've missed...
So please, if anyone could give me some hints, it would be really appreciated!
Here is the part of the code that is really relevant, just in case...
main.cpp
QApplication a(argc, argv);
Maintenance w;
w.show();
return a.exec();
Maintenance.cpp
void Maintenance::login() {
    int db_select = 1;
    this->maint_db = Database(db_select);
    /* All that follows is linked to the login of the user... */
}

Database.cpp

Database::Database(int default_db)
{
    this->db = QSqlDatabase::addDatabase("QMYSQL");
    switch (default_db) {
    case 0:
        this->db.setHostName("XXX.XX.XXX.XX");
        this->db.setDatabaseName("maintenance_db");
        this->db.setUserName("USERNAME");
        this->db.setPassword("PASSWORD");
        this->db.setPort(3306);
        break;
    // Only to make some trials locally
    case 1:
        this->db.setHostName("127.0.0.1");
        this->db.setDatabaseName("maintenance_db");
        this->db.setUserName("USERNAME");
        this->db.setPassword("PASSWORD");
        break;
    }
    /* I've added the following code to try to solve the problem.
       I find that the available drivers are: QMYSQL / QMYSQL3,
       but all the information about the DB is empty (due to the unloaded driver, I assume)
       and the error from lastError() is "Driver not loaded". */
    QString my_drivers;
    for (int i = 0; i < QSqlDatabase::drivers().length(); i++) {
        my_drivers = my_drivers + " / " + QSqlDatabase::drivers().at(i);
    }
    QString lib_path;
    for (int i = 0; i < QApplication::libraryPaths().length(); i++) {
        lib_path = lib_path + " / " + QApplication::libraryPaths().at(i);
    }
    QString start = QString::number(QCoreApplication::startingUp());
    QMessageBox::information(0, "BDD init",
        "Drivers available: " + my_drivers
        + "\nHostname: " + this->db.hostName()
        + "\nDB name: " + this->db.databaseName()
        + "\nUsername: " + this->db.userName()
        + "\nPW: " + this->db.password()
        + "\n\n" + lib_path + "\n" + start
    );
    if (this->db.isOpen()) {
        QMessageBox::information(0, "BDD init", "Already open.");
    }
    else {
        if (this->db.open()) {
            QMessageBox::information(0, "BDD init", "Opened.");
        }
        else {
            QMessageBox::critical(0, "BDD init", "Not opened.\n" + this->db.lastError().text());
        }
    }
}
There are at least 3 possible solutions:
Check that all .dll paths are correct with your favourite process monitor
Make sure every .dll has the same architecture as your .exe, which is x86 (32-bit)
Debug with QPluginLoader
The simplest way to create a "deploy" folder for your Windows Qt5 application is to use the windeployqt tool: create an empty directory, copy your app.exe file into it, and then run windeployqt app.exe.
Check out the docs: http://doc.qt.io/qt-5/windows-deployment.html#the-windows-deployment-tool

"CalendarApp: Mismatch: etags" when adding reminders - Google Apps Scripts

I have a small Google Apps Script that processes a date column in a spreadsheet and generates entries in a calendar (birthdays).
It works fine, but when adding reminders to the (recently created) CalendarEvent, an error is thrown:
Service error: CalendarApp: Mismatch: etags = ["GUQKRgBAfip7JGA6WhJb"], version = [63489901413]
I've tried a 1-second sleep after creating the event (to wait for the changes to be committed to the calendar), but no luck...
BTW, the events are created successfully; only the reminders cannot be added.
PS: the calendar is one I own, but not my primary calendar.
Here is part of the code:
try
{
    birthday = new Date(Data[i][BirthColumn]);
    birthday.setFullYear(today.getFullYear());
    birthday.setUTCHours(12);
    birthlist += Data[i][NameColumn] + " --> " + birthday + "\n";
    calendarevent = cal.createAllDayEventSeries("¡Cumpleaños " + Data[i][NameColumn] + "!", birthday, CalendarApp.newRecurrence().addYearlyRule().times(YearsInAdvance));
    if (calendarevent == null)
        success = false;
    else
    {
        // This sentence fails every single time.
        calendarevent.addEmailReminder(0);
        calendarevent.addPopupReminder(0);
        calendarevent.addSmsReminder(0);
    }
}
catch (ee)
{
    var row = i + 1;
    success = false;
    errlist += "Error on row " + row + ": check name and birth date. Exception Error: " + ee.message + "\n";
}
This is the portion of the code I finally changed to make it work, as serge insas suggested below:
if (calendarevent == null)
    success = false;
else
{
    cal.getEventSeriesById(calendarevent.getId()).addEmailReminder(0);
    cal.getEventSeriesById(calendarevent.getId()).addPopupReminder(0);
    cal.getEventSeriesById(calendarevent.getId()).addSmsReminder(0);
}
This is a known issue.
See comment nr 67 for a working workaround: the trick is to re-fetch the event for every item you want to add (reminder, popup...) using cal.getEventSeriesById(eventID), after getting the ID simply with .getId().
I use it in some scripts and it solved the issue for me.