limiting the number of times fswatch runs - fswatch

I have fswatch set up on a directory, which triggers a script to refresh my browser every time a file changes. It works, but if a bunch of files are added or deleted in a single shot, the browser can keep refreshing for a very long time before it stops.
Looking at the documentation, it seems --batch-marker might be what I need, but it isn't clear from the documentation how I would use it to limit how many times my script is triggered.
UPDATE: here is my current fswatch command:
fswatch -v -o . | xargs -n1 -I{} ~/bin/refresh.sh
UPDATE: I'm on a mac using the FSEvents monitor.
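One possible way to batch the refreshes, sketched under the assumption that fswatch's -o (print one event count per batch) and -l/--latency options behave as documented; the 5-second window is an arbitrary choice:
fswatch -o -l 5 . | while read -r count
do
    # -o prints a single line (the event count) per batch, and -l 5 widens
    # the batch window to 5 seconds, so a burst of added/deleted files
    # triggers at most one refresh per window.
    ~/bin/refresh.sh
done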

Related

No syntax highlighting with org-html-export-to-html when executing with systemd service

I have a bash script that finds emacs .org files in a given directory and exports them to HTML. I understand that org-mode makes use of htmlize.el to color the output of text in SRC blocks, which seems to work fine when executed from the command line, both as root and as a normal user. However, when using systemd timers to automate this task, the output is no longer colored.
for i in `find /home/user/dir -name '*.org'`
do
    emacs $i --batch -l /home/user/.emacs -f org-html-export-to-html --kill
done
I previously had problems getting the syntax highlighting to work when executing the script directly; those were solved when -l /home/user/.emacs was added, as shown in the excerpt above (publishNotes.sh).
Everything apart from the syntax highlighting seems to be working fine, which indicates that both the systemd service and the executed script itself run according to the timer.
Service:
[Unit]
Description=Update website
[Service]
Type=simple
ExecStart=/home/user/bin/publishNotes.sh
Timer:
[Unit]
Description=Run every hour
[Timer]
OnCalendar=hourly
Unit=publishNotes.service
[Install]
WantedBy=multi-user.target
Thanks!
I would guess that this is because something loads differently when run as root than when run under your user account. Exactly what is hard to say from the information given. However, my first suggestion would be to try running the service as your user: add the User=<username> key to the [Service] section of the service, and check whether it behaves as you expect.
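For example, with a hypothetical username youruser, the service would become:
[Service]
Type=simple
User=youruser
ExecStart=/home/user/bin/publishNotes.sh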

Automate a command that needs to be run with thousands of different arguments

In my project I need to upload a big file (~250GB) to a remote server, and then run a script to load the file into MySQL.
The problem is that loading the single file takes far too long, so I have to split it into small chunks and run 10-20 processes simultaneously in multiple terminals. If I split it into ~2MB files, that means roughly 100,000 operations. Then I have to run something like
ruby importer.rb data_part01_aa.csv
ruby importer.rb data_part01_ab.csv
ruby importer.rb data_part01_ac.csv
.
.
.
in each terminal, wait for them to end, and run the next.
Is there any method that can automate this process? Any shell scripts that can continue doing the job when the previous one is finished?
Thanks a lot!
In shell you can try:
for i in *.csv
do
    ruby importer.rb "$i"
done
The previous loop can be written as a one-liner as follows:
for i in *.csv; do ruby importer.rb "$i"; done
If there are very many files, it can take some time to start running. In that case, you can try find:
find . -name '*.csv' -exec ruby importer.rb {} \;
However, the previous command will search recursively in every sub-directory. To make it run for the current directory only, you will have to run:
find . -maxdepth 1 -name '*.csv' -exec ruby importer.rb {} \;
In every example given, the commands run sequentially. Instead of *.csv you can play with different patterns (e.g. a*.csv, b*.csv, [ab]*.csv, etc.), or you can try another loop:
for j in {a..q}
do
    find . -name "data_part01_$j?.csv" -exec ruby importer.rb {} \; &
done
Here {a..q} generates the sequence of letters from a to q, which seems to match the names of your files. The key in this last example is the &, which sends each find into the background, so there will be 17 processes running simultaneously. If you do not want them to run simultaneously, just remove the ampersand &.
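Another option, not covered above: if your xargs supports the -P flag (GNU and BSD xargs do), you can cap the number of concurrent jobs, which is closer to the 10-20 parallel processes you describe. A sketch:
# Keep at most 10 importer processes running at once; xargs starts a new
# one as soon as a previous one exits. -print0/-0 keep odd filenames safe.
find . -maxdepth 1 -name '*.csv' -print0 | xargs -0 -n1 -P10 ruby importer.rb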

How to solve jenkins 'Disk space is too low' issue?

I have deployed Jenkins on my CentOS machine. Jenkins worked well for 3 days, but yesterday it reported Disk space is too low. Only 1.019GB left.
How can I solve this problem? It takes my master offline for hours.
You can easily change the threshold from the Jenkins UI (my version is 1.651.3).
Update: How to ensure high disk space
This feature is meant to prevent working on slaves with low free disk space. Lowering the threshold would not change the fact that some jobs do not properly clean up after they finish.
Depending on what you're building:
Make sure you understand the disk output of your build; if possible, restrict output to the job workspace only, and use the Workspace Cleanup plugin to clean the workspace as a post-build step.
If the process must write some data to external folders, clean them up manually in post-build steps.
Alternative 1: provision a new slave per job (use spot slaves; there are many plugins that integrate with different cloud providers to provision machines on the fly, on demand).
Alternative 2: run the build inside a container. Everything will be discarded once the build is finished.
Besides the above solutions, there is a more "common" way: directly delete the largest space consumers on the Linux machine. You can follow the steps below:
Log in to the Jenkins machine (e.g., via PuTTY).
cd to the Jenkins installation path.
Use ls -lart to list hidden folders as well; normally the Jenkins installation is placed in the .jenkins/ folder:
[xxxxx ~]$ ls -lart
drwxrwxr-x 12 xxxx 4096 Feb 8 02:08 .jenkins/
List the folder sizes:
Use df -h to show disk space usage at a high level.
du -sh ./*/ lists the total disk usage of each subfolder in the current path.
du -a /etc/ | sort -n -r | head -n 10 lists the top 10 space consumers under /etc/.
Delete old builds or other large folders.
Normally the ./jobs/ or ./workspace/ folder is the largest. Go inside and delete what you need to (DO NOT delete the entire folder):
rm -rf theFolderToDelete
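For instance, a cautious sketch of removing build records older than 30 days for a single job; the job name myjob and the ~/.jenkins path are placeholders, so double-check the path before running anything with rm -rf:
# Remove per-build directories older than 30 days under one job.
# ~/.jenkins/jobs/myjob/builds is illustrative; adjust it to your setup.
find ~/.jenkins/jobs/myjob/builds -mindepth 1 -maxdepth 1 -mtime +30 -exec rm -rf {} +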
You can limit the loss of disk space by discarding old builds. There's a checkbox for this in the project configuration.
This is actually a legitimate question, so I don't understand the downvotes; perhaps it belongs on Super User or Server Fault. This is a soft warning threshold, not a hard limit where the disk is out of space.
For Hudson, see where to configure hudson node disk temp space thresholds - this is talking about the host, not the nodes.
Jenkins is the same. The conclusion is that for many small projects, the system property hudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold could be decreased.
That said, I haven't tested it, and there is a disclaimer:
No compatibility guarantee
In general, these switches are often experimental in nature, and subject to change without notice. If you find some of those useful, please file a ticket to promote it to the official feature.
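For illustration only (I have not verified the expected units; the value below assumes the threshold is given in bytes), the property could be passed when starting Jenkins:
# Hypothetical example: lower the warning threshold to ~200MB.
# 209715200 bytes = 200MB; verify the units for your Jenkins version.
java -Dhudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold=209715200 -jar jenkins.war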
I got the same issue. My Jenkins version is 2.3 and its UI is slightly different; putting it here so that it may help someone. Increasing both disk space thresholds to 5GB fixed the issue.
I have a cleanup job with the following build steps. You can schedule it @daily or @weekly.
An Execute system Groovy script build step to clean up old builds:
import jenkins.model.Jenkins
import hudson.model.Job

BUILDS_TO_KEEP = 5

for (job in Jenkins.instance.items) {
    println job.name
    def recent = job.builds.limit(BUILDS_TO_KEEP)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            build.delete()
        }
    }
}
You'd need to have the Groovy plugin installed.
An Execute shell build step to clean cache directories:
rm -r ~/.gradle/
rm -r ~/.m2/
echo "Disk space"
du -h -s /
To check the free space as Jenkins Job:
Parameters
FREE_SPACE: Needed free space in GB.
Job
#!/usr/bin/env bash
free_space="$(df -Ph . | awk 'NR==2 {print $4}')"
if [[ "${free_space}" = *G* ]]; then
    # Strip everything from the first non-digit onward, e.g. "24G" -> "24"
    free_space_gb=${free_space/[^0-9]*/}
    if [[ ${free_space_gb} -lt ${FREE_SPACE} ]]; then
        echo "Warning! Low space: ${free_space}"
        exit 2
    fi
else
    echo "Warning! Unknown: ${free_space}"
    exit 1
fi
echo "Free space: ${free_space}"
Plugins
Set build description
Post-Build Actions
Regular expression: Free space: (.*)
Description: Free space: \1
Regular expression for failed builds: Warning! (.*)
Description for failed builds: \1
For people who do not know where the configs are, download the tmpcleaner from
https://updates.jenkins-ci.org/download/plugins/tmpcleaner/
You will get an hpi file there. Go to Manage Jenkins -> Manage Plugins -> Advanced, upload the hpi file, and restart Jenkins.
You can immediately see a difference if you go to Manage Nodes.
Since my Jenkins was installed on a Debian server, I did not understand most of the answers related to this, since I cannot find an /etc/default folder or jenkins file.
If someone knows where the /tmp folder is or how to configure it for Debian, do let me know in the comments.

CMD runs EXE, launches HTML, waits until close, and runs final EXE

I would like to have a BAT file that runs set-keys.EXE, launches default.html, and then, when the user closes the HTML page, runs set-keys-back.EXE (they are all in the same directory together). This might be run from a CD, so I might not have the ability to write a flag file and then wait to see if it is deleted in order to continue. I have already tried START /WAIT, but have seen that WAIT won't actually wait for 32-bit GUI applications. I have considered one batch file calling another one, still no luck. I would prefer not to use PAUSE and make the user come back to CMD just to hit a key - that seems clunky. When they close out of the HTML, I execute top.window.close(); it would be nice if I could put some other code after that, but I think once the window is closed it's closed - no more processing. I have not been able to get WShell execute to run; the HTML status bar just says error on page - no info. Would love to hear your thoughts...
Update 2: I just figured out that you can launch IE directly without having to use the start command:
@echo off
rem You can use %SCRIPTDIR% to refer to the file to load, if you like
rem Note that %SCRIPTDIR% will contain a trailing slash!
set SCRIPTDIR=%~dp0
echo Testing this script...
C:\PROGRA~1\INTERN~1\iexplore.exe %SCRIPTDIR%foo.html
echo Continuing the script...
This example works for me (Windows XP 32-bit), and waits for me to close the browser window to continue.
Update: Here's an updated code block that launches Internet Explorer. Note that I use the short path to the iexplore.exe executable, and I specify the full path to the file to load:
@echo off
echo Testing this script...
start /wait /min cmd /C "C:\PROGRA~1\INTERN~1\iexplore.exe C:\foo.html"
echo Continuing the script...
Initial Answer: You mentioned trying the start /wait command, but how did you explicitly write it? The following batch script example works for me in Windows 7 x64:
@echo off
echo Testing this script...
start /wait /min cmd /C "%windir%\system32\notepad.exe foo.html"
echo Continuing the script...
In this example, the script does not continue execution until the user closes the Notepad application. The only downside here is that an extra command window pops up, but by using the /min parameter, we can start it minimized.

How do I get Hudson to stop escaping parts of my shell script?

I would like to have a shell script that copies some logs from a part of my system to the Hudson workspace so I can archive them.
So right now I have
#!/bin/bash -ex
cp /directory/structure/*.log .
Hudson is kind enough to change this to
cp '/directory/structure/*.log' .
which of course is not found, since I don't have a file literally named *.log.
So how do I get this script to work?
EDIT
So I left out the part that I was using sudo cp /path/*.log, because I didn't think it would matter. Of course it does, and sudo is the issue, not Hudson.
One simple answer would be to have the shell script in a separate file, and have hudson call that.
sudo bash -c "cp /directory/structure/*.log ."
Throwing it out there, but haven't had a chance to try it in Hudson (so I don't know how it gets quoted):
for f in /directory/structure/*.log ; do
    cp "$f" .
done
In my simple test in a bash shell, different quoting options produce either one or multiple invocations of the copy command (either with all matching files at once or one file at a time), but they all manage to do the copy successfully.
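To make the quoting difference concrete outside Hudson, a minimal illustration:
# Unquoted: the shell expands the glob, so cp receives every matching file.
cp /directory/structure/*.log .
# Quoted: cp receives the literal string and fails unless a file named
# exactly '*.log' exists.
cp '/directory/structure/*.log' .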