SGE Fast Forward Job Numbers - sungridengine

Is there some way in SGE to advance / fast forward the job number counter? I would like to have the number set initially to something higher than 1.

It's not exactly supported, but the spool directory for your cluster contains a file called jobseqnum. It should be possible to stop the qmaster, edit this file to taste, and restart the qmaster.


Snort - how to speed it up? Waiting 10+ minutes for one PCAP

I'm new to Snort and have joined a project where I need to analyze PCAP files using Snort.
I used Docker to deploy Snort 3.
Instead of the default Talos rule set, I used 265 rules I wrote myself to analyze a PCAP file with about 700,000 packets. The picture below shows the detailed summary data.
(screenshot: Snort's detailed run summary)
I was wondering: is it normal to wait about 10 minutes for Snort to finish?
In addition, if I want to speed up the computation, how can I modify the command or configuration?
P.S. This is my first time asking a question; if the description of the problem is not clear enough, leave a comment and let me know. Thanks a lot!

MySQL/MariaDB: does commit() write to disk if there were no changes? Concerned about a Raspberry Pi SD card's life

I'm a little worried about my Raspberry Pi's SD card life.
On the Raspberry Pi, there's a MySQL (MariaDB) server running.
A program of mine reads from the database every second, then looks something up on the internet, and only rarely, when something happens, does it write to the database.
I used to call commit() only once every 5 minutes, but apparently if I don't commit, the program doesn't see changes from other programs, even for tables it doesn't write to.
1) Concerns about a Raspberry Pi SD card's life are all over the internet, so my question is: how should I best call commit()?
2) If the program just reads from the database and doesn't change anything, will commit even access the disk? Is there a way to see new changes without committing?
3) And if I do have to commit every second in order to see the changes in time, how bad is that?
PS: I'm using Python 3 with mysql-connector, and an 8 GB SD card with the OS the Raspberry Pi Imager recommended to me.
So I guess since it's just a couple hundred writes a day, it's totally fine.
But why have you all answered in the comments?
Who am I going to pick as the best answer?

Adaptive load balancing with GNU parallel

Is there some way to run GNU parallel with a dynamically changing list of remote hosts? The dynamism isn't intermittent or irregular: I'm looking for a way to use the Google Compute Engine autoscaling feature to smoothly scale up to a maximum number of hosts, and have GNU parallel dispatch jobs as those hosts come alive. I guess I could create a fake job to trigger autoscaling to launch the hosts and have them register themselves in some central host file. Any ideas how best to manage this?
From man parallel:
--slf filename
File with sshlogins. The file consists of sshlogins on
separate lines.
:
If the sshloginfile is changed it will be re-read when a
job finishes though at most once per second. This makes it
possible to add and remove hosts while running.

Branching failures in SourceGear's Vault?

I'm using SourceGear's Vault version control software (v4.1.2) and am experiencing DBReadFailures when attempting to branch a folder. I don't really know if I'd call the folder "large" or not (the tree size is 680 MB and the disk space used is 1.3 GB), but during the branch operation the SQL Server it queries times out (after approx. 5 minutes) and the transaction fails. During the branch operation, the database server pegs 1 of its 4 CPUs at 100%, which tells me the operation isn't really hardware constrained so much as constrained by its algorithm. The DB server is also not memory bound (it has 4 GB and only uses 1.5 GB during this process). So I'm left thinking that there is just a finite limit to the size of the folders you can branch in the Vault product. Has anyone had similar experiences with this product that might help me resolve this?
When attempting to branch smaller folders (i.e. just the subfolders within the main folder I'm trying to branch), it apparently works. That looks like another indicator that it's just a size limitation I'm hitting. Is there a way to increase the 5-minute timeout?
In the Vault config file, there's a SqlCommandTimeout item - have you tried modifying that? I'm not sure what the default is, but ours is set as follows:
<SqlCommandTimeout>360</SqlCommandTimeout>
There's a posting on the SourceGear support site here that seems to describe your exact problem.
The first reply in that posting mentions where to find the config file, if you're not familiar with it.

Change config values at a specific time

I just got a mail saying that I have to change a config value on 2009-09-01 (new taxes). Our normal approach would be to stay awake on 2009-08-31 until 23:59 and then change the value manually. That's not a big problem, since this doesn't happen too often. But it makes me wonder how other people handle issues like this.
So! How do you handle date specific config changes?
(We are working in ASP.NET, but I don't think this has to be language specific.)
Br
Carl Bergquist
I'd normally store this kind of data in a database table like this
Key, Value, EffectiveFrom, EffectiveTo
-----------------------------------------
VAT, 15.0, 20081201, 20091231
VAT, 17.5, 20100101, NULL
I'd then use the EffectiveFrom and EffectiveTo dates to choose the value that is effective at the given time. If the rate is open ended, then EffectiveTo could either be NULL or 99991231.
This also allows you to go back without having to change the config. E.g. if someone asks you to recalculate the tax for the previous month before the rate change.
In Linux, there is a command "at" for one-off scheduled execution.
See "man at" for details.
To be honest, waking up near the time and changing it seems to be the simplest and cheapest approach. All of the technical solutions are fine, but it depends where you work.
In our environment it would be cheaper and simpler to get someone to wake up and make the change than to redevelop the functionality of a piece of software that already works. It certainly involves less testing, development overhead and costs which means we would tend to solve the problem as you do, manually.
That depends totally on the situation and the technology.
pjp's idea is good, if you get your config from a database, or as metadata to define the valid time for whole config sets/files.
Another might be: just prepare a new config file with the new entries and swap the files at midnight (probably with a restart of the service/program).
Swapping them would be possible with at (as given by Neeraj)...
If timing is a problem, you should handle the change, or at least the timing of the change, on the running server (to avoid time-out-of-sync problems).
We had the same kind of problem some time ago and handled it using the following approach.
This is suitable if you know the source that originates the configuration changes well.
In our case, the source (actually a third party) exposed a web service which returns the modified config details. And there is a Windows service running on our server which keeps polling the web service and updates the configuration file if there is any change.
This works perfectly in our case.
You can make use of this approach by changing the polling part to your source of config changes (say, reading changes from some disk path). But I'm not sure how this would work for config changes arriving by email.
Why not just make a shell script to swap out the files? Run it from cron to switch the files a minute before the deadline, and send a text alert if it is NOT successful and an email if it is.
This is an example on a Linux box, but I think you get the point and can do the same on a Windows box.
Script:
#!/bin/sh
# Back up the current config, then swap in the new one.
cp /path/to/config /path/to/backup/dir/config.$(date +%Y%m%d%H%M%S)
if cp /path/to/new/config /path/to/config; then
    sendSuccessEmail
else
    sendPanicTextAlert
fi
cron:
59 23 31 8 * /path/to/script.sh
You could test this beforehand as well; just point it at some dummy directories and files.
I've seen the hybrid approach. Instead of actually changing the data model to include EffectiveDate/EndDate or manually changing the values yourself, schedule a script to change the values automatically. Also, be sure to have a solid test plan that will validate all changes.
However, this type of manual change can have a dramatic impact on reporting. If previous transactions join directly to the tables being changed, numbers in historical reports could change in a very bad way. There really is no "right" answer.
If I'm not able to do something like pjp's solution, I'd use either a scheduled task or a server job to update it automatically at the right time.
But...I'd probably still be awake checking it had worked.
The best solution would be to parameterise your config file and add things like the date from which a certain entry should be used. This would negate the need for any copying or swapping of files, and your application would simply deal with it. (That goes for a config-file approach or a database.)
If you cannot change the current system and you have to go with swapping the config files, then you have two options:
Use a scheduled task to kick off a batch job, or even a VBScript or PowerShell script (whichever you feel comfortable with). Make sure you set up the correct credentials to be able to do this in the middle of the night, and you could also add some checking and mitigation to this approach.
Write a Windows service that does this for you. Here you have all the flexibility you need: code it to do whatever it needs to do, do all the checks you need (so that you can keep sleeping rather than making sure it actually worked), etc. Your service would then even take care of the scheduling aspect, and all will be good. Here you could use the XML DOM and XPath and, rather than replacing the file, simply update the specific entries as required.
Remember that any change to the config file will cause your site to restart, so make sure you take care of all the other housekeeping this could cause. (Although this would be exactly the same if you were sitting there in the middle of the night copying files around.)