I set the quotas both ways (through lpadmin and by editing printers.conf) and neither seems to work (I did restart the daemon); print requests are still granted with no error.
What is wrong? (I am running Ubuntu Linux 9.04.)
There is no way to do anything but speculate unless you provide a detailed record of exactly what your lpadmin command was, or what changes you made to the printers.conf file. (Also note that editing printers.conf directly is not recommended unless you do it while cupsd is stopped.)
CUPS' quotas do work for me. However, the last job crossing the quota threshold will always start printing, since CUPS only knows a job's page count after it starts processing the job file, not beforehand...
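For reference, a quota is typically set with lpadmin like this (a minimal sketch; the printer name "myprinter" and the limits are placeholders):

# allow at most 100 pages per 7-day period (604800 seconds)
lpadmin -p myprinter -o job-quota-period=604800 -o job-page-limit=100
# confirm the options were stored
lpoptions -p myprinter

If lpoptions doesn't show the job-quota-period and job-page-limit values afterwards, the setting never took effect, which would explain jobs being granted without error.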
I've been having some trouble with our mail server since yesterday.
First, the server was down for a couple of days: thanks to KVM, the VMs were paused because storage was apparently full. I managed to fix that issue, but since the mail server came back online, CPU usage has been stuck at 100%. I checked the logs, and there were "millions" of mails waiting in the postfix queue.
I tried to flush the queue with the PFDel script; it took some time, but all the mails were gone and we were finally able to receive new emails again. I also forced a logrotate, because fail2ban was also using a lot of CPU.
Unfortunately, after a couple of hours, the postfix active queue is growing again, and I really don't understand why.
Another script I found gives me this result right now:
Incoming: 1649
Active: 10760
Deferred: 0
Bounced: 2
Hold: 0
Corrupt: 0
Is there a way to deactivate the "Undelivered Mail returned to Sender" messages?
Any help would be appreciated.
Many thanks
As a first step, you could temporarily stop sending bounce mails completely, or set stricter rules, in order to analyze the cause of the flood. See for example: http://domainhostseotool.com/how-to-configure-postfix-to-stop-sending-undelivered-mail-returned-to-sender-emails-thoroughly.html
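One approach along those lines (a sketch only; test it somewhere safe first, since it silently drops all non-delivery reports) is to point postfix's bounce services at the discard daemon in /etc/postfix/master.cf:

# master.cf: replace the "bounce" command with "discard" for these services
bounce    unix  -       -       n       -       0       discard
defer     unix  -       -       n       -       0       discard
trace     unix  -       -       n       -       0       discard

# then apply the change
postfix reload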
Sometimes spammers find a weakness (or even a vulnerability) in your configuration or SMTP server and use it to send spam (even if it can only reach the addressee via a bounce). In that case your IP/domain will usually end up in common blacklist services (or be blacklisted by the large mail providers very quickly), which adds to the flood: the bounces get rejected by the recipient servers, which makes your queue grow even more.
So also check your IP/domain with https://mxtoolbox.com/blacklists.aspx or a similar service (sometimes they also tell you the reason why it was blocked).
As for fail2ban, you can also analyze the logs (find some pattern) to detect the evildoers (the initial senders) and write a custom regex for fail2ban to ban them, for example after 10 attempts in 20 minutes (or add them to an ignore list for bounce messages in postfix). You'd still send the first X bounces, but after that the repeat-offender IPs would get banned, which could also reduce the flood significantly; see the sketch below.
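A minimal sketch of such a jail (the jail name, log path and failregex are assumptions; the regex must be adapted to the lines you actually see in your mail log):

# /etc/fail2ban/jail.local
[postfix-flood]
enabled  = true
port     = smtp,submission
filter   = postfix-flood
logpath  = /var/log/mail.log
# 10 attempts within 20 minutes, ban for an hour
maxretry = 10
findtime = 1200
bantime  = 3600

# /etc/fail2ban/filter.d/postfix-flood.conf
[Definition]
failregex = reject: RCPT from \S+\[<HOST>\]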
And last but not least, check your configuration (follow best practices for it) and set up at least MX/SPF records, DKIM signing/verification and a DMARC policy.
I have a "standard persistent disk" of size 10GB on Google Cloud using Ubutu 12.04. Whenever, I try to remove this, I encounter following error
The resource 'projects/XXX/zones/us-central1-f/disks/tahir-run-master-340fbaced6a5-d2' is not ready
Does anybody know what's going on? How can I get rid of this disk?
This happened to me recently as well. I deleted an instance but the disk didn't get deleted (despite the auto-delete option being active). Any attempt to manually delete the disk resource via the dev console resulted in the mentioned error.
Additionally, the progress of the associated "Delete disk 'disk-name'" operation was stuck on 0%. (You can review the list of operations for your project by selecting compute -> compute engine -> operations from the navigation console).
I figured the disk-resource was "not ready" because it was locked by the stuck-operation, so I tried deleting the operation itself via the Google Compute Engine API (the dev console doesn't currently let you invoke the delete method on operation-resources). It goes without saying, trying to delete the operation proved to be impossible as well.
At the end of the day, I just waited for the problem to fix itself. The following morning I tried deleting the disk again and the operation succeeded; apparently the lock had been lifted in the meantime.
As for the cause of the problem, I'm still left clueless. It looks like the delete-operation was stuck for whatever reason (probably related to some issue or race-condition going on with the data-center's hardware/software infrastructure).
I think this probably isn't considered a valid answer by SO's standards, but I felt like sharing my experience anyway, as I had a really hard time finding any info about this kind of Google Compute Engine problem.
If you ever happen to hit the same or a similar issue, you can try waiting it out, as any stuck operation will (most likely) eventually be cancelled after it has been in the PENDING state for too long, releasing any locked resources in the process. You can also keep an eye on it from the command line, as sketched below.
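For reference (assuming the current gcloud CLI; the console operations list shows the same information):

# watch for the stuck operation in the affected zone
gcloud compute operations list --filter="zone:us-central1-f"
# once it is gone, retry the delete
gcloud compute disks delete tahir-run-master-340fbaced6a5-d2 --zone=us-central1-f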
Alternatively, if you need to solve the issue ASAP (which is often the case when it affects a resource critical to your production environment), you can try:
Contacting Google Support directly (only available to paid support customers)
Posting in the Google Compute Engine discussion group
Sending an email to gc-team(at)google.com to report a production issue
I believe your issue is the same as the one that was solved a few days ago.
If your issue isn't resolved after performing those steps, you can follow Andrea's suggestion or create a new issue.
Regards,
Adrián.
Say I use the SourceGear Vault client on my desktop at work and check out a few files to a network folder. When I'm working from home and log in to a terminal server (Windows RDP), Vault thinks someone else has checked out the files, so I can't access/edit them.
Is there a way to set things up so that I can check out a file to a common network location and keep working on it from multiple machines?
Thanks
What you are seeing is normal, because the Vault cache is specific to each client.
Here are the options I could see for how to deal with this:
1) The best way is to shelve your code changes. Then you can pull your shelved changes down when you get home and continue where you left off. If you need to check in from home, then when you initially shelve your changes, you should also undo your check out so that you can check out again from home.
2) You could use a network location for yourself, but you are likely to run into the same situation when you go to check in. What this gives you is just a single location for the code you are editing. Also, some of the statuses you see as you switch between clients won't look right. You would still get the best results by undoing your check out before leaving work, but in this case you'd choose the option to leave your changes instead of reverting them.
3) You can perform an additional check in. That way your code is in Vault. Then you can check it out again and continue from where you left off. Some places don't want partially completed code checked in though, so you will have to decide if this is in line with your workplace requirements.
4) You could perform a non-exclusive check out. That way you can check out twice. You will get a warning, but it will still allow you to continue. To get your changes from your work computer, you still will be well served by using Shelve.
Feel free to email me at support#sourcegear.com if you need additional help.
Thanks,
Beth Kieler
Technical Support
SourceGear LLC
I have a requirement to delay mail delivery through an SMTP relay.
I.e.:
A mail message is successfully received at time T.
Forward the message to its destination at time T+4 hours.
Is this possible in sendmail or any other SMTP relay?
The deployment platform is IBM AIX.
You could have been at least a little more specific in your question, but I'll throw in some suggestions anyway.
If you just want to deliver mail every four hours, you can run sendmail in queue-only mode (QUEUE_MODE="cron"; in sendmail.conf) and set the queue to be run every four hours (QUEUE_INTERVAL="4h";). I think that particular file only applies to Debian-like systems, but the principle is the same anywhere: you set the queue mode to cron (this is actually controlled by the arguments with which you start sendmail) and then process the queue periodically, as sketched below.
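A minimal sketch of that setup in terms of the raw sendmail flags (paths are assumptions; on AIX, sendmail is usually started from /etc/rc.tcpip):

# run sendmail as a listening daemon in queue-only delivery mode:
# -bd listens for SMTP, -odq queues messages instead of delivering them
/usr/sbin/sendmail -bd -odq

# crontab entry: flush the queue every four hours
0 */4 * * * /usr/sbin/sendmail -q

Note that this delays mail by up to four hours, not by exactly four hours from receipt; for a fixed per-message delay, see the two-queue approach further down.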
If you want to delay mail delivery as such, there are also a number of ways to do it, depending on why you want to do it. One popular solution is greylisting, which does just the following: when a host connects to your MTA (sendmail, for example), it gets a temporary failure with a prompt to try again after some interval. A properly configured mailer will do exactly that: it will try sending the mail again, and eventually the message will be accepted and delivered (or forwarded). Most spam bots, on the other hand, will not try to resend the message upon receiving an error. If you want greylisting on sendmail, you can read up here: http://www.greylisting.org/implementations/sendmail.shtml
Hope this helped at least a bit.
EDIT:
Ok, so now I see what you need. Here is a possible solution using sendmail (I've been dealing with sendmail in one way or another for years now, so.. :P): you use two of them.
The first one just receives mail and queues it, and (this is important) it does NOT get to process the queue. The second sendmail instance runs a separate queue with its QUEUE_MODE set to daemon or cron (say, every minute). Now all you need is an external script that moves mail from the first queue to the second once the required "age" of a message is reached. Since queue items are just files, that is an easy task, done in a few lines of, say, Perl (hell, a shell script can do it, too); moving queue items from queue to queue is as easy as moving files from directory to directory. Please note that this technique is widely used in mail-processing solutions such as SpamAssassin, so it's not some weirdness conjured by my diseased mind :P A rough sketch follows.
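Something like this (a sketch only; the two queue directories and the 4-hour threshold are assumptions, each queue item is a qf* control file plus a df* data file sharing the same ID, and -mmin is a GNU find option you may need to replace on AIX):

#!/bin/sh
# move queue items older than 4 hours (240 minutes) from the holding
# queue of the first instance into the queue of the delivering instance
HOLD=/var/spool/mqueue-hold
LIVE=/var/spool/mqueue
cd "$HOLD" || exit 1
for qf in $(find . -name 'qf*' -mmin +240); do
    id=${qf#./qf}                      # extract the queue ID
    # move the data file first so the queue runner never sees a
    # control file without its body
    [ -f "df$id" ] && mv "df$id" "$LIVE/"
    mv "qf$id" "$LIVE/"
done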
Hope this gives you a hint or two.
Is there a way to check whether a Microsoft Office process (e.g. Word, Excel) has hung when using Office Automation? Additionally, if the process is hung, is there a way to terminate it?
Let me start off saying that I don't recommend doing this in a service on a server, but I'll do my best to answer the questions.
Running as a service makes it difficult to clean up. For example, will whatever you have running as a service survive killing a hung Word or Excel? You may end up having to kill the service itself. And will your service even stop if Word or Excel is in this state?
One problem with trying to test whether it is hung is that your test could cause a new instance of Word to start up and work fine, while the instance the service is using would still be hung.
The best way to determine whether it's hung is to ask it to do what it is supposed to be doing and check for the results. I would need to know more about what it is actually doing.
Here are some commands to use in a batch file for cleaning up (sc and taskkill should both be in the path):
sc stop servicename - stops service named servicename
sc start servicename - starts service named servicename
sc query servicename - Queries the status of servicename
taskkill /F /IM excel.exe - terminates all instances of excel.exe
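If you want to detect and kill only the instances that have stopped responding, tasklist's status filter may help (a hedged suggestion: it relies on Windows' own "not responding" heuristic, which only works for processes with a message pump, as the Office apps have):
tasklist /FI "IMAGENAME eq excel.exe" /FI "STATUS eq NOT RESPONDING" - lists hung Excel instances
taskkill /F /FI "IMAGENAME eq excel.exe" /FI "STATUS eq NOT RESPONDING" - terminates only those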
I remember doing this a few years ago - so I'm talking Office XP or 2003 days, not 2007.
Obviously a better solution for automation these days is to use the new XML format that describes docx etc using the System.IO.Packaging namespace.
Back then, I used to notice that whenever MSWord had kicked the bucket and had had enough, a process called "Dr. Watson" was running on the machine. This was my first clue that Word had tripped and fallen over. Sometimes I might see more than one WINWORD.EXE, but my code just used to scan for the good Doctor. Once I saw that (in code), I killed all WINWORD.EXE processes and the good Doctor himself, and restarted the process of torturing Word :-)
Hope that gives you some clues as to what to look for.
All the best,
Rob G
P.S. I might even be able to dig out the code in my archives if you don't come right!
I can answer the latter half; if you have a reference to the application object in your code, you can simply call "Quit" on it:
private Microsoft.Office.Interop.Excel.Application _excel;
// ... do some stuff ...
// ask Excel to shut down; note this call itself can block if Excel is hung
_excel.Quit();
For checking for a hung process, I'd guess you'd want to try to get some data from the application and see whether you get results in a reasonable time frame (check in a timer or on another thread, or something). There's probably a better way, though.