Gearman and lost workers - mysql

I want to use Gearman as a queue system for a web project, so I tried Gearman and GearmanManager, which work great. But now I am asking myself what happens if a worker, job or server goes away (for example because of a PHP error, or whatever the reason might be).
What happens if I run a synchronous job where the client (browser)
waits for a callback and the worker/server crashes?
What happens if there is an asynchronous job that crashes while it is being worked on? Will Gearman try it again (since it is stored as a queue in MySQL), or will Gearman ignore it? And if it is not ignored, what happens when the worker crashes over and over again?
I would also like to find out whether the system is scalable (for example via NFS). Do you have examples for me, or experience I can build on?
Thank you all and have a very nice weekend...
Phil

If the server has crashed, your client won't have anything to connect to. How you handle that is up to your application. If the worker has crashed, and you're attempting to run a synchronous task, the client will wait until a worker becomes available, unless you've provided a timeout value.
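As an illustration, here is a minimal sketch of the timeout case using the PHP gearman extension (the function name "resize" and the workload are placeholders, not anything from your setup):
<?php
// A synchronous call with a client-side timeout. If no worker picks the job
// up in time, or the server is unreachable, returnCode() will not be
// GEARMAN_SUCCESS and the application decides how to react.
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->setTimeout(5000); // milliseconds of socket inactivity before giving up
$result = $client->doNormal('resize', 'image-42');
if ($client->returnCode() !== GEARMAN_SUCCESS) {
    error_log('Gearman call failed: ' . $client->error());
    // e.g. show an error page, retry, or fall back to queuing it asynchronously
}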
If an async task is being performed and the worker crashes, Gearman will requeue the task at hand. If it crashes each worker that grabs it, you might be left without any active workers. The C version of gearmand has an option to tweak this behavior:
-j [ --job-retries ] arg (=0) Number of attempts to run the job
before the job server removes it. This
is helpful to ensure a bad job does not
crash all available workers. Default is
no limit.
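For example, a hedged way to start gearmand so that a job is dropped after it has crashed three workers (combine with whatever persistent-queue options you already use):
# give up on a job after three failed attempts
gearmand --job-retries=3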
If you deploy broken or fragile code you will have issues, regardless of which system you're using. Bad code breaks applications. :-)
A Gearman setup is scalable, but remember that if you use the persistent queue support, each server needs to have its own backend. You can't share the same database and tables across server instances, since the persistent queue is only maintained as a backup in case gearmand dies and has to be restarted. It might be more useful to allow your application to easily queue tasks again if needed.

Related

Perl DBI Connect to keep session active after script completed

Is there anyway I could keep the DBI session active even after the script exits?
http://mysqlresources.com/documentation/perl-dbi/connect
Basically I need to call the Perl (DBI) script multiple times with different parameters (and decide pass/fail after it completes). Each time it's called, Perl makes a new connection to MySQL and destroys it on exit, which by itself adds a considerable amount of delay.
Just wondering if there is any way I could store and use the session for future?
Your connection and its associated socket is process specific, so there's no way of keeping it alive after your process terminates.
You should be able to better tune your server so that connecting is faster. A common issue is the reverse DNS lookup MySQL does for each new connection; you can disable it with the skip-name-resolve configuration parameter in my.cnf.
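For reference, a minimal my.cnf fragment for that (restart mysqld after changing it; note that hostname-based grants stop working once name resolution is skipped):
[mysqld]
# do not reverse-resolve the client address on every new connection
skip-name-resolve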
Barring that, what you might do is use MySQL Proxy to keep a pool of warm connections, or combine all your various operations into a single script that can run several stages without terminating.

Mysql resource temporarily unavailable

I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell the mysql server isn't hitting its max connections limit, but there's something else stopping it from serving the query. What other limits would MySQL be hitting?
I'm running RHEL 6.2 64bit with MySQL 5.5.21
Let's assume your system is currently Unix-based (as given in your problem statement). If this is correct, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection in MySQL's connection pool requires memory to function, and if this resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of various operations can be tuned in your equivalent to my.cnf if you discover this to be an issue.
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
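A couple of quick, illustrative ballpark checks along those lines:
# overall memory picture, and mysqld's share of it
free -m
top -b -n 1 | grep mysqld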
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file IO above the 1,024 boundary (by default), you will run into cases where the operation simply fails. This is because most systems specify a soft and hard limit on the number of open file descriptors each user can have available at one time, and walking over this threshold can cause problems.
This will usually have a series of glaringly obvious signs expressed in your log files. Check /var/log/messages and comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
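Two hedged checks for this case (paths assume a stock RHEL-style install):
# limits actually applied to the running mysqld process
cat /proc/$(pidof mysqld)/limits | grep 'open files'
# what MySQL itself believes it is allowed to use
mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"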
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
Corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this error message, but this is something to watch out for in the future.
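If you want to rule this out anyway, the usual places to look (illustrative, assuming InnoDB for the deadlock report):
mysql -e "SHOW FULL PROCESSLIST;"        # threads stuck waiting on locks
mysql -e "SHOW ENGINE INNODB STATUS\G"   # includes a LATEST DETECTED DEADLOCK section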
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply overforked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
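Two illustrative checks before you go hunting further:
# how close the current process count is to the kernel's ceiling
ps -e --no-headers | wc -l
cat /proc/sys/kernel/pid_max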
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, inclusive of the server simply being "down". You're going to have to profile your system further to see what you're blocking on. All the error message gives us in this case is the fact the resource is unavailable to the client -- we'd need to see more information about the server to determine a more adequate remedy.
I still haven't found which limits it was hitting, but I did manage to work around the problem. There was a problem with our session table (in vbulletin) which uses the MEMORY engine. The indexes for this table were HASH and thus when vbulletin purged this table once an hour it would lock the table just long enough to hold up other queries and push mysql to the limit of its resources.
By changing the indexes to BTREE, MySQL could delete the rows from the session table a lot quicker and avoid the limits that were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in the latest release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed increase from using BTREE indexes over HASH for MEMORY tables.
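A sketch of the idea, with made-up table and column names rather than the real vBulletin schema:
CREATE TABLE session_example (
    sessionhash   CHAR(32) NOT NULL,
    lastactivity  INT UNSIGNED NOT NULL,
    PRIMARY KEY (sessionhash),
    INDEX idx_lastactivity (lastactivity) USING BTREE  -- BTREE instead of MEMORY's default HASH
) ENGINE=MEMORY;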
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that mysql is not accepting connections as fast as they are coming in and the backlog limit is reached (though I'd expect that to give you a "Connection refused" error; I don't know for sure that's what you'd get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
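The old mysql_* extension has no real pool, but persistent connections get part of the way there; a hedged sketch with placeholder credentials (mysql_pconnect() is the equivalent for the old API):
<?php
// reuse a connection already held open by the same Apache/PHP worker
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', array(
    PDO::ATTR_PERSISTENT => true,
));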
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types with PDO (PHP Driver).
I was using fetch to retrieve one item but forgot to add LIMIT 1 to the query. Adding it solved the problem.
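Roughly what that looks like; the table and column names here are placeholders:
<?php
// fetch() reads a single row, so tell MySQL up front that one row is all we need
$stmt = $pdo->query("SELECT payload FROM events WHERE user_id = 42 LIMIT 1");
$row  = $stmt->fetch(PDO::FETCH_ASSOC);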

Beanstalk vs DB

I am writing a db logging ruby gem which will simply take out a job from a Beanstalk queue and write it in the DB.
That is, one process on Server A puts a job (that it wants to log) into the Beanstalk queue on Server B, and my logging process on Server B takes it out and writes it to the MySQL DB on Server B.
I want to know whether this is worth it.
Is putting a job into the Beanstalk queue faster than writing to the DB? Or could the process that wants to log just write to the DB directly instead of going through the logging process?
Note that both the beanstalk server and DB are on another server.
Beanstalk internally makes a socket call from Server A to Server B.
I believe mysql would need to do the same as well?
So is writing to MySQL on another server going to be slower than putting the job into the Beanstalk queue?
It'll be much faster, primarily because Beanstalkd jobs, by default, are stored in memory and are lost if, for example, you lose power on your server, whereas MySQL is a strongly ACID-compliant relational database and hence will go to a lot of effort to flush each of your logs to disk.
I think you'll find, after you do some benchmarking with a lot of logs being made by your system, that disk I/O will be your limiting factor rather than the speed of TCP/IP sockets. Your current system's advantage is that when Server A files a log on Server B's beanstalkd instance, it takes up very little of Server A's time, and Server B can periodically flush out many logs at once from beanstalkd to MySQL, making the process more efficient. The disadvantage is that the more you batch up the logs, the more logs you will lose in the event of a software/power failure, unless you use beanstalkd's "-b" parameter, which makes jobs durable by writing them to disk (and hence makes the process slower).
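For reference, a hedged example of starting beanstalkd with the binlog enabled (address, port and path are placeholders):
# persist queued jobs to disk so they survive a daemon restart or power loss
beanstalkd -l 0.0.0.0 -p 11300 -b /var/lib/beanstalkd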
Of course, the only way to truly settle this question is to benchmark!

Reconfigure and reboot a Hudson/Jenkins slave as part of a build

I have a Jenkins (Hudson) server setup that runs tests on a variety of slave machines. What I want to do is reconfigure the slave (using remote APIs), reboot the slave so that the changes take effect, then continue with the rest of the test. There are two hurdles that I've encountered so far:
Once a Jenkins job begins to run on the slave, the slave cannot go down or break the network connection to the server otherwise Jenkins immediately fails the test. Normally, I would say this is completely desirable behavior. But in this case, I would like for Jenkins to accept the disruption until the slave comes back online and Jenkins can reconnect to it - or the slave reconnects to Jenkins.
In a job that has been attached to the slave, I need to run some build tasks on the Jenkins master - not on the slave.
Is this possible? So far, I haven't found a way to do this using Jenkins or any of its plugins.
EDIT - Further Explanation
I really, really like the Jenkins slave architecture. Combined with the plugins already available, it makes it very easy to get jobs to a slave, run, and the results pulled back. And the ability to pick any matching slave allows for automatic job/test distribution.
In our situation, we use virtualized (VMware) slave machines. It was easy enough to write a script that would cause Jenkins to use VMware PowerCLI to start the VM up when it needed to run on a slave, then ship the job to it and pull the results back. All good.
EXCEPT: part of the setup of each test is to slightly reconfigure the virtual machine in some fashion. Disable UAC, log on as a different user, have a different driver installed, etc. Each of these changes requires that the test VM/slave be rebooted before the changes take effect. Although I can write slave on-demand scripts (Launch Method = "Launch slave via execution of command on the master") that handle this reconfiguration and restart, it has to be done BEFORE the job is run. That's where the problem occurs: I cannot configure the slave that early, because the type of configuration change depends on the job being run, which is known only after the slave is started.
Possible Solutions
1) Use multiple slave instances on a single VM. This wouldn't work - several of the configurations are mutually exclusive, but Jenkins doesn't know that. So it would try to start one slave configuration for one job, another slave for a different job - and both slaves would be on the same VM. Locks on the jobs don't prevent this since slave starting isn't part of the job.
2) (Optimal) A build step that allows a job to know that its slave connection MIGHT be disrupted. The build step may have to include some options so that Jenkins knows how to reconnect the slave (will the slave reconnect automatically, will Jenkins have to run a script, will simple SSH suffice). The build step would handle the disconnect of the slave, ignore the disconnect that would usually fail the job, then perform the reconnect. Once the slave is back up and running, the next build step can occur. Perhaps with a timeout to fail the job if the slave can't be reconnected within a certain amount of time.
Current Solution (less than optimal)
Right now, I can't use the slave function of Jenkins. Instead, I use a series of build steps - run on the master - that use Windows and PowerShell scripts to power on the VM, make the configurations, and restart it. The VM has a SSH server running on it and I use that to upload test files to the test VM, then remote execute them. Then download the results back to Jenkins for handling by the job. This solution is functional - but a lot more work than the typical Jenkins slave approach. Also, the scripts are targeted towards a single VM; I can't easily use a pool of slaves.
Not sure if this will work for you, but you might try making the Jenkins agent node programmatically tell the master node that it's offline.
I had a situation where I needed to make a Jenkins job that performs these steps (all while running on the master node):
revert the Jenkins agent node VM to a powered-off snapshot
tell the master that the agent node is disconnected (since the master does not seem to automatically notice the agent is down, whenever I revert or hard power off my VMs)
power the agent node VM back on
as a "Post-build action", launch a separate job restricted to run on the agent node VM
I perform the agent disconnect step with a curl POST request, but there might be a cleaner way to do it:
curl -d "offlineMessage=&json=%7B%22offlineMessage%22%3A+%22%22%7D&Submit=Yes" http://JENKINS_HOST/computer/THE_NODE_TO_DISCONNECT/doDisconnect
Then when I boot the agent node, the agent launches and automatically connects, and the master notices the agent is back online (and will then send it jobs).
I was also able to toggle a node's availability on and off with this command (using 'toggleOffline' instead of 'doDisconnect'):
curl -d "offlineMessage=back_in_a_moment&json=%7B%22offlineMessage%22%3A+%22back_in_a_moment%22%7D&Submit=Mark+this+node+temporarily+offline" http://JENKINS_HOST/computer/NODE_TO_DISCONNECT/toggleOffline
(Running the same command again puts the node status back to normal.)
The above may not apply to you since it sounds like you want to do everything from one jenkins job running on the agent node. And I'm not sure what happens if an agent node disconnects or marks itself offline in the middle of running a job. :)
Still, you might poke around in this Remote Access API doc a bit to see what else is possible with this kind of approach.
Very easy. You create a master job that runs on the master; from the master job you call the client job as a build step (it's a new kind of build step and I love it). Make sure to check the option so that the master job waits for the client job to finish. Then you can run your script to reconfigure your client and run the second test on the client.
An even better strategy is to have two nodes running on your slave machines. You need to configure two nodes in Jenkins. I used that strategy successfully with a Unix slave. The reason was that I needed different environment variables to be set up and I didn't want to push that into the jobs. I used SSH clients, so I don't know if it is possible with different client types. Then you might be able to run both tests at the same time, or you can chain the jobs, or use the master strategy mentioned above.

How can I guarantee the w3wp process exists before turning on perfmon logging?

I have a batch script I run before our performance tests that does some pre-test setup on our server; it clears log files, starts the proper services, restores the database, sets some app settings and turns on perfmon logging.
My problem: the w3wp process we need to monitor is not always present at the time we turn on perfmon logging. It's pretty much hit or miss whether this process is in the log. The test takes anywhere from 4 to 18 hours to complete, and I don't know until the test is done whether or not w3wp was monitored (perfmon doesn't seem to detect new processes even though my log file is configured to monitor Process(*)), which ends up wasting a lot of time.
Is there a way to force w3wp to get loaded? Is there some command I can call just prior to starting the perfmon logs?
Or, is it possible to configure the perfmon log to monitor processes that may not exist at the time the log is started?
If you install the IIS Admin tools, you can call a command line app called TinyGet. You can pass in any page on your webserver to initialize it. This would start up the process so you can capture it.
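As a sketch of how that fits into a pre-test batch script (the URL and the collector set name are placeholders, and any HTTP client, TinyGet included, will do for the warm-up request):
REM hit the site once so IIS spins up w3wp.exe before the counters start
powershell -Command "(New-Object Net.WebClient).DownloadString('http://localhost/') | Out-Null"
REM then start the perfmon data collector set
logman start PerfTestCounters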