I am working in Laravel 5.4; I have configured the queue driver as database and created the jobs table migration.
Controller
public function addUser()
{
    $job = (new SendReminderEmail())->delay(Carbon::now()->addSeconds(200));
    dispatch($job);
    dd('Job Completed');
}
Queue
public function handle()
{
    $input = ['name' => 'John', 'email' => str_random(7), 'password' => Hash::make('general')];
    DB::table('users')->insert($input);
}
This process successfully inserts a job row into the jobs table.
But I gave a 200-second execution delay, and the job is not firing once that time is reached.
Why is this happening? Is there any additional configuration needed for queues to work?
Run php artisan queue:listen or php artisan queue:work. One of these must be running so that Artisan can bootstrap the application and poll for new queue jobs in the background; without a worker, the only queue driver that will work is sync.
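For example, a minimal worker invocation for the database connection (the --tries value here is just an illustration):
php artisan queue:work database --tries=3
The worker process must stay running (e.g. under supervisord) so that delayed jobs fire once their delay expires.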
We use multiple PHP workers, each running in its own container; to scale the number of parallel worker processes we manage them in a Docker swarm.
So the PHP script runs in a loop, waiting for new jobs (fetched from Gearman).
When a new job is received, it is processed; after that, the script waits for the next job without exiting.
Now we want to update our workers. In this case the image stays the same, but the PHP script has changed.
So we have to exit the PHP script, update the script file, and restart it.
If I use the docker service update command below, Docker stops the container immediately. In the worst case, a running worker is killed in the middle of a job.
docker service update --force PHP-worker
Is there any way to restart the Docker container softly?
"Soft" means giving the container a sign: "I have to restart, please finish all running processes," so that the container has a chance to wrap up its work.
In my case, before starting the next job in the loop, I would check this cancel flag; if the flag is set, I end the loop and exit the PHP script.
Environment:
Debian: 10
Docker: 19.03.12
PHP: 7.4
In the meantime, we have solved it with signals.
Working with signals in PHP is very easy. In our case, this structure helped us:
// Terminate flag
$terminate = false;

// Register signal handlers (async delivery, so no pcntl_signal_dispatch() calls are needed)
pcntl_async_signals(true);

pcntl_signal(SIGTERM, function () use (&$terminate) {
    echo "Got SIGTERM. Ending worker loop\n";
    $terminate = true;
});

pcntl_signal(SIGHUP, function () use (&$terminate) {
    echo "Got SIGHUP. Ending worker loop\n";
    $terminate = true;
});

// Worker loop
while ($terminate === false) {
    // fetch and process the next job
}
Before the next job is started, the loop checks whether the terminate flag is set.
Docker has great support for gracefully stopping containers.
To define how long Docker waits between the stop signal and a hard kill, we used the stop_grace_period option.
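A minimal sketch of the relevant part of a stack/compose file (the service and image names are placeholders, and the two-minute value is only an example):

version: "3.8"
services:
  php-worker:
    image: my-php-worker:latest
    stop_grace_period: 2m

With this, docker service update --force sends SIGTERM first and waits up to the grace period before killing the container, which gives the loop above time to finish its current job.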
With a Laravel 5.8 Envoy command I deploy my changes, and from my Envoy script I need to write the app version to the database.
For this I created a console command, located in app/Console/Commands/envoyWriteAppVersion.php,
but I could not find out how to pass an additional parameter to my console command. I tried:
php artisan envoy:write-app-version "654"
php artisan envoy:write-app-version 654
php artisan envoy:write-app-version app_version=7.654
But I got this error:
Too many arguments, expected arguments "command".
This task did not complete successfully on one of your servers
What is the valid way?
Thanks!
I found a valid solution. In my console command method I read the arguments with:
$arguments = $this->arguments();
as described here: https://laravel.com/docs/5.8/artisan#command-io,
and run it from the console with a space-separated argument:
php artisan envoy:write-app-version 0.101
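Note that for this to work, the command's signature has to declare the argument. A minimal sketch of the command class (the class and argument names are assumed to match the question; the persistence step is only indicated):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class envoyWriteAppVersion extends Command
{
    // The {app_version} placeholder declares the expected argument;
    // without it Artisan rejects the call with "Too many arguments".
    protected $signature = 'envoy:write-app-version {app_version}';

    protected $description = 'Write the deployed app version to the database';

    public function handle()
    {
        $version = $this->argument('app_version');
        // ...write $version to the database here
    }
}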
I am using Laravel 5.2 and have created a scheduled task that runs at 1 AM and does the following:
Get all users (currently around 250 users)
For each user, create a job (to be executed by the queue) which adds the user's tasks, normally 10 tasks per user. Below is the handle() method of my command class.
public function handle()
{
    // get all users
    $users = User::all();
    $this->info(count($users) . ' total users');

    // schedule user tasks in queue
    foreach ($users as $user) {
        $job = new ScheduleUserTask($user);
        $this->bus->dispatch($job);
    }
}
Each job then checks the user's tasks and inserts rows into the tasks table.
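For context, a minimal sketch of what such a job class could look like in Laravel 5.2 (the class name is from the question; the task-building logic is assumed):

<?php

namespace App\Jobs;

use App\User;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\DB;

class ScheduleUserTask extends Job implements ShouldQueue
{
    use InteractsWithQueue, SerializesModels;

    protected $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function handle()
    {
        // build the ~10 task rows for this user (exact logic assumed)
        $tasks = [/* ... */];

        DB::table('tasks')->insert($tasks);
    }
}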
I am using the database queue driver with supervisord.
My supervisord worker configuration:
[program:mytask-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/myproject/artisan queue:work database --sleep=1 --tries=3 --daemon
autostart=true
autorestart=true
user=user2
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/myproject/worker.log
Previously, when I ran supervisord with 1 process (numprocs), it worked nicely. However, when I increased the process count to 4, I started getting the error below.
[Illuminate\Database\QueryException] SQLSTATE[HY000]: General error:
1205 Lock wait timeout exceeded; try restarting transaction (SQL:
insert into jobs (queue, attempts, reserved, reserved_at,
available_at, created_at, payload) values ..
From my understanding, this is caused by multiple worker processes inserting into the queue table at the same time and locking it.
My question is: what is the maximum number of processes I can run under supervisord in my case?
Is it a good idea to increase innodb_lock_wait_timeout?
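For reference, the setting I am considering looks like this (the value is only an example):

-- check the current timeout (MySQL's default is 50 seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- raise it globally, e.g. to 120 seconds
SET GLOBAL innodb_lock_wait_timeout = 120;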
Thanks
I need to sync data from a MySQL database to a Redis cache every 15 minutes so that the cache has the latest data.
I am using Ubuntu to host (Node.js) web services, so every time there is a call to the REST API, it needs to fetch the data from the cache and serve it.
Do I now need to write a background job to sync the MySQL data into the cache?
If so, can I write it in Node.js and run it as a background job on Ubuntu using crontab?
Yes. You can write a Node.js script and run it via crontab to sync data from MySQL to Redis.
In my experience, the Node.js packages below will help implement this.
NodeJS ORM for MySQL:
Sequelize: http://docs.sequelizejs.com/en/latest/ (npm install sequelize mysql)
Redis clients for NodeJS:
ioredis: https://github.com/luin/ioredis (npm install ioredis)
node_redis: https://github.com/NodeRedis/node_redis (npm install redis)
Sample code for ~/sync-mysql-redis.js:
// Create a MySQL client connection
var Sequelize = require('sequelize');
var sequelize = new Sequelize('mysql://user:pass@azure_mysql_host:3306/dbname');

// Create a redis client using node_redis
var redis = require("redis");
var client = redis.createClient(6379, '<redis_host>');

// Query entity rows from the MySQL table
sequelize.query("SELECT * FROM `t_entity`", { type: sequelize.QueryTypes.SELECT })
    .then(function (entities) {
        entities.forEach(function (entity) {   // for each entity in the result list
            var hash_key = entity.Id;          // for example, use the entity id as the redis hash key
            for (var prop in entity) {         // for each property of the entity
                // map one mysql table record to one redis hash
                client.hset([hash_key, prop, entity[prop]], redis.print);
            }
        });
    });
For the crontab configuration, edit /etc/crontab as root or with sudo:
$ sudo vim /etc/crontab
# Add a crontab record to run the nodejs script every 15 minutes
# (entries in /etc/crontab need the user field before the command)
*/15 * * * * root node /home/user/sync-mysql-redis.js
I have managed to create an instance and SSH into it. However, I have a couple of questions regarding Google Compute Engine.
I understand that I will be charged for the time my instance is running, that is, until I exit the instance. Is my understanding correct?
I wish to run a batch job (a Java program) on my instance. How do I make the instance stop automatically after the job is complete (so that I don't get charged for the additional time it may run)?
If I start the job and disconnect my PC, will the job continue to run on the instance?
Regards,
Asim
Correct: instances are charged for the time they are running (billed to the minute, with a 10-minute minimum). Instances run from the time they are started via the API until they are stopped via the API. It doesn't matter whether any user is logged in via SSH; for most automated use cases users never log in, and programs are installed and started via startup scripts.
You can view your running instances via the Cloud Console to confirm whether any are currently running.
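They can also be listed from the command line; with the same gcutil tool used below, this looks roughly like (the project id is a placeholder):
$ gcutil --project=<project-id> listinstances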
If you want to stop your instance from inside the instance, the easiest way is to start the instance with the compute-rw Service Account Scope and use gcutil.
For example, to start your instance from the command line with the compute-rw scope:
$ gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=compute-rw
(this is the default when manually creating an instance via the Cloud Console)
Later, after your batch job completes, you can remove the instance from inside the instance:
$ gcutil deleteinstance -f <instance name>
You can put the halt command at the end of your batch script (assuming that you write your results to persistent disk).
After halt, the instance will be in the TERMINATED state and you will not be charged.
See https://developers.google.com/compute/docs/pricing
and scroll down to "Instance uptime".
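For example, a minimal wrapper script (the jar name and output path are placeholders for your own job):

#!/bin/bash
# run the batch job and write its results to the persistent disk first
java -jar /opt/my-batch-job.jar --output /mnt/pd0/results
# power the machine off once the job has finished
sudo halt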
You can shut the instance down automatically after model training: just run a few extra lines of code once training is complete.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
# Project ID for this request.
project = 'xyz' # Project ID
# The name of the zone for this request.
zone = 'xyz' # Zone information
# Name of the instance resource to stop.
instance = 'xyz' # instance id
request = service.instances().stop(project=project, zone=zone, instance=instance)
response = request.execute()
Add this to your model training script; when training is complete, the GCP instance shuts down automatically.
More info on official website:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/stop
If you want to stop the instance using a Python script, you can do it this way:
from google.cloud.compute_v1.services.instances import InstancesClient

# from_service_account_file is a classmethod, so call it on the class directly
instance_client = InstancesClient.from_service_account_file(<location-path>)

zone = <zone>
project = <project>
instance = <instance_id>

# Send the stop request for the instance
instance_client.stop(project=project, instance=instance, zone=zone)
In the above script, I have assumed you are using a service account for authentication. Documentation for the library used is here:
https://googleapis.dev/python/compute/latest/compute_v1/instances.html