Make periodic task occur every 2 seconds - windows-phone-8

I need to check regularly whether a new message has been received, because the API service I am integrating with does not have a push notification service. How do I control how often a periodic task runs?
I have the boilerplate code (e.g. http://www.c-sharpcorner.com/uploadfile/54f4b6/periodic-and-resourceintensive-tasks-in-windows-phone-mango/) that every example on the internet uses, but it seems it can only run roughly every 30 minutes. Is that right? :(

Unfortunately, periodic tasks run no more often than every 30 minutes, and even then they are not guaranteed to run. If you want to run more often than that, your only bet is setting up a push notification service...

Related

How can I delay deletion?

I would like to delay deletion of data from the database. I am using MySQL and NestJS. I heard that cron is what I need; I want to delete the entry after a week. Can you help me with this? Is cron what I need, or do I need to use something else?
A cron job (or the at command on Windows) or a MySQL EVENT can be created to periodically check for something and take action. The resolution is only 1 minute.
If you need very precise resolution, another technique is required. For example, if you don't want to show a user anything that is more than 1 week old (to the second), then simply exclude such rows from the SELECT. That is, add something like this to the WHERE clause: AND created_date >= NOW() - INTERVAL 7 DAY.
Doing the above gives you the freedom to schedule the actual DELETE for only, say, once a day -- rather than pounding on the database only to usually find nothing to do.
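For example, here is a minimal sketch of such a once-a-day deleter in Perl with DBI; the connection details and the messages/created_date names are made-up examples, not from the question:

use strict;
use warnings;
use DBI;

# Hypothetical connection details and table/column names
my $dbh = DBI->connect('DBI:mysql:database=myapp;host=localhost',
                       'myuser', 'mypassword', { RaiseError => 1 });

# Reads can hide old rows with the mirror-image condition:
#   SELECT ... WHERE created_date >= NOW() - INTERVAL 7 DAY
# while the actual purge runs from cron just once a day:
my $rows = $dbh->do(
    'DELETE FROM messages WHERE created_date < NOW() - INTERVAL 7 DAY');

# do() returns the number of rows affected ('0E0' when none)
print "Deleted $rows row(s)\n";
$dbh->disconnect;

A matching crontab line might be 0 3 * * * perl /path/to/purge_old_messages.pl to run it at 3AM daily.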
If you do choose to "pound on the database", be aware of the following problem. If one instance of the deleter script runs for a long time (for any of a number of reasons), it might not be finished before the next copy comes along. In some situations these scripts can stumble over each other to the extent of effectively "crashing" the server.
That leads to another solution -- a single script that runs forever. It has a simple loop:
Do the actions needed (deleting old rows)
Sleep 1 -- or 10 or 60 or whatever -- this is to be a "nice guy" and not "pound on the system".
The only tricky part is making sure that it starts up again after any server restart or crash of the script; a sketch of the loop follows.
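A minimal sketch of that run-forever loop, with the same hypothetical Perl/DBI details as above; in practice you would start it from a supervisor (systemd, or a cron @reboot entry) so it comes back after a restart:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=myapp;host=localhost',
                       'myuser', 'mypassword', { RaiseError => 1 });

while (1) {
    # Delete in modest chunks so a big backlog can't hold locks for long
    my $rows = $dbh->do(
        'DELETE FROM messages
         WHERE created_date < NOW() - INTERVAL 7 DAY
         LIMIT 1000');

    # Be a "nice guy": sleep longer when there was nothing to do
    sleep($rows eq '0E0' ? 60 : 1);
}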
You can configure a cron job to periodically delete the data.
There are several ways to configure a cron job.
You can write a shell script that periodically deletes entities in the DB using the Linux crontab, or you can use an application that provides cron jobs, such as Jenkins or Airflow.
AWS Lambda also provides cron jobs.
Using the task-scheduling (cron) support provided by NestJS seems to be the simplest way to solve the problem.
See this link
https://docs.nestjs.com/techniques/task-scheduling

Change queue time of Google spreadsheet app script trigger

When I create a daily time-based trigger for the Google Apps Script associated with my Google spreadsheet, I am prompted to select an execution time that is within an hour-long window, and it appears that a cron wrapper randomly assigns an exact execution time within that hour-long interval.
Because my application's specific use case has several data dependencies which may not be completed early in the hour, I was forced to divide my application into several stages, with separate triggers each delayed by an hour, to ensure that the required data would be available.
For example, the trigger time that was initially assigned for my script was 6:03AM, but the data, which usually arrived at 5:57AM, occasionally did not arrive until 6:10AM, and the script had nothing to process for that day. As a blunt-force solution, I deleted the 6-7AM trigger and re-created it to execute in the 7-8AM time slot to ensure the required data was available. This required that the second stage of the script be moved to 8-9AM, resulting in script results that could be delayed by as much as 2-3 hours.
To improve this situation, I am contemplating integrating the two script processing stages and creating a more accurate script execution trigger time, say 6:30AM to be safe. Does anyone know:
Is it possible, other than by observing daily processing, to discover the exact trigger execution time that has been assigned, and
If randomly assigned, can script triggers be created and deleted until an acceptably precise execution time is obtained?
Thanks in advance for any guidance provided.
If accuracy is paramount, you can forgo using Apps Script triggers altogether and leverage a 3rd-party tool instead.
I'd recommend using cron-job.org. This service can create cron jobs that make POST requests to a URL endpoint you specify, and you can schedule times accurate to the minute. To use it with Apps Script, implement a doPost() to handle POST requests and deploy your script as a Web App. You then create a cron job using the service and pass it the web app's URL as the endpoint.
The cron job will fire at the scheduled time and you can perform any requisite operations inside the doPost() in response to the incoming POST request.
Thank you to random parts and Dimu Designs for the guidance. Based upon experimentation, here are the answers to my questions:
Is it possible, other than by observing daily processing, to discover the exact trigger execution time that has been assigned? Answer: No way except by observing the random trigger time assigned within the requested hour window.
If randomly assigned, can script triggers be created and deleted until an acceptably precise execution time is obtained? Answer: Yes. I adjusted my script's assigned execution time by observing a trigger's execution time (via email message timestamp), and deleting, recreating, and observing the randomly assigned trigger execution time until I got an acceptable minute within the requested hour window.

Odd Script Execution SSIS

I'm probably going to word this poorly, but here goes.
I've created an SSIS package with an email procedure. It is supposed to send three emails, each one based on a SQL query.
The email tasks run sequentially, as part of the reporting requirement (An automated run is required daily, but sometimes one of the three emails needs to be sent manually. In those cases the other two Data Flow Tasks are disabled)
Here's where things get fishy. I can run the task from the editor, and no issues arise. Results: 3 emails, limited latency. These are relatively small queries (~50k records). When the task is run from Windows Task Scheduler, I get two of the three emails (notably the first two in the sequence), and quite a bit of latency (~10 minutes total execution, ~3 minutes between emails). Latency isn't concerning me, but the missing email is.
The task is set to expire if it runs longer than 12 hours, so a timeout is unlikely to be the cause. I'm tearing my hair out trying to figure this out!
Note: to make things more interesting, I recompiled the script, executing all three email (script) tasks in one Data Flow Task. Same behaviour there, with a very interesting twist: every time I compiled the binaries with three email tasks, I got two emails.
Example:
Compile 1 -> Load into Windows Task Scheduler
Result -> Lab & IT email
Compile 2 -> Load into Windows Task Scheduler
Result -> Base & IT email
The heck?

To fork or not to fork?

I am re-developing a system that will send messages via HTTP to one of a number of suppliers. The original consists of Perl scripts, and it's likely that the re-development will also use Perl.
In the old system, there were a number of Perl scripts all running at the same time, five for each supplier. When a message was put into the database, a random thread number (1-5) and the supplier were chosen, to ensure that no message was processed twice while avoiding having to lock the table/row. Additionally, there was a "Fair Queue Position" field in the database to ensure that a large message send didn't delay small sends that happened while the large one was being sent.
At some times there would be just a couple of messages per minute, but at other times there would be a dump of potentially hundreds of thousands of messages. It seems to me like a waste of resources to have all the scripts running and checking for messages all of the time, so I am trying to work out whether there is a better way to do it, or whether the old way is acceptable.
My thoughts right now lie with the idea of having one script that runs and forks as many child processes as are needed (up to a limit) depending on how much traffic there is, but I am not sure how best to implement it such that each message is processed only once, while the fair queuing is maintained.
My best guess right now is that the parent script updates the DB to indicate which child process should handle each message; however, I am concerned that this will end up being less efficient than the original method. I have little experience of writing forking code (the last time I did it was about 15 years ago).
Any thoughts or links to guides on how best to process message queues appreciated!
You could use Thread::Queue or any other module from this question: Is there a multiprocessing module for Perl?
If the old system was written in Perl, this way you could reuse most of it.
A rough, untested example:
use strict;
use warnings;
use threads;
use Thread::Queue;

my $q = Thread::Queue->new();    # A new empty queue

# Worker threads: each one blocks in dequeue() until work arrives
threads->create(sub {
    while (defined(my $item = $q->dequeue())) {
        # Do work on $item
    }
})->detach() for 1 .. 10;        # 10 worker threads

my $dbh = ...;                   # connect to the database here
while (1) {
    # Get new items from the DB
    my @items = get_items_from_db($dbh);

    # Send work to the threads
    $q->enqueue(@items);
    print 'Pending items: ' . $q->pending() . "\n";

    sleep 15;                    # check the DB every 15 secs
}
I would suggest using a message queue server like RabbitMQ.
One process feeds work into the queue, and you can have multiple worker processes consume the queue (a minimal worker sketch appears at the end of this answer).
Advantages of this approach:
workers block when waiting for work (no busy waiting)
more worker processes can be started up manually if needed
worker processes don't have to be a child of a special parent process
RabbitMQ will distribute the work among all workers which are ready to accept work
RabbitMQ will put work back into the queue if the worker doesn't return an ACK
you don't have to assign work in the database
every "agent" (worker, producer, etc.) is an independent process which means you can kill it or restart it without affecting other processes
To dynamically scale the number of workers up or down, you can implement something like:
have workers automatically die if they don't get work for a specified amount of time
have another process monitor the length of the queue and spawn more workers if the queue is getting too big
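For illustration, a minimal worker sketch using the Net::AMQP::RabbitMQ module from CPAN; the queue name, credentials, and the process_message() handler are assumptions, not part of the original system:

use strict;
use warnings;
use Net::AMQP::RabbitMQ;

my $mq = Net::AMQP::RabbitMQ->new();
$mq->connect('localhost', { user => 'guest', password => 'guest' });
$mq->channel_open(1);
$mq->queue_declare(1, 'messages', { durable => 1 });

# Block waiting for deliveries -- no busy waiting against the database
$mq->consume(1, 'messages', { no_ack => 0 });
while (my $msg = $mq->recv(0)) {
    process_message($msg->{body});          # hypothetical handler

    # ACK only after the work succeeds; un-ACKed work is re-queued
    $mq->ack(1, $msg->{delivery_tag});
}
$mq->disconnect();

Because recv() blocks until a message arrives, an idle worker costs almost nothing, which is the "no busy waiting" advantage from the list above.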
I would recommend using beanstalkd for a dedicated job server, and Beanstalk::Client in your Perl scripts for adding jobs to the queue and removing them.
You should find beanstalkd easier to install and set up compared to RabbitMQ. It will also take care of distributing jobs among available workers, burying any failed jobs so they can be retried later, scheduling jobs to be done at a later date, and many more basic features. For your worker, you don't have to worry about forking or threading; just start up as many workers as you need, on as many servers as you have available.
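A comparable sketch using Beanstalk::Client, with a made-up tube name, payload, and process_message() handler:

use strict;
use warnings;
use Beanstalk::Client;

my $client = Beanstalk::Client->new(
    { server => 'localhost', default_tube => 'messages' });

# Producer side: queue a job (job data is serialized for you)
$client->put({ ttr => 120 }, { supplier => 'acme', msg_id => 42 });

# Worker side: reserve() blocks until a job is ready
while (my $job = $client->reserve) {
    my ($payload) = $job->args;             # decoded job data
    if (eval { process_message($payload); 1 }) {
        $job->delete;                       # done -- remove from the queue
    } else {
        $job->bury;                         # failed -- keep for inspection/retry
    }
}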
Either RabbitMQ or beanstalkd would be better than rolling your own DB-backed solution. These projects have already worked out many of the details needed for queueing, and they implement features you may not yet realize you want. They should also handle polling for new jobs more efficiently than sleeping and then selecting from your database to see if there's more work to do.

LAMP: How to Implement Scheduling?

Users of my application need to be able to schedule certain tasks to run at certain times (e.g. once only, every minute, every hour, etc.). My plan is to have cron run a script every minute to check the application to see if it has tasks to execute. If so, then execute the tasks.
Questions:
Is running cron every minute a good idea?
How do I model in the database intervals like cron does (e.g. every minute, every 5th minute of every hour, etc.)?
I'm using LAMP.
Or, rather than doing any, you know, real work, simply create an interface for the users, and then publish entries in cron! Rather than having cron call you every minute, have it call scripts as directed by the users. When they add or change jobs, rewrite the crontab.
No big deal.
On Unix, cron allows each user (Unix login, that is) to have their own crontab, so you can have one dedicated to your app; you don't have to use the root crontab for this. A sketch of the idea follows.
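A rough sketch of that approach in Perl; the scheduled_jobs table, the run-job.pl wrapper, and the assumption that this runs as the app's own Unix user are all illustrative:

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=myapp', 'myuser', 'mypassword',
                       { RaiseError => 1 });

# OFTEN holds a standard five-field cron expression, e.g. '*/5 * * * *'
my $jobs = $dbh->selectall_arrayref(
    'SELECT often, job FROM scheduled_jobs', { Slice => {} });

my $crontab = join '',
    map { "$_->{often} /usr/local/bin/run-job.pl $_->{job}\n" } @$jobs;

# Install it as this user's crontab (replaces the previous one)
open my $fh, '|-', 'crontab', '-' or die "can't run crontab: $!";
print {$fh} $crontab;
close $fh or die "crontab exited with status $?";

Re-run this whenever a user adds, changes, or removes a job, and cron itself takes care of the scheduling.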
Do you mean that you have a series of user-defined jobs that need to be executed at user-defined intervals, and you'd like to have cron facilitate the processing of those jobs? If so, you'd want to have a database table with at least 2 fields:
JOB,
OFTEN
where OFTEN is how often they'd like the job to run, using syntax similar to cron's.
You'd then need to write a script (in Python, Ruby, or some similar language) to parse that data. This script would be what runs every minute via your actual cron; a Perl variant is sketched below.
Take a look at this Stack Overflow question, and this Stack Overflow question, regarding how to parse crontab data via Python.
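If you'd rather keep it in Perl, CPAN has cron-expression parsers as well; here is a hedged sketch using Algorithm::Cron, with the same made-up scheduled_jobs table as above and a hypothetical run_job() helper. The idea is that cron runs this script once a minute, and it fires every job whose OFTEN expression matches the current minute:

use strict;
use warnings;
use DBI;
use Algorithm::Cron;

my $dbh = DBI->connect('DBI:mysql:database=myapp', 'myuser', 'mypassword',
                       { RaiseError => 1 });

my $jobs = $dbh->selectall_arrayref(
    'SELECT job, often FROM scheduled_jobs', { Slice => {} });

my $now = time;
for my $row (@$jobs) {
    my $cron = Algorithm::Cron->new(base => 'local',
                                    crontab => $row->{often});

    # The schedule matches this minute if its next fire time after the
    # previous minute falls at or before now
    my $next = $cron->next_time($now - 60);
    run_job($row->{job}) if $next <= $now;  # hypothetical job runner
}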