How to handle MySQL shutdown in Matlab?

Greetings all-
I'm writing a program in Matlab that parses and cleans a lot of data from one database to another, querying from MySQL. It would run continuously: new data come into the first db every minute, are cleaned, and written to the clean db before the next data point comes in. I was wondering how, during this process, I could account for two things...
Every three nights MySQL is shut down for backup. I'd like my program to pause when this happens and resume when it's back up. I've looked around for a solution and can't seem to find one.
Allow a user to kill the program. I've narrowed this down to either handling a Ctrl+C kill or creating a GUI to do it. Which do you all think would be the better strategy?
Thanks in advance for your time and help on this matter.

Use a TIMER together with a GUI.
First, create a GUI with two toggle buttons, 'pause' and 'cancel'. When your program starts, launch the GUI and capture its handle. Pass this handle to the timer object. When the timer fires at the start of the scheduled maintenance, it should set the 'Value' property of the pause button to 1, and at the end of the maintenance set it back to 0. Meanwhile, your program, which I assume runs a while loop, should check the value of the pause button at every iteration. If the button is pressed (i.e. its value is 1), the program should not try to access the database. If the button is released, the program should run as normal.
When the program checks for a pressed pause button, it should also check for a pressed 'cancel' button. If that button is pressed, the function should break the loop and gracefully exit.
In the GUI, you can also set a CloseRequestFcn that pops open a dialog asking whether the user really wants to quit the running database program. If the user chooses 'yes', hide the GUI (set(guiHandle,'Visible','off')) and "press" the cancel button so that the program can exit. The CloseRequestFcn will also execute when you close Matlab without having stopped the program first, which helps you avoid accidentally closing Matlab and thus accidentally killing your process.
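A minimal sketch of that polling loop, assuming a figure handle guiHandle whose toggle buttons carry the (illustrative) tags 'pauseBtn' and 'cancelBtn':

```matlab
% Look up the hypothetical toggle buttons by their Tag property.
pauseBtn  = findobj(guiHandle, 'Tag', 'pauseBtn');
cancelBtn = findobj(guiHandle, 'Tag', 'cancelBtn');

while true
    drawnow;  % flush the event queue so button presses register

    if get(cancelBtn, 'Value') == 1
        break;  % user asked to stop: exit the loop gracefully
    end

    if get(pauseBtn, 'Value') == 1
        pause(5);  % maintenance window: don't touch the database
        continue;
    end

    % ... query MySQL, clean the data, write to the clean db ...
end
```

The drawnow call is what lets the GUI callbacks (and the timer) run between iterations; without it the button values would never update inside a tight loop.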

Related

Why does my MySQL event scheduler stop every night at midnight?

So I have this database running on a Synology NAS, for a restaurant's app made with Laravel, and I have an event that should run every day at 4 AM.
The content of this event is nothing special:
UPDATE shipping_times SET shipping_times.available = shipping_times.max_quantity
Thing is, every night at midnight the event_scheduler variable sets itself to OFF, even if I run SET GLOBAL event_scheduler = ON.
This is quite a problem since the event is used to "replenish" available orders.
Since my event should occur at 4 AM, I don't think the problem is event-related.
What could it be?
Alternative: Laravel Task Scheduling
Laravel Tasks come to mind for solving this instead: https://laravel.com/docs/8.x/scheduling
(other Laravel version docs available there as well)
Adding recurring tasks this way puts them all in one place instead of hidden somewhere in a menu. They are also independent of your elusive event_scheduler.
You can also use a command and call it from the task scheduling code. See BKF's comment.
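As a rough sketch, the replenish from the question could be moved into the schedule() method of app/Console/Kernel.php (the query is taken verbatim from the question; everything else is boilerplate Laravel scheduling):

```php
<?php
// app/Console/Kernel.php: schedule() method sketch (Laravel 8.x style)

protected function schedule(Schedule $schedule)
{
    // Run the replenish query every day at 04:00,
    // replacing the MySQL event entirely.
    $schedule->call(function () {
        DB::update('UPDATE shipping_times SET shipping_times.available = shipping_times.max_quantity');
    })->dailyAt('04:00');
}
```

For this to fire, the NAS needs a single cron entry running `php artisan schedule:run` every minute, as described in the linked docs.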
Are your events firing?
I am not sure, but could it be that the event only fires if you always have traffic between 00:00 and 04:00? Laravel tasks also fire after the fact, AFAIK.
Add this to my.cnf (or whatever the config file is):
event_scheduler = ON
and restart mysqld.
Apparently, something is shutting down MySQL every night, perhaps a backup? The setting above will turn the event scheduler back on each time the server restarts.
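To confirm what state the scheduler is actually in after a restart, you can check from any MySQL client (standard statements):

```sql
-- Show the current state of the event scheduler (ON/OFF)
SHOW VARIABLES LIKE 'event_scheduler';

-- Turn it on for the running server; this does NOT survive
-- a restart, which is why the my.cnf entry above is needed
SET GLOBAL event_scheduler = ON;
```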

SSIS restart/ re-trigger package automatically

Good day!
I have an SSIS package that retrieves data from a database and exports it to a flat file (a simple process). The issue I am having is that the data my package retrieves each morning depends on a separate process loading it into a table beforehand.
Now, the process that initially loads the data inserts metadata into a table showing the start and end date/time. I would like to set up something in my package that checks the metadata table for an end date/time for the current date. If the current date exists, the process continues... If no date/time exists, the process stops (here is the kicker) BUT the package re-triggers itself automatically an hour later to check whether the initial data load is complete.
I have done research on checkpoints, etc., but all that seems to cover is the package picking up where it left off when it is restarted after a failure. I don't want to manually re-trigger the process; I'd like it to check the metadata and restart itself if possible. I could even add logic so that after checking the metadata 3 times it stops completely.
Thanks so much for your help
What you want isn't possible exactly the way you describe it. When a package finishes running, it's inert. It can't re-trigger itself, something has to re-trigger it.
That doesn't mean you have to do it manually. The way I would handle this is to have an agent job scheduled to run every hour for X number of hours a day. The job would call the package every time, and the metadata would tell the package whether it needs to do anything or just do nothing.
There would be a couple of ways to handle this.
They all start by setting up the initial check, just as you've outlined above. See if the data you need exists. Based on that, set a boolean variable (I'll call it DataExists) to TRUE if your data is there or FALSE if it isn't. Or 1 or 0, or what have you.
Create two Precedence Constraints coming off that task: one that requires DataExists==TRUE and, obviously enough, another that requires DataExists==FALSE.
The TRUE path is your happy path. The package executes your code.
On the FALSE path, you have options.
Personally, I'd have the FALSE path lead to a forced failure of the package. From there, I'd set up the job scheduler to wait an hour, then try again. BUT I'd also set a limit on the retries. After X retries, go ahead and actually raise an error. That way, you'll get a heads up if your data never actually lands in your table.
If you don't want to (or can't) get that level of assistance from your scheduler, you could mimic the functionality in SSIS, but it's not without risk.
On your FALSE path, trigger an Execute SQL Task with a simple WAITFOR DELAY '01:00:00.00' command in it, then have that task call the initial check again when it's done waiting. This will consume a thread on your SQL Server and could end up getting dropped by the SQL Engine if it gets thread starved.
Going the second route, I'd set up another Iteration variable, increment it with each try, and set a limit in the precedence constraint to, again, raise an actual error if your data doesn't show up within a reasonable number of attempts.
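The initial check itself can be one query in an Execute SQL Task, with its single-row result mapped to the DataExists variable. A sketch (the metadata table and column names here are hypothetical; substitute your own):

```sql
-- Returns 1 if today's load has finished, else 0.
-- Map the result set's DataExists column to the SSIS variable.
SELECT CASE WHEN EXISTS (
           SELECT 1
           FROM dbo.LoadMetadata
           WHERE EndDateTime IS NOT NULL
             AND CAST(EndDateTime AS date) = CAST(GETDATE() AS date)
       ) THEN 1 ELSE 0 END AS DataExists;
```

Both precedence constraints then test that one variable, so the happy path and the wait/retry path branch off a single task.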
Thanks so much for your help! With some additional research I found the following article, which I was able to reference to create a solution for my needs. Although my process doesn't require a failure to attempt a retry, I set the process to force-fail after 3 attempts.
http://microsoft-ssis.blogspot.com/2014/06/retry-task-on-failure.html
Much appreciated
Best wishes

Force user login after x min of inactivity

I have an application which has 1 login form, 1 main dashboard form, 7/8 sub dashboard forms and many other non-main/dashboard forms.
I would like to implement some sort of system, whereby if the user has been inactive for x minutes, that they are asked to login again.
Is there a way to have a global function run continuously that checks every 60 seconds whether a login is required? The obvious way is the On Timer event. However, with so many forms I would have to add the call to each form, etc.
Is there an easier way?
I don't think that there is an easy way to do this.
First off, we have to define what "user has been inactive for x minutes" really means. There are some options:
The Access application was not the focused window for x minutes. (Drawback: the user could be idling with the Access application in front, never triggering the focus loss.)
A certain form was not focused for x minutes. (Drawback: you'd have to implement the check routine for each and every form in your application, logging all GotFocus and/or LostFocus events to verify that at least every x minutes another form was focused. But a user legitimately working for x minutes with the same form, never needing to change its focus, would nevertheless trigger the logout.)
No user interaction occurred (that is, no events triggered from button clicks and so on) for x minutes. (Drawback: for each event you handle, you have to add additional code to reset the x-minutes timer. You'll also need to introduce new event handlers for navigation (see #2, focus loss) so you don't log off users who are active merely by navigating the forms.)
... other methods that probably won't work better either.
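For illustration only, option 3 might be sketched with one hidden form's Timer plus a shared reset routine (all names here are hypothetical):

```vba
' Hidden form frmIdleWatch with TimerInterval = 60000 (checks once a minute).
' gLastActivity is a global Date; IDLE_MINUTES a global constant.

Private Sub Form_Timer()
    If DateDiff("n", gLastActivity, Now()) >= IDLE_MINUTES Then
        Me.TimerInterval = 0          ' stop watching
        DoCmd.OpenForm "frmLogin"     ' force the user to log in again
    End If
End Sub

' Call this from every event handler you already have:
Public Sub ResetIdleTimer()
    gLastActivity = Now()
End Sub
```

This keeps the timer in one place, but it does not remove the drawback above: every existing event handler still needs the ResetIdleTimer call added.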
There is no good way to do this without having to cope with a host of new problems.
The better solution would be to set a default idle time in your backend database, assuming it is an active database server and not just another Access database (or even the same database where the forms are in).
In the latter case, where you don't have a dedicated database server, I would challenge you to rethink why you want the login to time out in the first place. Or, to rephrase:
What is a login good for in an Access-only database?
Security? There is no way you can prevent a user from accessing all data in your database, independent of them having a login or not. Remember: An Access database is just a file on the filesystem. A user using that database has, in fact, already access to everything in that file. There is no such thing as user-level security in Access (not anymore at least; and that's for the best).
You can encrypt the whole database (File --> Info), but you can only specify one password per database, not per user. Again: User-level security cannot be done in Access.
See this answer for more on security of Access databases: https://stackoverflow.com/a/530778/6216216

Should I try a rollback before beginning a new transaction?

I have a question and I can't find a similar one.
In a generic php script like:
$pdo->beginTransaction();
//...
//many things to do...
//...
$pdo->commit();
Let's say the user stops the page loading or loses connection before the commit is reached.
Does the transaction remain open? Do I have to try a rollback before the beginTransaction?
If you are worried about a user dropping the connection, you would be better off using ignore_user_abort.
That way, regardless of whether a user stops the page loading or any other consequence, the script runs until completion.
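A minimal sketch using standard PHP functions (the connection details and transaction body are placeholders):

```php
<?php
// Keep running even if the client disconnects mid-request
ignore_user_abort(true);
set_time_limit(0);  // optional: don't let max_execution_time cut it short

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();
try {
    // ...
    // many things to do...
    // ...
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();  // undo the partial work on any error
    throw $e;
}
```

And to the other half of the question: if the script does die before the commit, MySQL rolls back the uncommitted transaction when the connection closes, so you do not need to attempt a rollback before the next beginTransaction.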

Perfmon can't create data collector set on Windows 7 x64

I'm trying to create a data collector set using Perfmon. When I right click 'New -> Data Collector Set' in the User Defined folder, I see the wizard dialog appear. However, the 'Next' and 'Finish' buttons are disabled. All I can do is click on Cancel.
Any ideas?
Found the answer to this while googling a problem with the disk defragger. The recommendation I found was to (backup first!) remove the registry key:
HKLM\Software\Microsoft\RPC\Internet
I exported the key for a backup; deleted the key; then rebooted. After reboot I could run the disk defragger and also add my own data collector sets in Perfmon.
For some reason the Task Scheduler couldn't start, which prevented both the disk defragger and Perfmon from working. They each schedule tasks, and didn't run with full capabilities without the Task Scheduler interface available (the disk defragger didn't run at all). Deleting that key allowed the Task Scheduler to start properly. I haven't seen any side effects...yet!