Perfmon can't create data collector set on Windows 7 x64

I'm trying to create a data collector set using Perfmon. When I right click 'New -> Data Collector Set' in the User Defined folder, I see the wizard dialog appear. However, the 'Next' and 'Finish' buttons are disabled. All I can do is click on Cancel.
Any ideas?

Found the answer to this while googling a problem with the disk defragmenter. The recommendation I found was to remove the registry key (back it up first!):
HKLM\Software\Microsoft\RPC\Internet
I exported the key for a backup; deleted the key; then rebooted. After reboot I could run the disk defragger and also add my own data collector sets in Perfmon.
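For reference, the export-then-delete steps can be done from an elevated Command Prompt (the backup filename here is just an example):

```bat
:: Back up the key first, then remove it and reboot.
reg export "HKLM\Software\Microsoft\RPC\Internet" rpc_internet_backup.reg
reg delete "HKLM\Software\Microsoft\RPC\Internet" /f
```

If anything breaks, double-clicking the exported .reg file restores the key.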
For some reason the Task Scheduler service couldn't start, which affected both the disk defragmenter and Perfmon. Both rely on Task Scheduler to schedule their work, so without it Perfmon ran with reduced capabilities and the defragmenter didn't run at all. Deleting that key allowed Task Scheduler to start properly. I haven't seen any side effects...yet!

Related

Why is MySQL Event Scheduler Stuck Opening Tables?

I'm using MySQL 8.0.21 from the MySQL Community Installer on Windows 10 (version 2004). For some reason, if I create an event in the event scheduler that calls a procedure once every second (regardless of what that SP actually does; I'll explain my test case), my CPU maxes out, and when I look at the active connections in MySQL Workbench, a ton of worker threads stack up, stalled in the "Opening tables" state. My PC freezes, and I have to disable the event, stop the MySQL process in Task Manager, and start the service again.
TEST CASE
During setup of a brand-new server I used all default settings, except that I enabled the general log and used the new 8.0+ caching_sha2_password authentication (although I ALTER USER to mysql_native_password for phpMyAdmin, so that might revert it; I'm honestly not sure).
1. I create a new schema called "Test".
2. I create one table, "TestTable", with a single column, "column1" INT.
3. I create a stored procedure, "TestProc", which does "SELECT COUNT(*) FROM TestTable;" (privileges adjusted; DEFINER is root@localhost; access is "Reads SQL").
4. Finally, I create an event, "TestEvent", which does "CALL TestProc();", recurring every 1 second, preserve on completion, definer root@localhost.
5. I restart the server before the event is fired.
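For reference, the steps above correspond roughly to this SQL (object names as in my test case; the DELIMITER lines are for the mysql command-line client):

```sql
CREATE SCHEMA Test;
USE Test;
CREATE TABLE TestTable (column1 INT);

DELIMITER //
CREATE DEFINER = 'root'@'localhost' PROCEDURE TestProc()
    READS SQL DATA
BEGIN
    SELECT COUNT(*) FROM TestTable;
END //
DELIMITER ;

CREATE DEFINER = 'root'@'localhost' EVENT TestEvent
    ON SCHEDULE EVERY 1 SECOND
    ON COMPLETION PRESERVE
    DO CALL TestProc();

-- Then restart the MySQL service; the hang appears on startup.
```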
Also, if I simply create or enable the event, it runs without issue. It's important to note that the problem begins when the event scheduler is left on and the event is left enabled, and then the server is restarted from the Services list in Task Manager. Immediately the CPU jumps to max, and the active connections show threads stacking up without completing.
Any clues are appreciated; I find no actual errors, nor do I have any idea where to begin debugging anymore. I've tried skipping grant tables (but obviously that's not optimal, and it didn't work).
I did find a hint when reviewing the MySQL 8.0+ docs
"If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking."
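The docs' GET_LOCK() suggestion would look something like this (a sketch only; the lock name is arbitrary, and a zero timeout makes an overlapping run exit immediately instead of queueing):

```sql
DELIMITER //
CREATE EVENT TestEvent
    ON SCHEDULE EVERY 1 SECOND
    DO
    BEGIN
        -- Skip this run entirely if the previous one still holds the lock.
        IF GET_LOCK('testproc_lock', 0) = 1 THEN
            CALL TestProc();
            DO RELEASE_LOCK('testproc_lock');
        END IF;
    END //
DELIMITER ;
```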
However, when analyzing the server there do not appear to be any locks, nor should I need to implement such a mechanism manually just for this test case (or my actual program).
UPDATE
Up to this point, albeit a rather niche bug, I do believe that is exactly what this is, and I have posted it on the MySQL bug forum.
The answer has turned out to be a reproducible bug: Bug #100449.

Minor MYSQL DB upgrade on GCP

There is a bug in MySQL 5.7.14 regarding password hashes that was fixed in version 5.7.19, but MySQL on GCP doesn't offer any option to do a minor upgrade. Can anyone suggest how to go about this issue?
Version 5.7.25, which includes the fix for this bug, will be in the next maintenance release later this month.
No, you cannot do minor upgrades yourself in Cloud SQL, because it is a fully managed service: Google performs all updates and upgrades behind the scenes for their customers' instances. These updates can happen at any time during the next maintenance cycle. However, you can control the day and time by specifying a maintenance window for the instance in question.
When you specify a maintenance window, Cloud SQL will not initiate updates outside of that window. This way you can pick a window when there is little or no traffic on your applications, which helps reduce the disruptive side effects of maintenance. Maintenance usually takes between 1 and 3 minutes for the new update to be pushed and the instance to become available again.
To specify a maintenance window:
1- Go to the project page and select a project.
2- Click an Instance name.
3- On the Cloud SQL Instance details page, click Edit maintenance preferences.
4- Under Configuration options, open Maintenance.
5- Configure the following options:
Preferred window. Set the day and hour range when updates can occur on this instance.
Order of update. Set the order for updating this instance, in relation to updates to other instances. Set timing to Any, Earlier, or Later. Earlier instances receive updates up to a week earlier than later instances within the same location.
read more on it here.
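The same preference can also be set from the command line; a sketch using the gcloud CLI (the instance name and window are placeholders):

```shell
# Set the preferred maintenance window to Sunday at 02:00 UTC.
gcloud sql instances patch my-instance \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=2
```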

Google Compute Engine gives error from when creating instance with existing boot and data disk

I originally created an instance with a persistent boot and data disk. I wanted to test that should something happen to an instance, I could just recreate one with the same boot and data disk and it would run as normal.
However, I'm getting this error when creating the instance from the developer console:
Invalid value for field 'resource.disks[1].source': 'site-data'. Must be a URL to a valid Compute resource of the correct type.
The only thing I'm doing differently is setting the boot disk to the previous site-boot disk rather than a new image, and attaching the site-data disk in read/write.
I suggest you try again -- it looks like their web-based Developer Console was broken for a few days bracketing the time you put your question in. It seems to work correctly now.
I also received this error when attempting to create an instance that included an additional Persistent Disk. Creating an instance with only the boot drive worked fine, but attempting to create an instance with any additional disk (including a new, empty disk) resulted in the same error you reported above.
I used the "Need Help?" link at the bottom left of the 'Create a new instance' web form to report the problem yesterday (10/21/14). Although I did not receive any kind of reply (I have not paid for any support options), the issue was resolved within 24 hours. I am now able to successfully create instances with additional Persistent Disks again.

What happens if second instance of Robocopy started on same folder?

I have some SQL log backups scheduled to run every 15 minutes including a robocopy with the /MIR option, to an archive folder on a cloud storage volume using CloudBerry.
Sometimes, after a full backup and on a slow network, the full backup's archive copy has not completed by the time the log backup runs, and I suspect a problem caused by the second robocopy then also trying to copy the large full backup file in addition to the new log backup.
What should happen? If the retry flag is set to /R:60, will the second instance somehow skip files already being copied by the other robocopy instance, or will the two instances step all over each other? Or must the second instance be run with /R:0 so that it skips the first file still being copied?
I know this answer is a little late and I hope you found a solution but here are my 2 cents:
Robocopy has an option to "monitor a source" for changes: the /MON and /MOT options. This would prevent robocopy from re-running over itself; it would always run in what is essentially a hot-folder scenario.
From the help of robocopy:
/MON:n :: MONitor source; run again when more than n changes seen.
/MOT:m :: MOnitor source; run again in m minutes Time, if changed.
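For example, instead of scheduling a fresh robocopy every 15 minutes, a single long-running instance could mirror the backup folder and re-check every 15 minutes (the paths here are hypothetical):

```bat
:: One persistent mirror that re-runs itself on a 15-minute cycle,
:: so two instances never overlap on the same files.
robocopy D:\SQLBackups \\nas\archive /MIR /MOT:15 /R:60 /W:5
```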
While this is quite an old question, I have not found a proper answer and find it to still be relevant, so here are my findings:
I ran a couple of tests and it seems that RoboCopy takes a snapshot of the source and destination directories and compares which files need to be copied from the point of the snapshot.
This means that if one RoboCopy instance starts immediately after another, the two instances will keep clashing and overwriting each other, as neither instance is aware that changes are happening in the destination directory.
If one instance (instance A) attempts to copy the same file that the other instance (instance B) is copying, it will error and either retry (if using /R) or skip to the next (if using /R:0). Once instance B is finished with the file it will then try to copy the next file on the list, which will either error (if instance A is still copying it) or overwrite the file (if instance A already moved on to the next file).
So in the case of the question, the most likely behavior (assuming network speed and file sizes remain somewhat consistent) is that the new instance of RoboCopy will overwrite the backup files in the beginning of the list while the original instance is still copying the last files on the list.

How to handle MySQL shutdown in Matlab?

Greetings all-
I'm writing a program in Matlab that parses and cleans a lot of data from one database to another, querying from MySQL. It would run continuously: new data come into the first DB every minute, are cleaned, and are put into the clean DB before the next data point comes in. I was wondering how, during this process, I could account for two things...
Every three nights, MySQL is shut down for backup. I'd like my program to pause when this happens and resume when it's back up. I've looked around for a solution and can't seem to find one.
Allow a user to kill the program. I've narrowed this down to either accounting for a ctrl+c kill, or creating a GUI to do it. Which do you all think would be the better strategy?
Thanks in advance for your time and help on this matter.
Use a TIMER together with a GUI.
First, create a GUI with two toggle buttons, 'pause' and 'cancel'. When your program starts, launch the GUI and capture its handle. Pass this handle to the timer object. When the timer fires at the start of the scheduled maintenance, it should set the 'Value' property of the 'pause' button to 1, and at the end of the maintenance set it back to 0. Meanwhile, your program, which I assume runs a while loop, should check the value of the pause button at every iteration. If the button is pressed (i.e. its value is 1), the program should not try to access the database. If the button is released, the program should run as normal.
When the program checks for a pressed pause button, it should also check for a pressed 'cancel' button. If that button is pressed, the function should break the loop and gracefully exit.
In the GUI, you can also set a closeRequestFcn, where you have a dialog pop open to ask whether the user really wants to quit the running database program. If the user chooses 'yes', hide the GUI (set(guiHandle,'Visible','off')) and "press" the cancel button, so that the program can exit. The closeRequestFcn will also execute when you close Matlab without having stopped the program first. This can help you avoid accidentally closing Matlab and thus accidentally killing your process.
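A minimal sketch of the timer-plus-GUI idea, assuming a fixed 02:00-03:00 backup window and placeholder names throughout (the actual window times, batch size, and database calls are yours to fill in):

```matlab
function runCleaner()
    % GUI with the two toggle buttons described above.
    fig = figure('Name','DB cleaner','MenuBar','none', ...
                 'CloseRequestFcn',@confirmClose);
    pauseBtn  = uicontrol(fig,'Style','togglebutton','String','Pause', ...
                          'Position',[20 20 80 30]);
    cancelBtn = uicontrol(fig,'Style','togglebutton','String','Cancel', ...
                          'Position',[120 20 80 30]);

    % Timer fires once a minute and "presses" Pause during the
    % (assumed) backup window, releasing it afterwards.
    t = timer('ExecutionMode','fixedRate','Period',60, ...
              'TimerFcn',@(~,~) setPauseForBackup(pauseBtn));
    start(t);

    while true
        drawnow;                      % let the GUI process button clicks
        if get(cancelBtn,'Value')     % user asked to stop: exit gracefully
            break;
        end
        if ~get(pauseBtn,'Value')     % not paused: do one unit of work
            % ... query MySQL, clean one batch, write to the clean DB ...
        end
        pause(1);
    end
    stop(t); delete(t); delete(fig);

    function confirmClose(src,~)
        % Confirm before killing the loop; also runs when Matlab closes.
        if strcmp(questdlg('Really quit the cleaner?'),'Yes')
            set(cancelBtn,'Value',1);  % "press" Cancel so the loop exits
            set(src,'Visible','off');  % hide; the loop deletes the figure
        end
    end
end

function setPauseForBackup(pauseBtn)
    h = hour(datetime('now'));
    set(pauseBtn,'Value', h >= 2 && h < 3);  % paused only during the window
end
```

Because the pause state lives in the button's 'Value', the user can also pause or resume manually at any time with the same button.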