I'm using MySQL 8.0.21 from the MySQL Community Installer on Windows 10 (updated to version 2004), and for some reason, if I create an event in the event scheduler that calls a procedure once every second (regardless of what that SP actually does; I'll explain my test case), my CPU maxes out. When I look at the active connections in MySQL Workbench, a ton of worker threads stack up, all stalled in the "Opening tables" state. My PC freezes, and I have to edit the event to disable it, stop the MySQL process in Task Manager, and start the service again.
TEST CASE
During setup of a brand-new server, I used all default settings, except that I enabled the general log and used the new 8.0+ caching_sha2_password authentication (although I ALTER USER to mysql_native_password for phpMyAdmin, so that might revert it; I'm honestly not sure).
I create a new schema called "Test".
I create one table called "TestTable" with only one column, "column1" INT.
I then create a stored procedure "TestProc" which does "SELECT COUNT(*) FROM TestTable;", with SQL SECURITY DEFINER, definer root@localhost, and READS SQL DATA.
And finally, I create an event called "TestEvent" which does "CALL TestProc();", recurring every 1 second, ON COMPLETION PRESERVE, and definer root@localhost.
I restart the server before the event is fired.
Also, if I enable the event or create it, it runs without issue. It's important to note that the issue begins when the event scheduler is left on and the event is left enabled, and then the server is restarted from the Services panel in Task Manager. Immediately the CPU jumps to max, and the active connections show threads stacking up without completing.
Any clues are appreciated; I find no actual errors, nor do I have any idea where to begin debugging anymore. I've tried skipping grant tables (but obviously that's not optimal, and it didn't work).
I did find a hint when reviewing the MySQL 8.0+ docs:
"If a repeating event does not terminate within its scheduling interval, the result may be multiple instances of the event executing simultaneously. If this is undesirable, you should institute a mechanism to prevent simultaneous instances. For example, you could use the GET_LOCK() function, or row or table locking."
However, when analyzing, there do not appear to be any locks, nor should I need to implement such a mechanism manually just for this test case (or my actual program).
UPDATE
Up to this point, albeit a rather niche bug, I do believe that is exactly what this is, and I have posted it on MySQL bug forum. Reference post is here:
The answer has actually turned out to be a reproducible bug: Bug #100449.
Related
I'm running phpBB (the most up-to-date non-beta version), and in the last 3 months, this error has appeared during a search every few days:
'phpbb_search_wordmatch' is marked as crashed and last (automatic?) repair failed
To fix it, I just run a repair on the table. I am still working on figuring out why it keeps crashing. The host was not helpful, and it could be that the table is too large for the server (700 MB or so).
My question: could I create a trigger in phpMyAdmin in the meantime to automatically repair the table whenever this error happens? You see the error on the table in phpMyAdmin when you go to access it, so there must be some entry I can use to create the trigger.
Unfortunately, this issue is difficult for me to fix on a shared server, and all the resources online say to contact the host, so I'd settle for having it at least fix itself when it happens.
You may be better off setting up a scheduled task. I'm not aware of a way to create a trigger that detects when a table needs repair; I don't believe there are hooks that could detect that situation and cause a procedure to run.
From the database, there's an Events tab where you can enable the MySQL event scheduler and create an event that runs, say, once per week and executes the SQL statement to repair the table. It's still not ideal, but I think it's better than using a trigger in this case.
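A minimal sketch of such an event, assuming the table is the phpbb_search_wordmatch mentioned above and the account has the EVENT privilege (on shared hosting, the scheduler may already be enabled or may be off-limits):

```sql
SET GLOBAL event_scheduler = ON;  -- requires an admin privilege; skip if the host manages it

CREATE EVENT weekly_repair_wordmatch
ON SCHEDULE EVERY 1 WEEK
ON COMPLETION PRESERVE
DO REPAIR TABLE phpbb_search_wordmatch;
```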
Before converting a project to use MySQL, I have questions regarding the best way to avoid losing a simple record update due to either a server crash or a program shutdown caused by exceeding the CGI run-time limit.
My project is public, and therefore applicable to any of many hosts where high-level server-side management isn't an option.
I wish to open a list file (or table) and acquire a list of records to parse one at a time.
While parsing each acquired list record, the program/script performs a task with each record and updates a counter (a simple table) upon successful completion of each task (or alternatively updates each record with a success flag).
Do MySQL tables get automatically flushed to the hard drive when updated or added to, thus avoiding loss of all table changes up to the point of the crash if/when the program/script is violently terminated as described?
To have any chance of doing the same with simple text files, the counter file has to be opened and closed for each update (since the contents of open files on most operating systems get clobbered in a crash).
Any outline of the MySQL commands/processes to follow, if needed to avoid the described losses, would also be very much appreciated.
Also, if there are any suggestions, are they applicable to both InnoDB and MyISAM?
A simple answer comes to mind: SQL transactions. A transaction groups a set of SQL statements so that they take effect only when the whole group is successfully committed; if any statement fails, the changes can be rolled back.
I think this would help:
http://www.sqlteam.com/article/introduction-to-transactions
If my answer wasn't correct, please let me know if I misunderstood your intentions.
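As a sketch, assuming an InnoDB table (MyISAM does not support transactions), the per-record update could look like this; task_counter and task_list are hypothetical names standing in for your counter table and list table:

```sql
-- Assumes InnoDB tables; MyISAM silently ignores transactions.
START TRANSACTION;

UPDATE task_counter SET completed = completed + 1;   -- hypothetical counter table
UPDATE task_list SET done = 1 WHERE id = 42;         -- hypothetical per-record success flag

COMMIT;  -- once COMMIT returns, InnoDB has made the change durable
         -- (with the default setting innodb_flush_log_at_trx_commit = 1)
```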
I have some games where users' health and other attributes are updated every couple of minutes using MySQL events. I ran into a problem where eventually the events stop running; the SQL in the event no longer gets executed.
I wasn't sure how else to fix it, so I tried restarting MySQL, and that fixes it for a while. I set up a cron job to restart MySQL every night, but that's not a very good solution. Sometimes MySQL fails to restart and hangs.
Edit: All of the tables in my databases that use the events are InnoDB.
It could be that you have events that are not completing and are holding many locks. Eventually, additional jobs will "stack up", each trying to acquire locks but appearing to do no work. This can be especially true if you are using MyISAM tables, as they have table-level, not row-level, locking.
Consider configuring pt-stalk (part of the Percona Toolkit) to capture regular snapshots of SHOW PROCESSLIST and other important details. Then you can track down when things "stop working" and work backwards to when the problem started.
To prevent jobs from "stacking up" use the GET_LOCK function:
SELECT GET_LOCK('THIS_IS_A_NAMED_LOCK', 0) INTO @got_lock;
IF @got_lock = 1 THEN
  SELECT 'do something here';
  SELECT RELEASE_LOCK('THIS_IS_A_NAMED_LOCK') INTO @discard;
END IF;
If you are using InnoDB, make sure that you issue START TRANSACTION and COMMIT statements in your event to ensure that you are not creating long-running transactions.
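Putting both suggestions together, a guarded event body might look like this (a sketch; update_player_stats is a stand-in for whatever procedure does the real work in your game):

```sql
DELIMITER //
CREATE EVENT update_health_event
ON SCHEDULE EVERY 2 MINUTE
DO
BEGIN
  -- Skip this run entirely if the previous run still holds the lock.
  IF GET_LOCK('game_stats_update', 0) = 1 THEN
    START TRANSACTION;
    CALL update_player_stats();   -- hypothetical procedure doing the real work
    COMMIT;                       -- keep the transaction short-lived
    DO RELEASE_LOCK('game_stats_update');
  END IF;
END //
DELIMITER ;
```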
I ended up taking my iOS games offline because of this problem recently. I couldn't figure out how to implement the answer by Justin Swanhart. If anyone is interested I can make my php/mysql code available to see if you can fix this problem. Just let me know. andy.triboletti#gmail.com
Here's my situation: I have a MySQL database in which I'd like to use triggers to automatically manage the updating of date-created and date-modified fields in a few of my tables. Later, I'd like to expand them into logging data changes, but that's neither here nor there at the moment.
The triggers work fine and update the fields as intended. The problem is that the application account is now basically the only account that can affect these tables; actions from other MySQL user accounts (such as mine) fail because the definer is the application account.
I can't seem to find a way to have the trigger fire regardless of who executes a command on the tables, and it's quite frustrating. Is there a way to either open up a trigger to fire regardless of user, or allow multiple users to fire the trigger?
We're currently running MySQL 5.0.18 - changing this is very unlikely as the folks here in charge of infrastructure are rather resistant to fixing what (in their minds at least) isn't broken.
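For what it's worth, MySQL triggers fire for any account that modifies the table; the definer only determines the privilege set the trigger body runs under. One common approach is to recreate the trigger with a definer account that holds the needed privileges. A sketch, where the schema, table, column, and account names are assumptions:

```sql
DROP TRIGGER my_schema.set_modified_date;    -- note: IF EXISTS is not available until MySQL 5.1

CREATE DEFINER = 'app_account'@'localhost'   -- pick an account with the needed table privileges
TRIGGER set_modified_date
BEFORE UPDATE ON my_table
FOR EACH ROW
SET NEW.date_modified = NOW();
```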
I have a desktop application that runs on a network and every instance connects to the same database.
So, in this situation, how can I implement a mutex that works across all running instances that are connected to the same database?
In other words, I don't want two or more instances to run the same function at the same time. If one is already running the function, the other instances shouldn't have access to it.
PS: A database transaction won't solve it, because the function I want to mutex doesn't use the database. I mentioned the database just because it can be used to exchange information across the running instances.
PS2: The function takes about 30 minutes to complete, so if a second instance tries to run the same function, I would like to display a nice message saying it can't be performed right now because computer 'X' is already running that function.
PS3: The function has to be processed on the client machine, so I can't use stored procedures.
I think you're looking for a database transaction. A transaction will isolate your changes from all other clients.
Update:
You mentioned that the function doesn't currently write to the database. If you want to mutex this function, there will have to be some central location to store the current mutex holder. The database can work for this: just add a new table that records the computer name of the current holder, and check that table before starting your function.
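A sketch of that idea, assuming a new table named function_mutex; the single-statement UPDATE ... WHERE holder IS NULL acts as an atomic test-and-set, so two clients can never both acquire the lock:

```sql
CREATE TABLE function_mutex (
  function_name VARCHAR(64) PRIMARY KEY,
  holder        VARCHAR(64) NULL,          -- computer name of the current holder
  acquired_at   DATETIME    NULL
) ENGINE = InnoDB;

INSERT INTO function_mutex (function_name) VALUES ('long_running_function');

-- Try to acquire: affects 1 row only if nobody currently holds the lock.
UPDATE function_mutex
SET holder = 'COMPUTER-X', acquired_at = NOW()
WHERE function_name = 'long_running_function' AND holder IS NULL;
-- If the client reports 0 rows affected, show "computer X is already running it".

-- Release when done:
UPDATE function_mutex
SET holder = NULL, acquired_at = NULL
WHERE function_name = 'long_running_function' AND holder = 'COMPUTER-X';
```

One caveat: if a client crashes mid-run, the row stays locked, so you may want to treat an old acquired_at timestamp as stale and reclaim it.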
I think your question may be confused, though. Mutexes are about protecting resources. If your function is not accessing the database, then what shared resource are you protecting?
Put the code inside a transaction, either in the app or, better, inside a stored procedure, and call the stored procedure.
The transaction mechanism will isolate the code between the callers.
Conversely, consider a message queue. As mentioned, the DB should manage all of this for you, either in transactions or via serial access to tables (à la MyISAM).
In the past I have done the following:
Create a table that basically has two fields, function_name and is_running
I don't know which RDBMS you are using, but most have a way to lock individual records for update. Here is some pseudocode based on Oracle:
BEGIN TRANSACTION;
SELECT is_running FROM function_table WHERE function_name = 'foo' FOR UPDATE;
-- Check here to see if it is running; if not, you can set it to running:
UPDATE function_table SET is_running = 'Y' WHERE function_name = 'foo';
COMMIT;
Now, I don't have the Oracle PL/SQL docs with me, but you get the idea. The FOR UPDATE clause locks the record from the read until the commit, so other processes will block on that SELECT statement until the current process commits.
You can use Terracotta to implement such functionality, if you've got a Java stack.
Even if your function does not currently use the database, you could still solve the problem with a specific table for the purpose of synchronizing this function. The specifics would depend on your DB and how it handles isolation levels and locking. For example, with SQL Server you would set the transaction isolation to repeatable read, read a value from your locking row and update it inside a transaction. Don't commit the transaction until your function is done. You can also use explicit table locks in a transaction on most databases which might be simpler. This is probably the simplest solution given you are already using a database.
If you do not want to rely on the database for whatever reason you could write a simple service that would accept TCP connections from your client. Each client would request permission to run and would return a response when done. The server would be able to ensure only one client gets permission to run at a time. Dead clients would eventually drop the TCP connection and be detected as long as you have the correct keep alive setting.
The message queue solution suggested by Xepoch would also work. You could use something like MSMQ or a Java message queue and have a single message act as a run token. All your clients would request the message and then repost it when done. You risk a deadlock if a client dies before reposting, so you would need to devise some logic to detect this, and it might get complicated.