MediaWiki Database Error 1205 - MySQL

I just installed a fresh copy of MediaWiki on http://konton.us/wiki
I was happily playing around with my wiki, filling it up with information, when suddenly, after I created an article named Gameplay_Mechanics, it all went dead.
http://konton.us/wiki/Gameplay_Mechanics
I got this error:
A database query syntax error has occurred. This may indicate a bug in the software. The last attempted database query was:
(SQL query hidden)
from within function "". Database returned error "1205: Lock wait timeout exceeded; try restarting transaction (internal-db.s76387.gridserver.com)".
I was able to fix it by 'emptying' the article, saving it, and then repopulating it, but it happened again less than a day later, so I'm wondering what the ACTUAL ROOT CAUSE of this ridiculous error is.
All help is appreciated

Try deleting the page, then recreating it with a slightly different name. It might just be a weird fluke thing having to do with that page specifically.

Are you using MySQL 5.1.26rc for a specific reason? Maybe upgrade to 5.1.49?
http://konton.us/wiki/Special:Version

This looks more like your database server being too busy. This error is often a sign of deadlocked transactions, although I'm not sure MediaWiki even uses transactions.
Are there many users visiting your site? Perhaps you're sharing your hosting with another high-traffic site?
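If you can reach the database directly, a quick way to narrow this down is to check the lock wait timeout and look for long-running transactions, since error 1205 means a statement waited longer than that timeout for a row lock. The sketch below is a rough, hypothetical example using the MySQLdb (mysqlclient) driver: the host comes from the error message above, while the user, password, and database names are placeholders for whatever is in your LocalSettings.php, and information_schema.innodb_trx is only available on servers with the InnoDB plugin (older builds can use SHOW ENGINE INNODB STATUS instead).
import MySQLdb

# Placeholders: substitute the credentials from LocalSettings.php.
conn = MySQLdb.connect(host="internal-db.s76387.gridserver.com",
                       user="wikiuser", passwd="secret", db="wikidb")
cur = conn.cursor()

# How long a statement waits for a row lock before failing with error 1205
# (the default is 50 seconds).
cur.execute("SHOW VARIABLES LIKE 'innodb_lock_wait_timeout'")
print(cur.fetchone())

# Transactions that have been open for a long time are the usual lock holders.
cur.execute("SELECT trx_id, trx_started, trx_mysql_thread_id "
            "FROM information_schema.innodb_trx ORDER BY trx_started")
for row in cur.fetchall():
    print(row)

conn.close()
On shared hosting like this you may not be able to change the timeout yourself, but knowing whether another long transaction is holding the lock at least tells you whether the problem is your wiki or a noisy neighbour.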

Related

MAMP MySQL InnoDB and missing errors

I was running MAMP fine until I brought in one particular Drupal site. The site started white-screening, loading with no CSS, and occasionally throwing a PDO error, but only on the front side of the application. I could access the administration part of the site fine.
In the mysql error log I found a bunch of errors like InnoDB: Error: Table "mysql"."innodb_table_stats" not found. and ...required persistent statistics storage is not present or is corrupted. Using transient stats instead.
I dug through some of the Drupal views and found one that executes php from the database (big no-no!). I removed that from the database but I was still getting the same innodb errors.
Next I found this question. It sounds like a similar issue, though under different circumstances. I've seen a bunch of issues that seem related, often involving an upgrade to MySQL 5.6. The gist of the issue seems to be that some tables in the mysql system database are improperly formatted. The solution suggested there and elsewhere is to delete the .frm and .ibd files associated with those tables, then recreate the tables using the query provided.
I did all that ^, and now all the sites in MAMP are returning 500 errors. Worse than that, the error logs are totally silent, and I'm stuck with no idea where to turn.
NOTE: I uninstalled MAMP, reinstalled it, and the same thing occurred.
So, I will happily accept an answer that has a better explanation than what I'm giving here, but I "solved" this issue in the following manner:
The site was also running a module called "Sassy" that compiled CSS on the server. I was suspicious of it because of the missing CSS, so I configured the module to compile only when I explicitly tell it to.
I dropped and reimported each site's database.
I noticed, as stated above, that I would get different responses on each page load. I persisted in going through the site, both admin and front side, by repeatedly reloading the pages.
In conclusion, I'm no longer getting the InnoDB errors. Pushing through and disabling a sketchy (IMO) module seems to have fixed the white screens and 500 errors and brought back the styling. Which brings me to the big question: why wasn't I getting any error messages in the logs?

Frequent MySQL gone away (Error 2006) in production environment

I am running a website on Django 1.10, Python 3.4, mysqlclient, and MySQL 5.5.
I have multiple management commands running in the background for various tasks (like sending mail and updating tables) at different times. Recently, I have started seeing many MySQL gone away (Error 2006) errors.
I tried changing the connection_timeout to be greater than the max connection age. That solved a small share of the problem: Error 2006 stopped occurring for one running function in most cases, but the other cases are unchanged and still hit Error 2006 frequently.
In some cases, I tried calling django.db.connection.close() before running a function, or wrapping the call in a try-except block. In those places the error no longer occurs, but I cannot put this try-except block in every function.
What is the root cause of this repeated MySQL gone away error? What are the different solutions, and which is the best one so that I do not have to change much of my code?
Other Variables:
We recently upgraded from Django 1.7 to 1.10 and updated the mysqlclient package; the problem probably surfaced after that.
Around the same time as the upgrade, the traffic on our site also increased several-fold. Could that be a trigger?
As seen elsewhere, the problem can be either that your query is too large, or that the server closed the connection due to inactivity. It depends a lot on the circumstances, but if you're not in the middle of a transaction (which would rule out the second possibility), you can apply something similar to 'Hook available for automatic retry after deadlock in django and mysql setup'.
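A minimal sketch of that idea, assuming Django 1.10 and the mysqlclient driver from the question: close connections that have outlived their maximum age before the work starts, and retry once if error 2006 still slips through. The function name send_pending_mail and the retry policy are made up for illustration, not taken from the question.
from functools import wraps

from django.db import close_old_connections
from django.db.utils import OperationalError

def retry_on_gone_away(func, retries=1):
    """Close stale connections before running func; retry once on error 2006."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        close_old_connections()          # drop connections past their max age
        for attempt in range(retries + 1):
            try:
                return func(*args, **kwargs)
            except OperationalError as exc:
                # 2006 = "MySQL server has gone away"
                if attempt == retries or exc.args[0] != 2006:
                    raise
                close_old_connections()  # Django reconnects lazily on the next query
    return wrapper

@retry_on_gone_away
def send_pending_mail():
    # hypothetical management-command body that touches the database
    ...
Decorating the entry point of each management command this way avoids sprinkling try-except blocks through every function, which was the concern in the question.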

How fatal is the maximum execution timeout warning in MySQL

I've been working with XAMPP, which I installed just under a year ago. I recently installed a few frontend tools for MySQL (to see which one I am most comfortable with).
Now, for the past two days, whenever I go to localhost/phpmysql, I receive this warning
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\phpMyAdmin\lib..
I understand that the maximum execution time is being exceeded here. I found a few posts on Stack Overflow that guide you to a fix. All well and good till here.
I have a question and a concern.
Question
Why this error all of a sudden when, as far as I remember, I did nothing to upset the default settings of MySQL?
Concern
I am working on a project that uses a database (important, I cannot lose it). When phpMyAdmin is refreshed after the warning, it works normally as if there never was a problem. I'll need a couple of weeks to finish my project. Can I continue with this timeout error without risking my database, or should I try to rectify it right away?
DCoder's answer is good: try changing the maximum execution time on the MySQL server. You can also try to find which query is making the noise using the slow query log (you can activate it, and it is very useful in the case of messy queries).
Check this page for more information
If you're running MySQL in a Windows environment, I strongly suggest putting it on a Linux server, because it's a lot slower on Windows.
About your concern: there is no real danger to your data, and changing the slow-query-log option or the maximum-execution-time option doesn't compromise the databases... but take a backup before changing anything, just for extra security.
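As a rough illustration of activating the slow query log without editing my.cnf (assuming MySQL 5.1 or later and a user with the SUPER privilege), the snippet below sends the SET GLOBAL statements through the MySQLdb driver purely for convenience; the host, user, and password are placeholders for your XAMPP setup, and the same statements work from any MySQL client.
import MySQLdb

# Placeholders: adjust host/user/password for your local XAMPP install.
conn = MySQLdb.connect(host="localhost", user="root", passwd="")
cur = conn.cursor()

cur.execute("SET GLOBAL slow_query_log = 'ON'")
cur.execute("SET GLOBAL long_query_time = 2")        # log statements slower than 2 seconds
cur.execute("SHOW VARIABLES LIKE 'slow_query_log_file'")
print(cur.fetchone())                                # path of the log file to inspect

conn.close()
Settings changed with SET GLOBAL are lost when the server restarts, so this is safe to experiment with and easy to undo.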

(2006, 'MySQL server has gone away') in WSGI Django

I am getting a MySQL gone away error with Django under WSGI. I found entries for this problem on Stack Overflow, but nothing about Django specifically. Google does not help, except for workarounds (like polling the website every once in a while, or increasing the database timeout). Nothing definitive. Technically, Django and/or MySQLdb (I'm using the latest, 1.2.3c1) should attempt to reconnect if the server dropped the connection, but this does not happen. How can I solve this issue without workarounds?
show variables like 'wait_timeout';
This is the setting that triggers the "MySQL gone away" error.
Set it to a very large value to prevent the "gone away", or simply re-ping the MySQL connection after a certain period.
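As a sketch of the re-ping idea with the MySQLdb driver mentioned in the question (releases around 1.2.3 accept a reconnect flag on ping); the connection details and helper names are placeholders, not anything Django provides.
import MySQLdb

def open_connection():
    # Placeholders for the real credentials.
    return MySQLdb.connect(host="localhost", user="app", passwd="secret", db="appdb")

conn = open_connection()

def live_connection():
    """Return a connection that is known to be alive, reconnecting if needed."""
    global conn
    try:
        conn.ping(True)           # True asks the client library to reconnect if the link dropped
    except MySQLdb.OperationalError:
        conn = open_connection()  # fall back to opening a fresh connection
    return conn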
Django developers gave one short answer for all questions like this in https://code.djangoproject.com/ticket/21597#comment:29
Resolution set to wontfix
Actually this is the intended behavior after #15119. See that ticket for the rationale.
If you hit this problem and don't want to understand what's going on, don't reopen this ticket, just do this:
RECOMMENDED SOLUTION: close the connection with from django.db import connection; connection.close() when you know that your program is going to be idle for a long time.
CRAPPY SOLUTION: increase wait_timeout so it's longer than the maximum idle time of your program.
In this context, idle time is the time between two successive database queries.
You could create middleware to ping() the MySQL connection (which will reconnect if it timed out) before processing the view.
You could also add middleware to catch the exception, reconnect, and retry the view. (I think I would prefer the first solution as simpler, but it should technically work and be performant, assuming timeouts are rare. This also assumes a failed view has no side effects, which is a desirable property but can be difficult to guarantee, especially if a view writes to a filesystem as well as the database.)
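A rough sketch of the first suggestion as old-style middleware (the kind this era of Django used), assuming the MySQLdb driver: it pings the raw connection before the view runs and, if even a reconnecting ping fails, closes the wrapper so Django opens a fresh connection on the view's first query. The class name is made up, and this is illustrative rather than a drop-in.
import MySQLdb

from django.db import connection

class MySQLPingMiddleware(object):
    """Make sure the MySQL connection is still usable before each view."""

    def process_request(self, request):
        try:
            if connection.connection is not None:
                # Ask the driver to check (and, with True, re-establish) the link.
                connection.connection.ping(True)
        except MySQLdb.OperationalError:
            # Give up on the dead connection; Django reopens one lazily
            # on the next query the view makes.
            connection.close()
        return None
You would register it in MIDDLEWARE_CLASSES ahead of anything that touches the database. Whether ping(True) is allowed to reconnect depends on the MySQLdb and libmysqlclient versions, which is why the except branch falls back to closing the wrapper.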

Preventing Mongrel/MySQL Errno::EPIPE exceptions

I have a Rails app that serves up XML on an infrequent basis.
It is being run with Mongrel and MySQL.
I've found that if I don't exercise the app for longer than a few hours, it goes dead and starts throwing Errno::EPIPE errors. It seems that the MySQL connection gets timed out for inactivity or something like that.
It can be restarted with 'mongrel_rails restart -P /path/to/the/mongrel.pid' ... but that's not really a solution.
My collaborator expects the app to be there when he is working on his part (and I am most likely not around).
My question is:
What can I do to prevent this problem from occurring in the first place? (e.g. don't time me out!!)
Failing that, is there some code I can insert somewhere to automatically remake the DB connection?
Here's a solution:
https://boxpanel.blueboxgrp.com/public/the_vault/index.php/Mongrel_/_MySQL_Timeout
The timeouts in the above solution seem a little high to me. You don't want your DB timeouts to be too high, because of the amount of memory a connection can use. If a connection is orphaned, you want it to time out reasonably soon (i.e. not in one week).
In other places, I also got the following suggestions:
Try setting config.active_record.verification_timeout to something lower than whatever your MySQL connection timeout setting is.
There's a gem to work around this problem: mysql_retry_lost_connection
http://rubyforge.org/projects/zventstools/
"Reconnect to the MySQL server when you hit a lost connection error".