phpMyAdmin doesn't respond after timeout. What can I do? - mysql

I made a query in phpmyadmin that took a long time and in the end it timed out.
After it timed out, I have been unable to access phpmyadmin again.
I don't get an error, the website just keeps loading and nothing happens.
I've tried accessing the database via scripts, and that works fine, just can't use phpmyadmin.
This has happened a few times before, always after a timeout, and I've always just had to wait quite some time. I usually try again a few hours later or the next day, and then it works. But that is annoying when I am working on something.
Is there anything I can do to prevent this from happening (other than just making sure my queries won't take so long)? It feels like phpMyAdmin is still working on the query even though it timed out, and that's why it doesn't respond; I would just like it to stop running the query.

You can restart the service, I mean Apache, in order to clear buffers etc. I also recommend inspecting the current connections with SHOW PROCESSLIST.
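As a sketch of what that looks like in a MySQL client session (the id 12345 below is a placeholder; use the Id column from your own output):

```sql
-- List all current connections and the statement each one is running
SHOW PROCESSLIST;

-- Kill the runaway query by the Id shown in the first column
-- (12345 is a placeholder)
KILL 12345;
```

Killing the stuck thread on the server side frees MySQL to answer new connections, so phpMyAdmin should respond again without waiting hours.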

Related

Server Status showing way too many queries and questions increasing per second on MySQL ClearDB

I have a MySQL DB (ClearDB) serving my backend application hosted on Heroku. That means I have very limited ways to actually see the logs (apparently no access to them) to even know what these queries are. I know for sure nobody is using the system right now. On my Heroku logs there's really nothing being executed by the backend to trigger any query.
What I can see from MySQL Workbench when looking at Status and Variables is that the values of Queries and Questions increase by the hundreds every second when I refresh it, which to me seems really odd. The value of Threads_connected is always between 120 and 140, although Threads_running is usually lower than 5.
The "Selects Per Second" keep jumping between 200 and 400.
I am mostly a developer without much DBA experience. Are these values normal? Why are they constantly increasing even when there's no traffic? If they're not normal, what means can I use to investigate what is actually running there when ClearDB does not give me access to logs?
'show processlist' can only raise my suspicion that something seems off, but how do I proceed from there?

How fatal is the maximum execution timeout warning in MySQL

I've been working with XAMPP, which I installed just under a year ago. I recently installed a few frontend tools for MySQL (to see which one I am most comfortable with).
Now, for the past two days, whenever I go to localhost/phpmysql, I receive this warning
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\phpMyAdmin\lib..
I understand that the maximum execution time is being exceeded here. I found a few posts on Stack Overflow that walk you through rectifying it. All well and good up to here.
I have a question and a concern.
Question
Why this error all of a sudden when, as I clearly remember, I did nothing to upset MySQL's default settings?
Concern
I am working on a project which uses a database (important, I cannot lose it). After the warning, refreshing phpMyAdmin makes it work normally again, as if there never was a problem. I'll need a couple of weeks to finish my project. Can I continue with this timeout error without risking my database, or should I try to rectify it right away?
DCoder's answer is good: try changing the maximum execution time on the MySQL server. You can also find which query is making the noise by using the slow query log (you can activate it, and it is very useful in the case of messy queries).
Check this page for more information
If you're running MySQL in a Windows environment, I strongly suggest moving it to a Linux server, because it is a lot slower on Windows.
About your concern: there is no real danger to your data, and changing the slow-query-log or maximum-execution-time options doesn't compromise the databases... but do a backup before changing anything, just for extra security.
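As a hedged sketch of both changes (the file paths and the 120-second value are assumptions for a default XAMPP install on Windows; adjust for your setup):

```ini
; C:\xampp\php\php.ini -- raise PHP's per-request execution limit,
; which is what the "Maximum execution time of 30 seconds exceeded"
; warning is actually about
max_execution_time = 120

; C:\xampp\mysql\bin\my.ini -- enable the slow query log so you can
; see which query is taking so long
[mysqld]
slow_query_log = 1
slow_query_log_file = "C:/xampp/mysql/data/slow.log"
long_query_time = 2
```

Restart Apache and MySQL after editing these files so the new values take effect.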

What is causing random spikes in local MySQL server query speeds?

So while playing around on my localhost in phpMyAdmin and doing some stuff with SQL, I realized that I would randomly get huge spikes in the time it took to perform a database query. I have a database table with about 3000 entries, and I was running a very simple query to display the first 2500 of them.
On average, running this query was taking around 0.003 to 0.004 seconds. (Of course, loading the phpMyAdmin page took much longer, but we're just looking at the query times.) However, I noticed that occasionally the query times would go up past 0.01. Once it even shot up to 0.04. So, my curiosity getting the better of me, I decided to repeatedly run the same query, and produced a graph of my results:
I'm not running anything else on my computer that may be interacting with MySQL, and because it's my localhost I'm the only one that's doing anything to mess with my database (right?). Slight outliers are understandable, but what's causing the load times to go up anywhere from 3 to 30 times, completely randomly it seems?
Can anyone help me satiate my curiosity?
I'm not running anything else on my computer that may be interacting with MySQL
But is there anything else running on your computer that might be interacting with your hard drive/CPU on a regular basis? Because that would explain the spikes. Maybe scan the running processes and compare CPU/disk activity against the spikes.
Even though your database is running on your local host, it's not running in complete isolation. It is competing for your system's resources with every other process you have running.

(2006, 'MySQL server has gone away') in WSGI django

I'm getting "MySQL server has gone away" errors with Django under WSGI. I found entries for this problem on Stack Overflow, but nothing about Django specifically. Google only turns up workarounds (like polling the website every once in a while, or increasing the database timeout), nothing definitive. Technically, Django and/or MySQLdb (I'm using the latest 1.2.3c1) should attempt a reconnect if the server dropped the connection, but this does not happen. How can I solve this issue without workarounds?
show variables like 'wait_timeout';
This is the setting that causes the "MySQL server has gone away" error.
Set it to a very large value to prevent the connection from "going away",
or simply re-ping the MySQL connection after a certain period.
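As a sketch, in a MySQL client session (the 28800-second value is an assumption; it happens to be MySQL's default of 8 hours):

```sql
-- Check how long an idle connection is kept before the server drops it
SHOW VARIABLES LIKE 'wait_timeout';

-- Raise it for new connections (requires the SUPER privilege;
-- existing connections keep their old value)
SET GLOBAL wait_timeout = 28800;
```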
The Django developers gave one short answer to all questions like this in https://code.djangoproject.com/ticket/21597#comment:29
Resolution set to wontfix
Actually this is the intended behavior after #15119. See that ticket for the rationale.
If you hit this problem and don't want to understand what's going on, don't reopen this ticket, just do this:
RECOMMENDED SOLUTION: close the connection with from django.db import connection; connection.close() when you know that your program is going to be idle for a long time.
CRAPPY SOLUTION: increase wait_timeout so it's longer than the maximum idle time of your program.
In this context, idle time is the time between two successive database queries.
You could create middleware to ping() the MySQL connection (which will reconnect if it timed out) before processing the view.
You could also add middleware to catch the exception, reconnect, and retry the view. (I think I would prefer the above solution as simpler, but this should technically work and be performant, assuming timeouts are rare. It also assumes a failed view has no side effects, which is a desirable property but can be difficult to guarantee, especially if your view writes to a filesystem as well as a DB.)
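The catch-reconnect-retry idea can be sketched framework-agnostically. This is a minimal illustration, not Django's actual middleware API: `OperationalError` and `FakeConnection` below are stand-ins for MySQLdb's exception and a real DB connection, so you can see the control flow without a live server.

```python
class OperationalError(Exception):
    """Stand-in for MySQLdb.OperationalError, e.g. (2006, 'MySQL server has gone away')."""


class FakeConnection:
    """Toy connection: starts 'timed out' (not alive), works again after connect()."""

    def __init__(self):
        self.alive = False

    def connect(self):
        self.alive = True

    def close(self):
        self.alive = False

    def query(self, sql):
        if not self.alive:
            raise OperationalError(2006, "MySQL server has gone away")
        return "ok"


def with_reconnect(conn, fn, *args):
    """Run fn; on a gone-away error, close, reconnect once, and retry.

    Only safe if fn has no side effects before the point of failure.
    """
    try:
        return fn(*args)
    except OperationalError:
        conn.close()
        conn.connect()
        return fn(*args)


conn = FakeConnection()
result = with_reconnect(conn, conn.query, "SELECT 1")
print(result)  # "ok": the first attempt fails, the retry after reconnect succeeds
```

In real Django middleware you would catch `django.db.utils.OperationalError` around the view call and use `connection.close()` to force a fresh connection on the next query.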

Preventing Mongrel/Mysql Errno::EPIPE exceptions

I have a rails app that I have serving up XML on an infrequent basis.
This is being run with mongrel and mysql.
I've found that if I don't exercise the app for more than a few hours, it goes dead and starts throwing Errno::EPIPE errors. It seems the MySQL connection gets timed out for inactivity or something like that.
It can be restarted with 'mongrel_rails restart -P /path/to/the/mongrel.pid' ... but that's not really a solution.
My collaborator expects the app to be there when he is working on his part (and I am most likely not around).
My question is:
What can I do to prevent this problem from occurring in the first place? (e.g. don't time me out!!)
Failing that, is there some code I can insert somewhere to automatically re-establish the DB connection?
Here's a solution:
https://boxpanel.blueboxgrp.com/public/the_vault/index.php/Mongrel_/_MySQL_Timeout
The timeouts in the above solution seem a little high to me. You don't want your DB timeouts to be too low, because of the amount of memory a connection can use, but if a connection is orphaned, you want it to time out within a reasonable period (not, say, a week).
In other places, I also got the following suggestions:
Try setting config.active_record.verification_timeout to something lower than your MySQL connection timeout setting.
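For example (the 14400-second value is an assumption; pick anything below your MySQL wait_timeout), in a Rails 2.x-era config/environment.rb:

```ruby
# config/environment.rb -- verify (and refresh) connections that have
# been idle longer than this many seconds before reusing them
config.active_record.verification_timeout = 14400
```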
There's also a gem to work around this problem: mysql_retry_lost_connection (http://rubyforge.org/projects/zventstools/), which will "Reconnect to the MySQL server when you hit a lost connection error".