How fatal is the maximum execution timeout warning in MySQL - mysql

I've been working with XAMPP, which I installed just under a year ago. I recently installed a few frontend tools for MySQL (to see which one I am most comfortable with).
Now, for the past two days, whenever I go to localhost/phpmysql, I receive this warning:
Fatal error: Maximum execution time of 30 seconds exceeded in C:\xampp\phpMyAdmin\lib..
I understand that the maximum execution time allowed is being exceeded here. I found a few posts on Stack Overflow that guide you through fixing it. All well and good till here.
I have a question and a concern.
Question
Why this error all of a sudden when, as I clearly remember, I did nothing to change the default settings of MySQL?
Concern
I am working on a project which uses a database (important, I cannot lose it). When phpMyAdmin is refreshed after the warning, it starts to work normally, as if there never was a problem. I'll need a couple of weeks to get done with my project. Can I continue with this timeout error without risking my database, or should I try to rectify it right away?

DCoder's answer is good: try raising the maximum execution time (note that the 30-second limit in that error comes from PHP's max_execution_time setting, which phpMyAdmin runs under, rather than from MySQL itself). You can also try to find which query is making the noise by using the slow query log (you can activate it, and it is very useful in the case of messy queries).
Check this page for more information
If you're running MySQL in a Windows environment, I strongly suggest putting it on a Linux server, because it's a lot slower on Windows.
About your concern: there is no real danger to your data, and changing the slow-query-log option or the maximum-execution-time option doesn't compromise the databases... but do a backup before changing anything, just for extra security.
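If you do want to try the slow query log, here is a minimal sketch of turning it on at runtime, assuming a privileged account and a reasonably recent MySQL server (the threshold and file path below are only example values):
-- enable the slow query log without restarting the server (needs a privileged account)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;  -- log anything slower than 2 seconds (example value)
SET GLOBAL slow_query_log_file = 'C:/xampp/mysql/data/slow.log';  -- example path
-- confirm the current settings
SHOW VARIABLES LIKE 'slow_query_log%';
SHOW VARIABLES LIKE 'long_query_time';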

Related

Server Status showing way too many queries and questions increasing per second on MySQL ClearDB

I have a MySQL DB (ClearDB) serving my backend application hosted on Heroku. That said, I have very limited ways to actually see the logs (no access to the logs, apparently) to even know what these queries are. I know for sure nobody is using the system right now. In my Heroku logs there's really nothing being executed by the backend that would trigger any query.
What I can see from MySQL Workbench when looking at Status and Variables is that the values of Queries and Questions increase by the hundreds every second when I refresh it, which to me seems really odd. The value of Threads_connected is always between 120 and 140, although Threads_running is usually lower than 5.
The "Selects Per Second" keep jumping between 200 and 400.
I am mostly a developer without much skill as a DBA. Are these values normal? Even when there's no traffic, why are they constantly increasing? If they are not normal, what means can I use to investigate what is actually running there, given that ClearDB does not give me access to the logs?
'show processlist' can only raise my suspicion that something seems off, but how do I proceed from here?
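One way to dig further without server logs, assuming your ClearDB user is allowed to read performance_schema (which is not guaranteed on a shared plan), is to query the server directly:
-- what every connection is doing right now
SHOW FULL PROCESSLIST;
-- sample this counter twice a few seconds apart; the difference divided by the
-- interval is your real queries-per-second rate
SHOW GLOBAL STATUS LIKE 'Questions';
-- if performance_schema is enabled, list the statements executed most often
SELECT DIGEST_TEXT, COUNT_STAR
FROM performance_schema.events_statements_summary_by_digest
ORDER BY COUNT_STAR DESC
LIMIT 10;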

Frequent Mysql gone away (Error 2006) in production environment

I am running a website on django 1.10, python3.4, mysqlclient, mysql5.5.
I have multiple management commands running in the background for various tasks (like mail sending, updating tables) at different times. Recently, I have started seeing many "MySQL server has gone away" errors (Error 2006).
I tried changing the connection timeout to be greater than the max connection age. That solved a small share of the problem: Error 2006 stopped occurring for one running function in most cases, but the other cases remain as they were and I am still frequently seeing Error 2006.
In some cases I tried calling django.db.connection.close() before running a function, or wrapping it in a try-except block. In those places the error no longer occurs, but I cannot put this try-except block in every function.
What is the root cause of this repeated "MySQL gone away" error? What are the different solutions, and what is the best one so that I do not have to change much of my code?
Other Variables:
We upgraded from Django 1.7 to 1.10 and updated the mysqlclient package recently, and the problem probably surfaced after that.
Just around the update, the traffic on our site also increased multiple times. Can that be a trigger?
As seen elsewhere, the problem can be either that your query (or the packet it sends) is too large, or that the server closed the connection due to inactivity. It depends a lot on the circumstances, but it seems that if you're not in the middle of a transaction (which would rule out the second possibility), you can apply something similar to "Hook available for automatic retry after deadlock in django and mysql setup".
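To see which of those two causes you are hitting, it can help to check the relevant server variables; this is just a diagnostic sketch using standard MySQL variables:
-- connections idle longer than this are dropped by the server; the client then
-- reports Error 2006 on its next query
SHOW VARIABLES LIKE 'wait_timeout';
-- a query or result packet larger than this can also surface as "gone away"
SHOW VARIABLES LIKE 'max_allowed_packet';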

How to find out what is causing a slow down of the application?

This is not the typical question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.
Situation
We have this web application that uses Zend Framework, so runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching.
The application has a very unusual usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to do, and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.
Problem
In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour.
We have extensive monitoring in place (New Relic, collectd), but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in:
Can you help me figure out what's wrong and maybe how to fix it?
Additional information
The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) and 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is Version 5.3.2-1ubuntu4.11.
If any configuration information would help analyze the problem, just comment and I will add it.
Graphs
info: phpinfo(), APC status, memcache status
collectd: Processes, CPU, Apache, Load, MySQL, Vmem, Disk
New Relic: Application performance, Server overview, Processes, Network, Disks
(Sorry the graphs are GIFs and not from the same time period, but I think the most important info is in there)
The problem is almost certainly MySQL based. If you look at the final graph mysql/mysql_threads you can see the number of threads hits 200 (which I assume is your setting for max_connections) at 20:00. Once the max_connections has been hit things do tend to take a while to recover.
Using mtop to monitor MySQL just before the hour will really help you figure out what is going on, but if you cannot install it you could just use SHOW PROCESSLIST;. You will need to establish your connection to MySQL before the problem hits. You will probably see lots of processes queued up with only one process currently executing; that one will be the most likely culprit.
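A minimal version of that check from a plain mysql client, run a few minutes before the full hour, might look like this (nothing here is specific to your setup):
-- how close the server is to its connection limit
SHOW VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
-- what every connection is doing; look for many queries queued or locked
-- behind one long-running statement
SHOW FULL PROCESSLIST;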
Having identified the query causing the problems, you can attack your code. Without understanding how your application actually works, my best guess would be that using an explicit transaction around the problem query (or queries) will probably solve the problem.
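As a rough illustration of that idea (the table and column names below are invented), grouping the per-user statements into one explicit transaction makes them commit as a single unit instead of autocommitting one by one:
START TRANSACTION;
-- hypothetical example of the hourly notification work for one user
UPDATE notifications SET state = 'queued' WHERE user_id = 42 AND state = 'pending';
INSERT INTO notification_log (user_id, queued_at) VALUES (42, NOW());
COMMIT;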
Good luck!

How to benchmark and optimize a really database-intensive Rails action?

There is an action in the admin section of a client's site, say Admin::Analytics (which I did not build but have to maintain), that compiles site usage analytics by performing a couple dozen rather intensive database queries. This functionality has always been a bottleneck to application performance whenever the analytics report is being compiled. But the bottleneck has become so bad lately that, when accessed, the site comes to a screeching halt and hangs indefinitely. Until yesterday I never had a reason to run the "top" command on the server, but doing so I realized that Admin::Analytics#index causes mysqld to spin at upwards of 350% CPU on the quad-core production VPS.
I have downloaded fresh copies of the production data and the production log. However, when I access Admin::Analytics#index locally on my development box, using the production data, it loads in about 10-12 seconds (and utilizes ~150% of my dual-core CPU), which sadly is normal. I suppose there could be a discrepancy in MySQL settings that has suddenly come into play. Also, a mysqldump of the database is now 531 MB, when it was only 336 MB 28 days ago. Anyway, I do not have root access on the VPS, so tweaking mysqld performance would be cumbersome, and I would really like to get to the exact cause of this problem. However, the production logs don't contain info on the queries; they merely report how long these requests took, which averages out to a few minutes apiece (although in one instance they seemed to have caused mysqld to stall for much longer than that, prompting me to ask our host to restart mysqld just to get our site back up).
I suppose I can try upping the log level in production to solicit info. on the database queries being performed by Admin::Analytics#index, but at the same time I'm afraid to replicate this behavior in production because I don't feel like calling our host up to restart mysqld again! This action contains a single database request in its controller, and a couple dozen prepared statements embedded in its view!
How would you proceed to benchmark/diagnose and optimize/fix this action?!
(Aside: Obviously I would like to completely replace this functionality with Google Analytics or a similar solution, but I need to fix this problem before proceeding.)
I'd recommend taking a look at this article:
http://axonflux.com/building-and-scaling-a-startup
Particularly, query_reviewer and newrelic have been life-savers for me.
I appreciate all the help with this, but what turned out to be the fix was to add a couple of indexes to the Analytics table to cater to the queries in this action. One simple Rails migration to add the indexes, and the action now loads in less than a second, both on my dev box and on prod!
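For reference, the SQL equivalent of such a migration might look like this; the table and column names are hypothetical, and the columns to index are whatever the analytics queries actually filter or group on:
-- hypothetical indexes on the analytics table
ALTER TABLE analytics ADD INDEX index_analytics_on_created_at (created_at);
ALTER TABLE analytics ADD INDEX index_analytics_on_user_id (user_id);
-- confirm the planner now uses an index instead of a full table scan
EXPLAIN SELECT COUNT(*) FROM analytics WHERE created_at >= '2012-01-01';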

Preventing Mongrel/Mysql Errno::EPIPE exceptions

I have a rails app that I have serving up XML on an infrequent basis.
This is being run with mongrel and mysql.
I've found that if I don't exercise the app for longer than a few hours, it goes dead and starts throwing Errno::EPIPE errors. It seems that the MySQL connection gets timed out for inactivity or something like that.
It can be restarted with 'mongrel_rails restart -P /path/to/the/mongrel.pid' ... but that's not really a solution.
My collaborator expects the app to be there when he is working on his part (and I am most likely not around).
My question is:
What can I do to prevent this problem from occurring in the 1st place? (e.g. don't time me out!!).
Failing that, is there some code I can insert somewhere to automatically remake the Db connection?
Here's a solution:
https://boxpanel.blueboxgrp.com/public/the_vault/index.php/Mongrel_/_MySQL_Timeout
The timeouts in the above solution seem a little high to me. You don't want your DB timeouts to be too high, because of the amount of memory a connection can use. If a connection is orphaned, you want it to time out reasonably soon (not, say, in one week).
In other places, I also got the following suggestions:
Try setting config.active_record.verification_timeout to something lower than whatever your MySQL connection timeout setting is.
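To find out what that MySQL-side timeout actually is (the default wait_timeout is 28800 seconds, i.e. eight hours), you can ask the server directly:
-- how long the server keeps an idle connection open before dropping it
SHOW VARIABLES LIKE 'wait_timeout';
SHOW VARIABLES LIKE 'interactive_timeout';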
There's a gem to work around this problem: mysql_retry_lost_connection
http://rubyforge.org/projects/zventstools/
"Reconnect to the MySQL server when you hit a lost connection error".