I'll give you some technical details before getting to the issue:
MySQL Version: 5.3.1
Engine: InnoDB
Server running under CentOS
I'm experiencing some trouble with user connections when running certain queries. To show the problem, I'll deliberately use a query slow enough to reproduce the scenario.
Let's say I have a user "testuser1" and I use it to run a two-minute-long query (deliberately long). While this query is running I try to run another query with the same user (and different tables), but it doesn't actually run until the first query finishes (kind of sequential).
First thing I thought was... well, maybe those tables are locked? (Even though I cannot even connect to the database at all.)
So I tried a new user, "testuser2", while the query was running. In this case I had no problem at all connecting to the database and running queries.
After this I thought the user might be blocked, so I tried connecting "testuser1" to a different database, with no issue.
The next thing I tried was using "testuser1" from a different IP while the first query was running. Again, I had no issue running my queries.
Does anyone know why I cannot run two queries at a time with the same user from the same IP, and how to fix this?
Thanks a lot!
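For anyone hitting the same thing, a quick way to tell whether the second query is actually blocked on the server, or simply never gets sent, is to watch the server from a separate session while the long query runs (a diagnostic sketch using standard MySQL commands; nothing here is specific to this setup):

    -- List every connection and what it is doing right now. If the second query
    -- shows up with a lock-related State (e.g. "Locked" or "Waiting for table
    -- metadata lock"), it is a locking problem; if it never shows up at all,
    -- the client is probably serializing both queries over one connection.
    SHOW FULL PROCESSLIST;

    -- For InnoDB row locks, the TRANSACTIONS section of the monitor output
    -- shows which transaction is waiting on which lock.
    SHOW ENGINE INNODB STATUS;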
I have a MySQL DB (ClearDB) serving my backend application hosted on Heroku. That said, I have very limited ways to actually see the logs (apparently no access to them at all), so I can't even tell what these queries are. I know for sure nobody is using the system right now, and my Heroku logs show nothing from the backend that would trigger any query.
What I can see from MySQL Workbench, looking at Status and Variables, is that the values of Queries and Questions increase by the hundreds every second when I refresh, which seems really odd to me. Threads_connected is always between 120 and 140, although Threads_running is usually lower than 5.
The "Selects Per Second" keep jumping between 200 and 400.
I am mostly a developer without much DBA experience. Are these values normal? Why are they constantly increasing even when there's no traffic? If they're not normal, what means can I use to investigate what is actually running there when ClearDB doesn't give me access to the logs?
'show processlist' can only raise my suspicion that something seems off, but how do I proceed from there?
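In case it helps others with the same constraint, here is roughly what can be checked from any SQL client when there is no log access (a sketch; ClearDB may restrict some of this, and the performance_schema query needs MySQL 5.6+ with performance_schema enabled):

    -- Who is connected and what they are running right now.
    SHOW FULL PROCESSLIST;

    -- Raw counters; sample them twice a few seconds apart and diff the values
    -- to get queries per second.
    SHOW GLOBAL STATUS LIKE 'Questions';
    SHOW GLOBAL STATUS LIKE 'Threads_connected';

    -- If performance_schema is available, the statement digests show which
    -- query shapes run most often, even without access to any log file.
    SELECT digest_text, count_star
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY count_star DESC
    LIMIT 10;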
So while playing around on my localhost in phpMyAdmin and doing some work with SQL, I realized that I would randomly get huge spikes in the time it took to perform a database query. I have a database table with about 3000 entries, and I was running a very simple query to display the first 2500 of them.
On average, running this query was taking around 0.003 to 0.004 seconds. (Of course, loading the phpMyAdmin page took much longer, but we're just looking at the query times.) However, I noticed that occasionally the query times would go up past 0.01. Once it even shot up to 0.04. So, my curiosity getting the better of me, I decided to repeatedly run the same query, and produced a graph of my results:
I'm not running anything else on my computer that may be interacting with MySQL, and because it's my localhost I'm the only one doing anything with my database (right?). Slight outliers are understandable, but what's causing the query times to go up anywhere from 3 to 30 times, seemingly at random?
Can anyone help me satiate my curiosity?
I'm not running anything else on my computer that may be interacting with MySQL
But is there anything else running on your computer that might be interacting with your hard drive/CPU on a regular basis? Because that would explain the spikes. Try scanning the running processes and comparing their CPU/disk activity against the spikes.
Even though your database is running on your local host, it's not running in complete isolation. It is competing for your system's resources with every other process you have running.
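If you want to see where the extra time goes on a slow run, one low-effort check is the session profiler (a sketch; SHOW PROFILE is deprecated in newer MySQL versions but still works on a typical 5.x localhost, and my_table stands in for the real table name):

    SET profiling = 1;                    -- enable profiling for this session
    SELECT * FROM my_table LIMIT 2500;    -- the query whose spikes you are chasing
    SHOW PROFILES;                        -- recent queries with their total duration
    SHOW PROFILE FOR QUERY 1;             -- per-stage breakdown; use the Query_ID from SHOW PROFILES

Comparing the stage breakdown of a fast run against a slow one usually makes it clear whether the extra time is spent waiting on disk, on locks, or somewhere else.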
Hey guys, I had posted this in another question but I didn't get a helpful response, partly because I don't think I explained myself properly. So I'm going to try again.
I've been writing a back-end server in VB.NET, and it uses a MySQL database to store information. Up until a couple of days ago I was using a web host's MySQL server for this.
I didn't care to renew my web host, so I've moved everything to a home server to continue work on my program. I've got MySQL 5.5 installed (which is a newer version than the one on my previous web host) and everything is working perfectly except for one thing so far.
When starting up for the first time, the program sends a query containing about a million table inserts. The query looks something like "INSERT into blah VALUES(1,1,1,1,1);INSERT into...." and so on. This used to take about 5-10 minutes on my web host (my server program ran on the home server machine and sent the data over the internet to the web host's MySQL).
Now that everything runs locally I was hoping this would be faster, but that doesn't really matter, I just need it to work. When I send this query it just locks up for a minute or so and then returns a timeout. When I check the table in the database, it has loaded exactly 1000 rows every time I try this.
Now I'm assuming this is some sort of settings issue, but playing with my "my.ini" hasn't helped. I also tried switching to some of the pre-packaged configs (my-huge, my-innodb, etc.) and those did not help either. I would have assumed that, if anything, it would just take longer, not time out immediately after 1000 inserts.
And just for some background, the server machine I'm using has a quad-core Core i7, 8 GB of RAM, and a 1 TB hard drive, running Windows 7.
Any help would be great thank you.
Your query is probably too large; you can tune this limit via MySQL's max_allowed_packet option.
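Checking and raising it looks roughly like this (a sketch; the 64 MB value is just an example, SET GLOBAL needs admin privileges and only affects new connections, and to make it permanent you still need to set it in my.ini):

    SHOW VARIABLES LIKE 'max_allowed_packet';    -- current limit in bytes
    SET GLOBAL max_allowed_packet = 67108864;    -- 64 MB; reconnect so the session picks it up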
You can also save some bytes by combining several inserts into one statement, like this: INSERT INTO blah VALUES (1,1,1,1,1),(2,2,2,2,2),(3,3,3,3,3). But if this large combined statement fails, every row in it fails with it.
But in my opinion that's not the smartest way to do it. If your application has to import a huge SQL dump on startup, it can simply use the mysql executable, like this: mysql -uroot -ppassword db_name < dump.sql, and you're done. That is probably the most effective way to accomplish this task.
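If the data has to come from the application itself rather than a dump file, a middle ground is to batch the rows into multi-row INSERTs and commit once per batch instead of sending a million single-row statements (a sketch; the table and values are placeholders):

    START TRANSACTION;
    INSERT INTO blah VALUES
      (1,1,1,1,1),
      (2,2,2,2,2),
      (3,3,3,3,3);      -- a few hundred to a few thousand rows per statement
    -- ...more INSERT statements for the remaining batches...
    COMMIT;             -- one commit per batch avoids a disk flush per statement under autocommit on InnoDB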
I have developed a Windows service using Delphi 2007. It connects to a remote MySQL database over the internet using TAdoConnection and TAdoQuery. I have kept the default value of 30 seconds for the CommandTimeout property. I also create the connection/query objects for each new query and free them when done (i.e. I don't open the database connection at startup and keep it open).
Every once in a while the service stops and the event viewer shows "Lost connection to MySQL server during query". Everything is wrapped in exception handlers. My suspicion is that there is a drop in the network while the query is executing.
Anyone have any resolution/ideas?
What triggers Windows to shut down the service?
Also, I have the service's Recovery options set to restart it, but this never happens.
My next step will be to start logging when each query starts and compare this to the date/time of the shutdown, because as of now I don't know how long this is.
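For reference, when that exact error appears, a few server-side settings are worth ruling out alongside a genuine network drop; they can be checked in seconds (a diagnostic sketch, not a diagnosis of this particular case):

    SHOW VARIABLES LIKE 'wait_timeout';         -- idle connections are closed after this many seconds
    SHOW VARIABLES LIKE 'net_read_timeout';
    SHOW VARIABLES LIKE 'net_write_timeout';    -- slow transfers over a flaky link can hit these
    SHOW VARIABLES LIKE 'max_allowed_packet';   -- oversized statements/rows also surface as "Lost connection"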
This may not be a direct answer, but I had the same problem a few days ago. My MySQL is on a local server and I connect using MyDAC components.
After many tries, I found the problem came from one table that has BLOB fields. I tried to query the table like
select * from table
And I got this problem after the query had fetched around 1600 rows. After some inspection I found the problem came from a few records that seem to be corrupted. So when I ran a query like
select * from my_table where id not between 1599 and 1650
that query worked; but if I removed the NOT, so the query fetched only those 51 records, I got the error again, which means some of those records are corrupted. I also ran mysqlcheck but it didn't fix the problem, and I tried some other check tools with the same result. I didn't try deleting those records, because I want to know why this happens, but I got busy with other things so I left the server alone for a while.
BTW, I used MySQL Query Browser to run these queries, because other tools show the error without telling me how many records were fetched before the MySQL instance terminated unexpectedly.
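For completeness, corruption like this can also be checked for from SQL (a sketch; my_table is a placeholder, and REPAIR TABLE only applies to engines such as MyISAM, not InnoDB):

    CHECK TABLE my_table EXTENDED;    -- scans the rows and reports corruption
    -- REPAIR TABLE my_table;         -- MyISAM only; for InnoDB the usual route is dump and reload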
Our relatively high-traffic website just screeched to a halt, and we're totally stumped. We run on Django and MySQL (InnoDB), and we're trying to figure out why it's suddenly so slow.
Here's what we know so far:
On our mysql server, a simple query (from django shell) runs fast.
On our app server, a simple query (from django shell) runs very slow.
Without having any details on the query or on the tables involved in the query, it is quite difficult to answer this question.
Most likely it is because of a lot of data in the table and a missing index on the field you are querying.
This would explain why it is slow on the production box, but fast on the dev box (since there's less data).
To answer the question better, could you provide us with more details? Table structure, query, number of rows in the table, etc. ?
More assumptions: Disk I/O on the app server could be a problem; maybe the log files in MySQL are not properly configured (especially with InnoDB this can cause problems). Maybe there's a load-heavy query running too often? Table locks when multiple users write to/read from the same tables?
As I said, without having more details, it is quite difficult to guess. But I hope, at least I could point you in the right direction.
Run EXPLAIN on the SELECT.
Study this page carefully:
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
Understanding the concepts on that page is key to properly indexing your tables.
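As a concrete illustration (the table and column names here are made up, not taken from the question):

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- key = NULL together with a large "rows" estimate means a full table scan;
    -- the usual fix is an index on the filtered column, e.g.:
    -- ALTER TABLE orders ADD INDEX idx_customer (customer_id);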
Thanks for the responses everyone.
Turns out it was a DNS issue (which was a regression). MySQL is really stupid in that the default is to do reverse DNS lookups on connecting hosts. Those lookups got really slow, which killed all the network flow between the app server and the DB server. It was as simple as adding "skip-name-resolve" to our my.cnf.
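For anyone checking the same thing on their own server: the option goes in the [mysqld] section of my.cnf, and on reasonably recent servers you can confirm it took effect from SQL (a small sketch):

    SHOW VARIABLES LIKE 'skip_name_resolve';   -- ON means reverse DNS lookups are disabled
    -- While DNS is still in play, connections stuck in reverse lookups often show up
    -- as "unauthenticated user" entries here:
    SHOW PROCESSLIST;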
Are the 'mysql server' and 'app server' on the same box and talking to the same DB instance?
Your question suggests not, so I'd look for a problem on the network - start by pinging the database server from each box and compare the results.
Once you've done that you'll need to be a little more specific about the problem - were the ping times the same, are you running the same query, etc...