Our relatively high-traffic website just screeched to a halt, and we're totally stumped. We run on Django and MySQL (InnoDB), and we're trying to figure out why it has suddenly become so slow.
Here's what we know so far:
On our MySQL server, a simple query (from the Django shell) runs fast.
On our app server, the same simple query (from the Django shell) runs very slow.
Without having any details on the query or on the tables involved in the query, it is quite difficult to answer this question.
Most likely it is because of a lot of data in the table and a missing index on the field you are querying.
This would explain why it is slow on the production box, but fast on the dev box (since there's less data).
To answer the question better, could you provide us with more details: table structure, the query, number of rows in the table, etc.?
More assumptions: disk I/O on the app server could be a problem, or maybe the log files in MySQL are not properly configured (especially with InnoDB this can cause trouble). Maybe there's a load-heavy query running too often? Table locks when multiple users write to/read from the same tables?
As I said, without more details it is quite difficult to guess, but I hope I could at least point you in the right direction.
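If you want to check the heavy-query and lock theories quickly, two generic statements (nothing here is specific to your schema) will show what the server is actually doing:

    -- every connection, the statement it is running, and for how long
    SHOW FULL PROCESSLIST;
    -- InnoDB internals: lock waits, pending I/O, transaction activity
    SHOW ENGINE INNODB STATUS\G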
Run EXPLAIN on the SELECT.
Study this page carefully:
http://dev.mysql.com/doc/refman/5.0/en/using-explain.html
Understanding the concepts on that page is key to indexing your tables properly.
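As a sketch (the table and column names here are invented; substitute your real query), the workflow is roughly:

    -- see how MySQL plans to execute the query
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- if "key" is NULL and "rows" is large, the query is scanning the whole table;
    -- an index on the filtered column usually fixes it:
    CREATE INDEX idx_orders_customer ON orders (customer_id);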
Thanks for the responses everyone.
Turns out it was a DNS issue (which was a regression). MySQL is really stupid in that the default is to use DNS lookups. They got really slow, which killed all the network flow between the app server and the db server. It was as simple as adding "skip-name-resolve" to our my.cnf.
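For anyone else who lands here, the change was just this in my.cnf (note that once name resolution is off, account host names in GRANTs, other than localhost, generally need to be IP addresses):

    [mysqld]
    skip-name-resolve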
Are the 'mysql server' and 'app server' on the same box and talking to the same DB instance?
Your question suggests not, so I'd look for a problem on the network - start by pinging the database server from each box and comparing the results.
Once you've done that you'll need to be a little more specific about the problem - were the ping times the same, are you running the same query, etc...
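For example (the hostname and credentials are placeholders), comparing raw network latency with the time for a trivial MySQL round trip from each box quickly shows whether the network, or something like name resolution during connection setup, is the culprit:

    # raw network latency to the DB host
    ping -c 5 db.example.com
    # time a trivial query, including connection setup
    time mysql -h db.example.com -u appuser -papppass -e "SELECT 1"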
Not sure how to state this question.
I have a very busy DB in production with close to 1 million hits daily.
Now I would like to do some research on the real-time data (edit: "real-time" can be a few minutes old).
What is the best way to do this without interrupting production?
Ideas:
In the Unix shell there is the nice command: it lets me give a low priority to a specific process so it only uses CPU when other processes are idle. I'm basically looking for the same thing in a MySQL context.
Get a DB dump and do the research offline:
Doesn't that take down my site for the several minutes it takes to get the dump?
Is there a way to configure the dump command so it does the extraction in a nice way (see above, and the sketch after this list)?
Do the SQL commands directly on the live DB:
Is there a way, again, to configure the commands so they are executed in a nice way?
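On Idea 2, at least with InnoDB tables, the dump does not have to block the site: mysqldump's --single-transaction takes a consistent snapshot without holding table locks, and --quick streams rows instead of buffering them in memory (the database name and output file below are placeholders):

    mysqldump --single-transaction --quick production_db > research_snapshot.sql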
Update: What are the arguments against Idea 2?
From the comments on StackOverflow and in-person discussions, here's an answer for whoever gets here with the same question:
In MySQL there doesn't seem to be any nice-style control over the priority of individual queries (I hear there is in Oracle, for example).
Since any "number-crunching" query is treated at most like one more visitor to my website, it won't take down the site performance-wise, so it can safely be run in production (read-only, of course...).
We're having issues with a few queries - relatively simple queries - that take too long to process: anywhere from 3,000 ms to 30,000 ms. We are using PHP 5.5 and MySQL 5.5.28-29.1.
We have a few options, but I am posting here to see if anyone has any experience on each of them:
Currently we access views to get our data; this was done to move processing load from PHP to MySQL. Would accessing the tables directly improve query speed? I'm thinking not, because it would lead to a lot more queries, since the views are just collations of data.
If we were to install a cache DB, such as SQLite3, to cache data locally and then sync it to the RDBMS, how would we do that? And would the speed improve?
We're also thinking about a NodeJS version, using Node WebKit. As far as I can understand, there are npm packages out there that can act as a cache or a DB connection, which would remove the need for PHP. But how about the speed?
Another option is to set up a dedicated server for this environment (we're on a virtual server environment at the moment). That would most likely speed some parts up, but if MySQL is still slow on that server, it's kind of wasted.
These are the alternatives I can think of at the moment. Any suggestions are appreciated.
(I can post the slow SQL queries if need be, but would like to see if anyone has anything to say about our options first)
My website has very heavy read traffic - a lot heavier than its write traffic.
To improve the performance of the site I have thought of going with a master/slave database configuration.
The Octopus gem seems to provide what I want, but since my app is huge I can't go through millions of lines of source code to change the query distribution (sending read queries to the slave server and write queries to the master server).
MySQL Proxy seems like a great way to resolve this, but since it is still in alpha I don't want to use it.
So my question is: what is the best way to split read/write queries across a master/slave setup?
Is it possible to split read/write queries in Rails without using any gems?
I spiked out two gems, Octopus and Makara, and have written a blog post comparing them: https://ypoonawala.wordpress.com/2015/11/15/octopus-vs-makara-read-write-adapters-for-activerecord-2/
In my opinion, Makara works well and makes up for the issues with Octopus.
With the Octopus gem, you don't have to change much of your code to make write queries go to the master DB server and read queries go to the slaves.
It's a simple configuration file, as stated here.
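Roughly, the config/shards.yml looks something like this - the key names are from memory, so double-check them against the gem's README; the adapter, host, and database names are placeholders:

    octopus:
      environments:
        - production
      replicated: true
      production:
        slave1:
          adapter: mysql2
          host: slave1.example.com
          database: myapp_production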
I've tried this in the past and it worked quite well. The only problem for me is that when the slave is down, it doesn't "redirect" the queries to the master DB server, as I asked here.
But, if you want to configure each individual query destination, it will take some work.
I would suggest you start by mapping your most frequent queries and those that take the longest to respond. Knowing those queries, you can optimize them individually. This may already solve part of the problem.
If you still need master-slave replication after that, use the Octopus gem to change the behaviour of only those few complicated queries.
I'm not sure if caching is the correct term for this, but my objective is to build a website that displays data from my database.
My problem: There is a high probability of a lot of traffic and all data is contained in the database.
My hypothesized solution: would it be faster to create a separate program (in Java, for example) that connects to the database every couple of seconds and updates the HTML files (where the data is displayed) with the new data? (This would also improve security, since users would never connect to the database directly.) Or should I just have each user's request connect to MySQL (using PHP) and fetch the data?
If you've had any experience with a similar situation, please share. I'm sorry if I didn't word the title correctly; this is a pretty specific question and I'm not even sure I've explained myself clearly.
Here are some thoughts for you to think about.
First, I do not recommend generating static files; trust MySQL, but work on configuring your environment to support your traffic/application.
You should understand your data a little more: how often does the data in your tables change? What kinds of queries are you running against it? Are those queries optimized?
Make sure your tables are optimized and indexed correctly, and that all your queries run fast (nothing holding long row locks).
If your tables are not updated very often, you should consider using the MySQL query cache, as this will reduce your I/O and increase query speed. (But beware! If your tables are updated all the time, this will kill your server's performance.)
If your query cache is set to "ON": in my experience this is almost always a bad idea unless the data in your tables rarely changes. When it is set to "ON", MySQL will cache every query; then, as soon as the data in a table changes, MySQL has to invalidate the cached entries, and it works harder clearing the cache, which gives you bad performance. I like to keep it set to "DEMAND",
because from there you can control which queries should be cached and which should not, using SQL_CACHE and SQL_NO_CACHE.
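Concretely (the tables and columns below are invented), the combination looks like this - query_cache_type = 2 is the DEMAND setting, so nothing is cached unless the query asks for it:

    # my.cnf
    query_cache_type = 2
    query_cache_size = 64M

    -- cache this query against a mostly static table...
    SELECT SQL_CACHE name FROM countries WHERE id = 5;
    -- ...but never cache queries against hot, frequently updated tables
    SELECT SQL_NO_CACHE COUNT(*) FROM page_views WHERE created_at > NOW() - INTERVAL 1 HOUR;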
Another thing you want to review is your server configuration and specs.
How much physical RAM does your server have?
What type of hard drives are you using? If not SSDs, at what speed do they rotate? 15k RPM, perhaps?
What OS are you running MySQL on?
How is the RAID set up on your hard drives? RAID 10 or RAID 50 will help you out a lot here.
Your processor speed will also make a big difference.
If you are not on MySQL 5.6.20+, you should consider upgrading, as newer releases include further performance improvements.
Is your innodb_buffer_pool_size set to around 75% of your total physical RAM? Are you using InnoDB tables?
You can also use MySQL replication to add read sources for the data: with multiple servers holding the same data, you can point half of your traffic to read from server A and the other half to read from server B, so the same work is handled by multiple servers.
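Setting up such a read replica is standard MySQL replication; a rough sketch (hostnames, credentials and binlog coordinates are placeholders - take the real coordinates from SHOW MASTER STATUS on server A):

    -- on server B, point it at server A and start replicating
    CHANGE MASTER TO
        MASTER_HOST = 'db-a.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'replpass',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS = 4;
    START SLAVE;
    -- confirm both replication threads are running and check the lag
    SHOW SLAVE STATUS\G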
Here is one argument for you to think about: Facebook uses MySQL and handles millions of hits per second, yet they are up essentially 100% of the time. True, they have an enormous budget and a huge network, but the idea here is to trust MySQL to get the job done.
I've had a tough time setting up my replication server. Is there any program (OS X, Windows, Linux, or PHP - no problem) that lets me monitor and resolve replication issues? (By the way, for those following along, I've been working on this issue here, here, here and here.)
My production database is several megs in size and growing. Every time replication stops and the databases inevitably begin to slide out of sync, I cringe. My last resync from a dump took almost 4 hours round trip!
As always, even after sync, I run into this kind of show-stopping error:
Error 'Duplicate entry '252440' for key 1' on query.
I would love it if there were some way to closely monitor what's going on and perhaps let software deal with it. I'm even all ears for service companies that could help me monitor my data better, or an alternate way to mirror the data altogether.
Edit: going through my previous questions I found this, which helps tremendously. I'm still all ears for a monitoring solution.
To monitor the servers we use the free tools from Maatkit ... simple, yet efficient.
Row-based replication only arrived in 5.1, so I guess you've got some guts running it. We still use 5.0 and it works OK, but of course we've had our share of issues with it.
We use Master-Master replication with MySQL Proxy as a load balancer in front, and to prevent replication errors:
we removed all unique indexes
for the few cases where we really needed unique constraints, we made sure we used REPLACE instead of INSERT (MySQL Proxy can be used to guard against improper usage ... it can even rewrite your queries; see the sketch after this list)
scheduled scripts doing intensive reports always access the same server (not the load balancer) ... so that dangerous operations are replicated safely
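As an illustration of the REPLACE trick (the table and columns are invented; the key value is the one from the error above), the second statement overwrites an existing row instead of failing with a duplicate-key error:

    -- fails on whichever master already holds id 252440:
    INSERT INTO accounts (id, email) VALUES (252440, 'user@example.com');
    -- deletes any row with the same unique/primary key value, then inserts:
    REPLACE INTO accounts (id, email) VALUES (252440, 'user@example.com');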
Yeah, I know it sounds simple and stupid, but it solved 95% of all the problems we had.
We use MySQL replication to replicate data to close to 30 servers, and we monitor them with Nagios. You can probably check the replication status and use an event handler to restart it with 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;'. That will clear the error, but you'll lose the insert that caused it.
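A minimal version of that check (thresholds and automation are up to you) just reads a few fields from SHOW SLAVE STATUS and, if the SQL thread has stopped on an error you are willing to drop, skips one statement:

    -- run on each slave; Nagios (or a cron job) can parse these fields:
    --   Slave_IO_Running / Slave_SQL_Running should both be "Yes"
    --   Seconds_Behind_Master shows the replication lag
    --   Last_Error holds the statement that stopped the SQL thread
    SHOW SLAVE STATUS\G

    -- skip the offending statement and resume
    SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
    START SLAVE;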
About the error: do you use MEMORY tables on your slaves? I ask because the only time we ever got a lot of these errors, they were caused by a bug in the latest releases of MySQL: 'DELETE FROM table WHERE field = value' would delete only one row in MEMORY tables even when multiple rows matched.
MySQL bug description