MySQL writes are extremely slow after separating the database from the web server - mysql

I have both Apache/PHP and MySQL running on the same server, but during peak time (about 2-3 hours a day) the server is very slow. So, following the most common recommendation, I set up MySQL on a separate machine (on the same local network, with a 1 Gbit interconnect). Now, during off-peak hours, both reads and writes are slightly slower (which seems logical since I am using TCP/IP instead of Unix sockets), but during peak time the reads are much faster while the writes are extremely slow. Any recommendations on what might be the cause? Any suggestions for optimization? Please let me know if you need any tests/logs. Thank you.

I would suggest using a REST API to serve your application with resources. I have tried the MySQL/Apache combo but faced a lot of difficulties handling many concurrent requests.

Related

How many writes/second can a single SQL master-slave handle?

I came across this article https://github.com/donnemartin/system-design-primer/blob/master/solutions/system_design/pastebin/README.md which says 4 writes per second should be doable for a single SQL write master-slave. In another article, it is mentioned that 2,000 writes per second is too much for a single SQL write master-slave. Not having worked on setting up SQL databases directly, my question is: how can I tell how much a single write master-slave can handle? I would like to understand:
(1) What is the typical write QPS that this setup can handle on modern machines? This is for general intuition.
(2) Suppose my application is using this setup for its database. How should I load test the database first to identify its write QPS capacity, and then how should I monitor it as usage grows?
There is no way to determine the exact number of queries you can run on a master/slave system, as it depends on a lot of variables:
how powerful the CPU is, whether an SSD or HDD is used, what exactly the writes/reads are, the database version, network connectivity, etc.
4 writes/second is laughably low; depending on your setup you should be able to consistently do thousands of writes per second.
I would recommend first testing a master/slave system with a test load and determining from there whether it's feasible for your case. If you don't actually have a working system in place and are just wondering whether you should start with a master/slave, you can safely start with one; you will most likely not hit bottlenecks related to it anytime soon.
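As a rough way to get a baseline, drive a known number of inserts and divide by the wall-clock time. Below is a minimal single-connection sketch; the table and procedure names are hypothetical and the row count is illustrative:

-- throwaway table and procedure for a write-throughput probe (names are made up)
CREATE TABLE bench_t (id INT PRIMARY KEY AUTO_INCREMENT, payload CHAR(36));
DELIMITER //
CREATE PROCEDURE bench_writes(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    INSERT INTO bench_t (payload) VALUES (UUID());
    SET i = i + 1;
  END WHILE;
END //
DELIMITER ;
-- time this call with a stopwatch or the client's timing output:
-- 10000 rows / elapsed seconds = writes per second on this connection
CALL bench_writes(10000);

A single-connection loop like this understates real capacity; tools such as sysbench or mysqlslap drive many concurrent clients and give more realistic numbers.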

Options for speeding up slow SQL queries

We're having issues with a few queries - relatively simple queries - that take too long to process: everything from 3,000 ms to 30,000 ms. We are using PHP 5.5 and MySQL 5.5.28-29.1.
We have a few options, but I am posting here to see if anyone has any experience on each of them:
Currently we are accessing views to get our data; this was done to move the processing load from PHP to MySQL. Would accessing the tables directly improve query processing speed? I'm thinking not, because it would lead to a lot more queries, due to the fact that the views are just aggregations of data.
If we were to install a cache DB, such as SQLite3, to cache data locally and then sync it to the RDBMS, how would we do that? And would the speed improve?
We're also thinking about a Node.js version, using Node WebKit. As far as I can understand, there are npm packages out there that can act as a cache or a DB connection, which would rule out the need for PHP. But how about the speed?
Another option is to set up a dedicated server for this environment (we're using a virtual server environment at the moment), which would most likely speed some parts of it up. But if MySQL is still slow on that server, it's kind of wasted.
These are the alternatives I can think of at the moment. Any suggestions are appreciated.
(I can post the slow SQL queries if need be, but would like to see if anyone has anything to say about our options first)

Which would be more efficient, having each user create a database connection, or caching?

I'm not sure if caching would be the correct term for this, but my objective is to build a website that will display data from my database.
My problem: There is a high probability of a lot of traffic and all data is contained in the database.
My hypothesized solution: would it be faster if I created a separate program (in Java, for example) to connect to the database every couple of seconds and update the HTML files (where the data is displayed) with the new data? (This would also increase security, as users would never connect to the database directly.) Or should I just have each user create a connection to MySQL (using PHP) and fetch the data?
If you've had any experience in a similar situation, please share. I'm sorry if I didn't word the title correctly; this is a pretty specific question and I'm not even sure I explained myself clearly.
Here are some thoughts for you to consider.
First, I do not recommend generating static files; trust MySQL instead. However, work on configuring your environment to support your traffic/application.
You should understand your data a little more. How often does the data in your tables change? What kinds of queries are you running against the data? Are your queries optimized?
Make sure your tables are optimized and indexed correctly, and make sure all your queries run fast (nothing causing long row locks). A quick way to check is shown below.
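As a quick check (the table and column names here are hypothetical), EXPLAIN tells you whether a query can use an index, and adding one is a one-line change:

-- type=ALL in the EXPLAIN output means a full table scan (no usable index)
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- add a secondary index on the filtered column
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);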
If your tables are not updated very often, you should consider using the MySQL query cache, as this will reduce your I/O and increase query speed. (But wait! If your tables are updated all the time, this will kill your server performance big time.)
Is your query cache set to "ON"? Based on my experience this is almost always a bad idea unless the data in your tables never changes. When it is set to "ON", MySQL will cache every query; then, as soon as the data in a table changes, MySQL has to invalidate every cached query that touches that table, and it works harder clearing up the cache, which gives you bad performance. I like to keep it set to "DEMAND",
since from there you can control which queries should be cached and which should not, using SQL_CACHE and SQL_NO_CACHE.
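A minimal sketch of that setup (the table names are made up; note also that the query cache exists only in MySQL 5.7 and earlier, having been removed in 8.0):

-- with query_cache_type = DEMAND, only queries marked SQL_CACHE are cached
-- (assumes the server was not started with the cache disabled entirely)
SET GLOBAL query_cache_type = DEMAND;
-- a mostly-static lookup table: worth caching
SELECT SQL_CACHE id, name FROM countries;
-- a hot, frequently updated table: skip the cache
SELECT SQL_NO_CACHE COUNT(*) FROM orders WHERE status = 'pending';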
Another thing you want to review is your server configuration and specs.
How much physical RAM does your server have?
What type of hard drives are you using? If not SSDs, at what speed do they rotate? Perhaps 15k RPM?
What OS are you running MySQL on?
How is the RAID set up on your hard drives? RAID 10 or RAID 50 will help you out a lot here.
Your processor speed will make a big difference.
If you are not using MySQL 5.6.20+, you should consider upgrading, as MySQL has been improved to help you even more.
Is your innodb_buffer_pool_size set to about 75% of your total physical RAM? Are you using InnoDB tables? (A sketch of these settings in my.cnf follows below.)
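Pulling those knobs together, here is what the relevant section of my.cnf might look like on a dedicated database server. All values are illustrative assumptions for a hypothetical 16 GB machine, not drop-in settings:

# my.cnf (illustrative values for a hypothetical dedicated 16 GB server)
[mysqld]
innodb_buffer_pool_size        = 12G    # ~75% of physical RAM
innodb_log_file_size           = 512M   # larger redo logs smooth out write bursts
innodb_flush_log_at_trx_commit = 1      # full durability; 2 trades safety for speed
query_cache_type               = DEMAND # cache only queries marked SQL_CACHE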
You can also use MySQL replication to increase the read capacity for your data. With multiple servers holding the same data, you can point half of your traffic to read from server A and the other half to read from server B, so the same workload is handled by multiple servers, as sketched below.
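For reference, attaching a replica looks roughly like this in MySQL 5.x; the host, credentials, and binlog coordinates below are placeholders, not real values:

-- run on the replica (MySQL 5.x syntax; all values are placeholders)
CHANGE MASTER TO
  MASTER_HOST     = 'server-a.example.internal',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;
-- confirm both replication threads are running
SHOW SLAVE STATUS\G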
Here is one argument for you to think about: Facebook uses MySQL, handles millions of hits per second, and stays up essentially all of the time. True, they have an enormous budget and their network is huge, but the idea here is to trust MySQL to get the job done.

MySQL localhost vs Amazon RDS instance

I am surprised by some MySQL performance behavior.
When I run the simple query 'SELECT 1;' on my localhost (MySQL 5.6.x) using Workbench, it executes in 0.000 s, but the same query run on Amazon RDS (medium instance, MySQL 5.5.x) took almost 0.094 s.
I cannot understand this behavior of MySQL.
I would propose that you go for simplicity of maintenance and scalability (which RDS apparently provides much better than local MySQL) over performance for now.
Later on, when you are getting insufficient output for the dollars paid to Amazon, you can start measuring carefully to find bottlenecks.
Nonetheless, if you are used to maintaining private, tightly packed VPS servers, local MySQL could be simpler to maintain, and you could move to external services much later :)
The query SELECT 1 requires almost no parsing and no table access, so its execution is quick. For remote servers, however, there is also the time to transmit the request, and shared resources like RDS are not real-time resources, so it might take a millisecond or two longer to get the task executed. If there is no bigger difference, just ignore this little extra time.
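If you want to separate server-side execution time from the network round trip, the session profiler available in MySQL 5.x gives a quick approximation:

-- server-side timing only; the gap between this and the time the client
-- observes is mostly network round trip
SET profiling = 1;
SELECT 1;
SHOW PROFILES;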

MySQL performance - 100Mb ethernet vs 1Gb ethernet

I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The ODBC queries we run against the MySQL server can easily return 500 MB+, and it seems that at times when the servers are under high load, the DBAs kill low-priority jobs because they take too long to run.
My question is this: how much of this server time is spent executing the request, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1 Gbps?
(Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured; they are very simple, and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance.

I was kind of hoping that someone with a gigabit connection might be able to quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100 Mbps and rerunning the same query. Hopefully this could be done in an environment where loads are reasonably stable so as not to skew the results.
If Ethernet speed can improve the situation, I wanted some quantifiable results to help argue my case for upgrading the network connections.
Thanks
Rob
Benchmark. MySQL has many tools for determining how long queries take. Odds are you have really bad queries. Use the slow query log.
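For reference, the slow query log can be switched on at runtime without a restart; the threshold and file path below are just examples:

-- log any statement slower than one second (values are examples)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';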
Why are you transmitting/storing 500MB of data from/in MySQL?
Divide the amount of data by the time your query takes and you'll get your effective throughput. If you're nearing the capacity of 100 Mbps, you'll have I/O problems.
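For rough intuition, assuming the 500 MB result sets mentioned above and ideal line rates: 100 Mbps is about 12.5 MB/s, so transmitting 500 MB takes on the order of 40 seconds no matter how quickly the query itself executes; at 1 Gbps (about 125 MB/s) the same transfer drops to roughly 4 seconds.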
My suspicion is yes, it should.
In the MySQL shell, I would run:
show full processlist
on the machine and check the state of the queries. If you see any states like "Reading from net" or "Writing to net", that implies network transmission is directly impacting MySQL. You can also look at iostat results to see how much I/O the system is doing. If the system is on a managed switch, you might also want to check the load there.
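If the process list is long, you can filter for the network-bound states directly via the information_schema view (available since MySQL 5.1):

-- connections currently blocked on network I/O
SELECT id, user, state, time, info
FROM information_schema.processlist
WHERE state IN ('Reading from net', 'Writing to net');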
Ref: show processlist
Ref: Status definitions