Options for speeding up slow SQL queries - MySQL

We're having issues with a few queries - relatively simple queries - that take too long to process: anywhere from 3,000 ms to 30,000 ms. We are using PHP 5.5 and MySQL 5.5.28-29.1.
We have a few options, but I am posting here to see if anyone has any experience on each of them:
Currently we access views to get our data; this was done to move the processing load from PHP to MySQL. Would querying the tables directly improve processing speed? I'm thinking not, because it would lead to a lot more queries, since the views are just collations of data (see the EXPLAIN sketch at the end of this question).
If we were to install a cache DB, such as SQLite3, to cache data locally and then sync it to an RDBMS, how would we do that? And would the speed improve?
We're also thinking about a Node.js version, using Node WebKit. As far as I understand, there are npm packages out there that can act as a cache or a DB connection layer, which would rule out the need for PHP. But how about the speed?
Another option is to set up a dedicated server for this environment (we're on a virtual server environment at the moment), which would most likely speed some parts of it up. But if MySQL is still slow on that server, it's kind of wasted.
These are the alternatives I can think of at the moment. Any suggestions are appreciated.
(I can post the slow SQL queries if need be, but I would like to see if anyone has anything to say about our options first.)
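For reference, one way to test the view theory before committing to any of these options is to EXPLAIN one of the slow queries against the view; a minimal sketch, using a stand-in view name (orders_summary is not our real schema):

    -- EXPLAIN shows whether MySQL merges the view into the outer query or
    -- materializes it as a temporary table (a DERIVED row in the output).
    EXPLAIN
    SELECT customer_id, order_total
    FROM   orders_summary
    WHERE  order_date >= '2014-01-01';

    -- SHOW CREATE VIEW reveals the view's ALGORITHM (MERGE vs TEMPTABLE);
    -- on MySQL 5.5, TEMPTABLE views cannot use the base tables' indexes
    -- for the outer WHERE clause.
    SHOW CREATE VIEW orders_summary;

If the plan shows DERIVED/TEMPTABLE steps, rewriting those views (or querying the tables directly) is more likely to help than any caching layer.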

Related

How do you debug slowness in MySQL?

I recently upgraded a system from MySQL 5.6 to MySQL 5.7 and I am experiencing a significant loss of speed in the app.
Besides going over the available config values one by one to learn and tweak them, is there a way to identify/debug the problem?
Google will give me info on how to use the slow query log, but that won't help, as I am not debugging a specific slow query, but rather a slowness that seems to stem from a bad installation/configuration of the entire DB.
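One way to get a server-wide picture without chasing a single query is the statement digest table in performance_schema, which is enabled by default on 5.7; a rough sketch:

    -- Top statements by total time since server start (timers are in
    -- picoseconds, hence the division by 1e12).
    SELECT digest_text,
           count_star            AS executions,
           sum_timer_wait / 1e12 AS total_secs,
           avg_timer_wait / 1e12 AS avg_secs
    FROM   performance_schema.events_statements_summary_by_digest
    ORDER  BY sum_timer_wait DESC
    LIMIT  10;

    -- Defaults that changed around 5.7 and commonly cost speed after an
    -- upgrade; sync_binlog in particular went from 0 to 1:
    SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW GLOBAL VARIABLES LIKE 'sync_binlog';
    SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

The digest table aggregates every execution of each normalized statement, so it surfaces "everything got a bit slower" patterns that the slow query log misses.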

Rails: How to split read/write queries across a master/slave database

My website has very heavy read traffic, a lot heavier than its write traffic.
To improve the performance of my website I have thought of going with a master/slave database configuration.
The Octopus gem seems to provide what I want, but since my app is huge I can't go through millions of lines of source code to change the query distribution (sending read queries to the slave server and write queries to the master server).
MySQL Proxy seems to be a great way to resolve this issue, but since it is in alpha I don't want to use it.
So my question is: what is the best way to split read/write queries across master/slave servers?
Is it possible to split read/write queries in Rails without using any gems?
I did a spike on two gems, Octopus and Makara, and wrote a blog post comparing them: https://ypoonawala.wordpress.com/2015/11/15/octopus-vs-makara-read-write-adapters-for-activerecord-2/
In my opinion, Makara works well and makes up for the issues with Octopus.
With the Octopus gem, you don't have to change much of your code to make write queries go to the master DB server and read queries go to the slaves.
It's a simple configuration file, as stated here.
I've tried this in the past and it worked quite well. The only problem for me is that when the slave is down, it doesn't "redirect" the queries to the master DB server, as I asked here.
But if you want to configure each individual query's destination, it will take some work.
I would suggest you start by mapping your most frequent queries and those that take longest to respond. Knowing those queries, you can optimize them individually; this may already solve part of the problem.
If you still need master-slave replication after that, use the Octopus gem to change the behaviour of only those few complicated queries.
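Whichever gem you pick, it is worth confirming replication health first, since stale reads are the main trade-off of routing reads to a slave; a quick sketch:

    -- Run on the slave. Slave_IO_Running and Slave_SQL_Running should both
    -- be 'Yes', and Seconds_Behind_Master shows how stale reads can be.
    -- (\G is mysql client syntax for vertical output.)
    SHOW SLAVE STATUS\G

    -- Run on the master; lists connected slaves (they must set report_host
    -- for details to appear).
    SHOW SLAVE HOSTS;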

MySQL writes are extremely slow after separating the database from the web server

I have both Apache/PHP and MySQL running on the same server, but during peak time (about 2-3 hours a day) the server is very slow. So, following the most common recommendation, I set up MySQL on a separate machine (on the same local network, with a 1 Gbit interconnect). Now, during off-peak hours, both reads and writes are slightly slower (which seems logical, since I use TCP/IP instead of sockets), but during peak time the reads are much faster while the writes are extremely slow. Any recommendations on what might be the cause? Any suggestions for optimization? Please let me know if you need any tests/logs. Thank you.
I would suggest using a REST API to serve your application with resources. I have tried the MySQL/Apache combo and faced a lot of difficulties with the volume of requests.
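Another angle, if the per-statement network round trip is what hurts the writes: batching several rows into one INSERT lets one round trip and one commit cover many rows. A minimal sketch, using a made-up events table:

    -- Hypothetical table, for illustration only.
    CREATE TABLE IF NOT EXISTS events (
      id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      user_id    INT UNSIGNED NOT NULL,
      action     VARCHAR(64)  NOT NULL,
      created_at DATETIME     NOT NULL
    ) ENGINE=InnoDB;

    -- One round trip and one commit instead of three:
    INSERT INTO events (user_id, action, created_at) VALUES
      (101, 'login',  NOW()),
      (102, 'logout', NOW()),
      (103, 'login',  NOW());

Wrapping bursts of single-row writes in START TRANSACTION ... COMMIT has a similar effect, since each implicit commit otherwise pays for a log flush plus a network round trip.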

Slow remote queries

I am working on a Rails application. I was using SQLite during development and the speed had been very fast.
I have started to use a remote MySQL database hosted by Amazon and am getting very slow query times. Besides trying to optimize the remote database, is there anything on the Rails side of things I can do?
Local database access vs. remote will show a significant difference in speed. Since you've not provided any specifics I can't zero in on the issue, but I can make a suggestion:
Try caching your queries and views as much as possible. This will reduce the number of queries you need to make. It works especially well for static data like menus.
Optimization is the key. Make sure you eliminate as many unnecessary queries as you can, and that the queries you do make request only the fields you need, using the select method.
Profile the various components involved. The database server itself is one of them; network latency is another. While there is probably little you can do about the latency, you can probably tweak the server side a lot, starting with profiling the queries and moving on to tuning the server itself.
Knowing where to look will help you start with the best approach. As for caching, always keep it in mind, but be aware that it can prove quite problematic depending on the nature of your application.
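To separate server time from network time, one rough approach is to profile the statement on the MySQL server itself and compare that against what the Rails app observes; the difference is mostly latency. A sketch (SHOW PROFILE exists from MySQL 5.0.37 on, deprecated but still functional in later 5.x releases):

    SET profiling = 1;
    SELECT id, name FROM users WHERE id = 42;  -- stand-in query and table
    SHOW PROFILES;                 -- server-side duration per statement
    SHOW PROFILE FOR QUERY 1;      -- stage-by-stage breakdown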

Using a MySQL database is slow

We have a dedicated MySQL server, with about 2000 small databases on it. (It's a Drupal multi-site install - each database is one site).
When you load each site for the first time in a while, it can take up to 30s to return the first page. After that, the pages return at an acceptable speed. I've traced this through the stack to MySQL. Also, when you connect with the command line mysql client, connection is fast, then "use dbname" is slow, and then queries are fast.
My hunch is that this is due to the server not being configured correctly, and the unused dbs falling out of a cache, or something like that, but I'm not sure which cache or setting applies in this case.
One thing I have tried is innodb_buffer_pool_size. This was set to the default 8MB. I tried raising it to 512MB (the machine has ~2GB of RAM, and the additional RAM was available), as the reading I did indicated that more should give better performance, but this made the system run slower, so it's back at 8MB now.
Thanks for reading.
With 2000 databases you should adjust the table cache setting. You are certainly getting a lot of misses in this cache.
Try using mysqltuner and/or tuning-primer.sh to get more information on potential issues with your settings.
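To confirm the table cache theory before changing anything, compare how often tables must be re-opened against the cache size; a sketch:

    -- Opened_tables climbing steadily while the server runs means the
    -- cache is churning. The variable is table_open_cache on MySQL 5.1+
    -- (table_cache on older servers).
    SHOW GLOBAL STATUS    LIKE 'Opened_tables';
    SHOW GLOBAL STATUS    LIKE 'Open_tables';
    SHOW GLOBAL VARIABLES LIKE '%table%cache%';

    -- Example adjustment; size it against RAM and the OS file-handle limit:
    SET GLOBAL table_open_cache = 4096;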
Drupal also does database-intensive work, so check your Drupal installations; you may be generating a lot (too many) of requests.
About innodb_buffer_pool_size: with a small buffer (8MB) you are certainly getting a lot of buffer pool misses. The ideal size is when all your data and indexes fit in the buffer, and with 2000 databases... it is almost certainly far too small, but it will be hard for you to grow it. Tuning a MySQL server is hard: if MySQL takes too much RAM, your Apache won't get enough.
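Two quick numbers make the buffer pool discussion concrete: the miss rate, and the total size of InnoDB data plus indexes (a sketch; information_schema sizes are approximate):

    -- Innodb_buffer_pool_reads counts requests that had to go to disk;
    -- compare it with the total read requests to estimate the miss rate.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

    -- Ideal-world target for innodb_buffer_pool_size, in MB:
    SELECT SUM(data_length + index_length) / 1024 / 1024 AS innodb_mb
    FROM   information_schema.TABLES
    WHERE  engine = 'InnoDB';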
Solutions are:
check that you connect with IP addresses, not DNS names
(just in case)
buy more RAM
set MySQL on a separate server
adjust your settings
For Drupal, try storing sessions not in the database but in memcache (you'll need RAM for that, but it will be better for MySQL); modules for that are available. If you have Drupal 7 you can even try putting some of the cache tables in memcache instead of MySQL (do not do that with big cache tables).
Edit: one last thing. I hope you have not modified Drupal to use persistent database connections; some modules allow that (and old Drupal 5 tried to do it automatically). With 2000 databases you would kill your server. Check the MySQL error log for "too many connections" errors.
Hello Rupertj, as I read it, you are using InnoDB tables, right?
InnoDB tables are a bit slower than MyISAM tables, but I don't think that is your major problem. As you said, you are using Drupal; is that a kind of multi-site setup, like a WordPress network?
If so, I'm sorry to say that with this kind of system, each time you install a plugin it grows your database in tables and of course in data, and it can turn into something very, very slow. I have experienced this myself, not with Drupal but with a WordPress blog, and it was a nightmare for me and my friends.
Since then, I have abandoned the project... and my only advice to you is: don't install a lot of plugins in your Drupal system.
I hope this advice helps you, because it helped me a lot with WordPress.
This sounds like a caching issue in Drupal, not MySQL. It seems there are a few very heavy queries, or many, many small ones, or both, that hammer the database server. Once they have run, Drupal caches the results in several caching layers, after which only one (or very few) queries are needed to build a page. Slow in the beginning, fast after that.
You will have to profile it to determine what the cause is, but the table cache seems like a likely suspect.
However, you should also be mindful of persistent connections, which should absolutely, always be turned off (yes, for everyone, not just you). Apache/PHP persistent connections are a pessimisation that you and everyone else can generally do without.
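If you want evidence of leaked persistent connections before digging through the PHP side, idle "Sleep" threads are the giveaway; a sketch:

    -- Many long-lived Sleep threads usually means Apache/PHP is holding
    -- connections open between requests.
    SELECT user, host, COUNT(*) AS idle_conns, MAX(time) AS longest_idle_secs
    FROM   information_schema.PROCESSLIST
    WHERE  command = 'Sleep'
    GROUP  BY user, host
    ORDER  BY idle_conns DESC;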