Slow remote queries - MySQL

I am working on a Rails application. During development I was using SQLite, and queries were very fast.
I have started to use a remote MySQL database hosted by Amazon and am getting very slow query times. Besides trying to optimize the remote database, is there anything on the Rails side of things I can do?

Local database access will always be significantly faster than remote access. Since you haven't provided any specifics I can't zero in on the issue, but I can make a couple of suggestions:
Try caching your queries and views as much as possible. This will reduce the number of queries you need to make, and it works especially well for static data like menus.
Optimization is key. Make sure you eliminate as many unnecessary queries as you can, and that the queries you do make request only the fields you need, using the select method.
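For example, both suggestions look roughly like this in Rails (a minimal sketch; the MenuItem and User models are hypothetical, not from your app):

    # Cache a rarely-changing query so repeat requests skip the remote round-trip:
    menu_items = Rails.cache.fetch("menus/main", expires_in: 1.hour) do
      MenuItem.order(:position).to_a
    end

    # Fetch only the columns you actually use instead of SELECT *:
    users = User.select(:id, :name).where(active: true)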

Profile the various components involved. The database server itself is one of them; network latency is another. There is probably little you can do about the latency, but there is a lot you can tweak on the database side, from profiling the queries to tuning the server itself.
Knowing where to look will help you choose the best approach. As for caching, always keep it in mind, but it can prove quite problematic depending on the nature of your application.
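One quick way to separate network latency from actual query cost is to time a no-op query against a real one (a rough sketch; the host and table names are placeholders, not your setup):

    require "benchmark"
    require "mysql2"

    # Connection details are placeholders for your RDS instance.
    client = Mysql2::Client.new(host: "your-db.us-east-1.rds.amazonaws.com",
                                username: "app", database: "app_production")

    # SELECT 1 does almost no server-side work, so its time is mostly network round-trip.
    latency = Benchmark.realtime { client.query("SELECT 1") }

    # Subtracting that from a real query's time approximates the server-side cost.
    real = Benchmark.realtime { client.query("SELECT * FROM orders LIMIT 100") }

    puts "round trip: #{(latency * 1000).round} ms, full query: #{(real * 1000).round} ms"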

Related

Django and MySQL on different servers performance

I have a Django app that needs to be extremely fast, and it works well for now.
So my question is: is it better to put the Django app on one server and MySQL on another, or to have both on one server?
I ask because of the communication between them.
I use DigitalOcean, and both are currently on one server.
It depends on how well the application is written.
Poorly written Django will generate a lot of queries, so it may be beneficial to have it on the same server. Well-written Django should leverage the database to do the heavy lifting, in which case it's better to have it on a separate server, so that server can be tuned as a database server. (In general, having a separate database server is the way to go.)
The best thing to do would be to add Django Debug Toolbar to your application, see whether it is generating a lot of queries, and tune the application from there.
You have a couple of options, but let's stick to these two.
One server for everything
Good for setting up an application quickly, as it is the simplest setup possible: fast and simple to work with, with no network latency between the application and the database. The downside is that it offers little in the way of scalability and component isolation; you cannot scale the components independently.
One server for the web application, another for the database.
First of all, I would recommend using Postgres here: the latest version (9.6) can parallelize a single query across multiple cores, which can make it considerably faster than MySQL for heavy queries.
This setup keeps the application and the database from fighting over the same system resources (RAM / CPU / I/O). It may also increase security by removing the database from the DMZ. The cons: it is harder to set up, and if there is high latency between the two servers, queries will take longer to execute.
To sum up: I would use the first option for small and medium applications that do not handle a lot of requests.
I would consider moving the DB to another server (or servers) once the application serves thousands of users per day.

Options for speeding up slow SQL queries

We're having issues with a few queries - relatively simple queries - that take too long to process: anywhere from 3,000 ms to 30,000 ms. We are using PHP 5.5 and MySQL 5.5.28-29.1.
We have a few options, but I am posting here to see if anyone has any experience on each of them:
Currently we are accessing views to get our data; this was done to move the processing load from PHP to MySQL. Would accessing the tables directly improve query speed? I'm thinking not, because it would lead to a lot more queries, since the views are just combinations of data from several tables.
If we were to install a cache DB, such as SQLite3, to cache data locally and then sync it to the RDBMS, how would we do that? And would the speed improve?
We're also thinking about a NodeJS version, using Node WebKit. As far as I understand, there are npm packages out there that can act as a cache or a DB connection layer, which would remove the need for PHP. But what about the speed?
Another option is to set up a dedicated server for this environment (we're using a virtual server environment at the moment), which would most likely speed some parts up. But if MySQL is still slow on that server, it's kind of wasted.
These are the alternatives I can think of at the moment. Any suggestions are appreciated.
(I can post the slow SQL queries if need be, but would like to see if anyone has anything to say about our options first)

How significant is the "amount of queries used" to site performance/speed?

I've been wondering how significant the number of queries used is to site performance/speed.
I have two websites running on different engines (one on IPB and another on MyBB). The IPB one uses fewer queries (only 14 on average), but it runs slower than MyBB, which uses more (20 on average).
I thought this was because the IPB install is heavily modded, so I ran a fresh install on my localhost.
But the result is the same: IPB (with fewer queries) runs slower than MyBB (with more queries).
This makes me wonder: how does the number of queries affect site performance? Is it significant?
Well, the quantity of queries is one factor, but you also have to consider each individual query: does it do joins? Lots of math? Subqueries? Do the tables have indexes? And so on; see the sketch after the links below.
Take a look at these links on optimisation; knowing how to optimise something also tells you what causes it to slow down.
http://www.fiftyfoureleven.com/weblog/web-development/programming-and-scripts/mysql-optimization-tip
http://msdn.microsoft.com/en-us/library/ff650689.aspx
http://hungred.com/useful-information/ways-optimize-sql-queries/
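As a concrete illustration of the index point, here is a sketch in Ruby with the mysql2 gem (the posts table and topic_id column are made up) of checking a query plan and adding an index:

    require "mysql2"

    client = Mysql2::Client.new(host: "localhost", username: "app", database: "forum")

    # EXPLAIN shows how MySQL will run the query; "type: ALL" means a full table scan.
    client.query("EXPLAIN SELECT * FROM posts WHERE topic_id = 42").each do |row|
      puts row.inspect
    end

    # If the filtered column has no index, adding one turns the scan into a keyed lookup.
    client.query("ALTER TABLE posts ADD INDEX index_posts_on_topic_id (topic_id)")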
I believe this question has many answers, as there are many things to consider. For example, a query that returns a lot of rows will take more time than one that returns just a few records. Another thing to consider is the number of queries, just as you are doing in your post. Another factor is how you issue the queries: if you are using jQuery or JavaScript Ajax calls to trigger them, they will take a lot more time.
The sheer number of queries per page does not correlate with site responsiveness and speed.
What does matter is:
what those queries actually are;
how well the DB server can cope with the load they generate;
whether they are executed serially or in parallel.
It depends on what you consider 'significant'.
Let's consider two scenarios:
We have 'lots' of queries that take quite some time to execute.
That's bad, simply because it takes a long time to process them all.
We have 'lots' of queries that take very little time to execute.
If your database server is on a different machine than the web server, this can be a problem due to communication overhead: the web server and the database server will most likely spend more time communicating than processing each query (think of network latency).
If your database server is on the same machine as the web server, it might not affect site performance much, since communication between them will be very quick. BUT there are other things to consider. For example, you might be locking some tables for update/select queries a lot, which will decrease site performance considerably.
It's always better to execute fewer queries.
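To make the round-trip cost concrete, here is a sketch in Ruby with the mysql2 gem (host and table names are invented) of the same lookup done as many queries versus one:

    require "mysql2"

    client = Mysql2::Client.new(host: "db.example.com", username: "app", database: "forum")
    user_ids = [1, 2, 3, 4, 5]

    # N separate queries: each one pays the network round-trip again.
    # At 50 ms latency, 20 of these add a full second before any real work is done.
    users = user_ids.map { |id| client.query("SELECT * FROM users WHERE id = #{id}").first }

    # One query, one round-trip, same rows.
    users = client.query("SELECT * FROM users WHERE id IN (#{user_ids.join(',')})").to_a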
You should remember that on an average website, roughly 80% of the load time is spent on the frontend and only 20% on the backend.
So in most cases it is more effective to optimize frontend performance; that is where you can make big gains quickly.
Of course, there are sites that display lots of data coming from a DB. There it is worth thinking about backend performance, and optimizing backend performance in most cases means optimizing the database.
A good book about this is "High Performance Web Sites" by Steve Souders.

PostgreSQL and PQC

I'm using MySQL with Memcached, but I'm planning to start using PostgreSQL instead of MySQL.
I know Memcached can work with PostgreSQL, but I found this online: PostgreSQL Query Cache. I've seen a presentation about it, and it says memcached is used under the hood. What I don't understand is this: memcached I have to "program" into my PHP code, but PQC I don't?
What's it all about? Is PQC the same as memcached, and could it replace memcached? For example: I have a table with all countries. It never changes, so I want to cache it instead of retrieving it from the database every time. Will PQC do this automatically?
PQC is an implementation of caching that uses Memcached. It sits in front of your database server and caches query results for you. If you are running a lot of identical queries, this will make your database's load a whole lot lighter and your return times a whole lot faster. It is not a substitute for good application design, but it can certainly help, and the cost of implementing it is extremely low since it takes advantage of an existing layer of abstraction.
Memcached is a lower-level tool. A well-designed application will leave you a nice place to put code between the business logic and the database layer to cache results, and that is where you put your memcached calls; a sketch follows. In other words, if your code is designed to allow this abstraction, fantastic. Otherwise, you're looking at a lot more work to implement it.
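For your countries example, the hand-rolled memcached version is the cache-aside pattern. A minimal sketch in Ruby with the dalli and pg gems (the same shape applies in PHP; connection details and the countries schema are assumptions), whereas PQC would intercept the repeated query transparently:

    require "dalli" # memcached client gem
    require "pg"    # PostgreSQL client gem

    CACHE = Dalli::Client.new("localhost:11211")
    DB    = PG.connect(dbname: "app")

    # Cache-aside: ask memcached first, fall back to the database on a miss.
    def all_countries
      cached = CACHE.get("countries:all")
      return cached if cached

      rows = DB.exec("SELECT id, name FROM countries ORDER BY name").to_a
      CACHE.set("countries:all", rows, 86_400) # static data: keep it for a day
      rows
    end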

Consider scaling on Rails, would you write hand coded SQL? Use Sequel gem?

Suppose I wanted to scale a Rails application by distributing its database across different machines based on its authorization rules (location and user roles), so that any resource attributed to a location would sit in a database dedicated to that location.
Should I get down to writing SQL by hand, use something like the Sequel gem, or keep the niceness and magic of ActiveRecord?
It is true that raw SQL executes faster than ActiveRecord's nice magical queries. However, if you talk about scaling, the question becomes how manageable the queries will be when the application really grows large.
By far, most complicated database operations can be managed well with caching, proper indexing, and proper eager loading. In some cases MySQL views also help performance, and Rails treats MySQL views just like tables. After that, if you can corner the really slow queries, it might be worth converting them to raw SQL to save some time. Rails also offers caching of database queries, and MySQL has a caching mechanism of its own. Before executing raw SQL directly, I would make sure these options (and many more, like avoiding unnecessary joins, since a join operation is resource intensive) cannot give me what I am looking for. Hope this helps.
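In ActiveRecord terms, those options look roughly like this (a sketch; the model and column names are invented):

    # Eager-load associations to avoid N+1 queries:
    posts = Post.includes(:comments).where(location_id: 42)

    # Fetch only the columns you need:
    editors = User.select(:id, :name).where(role: "editor")

    # Cache an expensive aggregate for ten minutes:
    stats = Rails.cache.fetch("location/42/stats", expires_in: 10.minutes) do
      Post.where(location_id: 42).group(:status).count
    end

    # Only after exhausting these, hand-write SQL for a stubborn query:
    rows = Post.find_by_sql(["SELECT posts.* FROM posts WHERE location_id = ?", 42])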
It sounds like you are partitioning your database, which Sequel has built-in support for (http://sequel.rubyforge.org/rdoc/files/doc/sharding_rdoc.html). I recommend using Sequel, but as the lead developer, I'm biased.
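Based on the sharding docs linked above, routing per-location queries looks roughly like this (the shard names and hosts are invented for illustration):

    require "sequel"

    # One logical database with several physical shards.
    DB = Sequel.connect("mysql2://user:pass@default-db.example.com/app",
                        servers: {
                          east: { host: "east-db.example.com" },
                          west: { host: "west-db.example.com" }
                        })

    # Route reads for a location to its dedicated shard:
    east_rows = DB[:resources].server(:east).where(location: "east").all

    # Writes can be pinned the same way:
    DB[:resources].server(:west).insert(name: "widget", location: "west")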