Django and MySQL on different servers performance - mysql

I have a Django app that needs to be extremely fast, and it works well for now.
So my question is: is it better to put the Django app on one server and MySQL on another, or to keep both on one server?
I ask because of the communication between them.
I use DigitalOcean, and currently both are on the same server.

It depends on how well the application is written.
A poorly written Django app will generate a lot of queries, so it may be beneficial to keep it on the same server as the database. A well written Django app should let the database do the heavy lifting, in which case it's better to put the database on a separate server that can be tuned specifically for database work. (In general, having a separate database server is the way to go.)
The best thing to do would be to add Django Debug Toolbar to your application, see whether it is generating a lot of queries, and tune the application from there.
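For reference, a minimal django-debug-toolbar setup for a development settings module could look something like this (assuming the package is installed; the file layout is the standard Django one):
    # settings.py (development only) -- enable django-debug-toolbar
    INSTALLED_APPS += ["debug_toolbar"]
    # Place the middleware as early as possible, after any response-encoding middleware.
    MIDDLEWARE = ["debug_toolbar.middleware.DebugToolbarMiddleware"] + MIDDLEWARE
    INTERNAL_IPS = ["127.0.0.1"]  # the toolbar is only rendered for these client IPs

    # urls.py
    from django.urls import include, path
    urlpatterns += [path("__debug__/", include("debug_toolbar.urls"))]
The SQL panel then shows every query a page makes, with timings, which is exactly the information you need for this decision.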

You have a couple of options, but let's stick to these two.
One server for everything
Good for setting up an application quickly, as it is the simplest setup possible, but it offers little in the way of scalability and component isolation.
The pros: it is fast and simple to work with, and there is no network latency between the application and the database. The con: you cannot scale the components horizontally or independently.
One server for the web application and one server for the database
First of all, I would recommend using Postgres, since the latest version (9.6) can parallelize queries across multiple cores, which can make it considerably faster than MySQL.
Good for setting up an application quickly, while keeping the application and the database from fighting over the same system resources (RAM / CPU / I/O).
It may also increase security by removing the database from the DMZ.
The cons: it is harder to set up, and if there is high network latency between the two servers, queries may take longer to execute.
To sum up: I would use the first option for small and medium applications that do not handle a lot of requests.
I would consider moving the database to another server (or servers) once the application hosts thousands of users per day.
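For the second option, the only application-side change is usually the database host in the Django settings; a rough sketch (database name, credentials, and the private IP are placeholders):
    # settings.py -- Django app on one server, PostgreSQL on another
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myapp",              # placeholder database name
            "USER": "myapp_user",         # placeholder credentials
            "PASSWORD": "change-me",
            "HOST": "10.132.0.5",         # private-network address of the DB server
            "PORT": "5432",
            "CONN_MAX_AGE": 60,           # reuse connections to avoid reconnect overhead
        }
    }
On DigitalOcean it is worth pointing HOST at the droplets' private network rather than a public IP, both for latency and so the database is not exposed to the internet.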

Related

Increased loading time of two websites sharing one database

Our main website remotely accesses the database of our other website, which is on a different domain and hosting provider. The problem is that our main website is very slow to load a page, while the second website (where the database is hosted) does not have this problem.
Why are we experiencing this problem on our main website?
What would be the possible reasons?
What would be the possible solutions for this?
Edit:
We just transferred the other domain to the same hosting as our main website.
Maybe the problem is the database authentication process between the two hosting providers.
This is a very, very broad question - I can only give general advice.
I'd start by making sure the slow website is properly written. Run the website on a controlled development environment, with a copy of your production database, and use a tool like Apache JMeter to subject it to load; make sure it is "fast" in that environment. "Fast" is a movable concept, but I'd be expecting to see sub-second response times up to hundreds of concurrent users.
If the site is slow in this context, it will be slow on production; find out where the bottleneck is, tune, optimize etc.
If that isn't the problem, I'd replicate that setup with the other website connecting to the same database, and throw load at both sites simultaneously. You might just have reached the scalability limits of the system, and you may be seeing performance issues related to that - unlikely if the first website responds quickly and the second doesn't, but it's possible you're seeing deadlocks or other concurrency issues.
If the website behaves well on "perfect" infrastructure, but not in production, you need to work out what the issue is on production. The best way is to use a profiler on the production environment; this might mean creating a copy of the website which isn't publicly accessible, and installing the profiler there. XDebug works nicely for PHP.
The profiler will show you where your application slows down; it could be in the PHP code, it could be in the authentication section, it could be executing the SQL queries.
Once the profiler tells you where the problem is, you can work out how to fix it.
However, as a rule of thumb, running database queries outside a single network cage is a terrible idea; it's not secure, it exposes your database queries to arbitrary internet performance problems, and it eats into your bandwidth allocation. It's not really to do with the domain in the sense of "www.company.com" - one hosting environment can run multiple domains - but if you're routing your database traffic over the public internet, you give up any control over performance.
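To put a number on that last point, it can be worth measuring the raw round-trip time from the web host to the database; here is a rough Python sketch (host and credentials are placeholders, and it assumes the PyMySQL driver, though any DB-API client works the same way):
    # measure_latency.py -- time a trivial query to isolate the network round trip
    import time
    import pymysql

    conn = pymysql.connect(
        host="db.example.com",  # placeholder: the remote database host
        user="app",
        password="change-me",
        database="appdb",
    )
    samples = []
    with conn.cursor() as cur:
        for _ in range(20):
            start = time.perf_counter()
            cur.execute("SELECT 1")  # trivial query, so the time is almost pure round trip
            cur.fetchone()
            samples.append(time.perf_counter() - start)
    conn.close()
    print("median round trip: %.1f ms" % (sorted(samples)[len(samples) // 2] * 1000))
If a single page issues dozens of queries, that per-query round trip multiplies quickly, which is usually the real cost of hosting the database off-site.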

Slow remote queries

I am working on a Rails application; I was using SQLite during development and it was very fast.
I have started using a remote MySQL database hosted on Amazon and am getting very slow query times. Besides trying to optimize the remote database, is there anything on the Rails side of things I can do?
Local database access vs. remote will show a significant difference in speed. Since you've not provided any specifics I can't zero in on the issue, but I can make a suggestion:
Try caching your queries and views as much as possible. This will reduce the number of queries you need to run, and it works especially well for static data like menus.
Optimization is the key. Eliminate as many unnecessary queries as you can, and make sure the queries you do run request only the fields you need (for example via the select method).
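The question is about Rails, but for comparison the same two ideas look like this in Django ORM terms (the stack from the first question above); MenuItem and its fields are hypothetical:
    # Cache static data and fetch only the fields you need
    from django.core.cache import cache
    from myapp.models import MenuItem  # hypothetical model

    def menu_items():
        # get_or_set only hits the database on a cache miss,
        # and .values() limits the query to the listed columns.
        return cache.get_or_set(
            "menu_items",
            lambda: list(MenuItem.objects.values("id", "title", "url")),
            60 * 60,  # cache for one hour
        )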
Profile the various components involved. The database server itself is one of them; network latency is another. While there is probably little you can do about the latency, you can probably tweak the database side quite a lot, starting with profiling the queries and moving on to tuning the server itself.
Knowing where to look will help you start with the best approach. As for caching, always keep it in mind, but it can prove quite problematic depending on the nature of your application.

PostgreSQL and PQC

I'm using MySQL with Memcached, but I'm planning to start using PostgreSQL instead of MySQL.
I know Memcached can work with PostgreSQL, but I found this online: PostgreSQL Query Cache (PQC). I've seen a presentation about it, and it says memcached is used internally. But I don't understand: memcached is something I have to "program" for in my PHP code, while PQC is not?
What's it all about? Is PQC the same as memcached, and could it replace memcached? For example: I have a table with all countries. It never changes, so I want to cache it instead of retrieving it from the database every time. Will PQC do this automatically?
PQC is an implementation of caching that uses Memcached. It sits in front of your database server and caches query results for you. If you are running a lot of identical queries, this will make your database load a whole lot less and your return times a whole lot faster. It is not a substitute for good design of your application, but it can certainly help, and the cost of implementing it is extremely low since it takes advantage of an existing layer of abstraction.
Memcached is a lower level tool. A well designed application will leave you a nice place to put code between the business logic and the database layer to cache results, and this is where you put your memcached calls. In other words, if your code is designed to allow this abstraction, fantastic. Otherwise, you're looking at a lot more work to implement.
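The question mentions PHP, but as a language-neutral illustration, the explicit memcached pattern (cache-aside) looks roughly like this in Python; it assumes the pymemcache client, a memcached instance on localhost, and a hypothetical fetch_countries_from_db() helper:
    # Explicit caching with memcached: your code decides what is cached and for how long
    import json
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # assumed local memcached instance

    def get_countries():
        cached = cache.get("countries")
        if cached is not None:
            return json.loads(cached)  # cache hit: no database round trip
        countries = fetch_countries_from_db()  # hypothetical database helper
        cache.set("countries", json.dumps(countries), expire=24 * 3600)
        return countries
PQC, by contrast, sits between the application and PostgreSQL and caches query results for you, so for a never-changing countries table it would cache the result without application code like the above.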

Restart MySQL server without disrupting users

What are some generally accepted strategies for restarting a MySQL server on a busy website without interrupting current users? I am using a LAMP setup. I don't mind taking down the site for a time if need be, but if certain user activities are interrupted I could wind up with corrupted data. I do have the ability to bring up a second server if that helps in the transition. I need a solution that results in no corrupted data / data loss.
I suspect this could be a common problem without an easy solution, but not sure what the best approach would be. Any guidance would be appreciated.
Thanks, Brian
Any solution for high availability depends on redundancy.
The most popular strategy today is to run two MySQL servers. Configure the two servers to replicate bidirectionally. This comes with its own challenges; you must manage your applications carefully to write to only one server at a time, to avoid creating update conflicts. When you need to restart one MySQL server, switch your apps to use the other MySQL server.
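One practical detail that makes the switch faster is keeping the active database host out of the application code and in configuration, so failing over is a config change plus a reload; a rough Python sketch (the environment variable name, host names, and credentials are placeholders):
    # db.py -- resolve the currently active MySQL server from configuration
    import os
    import pymysql

    # Switch ACTIVE_DB_HOST (e.g. from db-a to db-b) before restarting a server;
    # the application never hard-codes a database host.
    ACTIVE_DB_HOST = os.environ.get("ACTIVE_DB_HOST", "db-a.internal")

    def get_connection():
        return pymysql.connect(
            host=ACTIVE_DB_HOST,
            user="app",
            password="change-me",
            database="appdb",
        )
The switch from one server to the other then becomes a configuration change and an application reload rather than a code change.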
Even with this configuration, you can't make the switchover without interrupting connections, even if the interruption is brief.
Another solution is MySQL Cluster, in which both MySQL Servers and storage are redundant, but this is also complex to set up and manage, requires high-end hardware resources, and shards your data in ways that make it hard to optimize for general SQL queries.

How does database tiering work?

The only good reference that I can find on the internet is this whitepaper, which explains what database tiering is, but not how it works:
The concept behind database tiering is the seamless co-existence of multiple (legacy and new) database technologies to best solve a business problem.
But how is it implemented? How does it work?
Any links regarding this would also be helpful. Thanks.
I think the idea of that document is that you put "cheap" databases in front of the "expensive" databases to reduce costs.
For example, let's assume you have an "expensive" database: something like Oracle, DB2, or even MSSQL (more realistically, it's probably a legacy DB system that is no longer well supported or that needs specialized resources to maintain). In other words, a database engine that costs a lot to purchase and maintain (arguably these are not expensive when you take all factors into consideration, but let's use them for the example).
Now, if you suddenly get famous and your server starts to get overloaded, what do you do? Do you buy a bigger server and migrate all your data to it? That could be incredibly expensive.
With the tiering solution, you put several "cheap" databases in front of your "expensive" database to take the brunt of the work. Your web servers (or app servers) talk to a bunch of MySQL servers, for example, instead of directly to your expensive server. These MySQL servers then handle the majority of the calls: for example, they could handle all read-only calls entirely on their own and only pass write calls back to the main database server. The MySQL servers are kept in sync via standard replication practices.
Using methods like this, you could in theory scale your expensive server out to dozens, if not hundreds, of "cheap" database servers and handle a much higher load.
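In Django terms (the stack from the first question above), this kind of read/write split is often expressed as a database router; a minimal sketch, assuming a "default" (primary) and a "replica" alias in DATABASES:
    # routers.py -- send reads to the cheap replica tier, writes to the primary
    class ReadReplicaRouter:
        def db_for_read(self, model, **hints):
            return "replica"  # read-only queries go to a replica

        def db_for_write(self, model, **hints):
            return "default"  # writes always go to the primary

        def allow_relation(self, obj1, obj2, **hints):
            return True  # both aliases see the same logical data

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            return db == "default"  # only migrate the primary

    # settings.py
    # DATABASE_ROUTERS = ["myproject.routers.ReadReplicaRouter"]  # placeholder path
The usual trade-off is replication lag: a read issued immediately after a write may not see the new row on the replica yet.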
Database tiering is just a specific style of tiering. There are also application tiering and service tiering. It's a form of scalability.
What exactly are you asking? This question is rather vague.
This is a PDF from a course at Ohio State. What it discusses is a bit over my head, but hopefully you might understand it better.