What Grid Setup would you recommend - MySQL

I have a website that currently runs off one MediaTemple VPS. I'm now at the stage where the site is getting bogged down with scaling issues and I need to move to a better setup.
Is this a sensible setup:
PHP on one server
MySQL database on another
Cloud Files CDN used to serve images, JavaScript and CSS
My main thinking is to put the MySQL database on its own server away from the rest of the files as it seems to be causing most of the problems.

Try grabbing the lowest-hanging fruit by tackling the scaling problems that are easiest to resolve and don't involve hardware solutions. Identify the bottleneck that has the greatest effect on performance. If it's your SQL server that is running slow and being funky, read the SQL logs and do some googling :).
Just make sure that your hardware is the problem and not your application.
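If MySQL does turn out to be the problem, the slow query log is usually the quickest way to find the offending statements. A minimal sketch of turning it on from Python (assuming the pymysql package and a MySQL account with SUPER privileges; host and credentials are placeholders):

import pymysql

# Placeholder credentials; adjust for your server.
conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")
with conn.cursor() as cur:
    cur.execute("SET GLOBAL slow_query_log = 'ON'")    # start logging slow statements
    cur.execute("SET GLOBAL long_query_time = 1")       # "slow" = longer than 1 second
    cur.execute("SHOW GLOBAL STATUS LIKE 'Slow_queries'")
    print(cur.fetchone())                               # how many slow queries so far
conn.close()

Queries that keep showing up in that log are the first candidates for an index or a rewrite, and fixing a couple of them is often cheaper than new hardware.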

Related

Django and MySQL on different servers performance

I have a Django app that needs to be extremely fast, and it works well for now.
So my question is: is it better to put the Django app on one server and MySQL on another server, or both on one server?
I ask because of the communication between them.
I use DigitalOcean, and both are currently on one server.
It depends on how well the application is written.
Poorly written Django will generate a lot of queries, so maybe it's beneficial to have it on the same server. Well-written Django should leverage the database to do the heavy lifting, in which case it's better to have it on a separate server, so that server can be tuned for a database. (In general, having a separate database server is the way to go.)
The best thing to do would be to add the Django Debug Toolbar to your application, see whether it is generating a lot of queries or not, and tune the application from there.
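As a quick check alongside the toolbar, you can also count the queries a view triggers straight from a test or the shell. A small sketch, where render_dashboard is a hypothetical stand-in for whatever view or helper you want to inspect:

from django.db import connection
from django.test.utils import CaptureQueriesContext

with CaptureQueriesContext(connection) as ctx:
    render_dashboard(request)   # hypothetical view/helper under test
print(len(ctx.captured_queries), "queries executed")
for q in ctx.captured_queries:
    print(q["sql"])

Dozens of near-identical SELECTs usually point to a missing select_related() or prefetch_related(), which matters far more once the database sits on another machine and every query pays a network round trip.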
You have a couple of options, but let's stick to these two.
One server for everything
Good for setting up an application quickly, as it is the simplest setup possible, but it offers little in the way of scalability and component isolation.
There are a lot of pros: it's fast and simple to work with, and there are no network-latency problems between the application and the database. The main con: you cannot scale horizontally.
One server for the web application and another for the database.
First of all, I would recommend using Postgres, since the latest version (9.6) supports parallel query execution across multiple cores, which can make it noticeably faster than MySQL for some workloads.
This setup is still quick to stand up, and it keeps the application and database from fighting over the same system resources.
Pros: the application and database no longer compete for resources (RAM/CPU/I/O).
It may also increase security by removing the database from the DMZ.
Cons: it is harder to set up, and when there is high network latency between the two servers, queries take longer to execute.
To sum up: I would use the first option for small and medium applications that do not handle a lot of requests.
I would consider moving the DB to another server (or servers) once the application serves thousands of users per day.
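For the second option, the application-side change in Django is just pointing the database settings at the other machine, ideally over a private network. A sketch with placeholder values (swap the engine for django.db.backends.mysql if you stay on MySQL):

# settings.py (host and credentials are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "appdb",
        "USER": "appuser",
        "PASSWORD": "change-me",
        "HOST": "10.0.0.5",    # private IP of the database server
        "PORT": "5432",
        "CONN_MAX_AGE": 60,    # reuse connections to avoid paying reconnect latency
    }
}

Keeping the traffic on the private network both reduces latency and keeps the database port off the public internet.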

Increased loading time of two websites sharing one database

Our main website remotely accesses the database of our other website, which is on a different hosting account (a different domain). My problem is that our main website is very slow to load a page, while the second website, where the database is actually hosted, does not experience this problem.
Why are we experiencing this problem on our main website?
What would be the possible reasons?
What would be the possible solutions for this?
Edit:
We just transferred the other domain to the same hosting as our main website.
Maybe the problem is the database authentication process between the two hosts.
This is a very, very broad question - I can only give general advice.
I'd start by making sure the slow website is properly written. Run the website on a controlled development environment, with a copy of your production database, and use a tool like Apache JMeter to subject it to load; make sure it is "fast" in that environment. "Fast" is a movable concept, but I'd be expecting to see sub-second response times up to hundreds of concurrent users.
If the site is slow in this context, it will be slow on production; find out where the bottleneck is, tune, optimize etc.
If that isn't the problem, I'd replicate that setup with the other website connecting to the same database, and throw load at both sites simultaneously. You might just have reached the scalability limits of the system, and you may be seeing performance issues related to that - unlikely if the first website responds quickly and the second doesn't, but it's possible you're seeing deadlocks or other concurrency issues.
If the website behaves well on "perfect" infrastructure, but not in production, you need to work out what the issue is on production. The best way is to use a profiler on the production environment; this might mean creating a copy of the website which isn't publicly accessible, and installing the profiler there. XDebug works nicely for PHP.
The profiler will show you where your application slows down; it could be in the PHP code, it could be in the authentication section, it could be executing the SQL queries.
Once the profiler tells you where the problem is, you can work out how to fix it.
However, as a rule of thumb, running database queries outside a single network cage is a terrible idea; it's not secure, it exposes your database queries to arbitrary internet performance problems, and it eats into your bandwidth allocation. It's not really to do with the domain in the sense of "www.company.com" - one hosting environment can run multiple domains - but if you're routing your database traffic over the public internet, you give up any control over performance.
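One way to see how much the cross-host setup is costing is to measure the round-trip time of a trivial query from the main site's server to the remote database. A rough sketch (assuming the pymysql package; the hostname and credentials are placeholders):

import time
import pymysql

# Placeholder connection details; point this at the remote database host.
conn = pymysql.connect(host="db.other-site.example.com", user="app",
                       password="secret", database="appdb")
samples = []
with conn.cursor() as cur:
    for _ in range(20):
        start = time.perf_counter()
        cur.execute("SELECT 1")   # trivial query: mostly measures network round trip
        cur.fetchone()
        samples.append(time.perf_counter() - start)
conn.close()
print("average round trip: %.1f ms" % (1000 * sum(samples) / len(samples)))

If each round trip costs tens of milliseconds over the public internet and a page runs dozens of queries, the slow page loads follow directly from that arithmetic.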

Restart MySQL server without disrupting users

What are some generally accepted strategies for restarting a MySQL server on a busy website without interrupting current users? I am using a LAMP setup. I don't mind taking down the site for a time if need be, but if certain user activities are interrupted I could wind up with corrupted data. I do have the ability to bring up a second server if that helps in the transition. I need a solution that results in no corrupted data / data loss.
I suspect this could be a common problem without an easy solution, but not sure what the best approach would be. Any guidance would be appreciated.
Thanks, Brian
Any solution for high availability depends on redundancy.
The most popular strategy today is to run two MySQL servers. Configure the two servers to replicate bidirectionally. This comes with its own challenges; you must manage your applications carefully to write to only one server at a time, to avoid creating update conflicts. When you need to restart one MySQL server, switch your apps to use the other MySQL server.
Even with this configuration, you can't make the switchover without interrupting connections, even if the interruption is brief.
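On the application side, that switchover can be as simple as falling back to the standby when the primary refuses connections. A minimal sketch (assuming the pymysql package; "db-a.internal" and "db-b.internal" are placeholder hostnames, and writes are directed at only one of them at a time):

import pymysql

# Placeholder hostnames; the first entry is the server currently taking writes.
DB_HOSTS = ["db-a.internal", "db-b.internal"]

def get_connection():
    last_error = None
    for host in DB_HOSTS:
        try:
            return pymysql.connect(host=host, user="app", password="secret",
                                   database="appdb", connect_timeout=2)
        except pymysql.err.OperationalError as exc:
            last_error = exc   # server down or restarting; try the next one
    raise last_error

conn = get_connection()

Queries that were in flight on the server being restarted are still lost, so this shortens the interruption rather than eliminating it, and restarts are best scheduled for quiet periods.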
Another solution is MySQL Cluster, in which both MySQL Servers and storage are redundant, but this is also complex to set up and manage, requires high-end hardware resources, and shards your data in ways that make it hard to optimize for general SQL queries.

Any reason NOT to use subdomain for development?

I was originally planning on using a local machine on our network as the development server.
Then I had the idea of using a subdomain.
So if the site was at www.example.com then the development could be done at dev.example.com.
If I did this, I would know that the entire software stack was configured exactly the same for development and production. Also, development could use the same database as production, removing the hassle of syncing the data. I could even use the same media (images, videos, etc.).
I have never heard of anyone else doing this, and with all these pros I am wondering why not?
What are the cons to this approach?
Update
OK, so it seems the major no-no of this approach is using the same DB for dev and production. If you take that out of the equation, is it still a terrible idea?
The obvious pro is what you mentioned: no need to duplicate files, databases, or even software stacks. The obvious con is slightly bigger: you're using the exact same files, databases, and software stacks. Needless to say: if your development isn't working correctly (infinite loops, and whatnot), production will be pulled down right alongside it. Obviously, there are ways to jail both environments within the OS, but in that case you're back to square one.
My suggestion: use a dedicated development machine, not the production server, for development. You want to split it for stability.
PS: Obviously, if the development environment misses a "WHERE id = ?" clause, all the information in the production database is gone. That sounds like a huge problem, doesn't it? :)
People do do this.
However, it is a bad idea to run development against a production database.
What happens if your dev code accidentally overwrites a field?
We use subdomains of the production domain for development as you suggest, but the thought of the dev code touching the prod database is a bit hair-raising.
In my experience, using the same database for production and development is nonsense. How would you change your data model without changing your code?
And two more things:
It's wise to prepare all changes as an SQL script that is run after testing from a separate environment, not typed into your console. Some accidental updates to a live system gave me headaches for weeks.
It once happened to me that a restored backup didn't reproduce a live-system problem because of an unordered query result. That strange behaviour of the backup later helped us find the real problem more easily than retrying on the live system would have.
Using the production machine for development takes away your capacity to experiment. Trying out new modules/configurations can be very risky in a live environment. If I mess up our dev machine with an error in the Apache conf, I will just slightly inconvenience my fellow devs. You would be shutting down the live server while people are trying to give you their money.
Not only that, but you will be sharing resources with the live environment. You can forget about stress testing when the dev server also has to deal with actual customers. Any mistakes that can cause problems on the development server (an infinite loop taking up the entire CPU, running out of HDD space, etc.) suddenly become a real issue.

How to keep databases synchronized between hosting account and a local testing server?

I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both sets of databases somewhat synchronized (more or less daily).
So far, my ideas to solve the problem seem very clumsy. Anyway, for reference, here is what I have considered so far:
Make a database dump of the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time, which guarantees I won't do it as often as I would like (a rough sketch of automating this appears after the list).
Write a small web service to access the new data, and write a small application locally to communicate with said web service, download the newest data, and update the local databases.
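For reference, the first idea can be scripted and scheduled (e.g. from cron) so it at least runs without manual effort. A sketch only; the hosts, users and passwords are placeholders, and it assumes the mysqldump and mysql command-line tools are installed on the local machine:

import subprocess

# Placeholder hosts/credentials; the remote account only needs read access.
REMOTE_DUMP = ["mysqldump", "-h", "shared-host.example.com",
               "-u", "remote_user", "-psecret", "mydb"]
LOCAL_LOAD = ["mysql", "-u", "local_user", "-psecret", "mydb"]

# Stream the remote dump straight into the local server; no temp file needed.
dump = subprocess.Popen(REMOTE_DUMP, stdout=subprocess.PIPE)
load = subprocess.run(LOCAL_LOAD, stdin=dump.stdout)
dump.stdout.close()
if dump.wait() != 0 or load.returncode != 0:
    raise SystemExit("sync failed")

The download-time objection still applies, but scheduling it overnight removes the "I won't actually do it" problem.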
Both solutions sound like a lot of work for a problem that is probably already solved a zillion times over. Or maybe it's even an existing feature which I completely overlooked.
Is there an easy way to keep databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
Edit: I had a look at replication, but it seems I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control on the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with but may be what you want. See here for more details... http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
Take a look at the Microsoft Sync Framework - you will need to code in .NET, but it can resolve your issues.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
You will need additional tables for change tracking and anchors (keeping track of the last synchronization) in your MySQL database for this to work, but you won't need full control as long as you can access the DB.
Replication would have been simpler :), but this might just work in your case.