A little bit stumped here; I can't seem to find anyone else asking this question. I feel like my question is a lot simpler than it probably sounds. Basically, I want to have an exact copy of my Rails database on a different server, kept in sync as it is being populated. Let me explain with an example.
The bottom line is that I have a production website that needs to be up at all times.
Currently, if the website goes down, I have to use the latest copy I have of the database (because the server is down), and when the site comes back up, I have to manually import anything new into the original MySQL server. So I am looking for a way to keep the MySQL servers on both machines in sync with each other, so that if one goes down, they both still have the same information.
I understand that this can add a lot of overhead in the Rails app, which I am not that concerned about, as I can find ways to defer the MySQL queries. Unless someone knows a better way to do this?
I'm a little (very) paranoid. I use MySQL Workbench for my own projects as well as for work. One of the things I am completely frightened of is running dangerous SQL commands like delete from x while connected to my work's remote server by mistake (thinking it is my development machine). The question, then, is: is there a way to configure Workbench to prevent you from making stupid (and usually tired) mistakes, or is there an alternative that does? Or is it just a matter of being more disciplined?
Getting a confirmation from the user, as suggested in the comments, is probably not a good solution; it depends on how frequently you send queries, but after the first 20-30 such confirmation dialogs you get tired of them and just click them away.
A much better way is to establish two simple habits (sketched in the example after this list):
Give your users only the absolute minimum of privileges they need to do their work. This limits the damage they can cause.
Make backups. There's no question whether you need a backup, only when you will need it.
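A minimal sketch of both habits; all names here (app_db, dev_user, backup_user) are hypothetical, and the exact privilege list depends on what your application actually needs:

    -- Habit 1: a least-privilege account. There is deliberately no DELETE and
    -- no DROP here, so a stray "delete from x" on the wrong server fails with
    -- a permission error instead of destroying data.
    CREATE USER 'dev_user'@'%' IDENTIFIED BY 'use-a-strong-password';
    GRANT SELECT, INSERT, UPDATE ON app_db.* TO 'dev_user'@'%';

    # Habit 2: a nightly logical backup from cron (here at 02:00), stored as a
    # dated, compressed dump. Note that % must be escaped inside a crontab.
    0 2 * * * mysqldump -u backup_user -p'...' app_db | gzip > /backups/app_db-$(date +\%F).sql.gz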
I'm wondering what could possibly cause this? I am stumped, as I have been searching for an answer for two days now.
I have a big table, around 390,000 rows, so there's no problem with the 50% threshold (natural-language searches ignore words that appear in more than 50% of the rows). I have been building this site on my test server using XAMPP. I have now moved everything (website files plus MySQL tables) to my server running Ubuntu.
I have also set the only two relevant settings I can think of in my my.cnf: ft_min_word_len and ft_stopword_file. On the test server I get perfect results from my searches. On my live server I get barely any results (although some, sometimes).
I am just wondering what settings I could have forgotten to get my live server to work?
I know this is a fuzzy question but I think it could be useful for many people in the same situation in the future.
Thank you in advance!
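One common culprit with exactly those two settings (assuming this step was skipped; the question doesn't say): changing ft_min_word_len or ft_stopword_file has no effect on existing indexes until the server is restarted and every FULLTEXT index is rebuilt. A sketch for a MyISAM table, with a hypothetical table name:

    # my.cnf on the live server, [mysqld] section
    [mysqld]
    ft_min_word_len  = 3                           # must match the test server
    ft_stopword_file = /etc/mysql/ft_stopwords.txt

    -- After restarting mysqld, rebuild the FULLTEXT index so the new word
    -- length and stopword list actually take effect:
    REPAIR TABLE articles QUICK;   -- 'articles' is a hypothetical table name

Without the rebuild, the index still reflects the old settings, which can produce exactly the "works on one server, barely on the other" symptom.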
I've been doing a lot of research, reading up on replication, etc., but I'm just not sure what MySQL solution would work.
This is what I'm looking at:
When my MySQL server fails for some reason, or certain queries are taking really long to execute and locking some tables, I want the other insert/update/select queries to still run at normal speed, without having to wait for locks to be released or for the main database to come back up. I'm thinking there should be a second MySQL server for this to happen, but is what I mentioned possible even then, and would it involve a lot of change to my existing programming logic?
When my database is being backed up, I would still like my site to function normally; all inserts/selects/updates should work as usual.
When I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from.
So what do I need to do to get all this done, and would it also require changing plenty of my existing code to suit the new setup? (My site has a lot of reads and writes.)
There's no easy way. You're asking for a highly available MySQL setup, and that requires a lot of work at both the server and client ends.
Some issues, for example:
When I need to alter a large table, I wouldn't like it to affect my application; there should be a backup server to work from.
If you're altering the table, you can't trivially create a copy to work from during the update. What about the changes that are made to your copy while the first update is taking place?
Have a search for "High Availability MySQL". It's mostly a solved problem, but the solution depends heavily on your exact requirements. You can't just ask for "my SQL server must run at full speed, always, forever, no matter what I throw at it".
Not a MySQL-specific answer, but a general one: have a read-only copy of your DB for the site to render from, synced to the master DB regularly. This way you can keep your site working even if the master DB is under load or locked due to inserts/deletes. For efficiency, keep this copy as denormalized as you can. (One way to do the periodic sync is sketched below.)
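A minimal sketch of such a periodic sync when full replication isn't available: dump just the tables the site renders from and reload them on the read host. All names (app_db, app_db_read, orders_summary, master-host, read-host, sync_user) are hypothetical:

    # Cron entry: every 10 minutes, refresh the denormalized read-only copy.
    # mysqldump emits DROP TABLE / CREATE TABLE / INSERT statements, so piping
    # it into the read host replaces the table (it is briefly unavailable
    # mid-reload, so keep the synced tables small).
    */10 * * * * mysqldump -h master-host -u sync_user -p'...' app_db orders_summary | mysql -h read-host -u sync_user -p'...' app_db_read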
I have several databases hosted on a shared server, and a local testing server which I use for development.
I would like to keep both set of databases somewhat synchronized (more or less daily).
My ideas for solving the problem all seem very clumsy, though. Anyway, for reference, here is what I have considered so far:
Make a database dump of the online databases, trash the local databases, and recreate them from the dump. It's a lot of work and requires a lot of download time (which guarantees I won't do it as often as I would like). (A scripted version of this idea is sketched after this list.)
Write a small web service to expose the new data, plus a small local application that talks to that web service, downloads the newest data, and updates the local databases.
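For what it's worth, the first idea can be automated down to one scheduled script, and compressing the dump in transit cuts the download time considerably. A rough sketch, assuming SSH access to the shared host (not all shared hosts provide it) and hypothetical names throughout (shared-host, remote_user, db_user, app_db):

    #!/bin/sh
    # Pull a compressed dump of the remote database over SSH...
    ssh remote_user@shared-host "mysqldump -u db_user -p'...' app_db | gzip" > /tmp/app_db.sql.gz
    # ...then rebuild the local copy from scratch so dropped tables disappear too.
    mysql -u root -p'...' -e "DROP DATABASE IF EXISTS app_db; CREATE DATABASE app_db;"
    gunzip < /tmp/app_db.sql.gz | mysql -u root -p'...' app_db

Run from cron, this gets close to the "set up once, schedule and forget about" ideal, at the cost of always transferring a full dump.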
Both solutions sound like a lot of work for a problem that has probably been solved a zillion times over. Or maybe it's even an existing feature that I completely overlooked.
Is there an easy way to keep the databases more or less in sync? Ideally something that I can set up once, schedule, and forget about.
I am using MySQL 5 (MyISAM) databases on both servers.
=============
Edit: I had a look at replication, but it seems that I can't go that route because the shared hosting does not give me enough control over the server itself (I have most permissions on my databases, but not on the MySQL server itself).
I only need to keep the data synchronized, nothing else. Is there any other solution that doesn't require full control of the server?
Edit 2:
Sorry, I forgot to mention I am running on a LAMP stack on the shared server, so Windows-only solutions won't work.
I am surprised to see that there is no obvious off-the-shelf solution for this problem.
Have you considered replication? It's not to be trifled with, but it may be what you want. See here for more details: http://dev.mysql.com/doc/refman/5.0/en/replication-configuration.html
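For a sense of scale, the configuration itself is small; the operational care is the hard part. A minimal sketch of the master/slave setup that page describes, with illustrative server IDs, host names, and credentials:

    # Master my.cnf, [mysqld] section: enable the binary log, pick a unique ID.
    server-id = 1
    log-bin   = mysql-bin

    # Slave my.cnf, [mysqld] section: just needs its own unique ID.
    server-id = 2

    -- On the master: an account the slave will connect as.
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'slavepass';

    -- On the slave: point it at the master, using the log file and position
    -- reported by SHOW MASTER STATUS on the master, then start replicating.
    CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
        MASTER_PASSWORD='slavepass', MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=98;
    START SLAVE;

Note that the my.cnf part is exactly what requires control over the server configuration, which, per the edit above, the shared host doesn't allow.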
Take a look at the Microsoft Sync Framework. You will need to code in .NET, but it can solve your problem.
http://msdn.microsoft.com/en-in/sync/default(en-us).aspx
Here is a sample for SQL Server, but it can be adapted to MySQL as well, using the ADO.NET provider for MySQL.
http://code.msdn.microsoft.com/sync/Release/ProjectReleases.aspx?ReleaseId=4835
For this to work you will need additional tables in your MySQL database for change tracking and anchors (keeping track of the last synchronization), but you won't need full control of the server as long as you can access the database. (A sketch of what such tables look like follows.)
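Just to illustrate the idea, the tracking metadata is ordinary tables. The names and columns below are a hypothetical sketch, not the framework's actual schema:

    -- Per-synced-table tracking: when each row last changed, plus a tombstone
    -- flag for deletes, so the other side can apply removals as well.
    CREATE TABLE orders_tracking (
        order_id         INT NOT NULL PRIMARY KEY,
        last_change_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                         ON UPDATE CURRENT_TIMESTAMP,
        is_deleted       TINYINT(1) NOT NULL DEFAULT 0
    );

    -- One anchor row per replica: the high-water mark of its last sync, so
    -- the next run only has to fetch rows changed since then.
    CREATE TABLE sync_anchor (
        replica_id     VARCHAR(64) NOT NULL PRIMARY KEY,
        last_sync_time DATETIME NOT NULL
    );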
Replication would have been simpler :), but this might just work in your case.
I have a site set up using CakePHP and MySQL and I want to work on a test database without disrupting my live site in case something goes wrong.
I have another busy site, but my test site runs off the live database, which can occasionally be nerve-wracking.
What do I do if I change a table name in the test DB and I want it changed in the live database? Or if I remove a record from the test database? Is there a way to diff the changes? How do I even merge those changes?
How does this interfere with live user edits and things of that nature?
Hopefully some of you working devs can share some insight!
As I said in the comment, there are too many questions at once here IMO.
However, as for this question:
What do I do if I change a table name in the test DB and I want it changed in the live database
this is comparatively easy to do manually: any MySQL client will show you the exact SQL query that was issued to change a table or a record. You keep track of every change and build "changesets" from those queries, i.e. simple series of queries that you then run on your live database, for example after putting the site into maintenance mode for a moment. (An example changeset follows below.)
This is enough in many, many small to mid-size use cases.
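As a concrete example, a changeset for the table-rename and record-removal cases from the question could be a plain .sql file (all names hypothetical), tested against the test DB first and then applied during the maintenance window:

    -- changeset-001.sql, applied with:
    --     mysql -u deploy_user -p live_db < changeset-001.sql
    RENAME TABLE user_profiles TO profiles;   -- the table-name change
    DELETE FROM profiles WHERE id = 42;       -- the record removed in test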
To get answers on more sophisticated topics like database replication, clustering, and such, I think you will need to refine your question.