I have a small app running on a production server. The next update changes the DB schema, which means the production database schema will need to be altered and some data will need to be migrated.
What's the best way to do this, i.e. run a one-off script to complete these tasks when I deploy to the production server?
Stack:
Node.js
Express.js
MySQL using node mysql
Codeship
Elastic Beanstalk
Thanks!
"The best way" depends on your circumstances. Is this a rather seldom occurrence, or is it likely to happen on a regular basis? How many production servers are there? Are there other environments, e.g. for integration tests, staging etc.? Do your developers have an own DB environment on their machines? Does your process involve continuous integration?
The more complex your landscape is, the more it pays to use the kind of solutions Todd R suggested (Liquibase, Flyway).
If you just have one production server and it can be down for maintenance for a few hours, then it could be sufficient to:
Schedule a maintenance downtime with your stakeholders and users
Shut down the server
Create a backup
Update the database structure and contents as necessary
Deploy software updates
Restart the server
Test the result (manually or automatically)
Inform your stakeholders and users
If anything goes wrong, roll back to the backed-up version of the database and your software.
Having database update scripts is advisable, and having tested them at least once is even more so. Creating a backup in advance is essential. A rough sketch of such a script follows.
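As a rough sketch only (the users table and name columns here are made-up examples, not from your app), a one-off update script for MySQL might look like this. Keep in mind that ALTER TABLE causes an implicit commit in MySQL, so the script is not atomic as a whole, which is one more reason the backup matters:

-- Hypothetical migration: split users.name into first_name / last_name.
ALTER TABLE users
  ADD COLUMN first_name VARCHAR(100) NULL,
  ADD COLUMN last_name  VARCHAR(100) NULL;

-- Backfill the new columns from the existing data.
UPDATE users
SET first_name = SUBSTRING_INDEX(name, ' ', 1),
    last_name  = SUBSTRING_INDEX(name, ' ', -1);

-- Only drop the old column once the application no longer reads it.
ALTER TABLE users DROP COLUMN name;

You can keep scripts like this versioned next to the application code and run them by hand (or from a deploy hook) during the maintenance window.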
http://www.liquibase.org/ or http://flywaydb.org/ - pretty "heavy" for one-time use, but if you'll need to change the schema again in the future, it's probably worth investing the time to learn one of them.
Related
I am developing a Drupal site using MariaDB.
Importing a 77 MB dump file locally (a Docker container running MariaDB) takes about 2 minutes.
The same import to an Amazon RDS (db.m4.large) running a MariaDB database takes more than 30 minutes.
Isn't Amazon RDS supposed to be quicker?
What is the recommended practice for having a quick dev environment for SQL? (The local Docker service is running too slowly.)
Thanks,
Yaron
If you are already on RDS, just use a snapshot.
Take a snapshot of production (or find one of the automated snapshots).
Create a new DB from the snapshot
It's very fast and doesn't have the issues of latency and running millions of queries which an import has.
However, this is just one very crude approach to making a dev environment.
Some people have scripts that create the data set for DEV from scratch. This might be more appropriate, and even necessary, if for example you have a large database and developers who like to work locally on their own computers.
Some people have scripts that sanitize DEV to eliminate sensitive and personal data, which you could run after restoring the snapshot (a sketch appears at the end of this answer).
Some people even have DEV as a replica of the main DB and modify the DEV db so that additional usage doesn't clash with the replicated changes. This is a bit delicate though.
Often Dev and Tests use dummy data, and Staging uses real data (cloned from Production and possibly sanitized).
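As a minimal sketch of such a sanitization script, assuming a hypothetical users table with email and phone columns and an api_tokens table (none of these names come from your setup):

-- Replace personal data with harmless placeholders.
UPDATE users
SET email = CONCAT('user', id, '@example.invalid'),
    phone = NULL;

-- Drop anything that should never leave production, e.g. credentials.
TRUNCATE TABLE api_tokens;

Run it on the freshly restored DEV copy, never on production.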
I have a question about data backup.
We are developing a backend for a mobile application.
We have a few EC2 servers: one for the api sub-domain and one for the admin sub-domain, plus one RDS MySQL server with two databases.
But I'm worried about one thing: RDS snapshots are fine for the database structure. If we have some errors in the application, or need to revert some changes to the structure,
I can just restore from yesterday's snapshot. But what about the content, since new data is added every minute?
Can someone describe a mechanism or tools to prevent losing our data - replication or something like that?
I think I've found the answer - the binary log:
https://dev.mysql.com/doc/refman/5.5/en/binary-log.html
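To check whether the binary log is actually enabled on your instance (on RDS it is tied to automated backups being turned on), you can run something like:

SHOW VARIABLES LIKE 'log_bin';   -- ON means binary logging is enabled
SHOW BINARY LOGS;                -- lists the available binlog files
SHOW MASTER STATUS;              -- current binlog file and position

With automated backups enabled, RDS uses the binary log to offer point-in-time restore, which covers exactly the "content added every minute" concern.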
I develop websites for several small clients and I would love to be able to keep my local databases up to date with each client's production servers. (I'm thinking nightly updates) Many of their databases are in the hundreds of megabytes, so I feel like creating and transferring complete dumps every night is excessive.
Here are the harebrained ideas I have come up with so far:
Create a dump on the server and rsync it with the previous night's dump on my local machine. That should only transfer the changed bits of the file, right?
Create a dump on the server, and locally diff it against last night's dump. Transfer only that diff. Maybe it could also send the md5 of the original dump so I could be sure I was applying the diff to the same base file.
Getting SSH access to (most of) these servers is possible, but it requires getting my clients to call their hosting providers with that rather technical request, which is something I would rather not have to ask them to do. Bonus points if you can suggest a solution that I could implement via FTP/PHP.
Edit:
@Jacob's answer prompted some further Googling, which led to this thread: Compare two MySQL databases
This question (in several forms) seems pretty common. Most of the need, and therefore most of the tools, seems to focus on keeping the schema up to date rather than the data. Also, most of the tools seem to be commercial and GUI-based.
So far the best-looking option seems to be pt-table-sync from the Percona Toolkit, although it looks like it might be a pain to set up on OS X.
I am downloading Navicat to test right now. It runs on OS X but is a commercial GUI program.
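In the meantime, a rough way to see which tables differ at all before syncing (only a rough check, since CHECKSUM TABLE depends on storage engine and row format rather than being a byte-for-byte comparison) is to run the same statement on both servers and compare the results; the table names below are placeholders:

CHECKSUM TABLE customers, orders, products;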
I think the most efficient way to set this up is to run rsync each night, doing an incremental backup by syncing yesterday's backup onto today's and then running rsync for that day. In terms of backing up the different websites, I would look at using MySQL's built-in replication, with your server acting as a replica of each site's database; this will allow you to keep all the databases up to date and backed up.
http://dev.mysql.com/doc/refman/5.0/en/replication.html
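As a hedged sketch of the SQL side of that replication setup (hostnames, user names and passwords are placeholders, and each master also needs server-id and log-bin set in its my.cnf), on each client site acting as master:

CREATE USER 'repl'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
SHOW MASTER STATUS;   -- note the File and Position values

And on your own server acting as the replica, after seeding it from a dump taken at that position:

CHANGE MASTER TO
  MASTER_HOST = 'client-site.example.com',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'choose-a-strong-password',
  MASTER_LOG_FILE = 'mysql-bin.000001',   -- from SHOW MASTER STATUS
  MASTER_LOG_POS  = 107;                  -- from SHOW MASTER STATUS
START SLAVE;
SHOW SLAVE STATUS\G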
I have a MySQL install on a shared server and have access through phpMyAdmin. I want to make a continuous, real-time clone of that database to a cloud MySQL database (we have created an Nginx-ready MySQL server especially for this database), and then update the code to point to the new database...
I think you will have difficulty doing real-time replication of a MySQL in a shared server environment. Since you appear to be moving db servers, I would be inclined to do a hot copy of your data, and install that on the new db server. At the same time as taking that copy, you should switch on query logging on your application.
Your switch over would then consist of running logged queries against the new database (faster than they were logged!) and finally, at a point that all logged queries have been run, switching the configuration of the app so that the new db is used.
Edit: the problem with a hot copy is that data is being written to the DB at the same time as it is being copied. That means that the 'last updated' time will be different for each table. On that basis, is it possible in your application to add a 'last_updated' column to each row? If so, you will be able to tell, for each table, which logged queries still need to be replayed.
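As a minimal sketch (the orders table is a made-up name), MySQL can maintain such a column for you:

ALTER TABLE orders
  ADD COLUMN last_updated TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP
    ON UPDATE CURRENT_TIMESTAMP;

-- After the hot copy, rows touched since the copy began can be found with:
SELECT * FROM orders
WHERE last_updated >= '2015-06-01 02:00:00';   -- the time the copy started

Keep in mind that MySQL versions before 5.6 only allow one such automatic TIMESTAMP column per table.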
What you're looking for is replication. It has far too many options to cover here in a single post.
http://dev.mysql.com/doc/refman/5.5/en/replication.html
If you're going to do replication over the internet, you'll want to secure it. Your host might allow a virtual local area network so that this doesn't use up your bandwidth resources.
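One hedged option at the MySQL level (the user name is a placeholder, and this is the older 5.x syntax) is to require SSL for the replication account, in addition to whatever network-level protection you use:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' REQUIRE SSL;

-- On the slave, CHANGE MASTER TO ... MASTER_SSL = 1 (plus the certificate
-- options) makes the replication connection itself use SSL.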
A great set of tools from Percona that you should look at is Maatkit (since merged into the Percona Toolkit):
https://launchpad.net/percona-toolkit
Documentation and usage examples
http://www.maatkit.org/doc/
It's good for other tasks but it also allows you to replicate a live database quickly.
When you're working with live databases, make sure your backups are up to date.
ENVIRONMENT:
Ubuntu Hardy LTS, Apache 2, Passenger
Virtual Server One: rails 2.3.8 application accessing MySQL common_database
Virtual Server Two: rails 3.0.4 application accessing the same MySQL common_database
The first application gets regular use from our clients. The second application is released but seeing light usage currently. The database seems to be working fine.
Someone advised me that this configuration could wind up corrupting the database. We anticipate that the second application will eventually get heavy usage. It would kill some momentum if the database became corrupted. A few more facts:
Neither application has the ability to change database structure.
Both apps will be accessing the same tables at the same time.
It is impossible for both apps to attempt to update the same record at the same time.
Has anyone experienced corruption of a MySQL database that is accessed in this kind of configuration? Were you able to overcome the problem? How?
What was the case made for risk of corruption?
As long as you aren't using rake db:migrate and both apps have the same models, you have roughly the same risks as if two or more web servers running the same application version had been spun up. If there is a divergence between the two apps' expectations of what the database schema looks like, you may run into issues, especially if the divergence involves foreign keys or application logic based on magic values.
With any concurrent manipulation of a database you run into a category of issues around modifying the same records simultaneously (who wins when two clients try to modify the same piece of data, what kind of locking model you use, etc.).
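As a minimal sketch of pessimistic locking at the SQL level (the accounts table and balance column are made-up names, and this relies on the tables being InnoDB), any client that wants to modify a row first takes a row lock inside a transaction:

START TRANSACTION;

-- Blocks until any other transaction holding a lock on this row commits.
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;

UPDATE accounts SET balance = balance - 10 WHERE id = 42;

COMMIT;

Rails' optimistic locking (a lock_version column on the table) is the lighter-weight alternative when conflicts are rare, which matches the situation described in the question.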