SQL Server 2008 backup best practices

For no apparent reason, I lost all the data in my database. Fortunately it was only test data, but it made me think about what would happen if this were a production db.
Sooner or later, every developer hits a db problem and wants to roll the db back. We don't do much to protect the db, since we think of it as the DBA's job, but then we get into trouble...
What are your backup best practices?

Since all the developers are also the DBAs where I work, we're collectively responsible for our backup strategy as well - if you care about the data, make sure you're at least informed about how the backups work, even if you're not part of the actual decisions.
The VERY first thing I do (before I even have any databases set up) is set up nightly maintenance plans that include a full backup, and direct those backups to a central network share on a different computer (our NAS). At the very least, for the love of your job, don't put the backups on the same physical storage that your database files sit on. What good are backups if you lose them at the same time you lose the disk?
We don't do point-in-time restores, so we don't do log backups (all our databases are set to the Simple recovery model), but if you need your logs backed up, make sure you include those too, at an acceptable interval.
On a side note, SQL 2008 supports compressed backups, which speeds up backup time considerably and makes the files much, much smaller - I can't think of an instance where you wouldn't want to use this option. I'd love to hear one, though, and I'm willing to reconsider!
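A minimal sketch of that nightly full backup as a single command (instance, database, and share names are placeholders, and WITH COMPRESSION assumes an edition that supports it - Enterprise on 2008, Standard and up on 2008 R2):
# nightly full backup, compressed, written straight to a share on another machine
sqlcmd -S YOURSERVER -E -Q "BACKUP DATABASE [YourDb] TO DISK = '\\NAS\SqlBackups\YourDb_full.bak' WITH COMPRESSION, INIT"
In practice a maintenance plan or SQL Server Agent job runs this on a schedule; note that INIT overwrites the previous backup in that file, so use timestamped names if you keep several.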

Here are some points from my own experience:
Store your backups and your SQL Server database files on different physical storage. Otherwise, if that storage fails, you will lose both the backups and the database files.
Create your own SQL Server database backup schedule.
Test your SQL Server backups. If you have never tested your backups, I doubt you will be able to restore your database when a failure occurs. From time to time, practice restoring your backups on a test server (see the sketch after this list).
Test your recovery strategy - here is another tip: if a failure occurs, how much time do you need to restore your database to a working state?
Backup SQL Server's system databases.
And this isn't the whole list; you can find more tips in my article: https://sqlbak.com/blog/backup-and-recovery-best-practices/
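A minimal sketch of such a test, with hypothetical server, file, and database names (VERIFYONLY only proves the file is readable - an actual restore is the real test; the logical names used with MOVE are placeholders you can list with RESTORE FILELISTONLY):
# quick sanity check on the backup file
sqlcmd -S TESTSERVER -E -Q "RESTORE VERIFYONLY FROM DISK = '\\NAS\SqlBackups\YourDb_full.bak'"
# the real test: restore under a different name on a test server
sqlcmd -S TESTSERVER -E -Q "RESTORE DATABASE [YourDb_test] FROM DISK = '\\NAS\SqlBackups\YourDb_full.bak' WITH MOVE 'YourDb' TO 'D:\Data\YourDb_test.mdf', MOVE 'YourDb_log' TO 'D:\Data\YourDb_test.ldf'"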

Choosing the right backup strategy is one of the most important decisions a DBA has to make when setting up a database.
However, the backup strategy you choose depends on a number of factors:
How frequently transactions are carried out on the DB: are there thousands of transactions every minute, or maybe a few transactions per day? For a very busy database, I would say take full nightly backups and transaction log backups every 10 minutes or even less.
How critical the data is: is it, say, employee payroll data? Then you'll have no acceptable excuse, and you may find a few angry faces around your car when you want to drive home! For a very critical database, take nightly backups, possibly to two locations, and transaction log backups every 5 minutes. (Also think about implementing mirroring.)
Location of your backup destination: if your backup destination is close to the DB server, you can afford more frequent backups than when it is several hops away with mediocre bandwidth in between.
But all in all, I would say: schedule a full backup every night, and then transaction log backups at intervals, as in the sketch below.
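A sketch of the log-backup piece, with placeholder names (the database must use the Full recovery model, and something like SQL Server Agent would run this every few minutes):
# transaction log backup with a timestamped file name so repeated runs don't collide
sqlcmd -S YOURSERVER -E -Q "DECLARE @f nvarchar(260) = N'\\NAS\SqlBackups\YourDb_log_' + REPLACE(REPLACE(CONVERT(nvarchar(19), GETDATE(), 120), ':', ''), ' ', '_') + N'.trn'; BACKUP LOG [YourDb] TO DISK = @f"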

Related

MySQL server with 2 TB of data

What is the best strategy to back up 2 TB of MySQL data, and how often should the backups be scheduled?
I am using replication as a backup strategy, but I know that's not good practice.
Please note: I am new to MySQL servers, and this question may sound very basic to more experienced users, but I am trying to learn.
Thanks.
Size matters mostly in that all operations take longer; there's no getting around that. Otherwise, a lot of the backup strategy remains the same.
First off, replication is not a backup; it's for availability and scalability. Replication (with a delayed slave apply) is at best a single snapshot: once a bad update/delete/truncate is replicated, the data is gone.
Your "best strategy" depends on several factors:
- Recovery Time Objective (how fast you need to restore).
- Recovery Point Objective (to what point in time you need to restore).
- Many small databases, or one 2 TB database?
- How much money you have to spend on resources.
- Whether you are held to regulatory requirements to be able to restore data for 1, 3, or 7 years, etc.
A physical backup using Percona XtraBackup can take a point-in-time snapshot of all databases on your server (the caveat being non-transactional tables using the MyISAM engine).
A logical backup with mysqldump may be faster to take, smaller, and better compressed, but on restore it has to rebuild indexes, so it may take longer.
So... in a perfect situation, take regular physical and logical backups. Take continuous backups of the binary logs (https://www.percona.com/blog/2012/01/18/backing-up-binary-log-files-with-mysqlbinlog/). As long as your slave is up to date, you can take your backups there, so as not to impact your master. To determine your backup frequency, restore a backup and time how long it takes to apply a week of binary logs. Did you meet your "Recovery Time Objective"? If not, you need more frequent backups.
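A rough sketch of that combination, with placeholder hosts, credentials, and paths (the exact xtrabackup invocation varies by version - older releases used the innobackupex wrapper - and --raw/--stop-never need a reasonably recent mysqlbinlog):
# physical, consistent backup of all databases (run on the slave)
xtrabackup --backup --user=backupuser --password=secret --target-dir=/backups/xtra/$(date +%Y%m%d)
# logical backup; --single-transaction gives a consistent InnoDB view without locking
mysqldump --all-databases --single-transaction --master-data=2 | gzip > /backups/dump/$(date +%Y%m%d).sql.gz
# continuously stream binary logs to the backup host, as in the Percona post above
mysqlbinlog --read-from-remote-server --host=db-master --user=backupuser --raw --stop-never mysql-bin.000001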
Also, hang out at https://dba.stackexchange.com to get some more insight into these operational challenges of owning a database :)

Huge SQL Server database with varbinary entries

We have to design an SQL Server 2008 R2 database storing many varbinary blobs.
Each blob will be around 40 KB, and there will be around 700,000 new entries a day.
The estimated maximum size of the database is 25 TB (after 30 months).
The blobs will never change. They will only be stored and retrieved.
The blobs will either be deleted the same day they are added, or only during the cleanup after 30 months. In between, there will be no changes.
Of course we will need table partitioning, but the general question is, what do we need to consider during implementation for a functioning backup (to tape) and restore strategy?
Thanks for any recommendations!
Take a look at "piecemeal backup and restore" - you will find it very useful for your scenario, which would benefit from different backup schedules for different filegroups/partitions. Here are a couple of articles to get you started, followed by a small sketch:
http://msdn.microsoft.com/en-us/library/ms177425(v=sql.120).aspx
http://msdn.microsoft.com/en-us/library/dn387567(v=sql.120).aspx
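As a rough illustration with hypothetical filegroup names (a read-only filegroup holding a closed-out month only needs to be backed up once; the exact piecemeal-restore sequence differs between the simple and full recovery models, so see the articles above):
# back up one month's filegroup on its own schedule
sqlcmd -S YOURSERVER -E -Q "BACKUP DATABASE [BlobStore] FILEGROUP = 'FG_2014_01' TO DISK = 'X:\Backups\BlobStore_FG_2014_01.bak'"
# piecemeal restore: bring the primary filegroup online first, the rest later
sqlcmd -S YOURSERVER -E -Q "RESTORE DATABASE [BlobStore] FILEGROUP = 'PRIMARY' FROM DISK = 'X:\Backups\BlobStore_primary.bak' WITH PARTIAL, RECOVERY"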
I have had the pleasure in the past of working with several very large databases, the largest environment I have worked with being in the 5+ TB range. Going even larger than that, I am sure that you will encounter some unique challenges that I may not have faced.
What I can say for sure is that any backup strategy you implement is going to take a while, so you should plan to have at least one day a week devoted to backups and maintenance, where the database, while available, should not be expected to perform at its usual level.
Second, I have found the following MVP article extremely useful in planning backups taken through the native MSSQL backup operations. There are some options to the backup command, aimed at large databases, which can help reduce your backup duration. While these increase throughput, you can expect some performance impact. The options that had the greatest impact in my testing were BUFFERCOUNT, BLOCKSIZE, and MAXTRANSFERSIZE.
http://henkvandervalk.com/how-to-increase-sql-database-full-backup-speed-using-compression-and-solid-state-disks
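For reference, a sketch of such a tuned command (the parameter values are illustrative starting points, not recommendations - test them on your own hardware; striping the backup across several files/disks also helps at this size):
# striped, compressed full backup with tuned I/O parameters
sqlcmd -S YOURSERVER -E -Q "BACKUP DATABASE [BlobStore] TO DISK = 'X:\Bak\BlobStore_1.bak', DISK = 'Y:\Bak\BlobStore_2.bak' WITH COMPRESSION, BUFFERCOUNT = 64, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 4194304"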
Additionally, assuming your data is stored on a SAN, you may wish as an alternative to investigate the use of SAN level tools in your backup strategy. Some SAN vendors provide software which integrates with SQL Server to perform SAN style snapshot backups while still integrating with the engine to handle things like marking backup dates and forwarding LSN values.
Based on your statement that the majority of the data will not change over time, including differential backups seems like a very useful option, allowing you to reduce the number of transaction log backups that would have to be restored in a recovery scenario. For example:
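(Placeholder names again; a differential contains everything changed since the last full backup, so with mostly static blobs it should stay comparatively small.)
# differential against the most recent full backup
sqlcmd -S YOURSERVER -E -Q "BACKUP DATABASE [BlobStore] TO DISK = 'X:\Bak\BlobStore_diff.bak' WITH DIFFERENTIAL, COMPRESSION"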
Please feel free to get in touch with me directly if you would like to discuss further.

WHM / cPanel: MySQL backup

I am setting up a new server with WHM / cPanel installed.
It is very important that I can take a full MySQL backup once or twice a day.
Right now the databases are pretty small (20 MB), but volume will increase rapidly as soon as we get more customers.
I know there is a possibility to create a cron job and have the backup emailed.
However, I think that's a shitty solution given the future size of these backups.
What is your best advice regarding daily MySQL backups?
You can have a cron job set up to back up your database and dump it to a directory of your choosing:
mysqldump --all-databases --skip-lock-tables | gzip -9 > /your/backup/dir/here/`date +%Y%m%d`_backup.sql.gz
What you do from there is up to you, but I agree that you should not email it due to the size. Perhaps you can use SCP to send it to a different server.
A common practice for backups is as follows (a sketch of it appears below):
Do a daily backup.
Once the first daily backup reaches a week old, start deleting the older dailies, but keep the one that hit a week old as your weekly backup.
Start keeping monthly backups the same way, deleting the weeklies once the first one hits a month old.
You can continue this with yearly backups as well.
The whole point is that you should catch any errors in your data before your backups hit the week mark. You shouldn't really need to keep the REALLY old backups for any specific purpose, which is why you start deleting things.
Unless you have unlimited space on various disks, you would do best to follow this practice and just make sure nothing happens to your data.
Cheers.
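A minimal cron-style sketch of that rotation, with made-up paths (adjust the retention windows to taste):
# daily dump (same command as above, just dated and filed under daily/)
mysqldump --all-databases --skip-lock-tables | gzip -9 > /backups/daily/$(date +%Y%m%d).sql.gz
# promote a copy on Sundays and on the 1st of the month
[ "$(date +%u)" = "7" ] && cp /backups/daily/$(date +%Y%m%d).sql.gz /backups/weekly/
[ "$(date +%d)" = "01" ] && cp /backups/daily/$(date +%Y%m%d).sql.gz /backups/monthly/
# prune dailies after a week and weeklies after roughly a month
find /backups/daily -name '*.sql.gz' -mtime +7 -delete
find /backups/weekly -name '*.sql.gz' -mtime +31 -delete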
If you think your data will grow to a size where dumps cause significant database downtime, you first ought to look into setting up replication so that you can take the dumps off the slave.
You can certainly run a simple mysqldump command from within cron, placing the dumps on a drive on the machine. You can then worry about moving the data off the machine by some other means (you don't want to email it). Just keep in mind that it will lock your database for as long as the dump takes to complete.
If you want to think out of the box, you could just implement a MySQL instance on Amazon RDS. It can do the replication, database snapshots, etc. with basically push-button ease.

Can server snapshots potentially damage MySQL transactions?

Let's say you're taking daily snapshots of your server as a whole (public_html, SQL files, etc.) for later restoration in case of system failure.
Is it possible that the restored MySQL InnoDB database will be damaged if the snapshot was taken while an uncommitted transaction was in progress? Or will InnoDB do just fine and discard "incomplete" transactions on restoration?
From the database's viewpoint we are dealing with an unclean shutdown (i.e. the power went off) and a lost connection, so it will discard all transactions that were not committed.
If you are taking a snapshot of the server, that's just like freezing everything in cryogenic sleep; after a restore, the database would simply wake up expecting to talk to a now non-existent application.
The only issue I can see is not the transaction itself but the fact that the database resides in files. What if you freeze a file that is half-written to disk? I can see how that might be a problem. On the other hand, there is probably some architectural design in place to prevent this, as the same is true for a power outage, and a database should live through that too.
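If you want to take the half-written-file worry out of the equation, the usual trick is to quiesce MySQL for the instant the snapshot is taken. A sketch, assuming an LVM setup and made-up volume names:
# hold a global read lock only while the snapshot is created
mysql -u root -p <<'EOF'
FLUSH TABLES WITH READ LOCK;
-- 'system' runs a shell command from inside the client session, so the lock is still held
system lvcreate --snapshot --size 1G --name mysql-snap /dev/vg0/mysql
UNLOCK TABLES;
EOF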
To the best of my knowledge, during a transaction nothing gets saved into the database until the commit has occurred - that's ACID compliance right there. So no database files get written during a transaction, only after.
Also, in my opinion, database 'snapshots' should be done via a dump. I'm not a server administrator, so I don't know this for a fact, but it would be a lot safer to restore the data that way.
MySQL is a gray area to me, though - I'm more confident with SQL Server, which has its own ways of getting backups made - so don't take this as fact.

SQL Server 2008 - Best Backup solution

I'm setting up SQL Server 2008 on a production server. What is the best way to back up this data? Should I use replication and then back up that server? Should I just use a simple command-line script and export the data? Which replication method should I use?
The server is going to be pretty loaded, so I need an efficient method.
I have access to multiple computers that I can use.
A very simple yet good solution is to run a full backup using sqlcmd (formerly osql) locally, then copy the BAK file over the network to a NAS or other store. It's sub-optimal in terms of network/disk usage, but it's very safe, because every backup is independent, and since the process is very simple it is also very robust.
Moreover, this even works in Express editions.
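A two-line sketch of that approach (instance, database, and share names are placeholders; robocopy is one convenient way to do the copy on Windows):
# back up locally first, then ship the file to the NAS
sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [YourDb] TO DISK = 'D:\Backups\YourDb.bak' WITH INIT"
robocopy D:\Backups \\NAS\SqlBackups YourDb.bak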
The "best" backup solutions depends upon your recovery criteria.
If you need immediate access to the data in the event of a failure, a three-server database mirroring scenario (live, mirror, and witness) would seem to fit - although your application may need to be adapted to make use of automatic failover. Log shipping may produce similar results (although without automatic failover, or the need for a witness).
If, however, there's some wiggle room in the recovery time, regularly scheduled backups of the database (e.g., via SQL Server Agent) and its transaction logs will allow you to do point-in-time restores. The frequency of backups would be determined by database size, how frequently the data is updated, and how far you are willing to roll back the database in the event of complete failure (unless you can extract a transaction log backup from a failed server, you can only recover to the latest backup).
If you're looking to simply roll back to known-good states after, say, user error, you can make use of database snapshots as a lightweight "backup" scenario - but these are useless in the event of server failure. They're near-instantaneous to create and only take up room as the data changes, but they incur a slight performance overhead. For example:
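(A sketch with hypothetical names; database snapshots are an Enterprise edition feature on SQL Server 2008, the logical name given in NAME must match the source database's data file, and reverting discards everything since the snapshot.)
# create a near-instant, sparse-file snapshot of the database
sqlcmd -S YOURSERVER -E -Q "CREATE DATABASE YourDb_snap ON (NAME = YourDb_data, FILENAME = 'D:\Snaps\YourDb_snap.ss') AS SNAPSHOT OF YourDb"
# revert the database to that known-good state
sqlcmd -S YOURSERVER -E -Q "RESTORE DATABASE YourDb FROM DATABASE_SNAPSHOT = 'YourDb_snap'"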
Of course, these aren't the only backup solutions, nor are they mutually exclusive - just the ones that came to mind.