Veeam Backup and Azure Blob Storage

We have Veeam on-premises and want to also use Azure Blob storage with it. We want to back up data locally and then create a secondary copy in the cloud (Azure), i.e. D2D2C (disk-to-disk-to-cloud). Is this possible?

Veeam can send data to S3-compatible object storage and Azure Blob storage using the 'Capacity Tier', which is part of a Scale-out Backup Repository (SOBR) and requires an Enterprise license.
Under 'Backup Infrastructure' > 'Scale-out Repositories', you can create a new Scale-out Repository and add normal Veeam backup repositories (ones not set to receive incompatible data types such as transaction log backups, configuration backups, etc.) as your 'Performance Tier'. The Veeam object storage repository becomes your 'Capacity Tier'.
Note: As of writing, Veeam 9.5 U4 Enterprise edition allows two Scale-out Repositories, each with 1 object storage repository and 3 active standard backup repositories. Enterprise Plus removes these limitations.
https://helpcenter.veeam.com/docs/backup/vsphere/backup_repository_sobr.html?ver=95u4
You don't need to create a secondary copy job for this; instead, point your job at the Scale-out Repository and let it handle the rest. The Scale-out Repository 'Capacity Tier' settings let you control how soon data is archived off to the cloud. https://helpcenter.veeam.com/docs/backup/vsphere/capacity_tier.html?ver=95u4

Related

MySQL Change data capture with binary logging (binlog) disabled

I have a scenario where I need to capture ongoing data changes (CDC) from MySQL and push them to an AWS data lake (S3). The challenge is that the MySQL binlog is disabled. We don't have control over the database configuration, as the MySQL database server is in the vendor's scope and we cannot enable the binlog.
Hence we cannot use services like Debezium or AWS DMS, for which the binlog is mandatory.
So I'm looking for a low-code open-source tool or AWS service to push ongoing changes from MySQL to the AWS data lake without relying on the MySQL binlog.
Note: the full data load works with AWS DMS; the problem is only with CDC.
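When the binlog is unavailable, the usual fallback is query-based CDC: poll an application-maintained last-modified column and extract only the rows past the previous high-water mark. A minimal sketch of the idea (sqlite3 stands in for MySQL here; the `orders` table and `updated_at` column are hypothetical):

```python
import sqlite3

def fetch_changes(conn, table, since):
    """Return rows modified after the given high-water mark.

    Assumes the table has an `updated_at` column the application
    maintains; without such a column, query-based CDC is not possible.
    """
    cur = conn.execute(
        f"SELECT id, payload, updated_at FROM {table} "
        "WHERE updated_at > ? ORDER BY updated_at",
        (since,),
    )
    return cur.fetchall()

# Demo with an in-memory database standing in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, payload TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "a", "2024-01-01T00:00:00"),
     (2, "b", "2024-01-02T00:00:00")],
)
changes = fetch_changes(conn, "orders", "2024-01-01T12:00:00")
print(changes)  # only the row updated after the watermark
```

The big caveats: deletes are invisible to this approach, and it depends entirely on the source application reliably updating the timestamp column, so a periodic full reconciliation is usually still needed.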

How to create "Write" clone of a production RDS Aurora instance?

I have a production DB on RDS Aurora MySQL. I would like to create a "staging" version of it, so I need a complete duplicate/clone of the production version.
Most importantly, the staging instance needs to be writable.
Is this possible?
Review Cloning Databases in an Aurora DB Cluster in the RDS User Guide.
Clones are not the same thing as replicas. A replica, in Aurora, has read-only access to the same data store allowing you to spread out your read workload across multiple instances... but a clone is a readable/writable moment-in-time fork of your original database. Any changes after the clone is created don't change the data on the original database instances (or on any other clones, and up to 15 independent clones are currently supported).
You can also create a new Aurora cluster from a snapshot of your production database, but a clone is probably the preferred solution for two reasons: it's faster to create a clone... but perhaps more importantly, clones use copy-on-write, so until you change the data on either the clone or the master it was cloned from, they share common storage space in the Aurora Cluster Volume that stores the data -- so you're only paying once for storage of the data that never gets changed. How this works is explained, with diagrams, in the RDS User Guide at the link, above.
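Cloning can be scripted as well as done in the console. A sketch using boto3's `restore_db_cluster_to_point_in_time` with the copy-on-write restore type (the cluster identifiers here are placeholders):

```python
def clone_cluster(rds, source_id, clone_id):
    """Create a copy-on-write clone of an Aurora cluster.

    `rds` is a boto3 RDS client; identifiers are hypothetical.
    """
    return rds.restore_db_cluster_to_point_in_time(
        SourceDBClusterIdentifier=source_id,
        DBClusterIdentifier=clone_id,
        RestoreType="copy-on-write",   # clone, not a full restore
        UseLatestRestorableTime=True,  # fork from "now"
    )

# Usage (requires AWS credentials and boto3):
#   import boto3
#   clone_cluster(boto3.client("rds"), "prod-cluster", "staging-cluster")
```

Note that the new cluster has no DB instances yet; you still need to add at least one instance to it before it can serve queries.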
You can take a backup (database snapshot) of prod and restore it into a new RDS Aurora cluster during instance creation. It's a simple GUI workflow in the AWS console. You can adjust permissions after the database has been restored into staging.

Sync data from local db to Central SQL Server

I have a requirement to sync local db data with a central SQL Server. The remote users (around 10 people) will be using laptops hosting the application and a local db. The internet connection is not 24x7. During no connectivity, the laptop user should be able to make changes in the local db, and once the connection is restored, the data should be synced with the central SQL Server automatically. The sync is usually just data updates. I have looked at Sync Framework and merge replication. I can't use Sync Framework as I am not a C# expert. For merge replication, I believe additional hardware is required, which is not possible. The solution should be easy to develop and maintain.
Are there any other options available? Is it possible to use SSIS in this scenario?
I would use Merge replication for this scenario. I'm unaware of any "additional hardware" requirements.
SSIS could do this job but it does not give you any help out-of-the-box - you would be reinventing the wheel for a very common and complex scenario.
An idea...
The idea requires an intermediate database (an exchange database).
On the exchange database you have tables holding the data for each direction of synchronization, with change tracking enabled on both the exchange and central databases.
On the local database side, rows could carry flags indicating:
row was created in the local db
row came from the exchange db
row requires resynchronization (when it is updated, etc.)
Synchronization between local db and exchange db:
When synchronizing, first send the data from the local db (rows marked as created locally or requiring resynchronization), then download the data from the exchange db (rows marked by change tracking as changed).
Synchronization between the exchange db and the central db is simple, based on the database engine's change tracking.
More about Change Tracking here!
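The flag-driven local-to-exchange push described above could look roughly like this (a toy sketch using sqlite3 in place of the real engines; the table name and flag values are made up):

```python
import sqlite3

# Hypothetical row flags: created locally, came from exchange,
# needs resynchronization, already synced.
FLAG_LOCAL_NEW, FLAG_FROM_EXCHANGE, FLAG_RESYNC, FLAG_SYNCED = "N", "E", "R", "S"

def push_local_changes(local, exchange):
    """Send locally created/updated rows to the exchange db,
    then mark them as synced in the local db."""
    rows = local.execute(
        "SELECT id, payload FROM items WHERE flag IN (?, ?)",
        (FLAG_LOCAL_NEW, FLAG_RESYNC),
    ).fetchall()
    for rid, payload in rows:
        exchange.execute(
            "INSERT OR REPLACE INTO items (id, payload) VALUES (?, ?)",
            (rid, payload),
        )
        local.execute("UPDATE items SET flag = ? WHERE id = ?", (FLAG_SYNCED, rid))
    return len(rows)

local = sqlite3.connect(":memory:")
exchange = sqlite3.connect(":memory:")
local.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT, flag TEXT)")
exchange.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
local.executemany("INSERT INTO items VALUES (?, ?, ?)",
                  [(1, "a", "N"), (2, "b", "S"), (3, "c", "R")])
print(push_local_changes(local, exchange))  # 2 rows pushed
```

The pull direction would be the mirror image: read the change-tracking deltas from the exchange db and apply them locally with the "came from exchange" flag, so the next push doesn't echo them back.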

Is it possible to real-time synchronize 2 SQL Server databases

I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A goes down.
The application will remain unchanged, but the data in the DB changes constantly.
Is there a way to synchronize those 2 databases in real time automatically?
Currently I wait until all the users are gone so I can manually back up and restore on the backup server.
Edit: When I said real-time I didn't mean it literally; I can handle up to one hour of delay, but the faster the sync the better.
My databases are located on 2 servers on the same local network.
Both are SQL Server 2008; the main DB is on Windows Server 2008
and the backup is on Windows Server 2003.
A web application (intranet) uses the DB.
I can use SQL Agent (if that can help).
I don't know what kind of details could be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer to Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability
solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of
transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables
applications that require scale-out of read operations to distribute
the reads from clients across multiple nodes. Because data is
maintained across the nodes in near real-time, peer-to-peer
replication provides data redundancy, which increases the availability
of data.

How can I backup a MySQL database on AWS?

I've been playing with AWS EC2 and really like it. There is one drawback though, the instance could disappear due to hardware failure or whatever reason. This happened to me in my first week of operation. I was wondering whether there are good solutions to backup a MySQL database so that I don't lose my customer credentials?
You can transfer the MySQL database directly from the EC2 machine to an S3 bucket, but you will pay more for bandwidth and storage. Alternatively, go for a third-party backup application or plugin (which is safer), since these compress and encrypt your data before saving it to S3 storage. You can also enable snapshots and take snapshots of the volumes (hard drives).
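If you'd rather avoid third-party tools, a plain `mysqldump` streamed straight to S3 is a common do-it-yourself approach. A sketch that just builds the shell pipeline (the database, bucket, and user names are placeholders):

```python
def dump_and_upload_cmd(db, bucket, key, user="backup"):
    """Build a shell pipeline that streams a mysqldump to S3,
    compressing on the fly so nothing large lands on local disk.

    `--single-transaction` gives a consistent dump of InnoDB tables
    without locking; `aws s3 cp -` reads the upload from stdin.
    """
    dump = f"mysqldump --single-transaction -u {user} {db}"
    upload = f"aws s3 cp - s3://{bucket}/{key}"
    return f"{dump} | gzip | {upload}"

cmd = dump_and_upload_cmd("shop", "my-backups", "shop-2024-01-01.sql.gz")
print(cmd)
```

Run the resulting command from cron on the EC2 instance (it assumes the `aws` CLI is configured with credentials that can write to the bucket), and your data survives the instance disappearing.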
I suggest you use the 'StoreGrid' backup software to back up your MySQL database on the EC2 machine. Check the following link to learn more about the online backup service on Amazon EC2/S3: http://storegrid.vembu.com/online-backup/amazon-ec2-s3-cloud-online-backup.php
Check this link to configure MySQL database backup: http://storegrid.vembu.com/online-backup/mysql-backup.php?ct=1
Note: You mentioned that hardware failure occurs often -- you can back up entire hard drives too using the above software.
With that, your MySQL database is backed up from the EC2 instance and stored safely in S3.
Cheers!
Amazon now offers the Relational Database Service (RDS): pre-configured EC2 instances, without any OS access, that host MySQL (or Oracle, or SQL Server) for you and aim to solve much of the availability, reliability and durability issues one faces when trying to host a transactional data store on a bare EC2 instance.
http://aws.amazon.com/rds/
"automated backups, DB snapshots, automatic host replacement, and Multi-AZ deployments"