We have a three-node Couchbase Community Edition cluster. We take full backups of the cluster using the cbbackup CLI tool, but the backup never reaches 100%: it stops at a certain percentage and then reports that the backup is complete.
I've run into an issue where I can no longer connect to my VM that I could connect to before.
Checking the VM's Observability tab as well as the server's serial port output shows:
Aug 2 09:27:14 my-server systemd[1]: systemd-logind.service: Failed to run 'start' task: No space left on device
Aug 2 09:27:14 my-server systemd[1]: systemd-logind.service: Failed with result 'resources'.
Aug 2 09:27:14 my-server systemd[1]: Failed to start Login Service.
Aug 2 09:27:14 my-server systemd[1]: systemd-logind.service: Scheduled restart job, restart counter is at 1.
So the likely culprit is simply that the disk is full.
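To confirm that diagnosis before resizing anything, a quick look at filesystem usage from the serial console (or any shell you can still reach) helps. A minimal sketch; /var/log and /var/cache are just example paths that are often the space hogs:

```shell
# Overall usage of the root filesystem
df -h /

# Common culprits worth checking (errors ignored if paths don't exist)
du -sh /var/log /var/cache 2>/dev/null || true

# Percentage used on /, as a bare number (e.g. "93")
used=$(df --output=pcent / | tail -1 | tr -dc '0-9')
echo "root filesystem is ${used}% full"
```

If `used` is at or near 100, freeing even a few megabytes (rotated logs, package caches) can be enough to let systemd services start again.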
I found this troubleshooting guide, which I followed by:
Shutting down the VM.
Increasing the boot disk size from 1000 GB to 1400 GB, and to 1450 GB in a second attempt.
Starting the VM.
Unfortunately that didn't help, and I still can't connect. Further steps require me to connect via SSH. It might be worth noting that I did attempt to increase the size through the web interface first, before I found the troubleshooting page; at that point the VM might or might not have been shut down properly. Another change I noticed is that since the restart the VM no longer reports Disk Space Utilization or Memory Utilization in the Observability tab, although it still reports other stats.
Under "File system issues" it is mentioned that on Debian images a log line like ... expand-root.sh[..]: Resizing ext4 filesystem on /dev/sda1. should appear. While this is an Ubuntu image, I couldn't find anything similar using grep.
So how can I regain access to the data on the VM?
I would suggest the following:
Create a snapshot of the affected disk.
Create an additional disk from that snapshot.
Attach the new disk to another working Linux VM instance ("instance-1", for example) as an additional disk, not as the boot disk.
Mount the attached disk to a specific directory.
After mounting, free up some space on the mounted disk so that the services needed to recognize the disk resize have room to run.
After freeing up some space, re-attach the disk as the boot disk to the original instance to test whether you have regained control of the VM.
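The steps above can be sketched with the gcloud CLI. All names (my-broken-disk, rescue-snap, rescue-disk, rescue-vm), the zone, and the device path are placeholders for your own; the `run` helper only prints each command so you can review before executing for real:

```shell
# Dry-run helper: prints commands instead of running them.
# Swap for  run() { "$@"; }  to actually execute.
run() { echo "+ $*"; }

# 1. Snapshot the affected disk
run gcloud compute disks snapshot my-broken-disk \
    --zone=us-central1-a --snapshot-names=rescue-snap

# 2. Create a new disk from the snapshot
run gcloud compute disks create rescue-disk \
    --zone=us-central1-a --source-snapshot=rescue-snap

# 3. Attach it to a healthy VM as a secondary (non-boot) disk
run gcloud compute instances attach-disk rescue-vm \
    --zone=us-central1-a --disk=rescue-disk

# 4. On rescue-vm: mount it (device name varies; check lsblk first)
run sudo mount /dev/sdb1 /mnt/rescue

# 5. Find what to delete to free space
run sudo du -xh --max-depth=2 /mnt/rescue
```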
From my understanding, AWS RDS facilitates backups for MySQL databases, but it is not cheap.
Would using a Docker image for MySQL save us more in terms of cost? We would only need to download the Docker image from Docker Hub and use it for free (e.g., create an instance and run the container).
Is there any reason to use RDS other than its backup facilities?
Here are several features of RDS that may warrant using it over a self-managed MySQL Docker container on an EC2 instance or ECS:
RDS is a managed service, so all OS updates and MySQL patches are handled by AWS and you don't have to worry about them.
RDS supports storage auto-scaling: you can start with a small database, and RDS will extend the storage automatically as needed.
Point-in-time recovery allows you to "rewind" your recent database changes.
Read replicas: you can create up to 5 read replicas of your database to off-load read-intensive applications from your primary DB instance.
Cross-region read replicas: you can keep a replica in a different region, which is good for disaster recovery (if an entire AWS region goes down).
Automated and manual backups, including backups to a different region.
IAM authentication to your database instead of a regular username/password.
Multi-AZ: RDS can keep a standby replica of your primary database instance in a different availability zone, for quick recovery if the primary fails.
CloudWatch-integrated database metrics and logs.
RDS event notifications allow straightforward automation, e.g., invoking a Lambda function automatically for every backup or on failure.
Easier integration with other services, e.g., using RDS Proxy with Lambda functions.
All these and other features make RDS much more expensive than hosting a self-managed MySQL Docker container. But if MySQL in a Docker container meets all your requirements, then there is no need to use RDS. You can always start with Docker, and if your data and requirements grow, you can migrate to RDS.
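For comparison, the self-managed route really is a one-liner plus a volume so data survives container restarts; backups then become your own responsibility. A minimal sketch, with the container name, password, and host path as placeholders; the `run` helper prints the commands instead of executing them:

```shell
# Dry-run helper: prints commands instead of running them.
run() { echo "+ $*"; }

# Start MySQL with data persisted on the host
run docker run -d --name some-mysql \
    -e MYSQL_ROOT_PASSWORD=change-me \
    -v /srv/mysql-data:/var/lib/mysql \
    -p 3306:3306 \
    mysql:8.0

# Self-managed backups, e.g. a cron'd logical dump
run docker exec some-mysql \
    mysqldump -uroot -pchange-me --all-databases --single-transaction
```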
I have done different types of backups many times before, but always on Linux, because that's what people normally use :P.
This time it's a new app that runs independently on the client's system, and it's Windows. So xtrabackup/Percona is of no help now :(.
And I am not in favor of using the binlog for differential/incremental backups, as to me it's both risky and time-consuming.
Can any of you help me out with a reliable option for performing incremental backups on a Windows system? (I cannot purchase a backup tool for every system our app will be used on.)
There is a way of running Percona XtraBackup, given that you are familiar with that tool. Although Percona doesn't plan to create a version that is native to Windows, you can run Percona XtraBackup in a Docker container.
In summary, once you have set up Docker and given it the necessary access, you can run Percona XtraBackup from within the container and it will write the backups to a folder on your C: drive.
The full information can be found in this blog post: https://www.percona.com/blog/2017/03/20/running-percona-xtrabackup-windows-docker/
I give you that reference rather than repeating the how-to in full because, if there are any updates to the procedure, that post is likely to be updated before this answer. I hope this helps.
Disclosure: I work for Percona
There are three ways you can perform incremental backups on Windows:
Binlog backups
You need to copy new binlog files to your destination from time to time. This is not very difficult and can be achieved with a simple script.
The main drawback is a longer recovery time.
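Such a script can be as small as "copy any binlog file not yet present at the destination". Shown here as a POSIX shell sketch, with temporary directories standing in for the real datadir and backup destination so it is self-contained; the same logic ports directly to a PowerShell or batch script on Windows:

```shell
# Simulated directories; point SRC at your real MySQL datadir instead.
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch "$SRC/mysql-bin.000001" "$SRC/mysql-bin.000002"   # pretend binlogs

# Copy every binlog that is not yet present at the destination.
for f in "$SRC"/mysql-bin.*; do
  base=$(basename "$f")
  [ -e "$DST/$base" ] || cp "$f" "$DST/$base"
done

ls "$DST"
```

Re-running the loop is safe: already-copied files are skipped, so it can sit in a scheduled task. Note it should skip the binlog MySQL is currently writing to (rotate first with FLUSH LOGS).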
mysqlbackup
MySQL Enterprise Edition includes a backup utility, mysqlbackup. It allows you to make physical backups of MySQL as well as incremental backups. This utility is similar to Percona XtraBackup. More details are in the official documentation.
The main drawback is the price.
XtraBackup in a Docker container
XtraBackup can be run under Windows using Docker. Just map /var/opt/mysql to the directory with the DB files on Windows, e.g. C:\Program Files\MySQL\MySQL Server 5.6\data. It is a good alternative to mysqlbackup.
The main drawback is the difficulty of using Docker on Windows.
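Based on that volume mapping, the container invocation might look roughly like this; the image name, tag, and backup path are assumptions, so check Percona's documentation for the current procedure. The `run` helper prints the command instead of executing it:

```shell
# Dry-run helper: prints commands instead of running them.
run() { echo "+ $*"; }

run docker run --rm \
    -v 'C:\Program Files\MySQL\MySQL Server 5.6\data:/var/opt/mysql' \
    -v 'C:\backup:/backup' \
    percona/percona-xtrabackup \
    xtrabackup --backup --datadir=/var/opt/mysql --target-dir=/backup
```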
More details on how MySQL incremental backups can be performed can be found in this blog post.
--
Choose what you like. For a small database, I would choose a binlog backup, as opposed to the physical backups that XtraBackup and mysqlbackup produce. A binlog backup is a logical backup: binlogs are less "capricious" when restoring, and their verbosity can be offset with compression.
Currently the database on Amazon RDS is automatically backed up once a day; that seems to be the default behavior.
When I look at the "Backup retention period", 1 day is the smallest option. How do I take a backup every hour (or every 30 minutes) and (ideally) save it to my Amazon S3? Is this supported by Amazon RDS, or do I need to do a manual mysqldump and upload the backup to S3 through my own script?
I haven't found any answer that is current as of 2016.
There isn't any need. From Backing up and restoring an Amazon RDS DB instance:
In addition to the daily automated backup, Amazon RDS archives database change logs. This enables you to recover your database to any point in time during the backup retention period, up to the last five minutes of database usage.
In short, RDS already has this covered.
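For illustration, recovering from that archived change log is a single AWS CLI call, so no hourly snapshot script is needed. Instance identifiers and the timestamp below are placeholders; the `run` helper prints the command rather than executing it:

```shell
# Dry-run helper: prints commands instead of running them.
run() { echo "+ $*"; }

# Restore the instance to a specific moment within the retention window
run aws rds restore-db-instance-to-point-in-time \
    --source-db-instance-identifier mydb \
    --target-db-instance-identifier mydb-restored \
    --restore-time 2016-09-13T18:45:00Z
```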
I have been doing some research on the best backup procedures for largish (27.678 GB) MySQL database tables.
Currently we are using a program called Rapidsync (an offsite backup tool), but it is slow and it locks the tables it is currently backing up, causing downtime/slowness of SQL.
Our current server is running Windows 2008 R2, with SQL Server 2008 also on the same box.
Hardware specs for the dedicated server are:
16 GB RAM
CPU: Intel Xeon E3-1230 V2 @ 3.30 GHz
1 TB hard drive
In terms of databases, we have 58 in total, varying in size, some of which ideally need to be backed up weekly or even daily.
Through a program we use called Navicat, you can tunnel to a database using SSH and copy databases manually. Is this a reliable and feasible option if we were to install it on our local machine and copy them across? Or would it be more secure/efficient to use a SQL dump instead?
I hope I have given all of the necessary info but please do ask if you need to know more.
P.S. Only free options at the moment, as we are on a tight budget! :)
Thanks in advance
You mentioned SSH, so I suppose the backups can also be done on a server other than the database server itself. On Unix, you can use the great tool Percona XtraBackup, which is free and supports online (non-locking) backups of InnoDB database files as well as incremental backups. It may be possible to compile it on Windows too (part of it is written in C, part in Perl).
So you can set up a weekly full backup and daily incremental backups. The tool keeps track of which pages of the InnoDB data file have changed and copies only those.
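That weekly-full / daily-incremental scheme can be sketched with xtrabackup's own flags (backup paths are placeholders; the `run` helper prints the commands instead of executing them):

```shell
# Dry-run helper: prints commands instead of running them.
run() { echo "+ $*"; }

# Sunday: full backup
run xtrabackup --backup --target-dir=/backups/full

# Mon-Sat: incremental, containing only pages changed since the base
run xtrabackup --backup --target-dir=/backups/inc1 \
    --incremental-basedir=/backups/full
```

Subsequent incrementals would point --incremental-basedir at the previous incremental directory, so each day copies only that day's changed pages.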