Incremental backups (mariabackup) and dropped databases

I tried to search for this case everywhere but couldn't find anything that answers this (probably weird) question: what happens to incremental backups taken from a MariaDB server using mariabackup if one of the databases is dropped?
Suppose you drop one of the databases on a MariaDB server and then create an incremental backup, where the base full backup certainly includes the dropped database. Does applying the incremental backup when preparing the restore include that removal, or will the dropped database still be present in the fully prepared backup?
PS: I realize that mariabackup uses the InnoDB LSN to back up only the changes/diffs, but do those diffs include the removal of a table or a database?
My guess is that when preparing the incremental backup over the base, it would remove the tables and/or databases that are missing from the latest delta backup, but I might be wrong, which is why I'm asking.

Well, after trying out the scenario, I've found that the dropped databases do still exist in the fully prepared backup, but their tables are removed.
So I think that database structure changes are also included in the incremental backup: modifications to table columns, foreign keys, indices, table creation and dropping, etc. are all tracked. Dropping the database itself is NOT tracked; however, a dropped database will have all of its tables missing from the result of applying all the incremental backups to the base one.
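For anyone who wants to reproduce the test, here is a minimal sketch of the full + incremental + prepare cycle; the directory paths and credentials are placeholders, not anything mandated by mariabackup.

    # Full base backup (paths and credentials are placeholders).
    mariabackup --backup --user=root --password=secret \
        --target-dir=/backups/full

    # ... drop a database on the server, then take an incremental backup
    # based on the full one:
    mariabackup --backup --user=root --password=secret \
        --target-dir=/backups/inc1 --incremental-basedir=/backups/full

    # Prepare the base, then apply the incremental delta on top of it:
    mariabackup --prepare --target-dir=/backups/full
    mariabackup --prepare --target-dir=/backups/full --incremental-dir=/backups/inc1

    # /backups/full is now the fully prepared backup; list it to see which
    # database directories and table files it actually contains:
    ls /backups/full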

How to re-replicate ignored tables

I'm currently thinking about the following problem:
A customer has set up a simple master/slave replication between two MariaDB systems. For unknown reasons they set the Replicate_Wild_Ignore_Table option to skip "logdb.%". They have now decided to stop skipping that database and want logdb to be included in the replication again.
I'm curious: is it possible to somehow remove that filter and have the database in question replicated like the rest, or is there no way around the "stop slave, dump master, import dump, recreate replication based on current log position, start slave" procedure?
You can't assume that the master still has all the binlogs that once contained updates to the logdb.% tables. That is, even if you could re-apply those updates, do you have enough history to account for all changes to the tables?
Another risk: if you use statement-based replication and there were ever statements that referenced both a table in logdb.% and a table in another database, the replication filter skipped those statements. For example:
INSERT INTO mydb.mytable SELECT * FROM logdb.othertable;
Therefore even tables that are not in logdb.% might be compromised. The point is that you don't know for sure.
The bottom line is that you should definitely reinitialize the replica now by taking a current backup of the master, and avoid using replication filters in the future.
If you use InnoDB tables, you might consider using Percona XtraBackup to make the process easier. See https://www.percona.com/doc/percona-xtrabackup/2.3/howtos/setting_up_replication.html
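For completeness, here is a hedged sketch of that dump-based re-initialization with the filter removed; the host, credentials, filter line and binlog coordinates below are placeholders, and with Percona XtraBackup the idea is the same, only the copy step changes.

    # On the replica: stop replication and remove the filter line from my.cnf
    # (replicate-wild-ignore-table is a startup option, so this line must be
    # gone before the next restart):
    #   replicate-wild-ignore-table = logdb.%
    mysql -e "STOP SLAVE;"

    # On the master: take a consistent dump that records the binlog position
    # (--single-transaction is sufficient for InnoDB tables).
    mysqldump --all-databases --single-transaction --master-data=2 \
        --routines --events > master.sql

    # Back on the replica: load the dump, point replication at the coordinates
    # recorded near the top of the dump file, and restart the threads.
    mysql < master.sql
    head -n 30 master.sql | grep "CHANGE MASTER"
    mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567; START SLAVE;"
    mysql -e "SHOW SLAVE STATUS\G"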

Can I create an index in mysql without invoking creation of a temporary table the size of my target table?

MySQL 5.05 hosting an older application that still gets lots of love from the users. Unfortunately, I'm not anything other than a hack DBA at best, and am very hesitant about my ability to safely migrate to a new version of the database unless absolutely necessary. We are in the process of procuring a new application to take over responsibilities from the old one, but are probably a year or so out.
Anyway, I was patching the application the other day and added a column to a table. The command took a while to complete and in the meantime nearly filled the drive hosting the data files (the table is roughly 25 GB). I believe this was a function of the creation of a temporary table. For reasons I'm not clear on, the space did not become free again after the column was added; i.e., I lost roughly 25 GB of disk space. I believe (?) this was because the database was created with a single data file. I'm not really sure of the whys, but I do know that I had to free up some space elsewhere to get the drive back to an operable state.
That all being said, I've got the column added, but it is worthless to the application without an index. I've held off adding the index while trying to figure out whether it will create another massive, persistent 'temporary' table at index creation time. Can anyone out there give me insight into:
Will a CREATE INDEX or ALTER TABLE ... ADD INDEX statement result in the creation of a temporary table the same size as the existing table?
How can I recover the space that was added to ibdata1 when I added the column?
Any and all advice is greatly appreciated.
MySQL prior to version 5.1 adds/removes indices on InnoDB tables by building temporary tables. It's very slow and expensive. The only ways around this are either to upgrade MySQL to 5.1, or to dump the table (e.g. with mysqldump), drop it, recreate it with the new indices, and then restore it from the dump.
You can't shrink ibdata1 at all; your only option is to rebuild from scratch. It is possible to configure MySQL so it doesn't use one giant ibdata1 file for all the databases (InnoDB's file-per-table mode), which will keep this from happening again, provided you safely dump and recreate all your databases.
Ultimately, you probably want to (a sketch of this workflow follows the list):
Make a complete dump of your database
Upgrade to MySQL 5.1 or newer
Turn on InnoDB one-file-per-table mode
Restore the dump.
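A hedged sketch of those steps once you are on 5.1 or newer; paths, service commands and file names are placeholders, and the full rebuild is needed because InnoDB never gives ibdata1 space back to the operating system.

    # 1. Dump everything while the old server is still running.
    mysqldump --all-databases --routines > alldb.sql

    # 2. Drop the application databases (everything except the mysql schema),
    #    so no orphaned InnoDB table definitions are left behind.

    # 3. Enable file-per-table in my.cnf, under [mysqld]:
    #      innodb_file_per_table = 1

    # 4. Stop MySQL and remove the shared tablespace and its redo logs
    #    (safe only because everything was dumped in step 1).
    service mysql stop
    rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1

    # 5. Start MySQL (a small fresh ibdata1 is created) and reload the dump.
    #    Each InnoDB table now gets its own .ibd file, so future rebuilds
    #    (e.g. OPTIMIZE TABLE) return space to the filesystem.
    service mysql start
    mysql < alldb.sql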

How to retrieve deleted records from MySQL

Is there any method to retrieve deleted records from a MySQL database?
No.
Deleted records are gone (or munged so badly you can't recover them). If you have autocommit turned on, the system commits each statement as soon as you complete it. (If you have autocommit turned off, then do a ROLLBACK now; phew, you're saved. But you are running with autocommit, aren't you?)
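If you are not sure whether autocommit is on, it costs nothing to check from the still-open session before doing anything else; a minimal check:

    -- 1 means every statement was committed immediately; 0 means the
    -- transaction is still open and can be undone.
    SELECT @@autocommit;

    -- Only useful if it returned 0 and the DELETE ran in this same session:
    ROLLBACK;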
One other approach is to replay the activity that created the missing records, if you can. You could either re-run whatever programs did the updates, or replay them from a binary log (if you still have the binary log). That may not be possible, of course.
So you need to recover the data from somewhere: either a backup of your database (made using mysqldump) or of your file system (the data files of MyISAM tables are simply structured and sit on disk; recovering InnoDB tables is complicated by the shared use of ibdata files).
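If the binary log still covers the period in question, replaying it looks roughly like this; the log file name, time window and database name are placeholders, so review the extracted SQL before applying anything.

    # Extract the statements for the affected database in the relevant time
    # window, inspect them, then feed them back into the server.
    mysqlbinlog --database=mydb \
        --start-datetime="2013-04-01 00:00:00" \
        --stop-datetime="2013-04-02 00:00:00" \
        /var/lib/mysql/mysql-bin.000042 > replay.sql

    less replay.sql          # make sure it contains what you expect
    mysql -u root -p mydb < replay.sql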
There is a possible way to retrieve deleted records (depending upon your situation). Please check here:
https://stackoverflow.com/a/72303235/2546381

SQL Server synchronization with cutoff

I have a production DB (running on SQL Server 2008) with some ever-growing tables (orders etc). These tables are large and keep growing, so I want to make a cutoff at some point, but naturally, I do not want to lose the history entirely. So, I thought along the lines of:
One time: Backup the entire DB to another server
Periodically:
Back up differentially / synchronize from Production DB to Backup DB
In the production DB, delete all rows older than the cutoff period
This would not, of course, replace the regular backup plan of the production server, but rather would allow shrinking its size while keeping the historical data available off-site, where I can use it for statistics and whatnot.
Does this make sense? And if it does, could you point me towards a solution / tool which allows this, other than manually writing code for EACH of the ever-growing tables?
Any advice will be appreciated.
Micky
Maybe partitioning will help you.
It helps you split a table across different data files and filegroups, and you can back up and restore each partition independently.
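For example, here is a minimal sketch of monthly range partitioning on the order date; the object names and boundary dates are placeholders, and note that table partitioning in SQL Server 2008 requires Enterprise edition.

    -- Partition function: which boundary values split the rows.
    CREATE PARTITION FUNCTION pfOrdersByMonth (datetime)
        AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-02-01', '2012-03-01');

    -- Partition scheme: which filegroup each partition lands on.
    CREATE PARTITION SCHEME psOrdersByMonth
        AS PARTITION pfOrdersByMonth ALL TO ([PRIMARY]);

    -- New (or rebuilt) tables are then placed on the scheme.
    CREATE TABLE dbo.Orders (
        OrderID   int           NOT NULL,
        OrderDate datetime      NOT NULL,
        Amount    decimal(10,2) NOT NULL
    ) ON psOrdersByMonth (OrderDate);

With that in place, old partitions can be switched out or archived rather than deleting rows table by table.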

mysqldump skipping some tables while taking a backup

I'm trying to take a full dump of my database. While taking the dump, mysqldump skips a few tables, especially those with foreign keys. It's not that every table with foreign keys is skipped; only some specific tables!
I tried the -f switch. It forced it to include a few tables, but two tables are still being skipped.
Is this normal? I mean, does this happen? Does my schema have some problems? How can this be solved?
In reference to #Nikhil's comment about McAfee: I ran into a situation where McAfee was trying to read (and thus was blocking) the temporary files that MySQL creates when queries spill from memory to disk. We had to create a specific rule that prevented McAfee from scanning those temporary files so that MySQL wouldn't have issues. In this situation my educated guess is that McAfee was doing something similar with the mysqldump process.
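If you want to narrow down exactly which tables fail (whatever the underlying cause turns out to be), one low-tech approach is to dump them one at a time and watch the exit status; the database name, output path and credential handling below are placeholders.

    #!/bin/bash
    # Dump each table individually so the failing ones, and mysqldump's error
    # message for them, are easy to spot. Assumes credentials in ~/.my.cnf.
    DB=mydb
    OUT=/backup/per-table
    mkdir -p "$OUT"

    for TABLE in $(mysql -N -B -e "SHOW TABLES FROM $DB"); do
        if ! mysqldump --single-transaction "$DB" "$TABLE" > "$OUT/$DB.$TABLE.sql"; then
            echo "FAILED: $TABLE" >&2
        fi
    done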