Move Indexes and Users from Couchbase 5.1 to Couchbase 6.0

A similar question has been asked here, but it does not solve my problem.
I am running a Couchbase 5.1 server and would like to transfer EVERYTHING to a newly created Couchbase 6.0 server. Documents and views have transferred as expected, but the primary and global secondary indexes have not, even though the Couchbase documentation here (for cbbackup 5.1) and here (for cbrestore 6.0) indicates they should. Both clusters are single-node.
The commands used are:
./cbbackup http://source_IP:8091 ~/backups -u Username -p Password
./cbrestore ~/backups http://dest_IP:8091 -b Bucketname -u Username -p Password
(The restore command was run once per bucket, and every run reported success.)
There are so many indexes that re-creating each one by hand would be a major chore, and version 5.1 does not have the Alter Index capability.
Also, is there any way to transfer users as well? I have not been able to find any documentation on this.
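In case the indexes do end up having to be re-created by hand, one way to avoid doing it from memory is to script the definitions out of the 5.1 cluster with N1QL. This is only a sketch: it assumes cbq is available on the source node, that the query service is running there, and that the same credentials as above work:
./cbq -e http://source_IP:8091 -u Username -p Password \
  -s "SELECT name, keyspace_id, is_primary, index_key FROM system:indexes;"
The output lists each index with its keyspace and indexed keys, which is enough to generate the matching CREATE INDEX statements for the 6.0 cluster.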

Related

What is the risk of `mysql_upgrade` without doing `mysqldump`?

After upgrading MySQL from 5.5 to 5.7 a few months ago, I forgot to run mysql_upgrade.
Now I am facing some problems: the mysql, sys, and performance_schema databases are missing and root privileges are broken. A lot of Access denied for user 'root'... messages pop up when I try to do anything involving MySQL user privileges.
This Stack answer should solve my problem, but I need to know it won't affect any of the schemas, data, etc.
My database is pretty large: about 10 GB across roughly 50 tables, and I'm afraid something bad could happen. I know the answer will be mysqldump.
But a full backup will take a long time, maybe an hour, and the business won't accept that downtime.
So what is the risk of mysql_upgrade without doing mysqldump?
The risk of doing anything administrative to your database without backups is unacceptably high... not because of any limitations in MySQL per se, but because we're talking about something critical to your business. You should be backing it up no less often than the interval of data you are willing to lose.
If you are using InnoDB, then use the --single-transaction option of mysqldump and there should be no locking, because MVCC handles the consistency. If you are not using InnoDB, that is a problem in itself, but using --skip-lock-tables should minimize locking.
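As a sketch of what those two variants look like (database name and output path are placeholders):
# InnoDB: consistent dump with no table locks, relying on MVCC
mysqldump --single-transaction -u root -p yourdb > /backups/yourdb.sql
# Non-InnoDB tables: at least avoid the default LOCK TABLES behaviour
mysqldump --skip-lock-tables -u root -p yourdb > /backups/yourdb.sql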
Note that it should be quite safe to kill a mysqldump in progress if you find it is causing issues -- find the thread-id of the dump using SHOW PROCESSLIST; and then KILL QUERY #; where # is the ID of the dump connection from the process list.
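For example, from a second session (1234 is a placeholder for whatever Id the process list shows for the dump connection):
# Find the Id of the connection running the dump, then abort its running statement
mysql -u root -p -e "SHOW PROCESSLIST;"
mysql -u root -p -e "KILL QUERY 1234;"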
The potential problem with the answer you cited is that 5.1 to 5.5 is a supported upgrade path, because those two versions are sequential; 5.5 to 5.7 is not. You should have upgraded to 5.6 and then to 5.7, running the appropriate version of mysql_upgrade both before and after each step (appropriate meaning the version of the utility matching the version of the server running at the moment).
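A minimal sketch of that staged path, just to make the ordering concrete. The stop/install/start steps are left as comments because they are platform specific; only the binaries underneath each mysql_upgrade call change:
# Still on 5.5: take a backup, run the 5.5 mysql_upgrade, then stop the server and install 5.6.
mysql_upgrade -u root -p
# On 5.6: start the server and run the 5.6 mysql_upgrade; verify, then repeat the cycle for 5.7.
mysql_upgrade -u root -p
# On 5.7: finish with the 5.7 mysql_upgrade.
mysql_upgrade -u root -p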
You may be in a more delicate situation than you imagine... or you may not.
Given similar circumstances, I would not want to do anything less than stop the server completely, clone it by copying all the files to a new machine, and test the remediation steps against the clone.
If this system is business-critical, it should have a live, running replica server, that could be promoted to master and permanently replace this machine in the event of a failure. In a circumstance like this one, you would apply your fixes to the replica and promote it.
Access denied for user 'root'... may or may not be related to the schema incompatibilities.

How to confirm mysql-mariadb database migration is OK?

I've recently migrated databases from an Ubuntu server to a MariaDB server (on CentOS 7) using mysqldump and then importing with the mysql command. I have phpMyAdmin set up in this environment, and although the migration appears to have been successful, I've noticed phpMyAdmin is reporting different disk space used and slightly different row counts for some of the tables.
Is there any way to determine if anything has been 'missed' or any way to confirm the data has all been copied across with the migration?
I've run a mysqlcheck on both servers to check db consistency but I don't think this really confirms the data is the same.
Cheers,
Tim
Probably not a problem.
InnoDB, when using SHOW TABLE STATUS, gives you only an approximation of the number of rows.
The dump and reload rebuilt the data and the indexes. This is very likely to lead to files of different sizes, even if the logical contents are identical.
Do you have any evidence of discrepancies other than what you mentioned?
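If you want something stronger than the approximate numbers phpMyAdmin reports, you can compare exact per-table row counts, or a content checksum, on both servers. A minimal sketch with placeholder credentials and names:
# Exact row count for one table; run the same statement on both servers and compare.
mysql -u root -p -e "SELECT COUNT(*) FROM yourdb.yourtable;"
# Full-scan content checksum; identical data should give identical values, although
# row-format or type differences between MySQL and MariaDB can legitimately change it.
mysql -u root -p -e "CHECKSUM TABLE yourdb.yourtable;"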

How to recover couchbase to the status of a backup?

I use cbbackup to back up Couchbase. Then I perform some operations on the database, like adding and deleting documents. After these operations, I want to discard all the changes and restore the database to the state of the backup.
I ran the following test; cbbackup and cbrestore did not help. How can I achieve my goal?
Backup db
$ rm -rf /tmp/cbbackup
$ /opt/couchbase/bin/cbbackup http://cb_ip:8091 /tmp/cbbackup -u 'xxx' -p '***' -v
Remember the item count of the social bucket.
Count: 33
Delete one document from social bucket
Delete id: existing-item-110
Add two new documents to the social bucket
Ids: new-item-1, new-item-2
Remember the item count of the social bucket.
Count: 34
Restore social bucket
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -v
Verify if the deleted document is back and if the added documents are removed.
Result: Count: 34, no change
The deleted item is not back.
The added items are not deleted.
Conclusion: cbrestore cannot restore the database to the backup point in time. The changes made after the backup are not undone.
Using cbtransfer to restore the data gives the same result and conclusion as cbrestore:
$ /opt/couchbase/bin/cbtransfer /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -v
Before I directly answer your question, let me explain two important concepts about cbbackup and cbrestore.
These tools do not transfer raw data files during the backup and restore process. During backup, data is streamed out of the server and written to disk; during restore, data is put back into the database using set operations.
Couchbase has the ability to do conflict resolution during sets. This means that if you back up a key, then update it, and then do a restore with conflict resolution enabled, the set performed during the restore will be discarded, since it is not the latest update.
Below are two backup scenarios that are applicable to your use case.
First, let's take a look at the point-in-time restore scenario. To achieve this you should delete and recreate your bucket and then run cbrestore. The reason is that cbrestore is not aware of any keys you added after the backup and so cannot delete them.
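A sketch of that sequence using couchbase-cli (the RAM quota here is a placeholder; recreate the bucket with the same settings it had originally):
$ /opt/couchbase/bin/couchbase-cli bucket-delete -c cb_ip:8091 -u 'xxx' -p '***' --bucket mybucket
$ /opt/couchbase/bin/couchbase-cli bucket-create -c cb_ip:8091 -u 'xxx' -p '***' --bucket mybucket --bucket-type couchbase --bucket-ramsize 512
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -v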
In the other scenario, you just want to force-overwrite all of the data in your bucket with the data you backed up. In this case you want to disable conflict resolution, which you can do with the "-x conflict_resolve=0" flag. This would work in a case where I backed up 1000 keys, then updated them, and then wanted to revert the updates I made after the backup. (Note that the conflict_resolve flag was accidentally removed in Couchbase 4.0 and 4.1, but will be added back in 4.1.1 and 4.5.)
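Using the restore command from the test above, that looks like this (assuming a server version where the flag is honored):
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -x conflict_resolve=0 -v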
On a final note, I would recommend against using cbtransfer, since it is not tested as well as cbbackup and cbrestore; that tool is generally only used as a last resort.

MYSQLDUMP failing. Couldn't execute 'SHOW TRIGGERS LIKE errors like (Errcode: 13) (6) and (1036) [duplicate]

This question already has answers here: mysqldump doing a partial backup - incomplete table dump (closed as a duplicate).
Does anyone know why MYSQLDUMP would only perform a partial backup of a database when run with the following instruction:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump
Sometimes a full backup is performed; at other times the backup stops after including only a fraction of the database, and that fraction varies.
The database does have several thousand tables totalling about 11 GB. But most of these tables are quite small, with only about 1,500 records; many have only 150-200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data stored.
But I am informed that the number of tables in a schema in MySQL is not an issue. There are also no performance issues during normal operation.
And the alternative of using a single table is not really viable because all of these tables have different column name signatures.
I should add that the database is in use during the backup.
Well, after running the backup with this command:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump
I get this:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6)
Which I think is a permissions problem.
I doubt any one table in my schema is in the 2GB range.
I am using MySQL Server 5.5 on a Windows 7 64 bit server with 8 Gb of memory.
Any ideas?
I am aware that changing the number of files which MySQL can open, the open_files_limit parameter, may cure this matter.
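For reference, open_files_limit can be raised by adding something like open_files_limit = 8192 (the value is only an example) under the [mysqld] section of my.ini and restarting the MySQL service. To check the current value:
mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"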
Another possibility is interference from anti virus products as described here:
How To Fix Intermittent MySQL Errcode 13 Errors On Windows
There are a few possibilities for this issue that I have run into and here is my workup:
First: Enable error/debug logging and/or verbose output, otherwise we won't know of an error that could be creating the issue:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump
So long as debug is enabled in your distribution, you should now be able both to log errors to a file and to view output on a console. The issue is not always clear here, but it is a great first step.
Have you reviewed your error or general logs? They do not often contain useful information for this issue, but sometimes they do, and every little bit helps with tracking these problems down.
Also watch SHOW PROCESSLIST while you are running this. See whether you see states like WAITING FOR ... LOCK / METADATA LOCK, which would indicate the operation is unable to acquire a lock because of another operation.
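One way to watch for that from a second console while the dump runs (credentials are placeholders):
mysql -u root -p -e "SELECT id, user, time, state, info FROM information_schema.processlist WHERE state LIKE '%lock%';"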
Depending on the information gathered above: assuming I found nothing and had to shoot blind, here is what I would do next, covering some common cases I have experienced:
Max packet size errors: If you receive an error regarding max_allowed_packet, you can add --max_allowed_packet=160M to your parameters to see if that is large enough:
"c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160M > c:\backup\visualRSS.dump
Try to reduce run time and file size using the --compact flag. mysqldump adds everything you need to create the schema and insert the data, along with other information. You can significantly reduce run time and file size by having the dump contain only the INSERTs for your schema, avoiding all the statements that create the schema and other non-critical info within each insert. This can mitigate a lot of problems where it is appropriate for your use, but you will then want a separate dump run with --no-data each time to export your schema, so you can create all the empty tables, etc.
Create the raw data dump, excluding add/drop table, comment, lock and key check statements:
"c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump
Create a schema-only dump with no data:
"c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log --no-data > c:\backup\visualRSS.dump
Locking issues: By default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while it is dumping, so it wants to acquire a READ lock on the table; DDL operations and your global lock type may create this situation. Without seeing the hung query, you will typically see a small backup file size, as you described, and the mysqldump operation will usually sit there until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set REPEATABLE READ isolation for the transaction and essentially take a snapshot of the table without blocking operations or being blocked, save for some older server versions that have issues with ALTER/TRUNCATE TABLE while in this mode.
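Adapted to the dump above, that looks something like this for InnoDB tables:
"c:\path\to\mysqldump" --single-transaction -B yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump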
File size issues: If I read it incorrectly and this backup has NOT successfully run before, which would point to a potential 2 GB file size issue, you can try piping the mysqldump output straight into something like 7-Zip on the fly:
mysqldump | 7z.exe a -si name_in_outfile output_path_and_filename
If you continue to have issues, or there is an unavoidable problem prohibiting mysqldump from being used, Percona XtraBackup is what I prefer, or there is Enterprise Backup for MySQL from Oracle. XtraBackup is open source, far more versatile than mysqldump, has a very reliable group of developers working on it, and has many great features that mysqldump does not have, like streaming/hot backups. Unfortunately the Windows build is old, unless you can compile it yourself or run a local Linux VM to handle that for you.
Very important: I noticed that you are not backing up your information_schema database; it needs to be named explicitly if it is of significance to your backup scheme.

Mysql slow.log user specific

I use slow query logging (slow.log) on my MySQL server to catch bottlenecks in my scripts, but at the same time I use phpMyAdmin on this server. My scripts and phpMyAdmin use different MySQL user accounts, and when I analyze the slow.log file I see a lot of noise from the phpMyAdmin queries. Is it possible to configure MySQL to log slow queries only from specific users?
If you are using MySQL 5.6, you can use the Performance Schema and look at the different statement summaries there.
There are summaries by account (someuser@somehost), by user name alone (someuser), or by host alone (somehost).
See the following tables:
performance_schema.events_statements_summary_by_account_by_event_name
performance_schema.events_statements_summary_by_user_by_event_name
performance_schema.events_statements_summary_by_host_by_event_name
performance_schema.events_statements_summary_global_by_event_name
http://dev.mysql.com/doc/refman/5.6/en/performance-schema.html
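As a concrete example, total statement counts and time per statement type for one user (the user name is a placeholder) can be pulled with something like:
mysql -u root -p -e "SELECT user, event_name, count_star, sum_timer_wait/1000000000000 AS total_seconds FROM performance_schema.events_statements_summary_by_user_by_event_name WHERE user = 'script_user' ORDER BY sum_timer_wait DESC LIMIT 10;"
The timer columns are reported in picoseconds, hence the division to get seconds.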