How to confirm MySQL-to-MariaDB database migration is OK?

I've recently migrated databases (from an Ubuntu server) to a MariaDB database (on a CentOS 7 server) using 'mysqldump' and then importing with the 'mysql' command. I have this set up in a phpMyAdmin environment, and although the migration appears to have been successful, I've noticed phpMyAdmin is reporting different disk space used and also showing slightly different row numbers for some of the tables.
Is there any way to determine if anything has been 'missed' or any way to confirm the data has all been copied across with the migration?
I've run a mysqlcheck on both servers to check DB consistency, but I don't think this really confirms that the data is the same.
Cheers,
Tim

Probably not a problem.
InnoDB, when using SHOW TABLE STATUS, gives you only an approximation of the number of rows.
The dump and reload rebuilt the data and the indexes. This is very likely to lead to files of different sizes, even if the logical contents are identical.
Do you have any signs of discrepancies other than the ones you mentioned?
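If you want more certainty than the estimates phpMyAdmin displays, one option is to compare exact row counts and table checksums on both servers. A minimal sketch, assuming hypothetical database/table names 'mydb' and 'customers' (run the same statements on each server and compare the output):
SELECT COUNT(*) AS exact_rows FROM mydb.customers;  -- exact count; SHOW TABLE STATUS only estimates InnoDB rows
CHECKSUM TABLE mydb.customers EXTENDED;             -- full-table checksum; can legitimately differ across versions/row formats even for identical data
Exact COUNT(*) values should match if nothing was missed; the checksums are a stronger check, but they are only comparable when both servers store rows in the same format.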

Related

Change row format on production servers?

I am currently prepping to upgrade from MySQL 5.7 to MySQL 8.
I am using RDS on AWS with a master server and read replicas. The read replicas use MySQL replication but are read-only copies.
One of the issues I need to resolve prior to upgrade is that I have some tables on production databases with COMPACT row format which need updating to DYNAMIC.
I know I can do this with the following and have a script which will find and update all tables needed.
ALTER TABLE `tablename` ROW_FORMAT=DYNAMIC;
There are a large number of potentially large tables (millions of rows) that need updating.
What does this change actually do in the background? Is it safe to run this on a production server whilst it is in use? Does it lock the tables whilst it makes the change?
I have run a test on a restored copy of the server. This takes a while, as I'd expect, and as a result it's hard for me to verify that everything keeps working during the whole process. It does complete successfully eventually, though.
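For reference, here is a sketch of the kind of query such a script can use to find the affected tables, plus the form of ALTER I'm planning to run; the ALGORITHM/LOCK clauses are my assumption about how to request an online rebuild that errors out rather than blocking writes:
-- Find InnoDB tables still using the COMPACT row format (system schemas excluded):
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND ROW_FORMAT = 'Compact'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');

-- Request an in-place, online rebuild; the statement should fail rather than silently lock the table:
ALTER TABLE `tablename` ROW_FORMAT=DYNAMIC, ALGORITHM=INPLACE, LOCK=NONE;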

MySQL master-slave: can I add a slave when the master server already has a lot of data?

Is there a way to replicate MySQL when the master server already has a lot of data? I tried the normal way, but I had difficulty getting the MASTER_LOG_POS value. How can the slave server replicate data that already existed on the master server?
Generally you start with an exact full copy of your existing database. This means making a physical copy of your MySQL data directory (while the server is off), taking a (consistent) snapshot, or using a tool like Percona XtraBackup.
Only after you have two identical MySQL servers can you start replicating. Note that a tool like mysqldump is generally not a good choice for taking such a consistent snapshot.
If you have a relatively small amount of data you could use mysqldump --master-data=1 --single-transaction. This will create a snapshot with the correct master-binlog and position required. This should not be used for production environments or large amounts of data.
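To make that second option concrete, here is a rough sketch (the host, user, password, and binlog coordinates below are placeholders; with --master-data=1 the dump itself contains the real CHANGE MASTER TO line with the correct file and position). After loading the dump on the new slave:
CHANGE MASTER TO
  MASTER_HOST='master.example.com',    -- placeholder
  MASTER_USER='repl',                  -- placeholder replication account
  MASTER_PASSWORD='secret',            -- placeholder
  MASTER_LOG_FILE='mysql-bin.000123',  -- taken from the dump header
  MASTER_LOG_POS=4567;                 -- taken from the dump header
START SLAVE;
-- then check that Slave_IO_Running and Slave_SQL_Running are both 'Yes':
SHOW SLAVE STATUS\G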

MySQL insert/update slow after updating schema and loading via source

I have a J2EE web app with Tomcat/MySQL that I'm developing at home, and I have it deployed on a home server. I spent some time upgrading it and made some changes to the DB schema.
I rewrote the Java/JSP/JavaScript side of it, then dumped the database into a text file on my local desktop, copied it to the server, and loaded that file via the source command, making it the production database.
When I did that, I immediately noticed that inserts/updates were extremely slow. I had never had an issue with that in the previous version of the database.
I tried dropping the database altogether and re-creating, again using the mysql source command. Writes still slow.
Both the production and test versions of the db are mysql running on ubuntu.
test : 5.7.22-0ubuntu18.04.1
server: 5.7.20-0ubuntu0.16.04.1
I don't know if the 16.04.1 makes a difference, but the previous version of the database had no problems.
I've done some searching, and most of the results are related to InnoDB settings. But since the previous version worked with no issues, I'm wondering if it's something obvious, like the text file importing some setting I'm not seeing.
All the tables in the mysqldump file have this at the top:
LOCK TABLES `address` WRITE;
/*!40000 ALTER TABLE `address` DISABLE KEYS */;
Not sure if this is part of the problem? My limited understanding of table locks is that they're tied to a user and their current session. But again, previous versions used mysqldump files without this issue.
All the tables use smallint auto increment values for primary keys, and the db is small, most tables only have about 1000 rows and I am currently the only user.
Also, the test version of the database, which has an identical schema, runs with no problems.
Any ideas?
thanks!
I was able to resolve this by adding the following variable in /etc/mysql/mysql.conf.d/mysqld.cnf:
innodb_flush_log_at_trx_commit=2
Found a couple of related questions here (re InnoDB settings) and here (for checking DB settings).
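For anyone comparing two servers the way I did, a quick sketch of how to check the setting and change it at runtime before persisting it in mysqld.cnf (the value 2 trades a little crash durability for much faster commits):
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';  -- compare the value on the test and production servers
SET GLOBAL innodb_flush_log_at_trx_commit = 2;         -- takes effect immediately; the config entry makes it survive restarts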

MySQL DB running in Ubuntu, but not running in Windows

I have a MySQL DB running on Ubuntu Server (a live server), and it works well. I copied the DB folder for development so I could run it on Windows, but when I put that folder into my Windows-based XAMPP, it doesn't go well: some tables show an error, with "in use" displayed in the collation column. The tables with errors use the InnoDB engine, and the rest are MyISAM. I'm wondering why this happens.
You shouldn't be copying folders; there are export and import utilities for migrating databases from one system to another.
For InnoDB, there are three choices:
Copy the entire data directory tree, not just one table or database, and do it with MySQL shut down. This does not allow for any kind of mix-and-match of tables.
Replication -- one server is the Master, the other is the Slave. But then all writes must go to the Master. And it only maintains consistency; it does not provide the initial copy.
"Transportable tablespaces". This is a way to disconnect a single table (or partition) so that you can copy its file to another server; then you perform some other magic to connect the table on the other server (see the sketch after the links below).
https://dev.mysql.com/doc/refman/5.6/en/tablespace-copying.html
https://dev.mysql.com/doc/refman/5.7/en/innodb-transportable-tablespace-examples.html
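For that third option, the commands have roughly this shape (only a sketch; 'mytable' is a placeholder, both servers need innodb_file_per_table enabled, and the linked pages give the exact procedure for your version):
-- Destination server: create the table with the identical definition first, then detach its empty tablespace.
ALTER TABLE mytable DISCARD TABLESPACE;
-- Source server: quiesce the table; this writes mytable.cfg next to mytable.ibd.
FLUSH TABLES mytable FOR EXPORT;
-- ... copy mytable.ibd and mytable.cfg into the destination's database directory ...
UNLOCK TABLES;   -- source server, after the copy completes
-- Destination server: attach the copied files.
ALTER TABLE mytable IMPORT TABLESPACE;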

Innodb Tables Do Not Exist

System Type: 64-bit
Windows Edition: Windows Server 2008 R2 Enterprise
Microsoft Windows Server: 6.1
MySQL Workbench Version: 6.3
I manage a multi-site WordPress installation that has grown to 33,000 tables, so it's getting really slow, and I'm trying to optimize our installation. I've been working on a DEV server and ended up deleting the whole site. Assuming that copying the live server is not an option at this point (and please trust me that it isn't), can you please help me with the following:
I highlighted and copied tables from the live server and pasted them into the DEV server folder. Workbench recognizes the tables in the Schemas area, but when I write a SELECT query against the InnoDB tables, it says they don't exist. The MyISAM tables, however, query successfully.
I'm just confused because I know the tables are in the right folder, but for some reason they don't query. I saw a solution that says to create the tables with a regular query and then overwrite them in the folder, but this isn't realistic for me because there are 33,000 tables. Do any of you have ideas as to how I can get these InnoDB tables working again?
You cannot copy individual InnoDB tables via the file system.
You can use "transportable tablespaces" to do such. See the documentation for the MySQL version you are using. (This is not the same version as for Workbench.)
It is not wise to do the above, but it is possible. Instead, you should use some dump/load mechanism, such as mysqldump or xtrabackup.
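If you want to see the mismatch directly, InnoDB's own data dictionary can be queried on MySQL 5.6/5.7; tables whose .ibd files were merely copied into the folder will not be listed there, which is why the SELECTs fail. A sketch (the schema name is a placeholder; on MySQL 8.0 the view is named INNODB_TABLES instead):
SELECT NAME, ROW_FORMAT, SPACE
FROM information_schema.INNODB_SYS_TABLES
WHERE NAME LIKE 'wordpress_db/%';   -- InnoDB stores names as 'schema/table'; 'wordpress_db' is a placeholder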
WordPress has the design flaw of letting you get to 33,000 tables. This puts a performance strain on the OS because of all the files involved.
In moving to InnoDB, I recommend you carefully think through the choice of innodb_file_per_table. The considerations include which MySQL version you are using, how big the tables are, and whether you will be using "transportable tablespaces".
I do have one strong recommendation for changing indexes in WP: see slide 63 of http://mysql.rjweb.org/slides/cook.pdf. It will improve performance on many of the queries.