MySQL insert/update slow after updating schema and loading via source - mysql

I have a J2EE web app with Tomcat/MySQL that I'm developing at home, and I have it deployed on a home server. I spent some time upgrading it and made some changes to the db schema.
I rewrote the Java/JSP/JavaScript side of it, then dumped the database into a text file on my local desktop, copied it to the server, and loaded that file via the source command, making it the production database.
When I did that, I immediately noticed that inserts/updates were extremely slow. I had never had an issue with that in the previous version of the database.
I tried dropping the database altogether and re-creating, again using the mysql source command. Writes still slow.
Both the production and test versions of the db are mysql running on ubuntu.
test : 5.7.22-0ubuntu18.04.1
server: 5.7.20-0ubuntu0.16.04.1
I don't know if the 16.04.1 makes a difference, but the previous version of the database had no problems.
I've done some searching, and most of the results are related to InnoDB settings. But since the previous version worked with no issues, I'm wondering if it's something obvious, like the text file importing some setting I'm not seeing.
All the tables in the mysqldump file have this at the top:
LOCK TABLES `address` WRITE;
/*!40000 ALTER TABLE `address` DISABLE KEYS */;
Not sure if this is part of the problem? My limited understanding of table locks is that they're related to a user and their current session. But again, previous versions used mysqldump files without this issue.
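If it's relevant, the rest of each table's block in the dump (the standard mysqldump layout, data elided here) re-enables the keys and releases the lock right after the inserts:
LOCK TABLES `address` WRITE;
/*!40000 ALTER TABLE `address` DISABLE KEYS */;
INSERT INTO `address` VALUES (...);
/*!40000 ALTER TABLE `address` ENABLE KEYS */;
UNLOCK TABLES;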
All the tables use smallint auto increment values for primary keys, and the db is small, most tables only have about 1000 rows and I am currently the only user.
Also, the test version of the database, which has an identical schema, runs with no problems.
Any ideas?
thanks!

I was able to resolve it by adding this variable in /etc/mysql/mysql.conf.d/mysqld.cnf:
innodb_flush_log_at_trx_commit=2
Found a couple of questions here, re InnoDB settings, and here, for checking db settings.
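In case it helps anyone else, the current value can be checked and (assuming SUPER privileges) changed at runtime before making it permanent in the config file:
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- takes effect immediately, but is lost on restart unless it is also set in mysqld.cnf
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
Note that a value of 2 trades a little durability (up to roughly a second of transactions can be lost on an OS crash) for much faster commits.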

Related

Change row format on production servers?

I am currently prepping to upgrade from MySQL 5.7 to MySQL 8.
I am using RDS on AWS with a master server and read replicas. The read replicas use MySQL replication but are read-only copies.
One of the issues I need to resolve prior to upgrade is that I have some tables on production databases with COMPACT row format which need updating to DYNAMIC.
I know I can do this with the following and have a script which will find and update all tables needed.
ALTER TABLE `tablename` ROW_FORMAT=DYNAMIC;
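For context, the affected tables can be found with a query roughly like this against information_schema (schema exclusions are just an example):
-- list InnoDB tables still using the COMPACT row format
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND ROW_FORMAT = 'Compact'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');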
There are a large number of potentially large tables (millions of rows) that need updating.
What does this change actually do in the background? Is it safe to run this on a production server whilst it is in use? Does it lock the tables whilst it makes the change?
I have run a test on a restored copy of the server. This takes a while, as I'd expect, and as such it's hard for me to verify that everything keeps working fine during the whole process. It does complete successfully eventually, though.

Orphaned tables crashing MySQL

I have a database that is creating loads of orphaned tables, and these tables with the hash at the beginning of the name (#sql-whatever) are causing MySQL to crash. This has started happening weekly, so I've created a cron job to remove the files every 5 minutes as a band-aid fix.
How can I find the root cause of this issue?
CMS: Drupal 7
Server Setup:
Apache: 2.4.34
PHP: 5.6.37
MySQL: 5.6.39
Perl: 5.26.0
This usually happens when InnoDB is interrupted while performing an ALTER TABLE command. You should not remove the files themselves but rather perform a DROP TABLE on the table(s) in question.
To determine the actual root cause of the issue we would need quite a bit more information, such as what app / software / framework etc. you are using.
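As a rough sketch (the exact table name will differ), the orphaned intermediate tables can be listed from the data dictionary and then dropped; on MySQL 5.6 the DROP needs the #mysql50# prefix so the name is taken literally:
-- list leftover intermediate tables from interrupted ALTERs
SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME LIKE '%#sql%';
-- drop one of them (hypothetical name)
DROP TABLE `#mysql50##sql-ib12345-67890`;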

How to confirm mysql-mariadb database migration is OK?

I've recently migrated databases (from an Ubuntu server) to a MariaDB database (on a CentOS 7 server) using 'mysqldump' and then importing with the 'mysql' command. I have this set up in a phpMyAdmin environment, and although the migration appears to have been successful, I've noticed phpMyAdmin is reporting different disk space used and also showing slightly different row numbers for some of the tables.
Is there any way to determine if anything has been 'missed' or any way to confirm the data has all been copied across with the migration?
I've run a mysqlcheck on both servers to check db consistency but I don't think this really confirms the data is the same.
Cheers,
Tim
Probably not a problem.
InnoDB, when using SHOW TABLE STATUS, gives you only an approximation of the number of rows.
The dump and reload rebuilt the data and the indexes. This is very likely to lead to files of different sizes, even if the logical contents are identical.
Do you have any clues of discrepancies other than what you mentioned?
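If you want a stronger check than the approximate statistics, one option (table name is just an example) is to compare exact counts, and optionally checksums, table by table on both servers:
-- exact row count; can be slow on large InnoDB tables
SELECT COUNT(*) FROM customers;
-- content checksum to compare between servers; note the result depends on the row format,
-- so it is only meaningful when both sides store the table the same way
CHECKSUM TABLE customers;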

Innodb Tables Do Not Exist

System Type: 64-bit
Windows Edition: Windows Server 2008 R2 Enterprise
Microsoft Windows Server: 6.1
MySQL Workbench Version: 6.3
I manage a multi-site WordPress installation and it has grown to 33,000 tables, so it's getting really slow. I'm trying to optimize our installation. I've been working on a DEV server and ended up deleting the whole site. Assuming that copying the live server is not an option at this point (and please trust me that it isn't), can you please help me with the following:
I highlighted and copied table files from the live server and pasted them into the DEV server's data folder. Workbench recognizes the tables in the Schemas area, but when I write a SELECT query against the InnoDB tables, it says they don't exist. The MyISAM tables, however, query successfully.
I'm just confused because I know the tables are in the right folder, but for some reason they don't query. I saw a solution that says to create the tables with a regular query and then overwrite them in the folder, but this isn't realistic for me because there are 33,000 tables. Do any of you have some ideas as to how I can get these InnoDB tables working again?
You cannot copy individual InnoDB tables via the file system.
You can use "transportable tablespaces" to do this. See the documentation for the MySQL version you are using. (This is not the same as the Workbench version.)
It is not wise to do the above, but it is possible. Instead, you should use some dump/load mechanism, such as mysqldump or xtrabackup.
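For reference, the transportable-tablespace flow mentioned above looks roughly like this on 5.6+ with innodb_file_per_table enabled (table name is hypothetical):
-- on the source server: quiesce the table and write out its .cfg metadata file
FLUSH TABLES wp_posts FOR EXPORT;
-- copy wp_posts.ibd and wp_posts.cfg out of the datadir, then release the lock
UNLOCK TABLES;
-- on the destination server: create the table with the identical definition, then
ALTER TABLE wp_posts DISCARD TABLESPACE;
-- copy the .ibd and .cfg files into the destination datadir, then
ALTER TABLE wp_posts IMPORT TABLESPACE;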
WordPress has the design flaw of letting you get to 33,000 tables. This puts a performance strain on the OS because of all the files involved.
In moving to InnoDB, I recommend you carefully think through the choice of innodb_file_per_table. The considerations involve which MySQL version you are using, how big the tables are, and whether you will be using "transportable tablespaces".
I do have one strong recommendation for changing indexes in WP: See slide 63 of http://mysql.rjweb.org/slides/cook.pdf . It will improve performance on many of the queries.

Mysql dump: Copying MYI files in production is safe?

I want to copy some big tables, which are used very often, from the main server to a local server, in production mode of course.
Is it safe?
Suggestions and tools are welcome. :)
I'm guessing from your asking about MYI files that all of your tables are MyISAM tables and not InnoDB ones. Each MyISAM table is made up of three files: .frm, .MYD and .MYI, which contain the structure, data and indexes respectively.
People advise against copying these raw files from running systems but I've found that as long as you're sure nothing's writing to the tables then copying them works fine (I've done this more times than I care to remember).
If you're doing this on a replica then just stop replication on the slave before copying. If it's just a single server or the master, I'd recommend running FLUSH TABLES WITH READ LOCK before you start copying files; this will prevent any process from writing to the tables. When you're done, release the lock with UNLOCK TABLES.
I would always recommend doing a CHECK TABLE on the tables you've copied in this manner. I think this is what I've used in the past: mysqlcheck --all-tables --fast --auto-repair
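Putting the pieces together, a rough outline of the lock-copy-verify sequence (table names are placeholders) is:
FLUSH TABLES WITH READ LOCK;
-- keep this session open and copy the .frm, .MYD and .MYI files while the lock is held
UNLOCK TABLES;
-- then, on the server the files were copied to:
CHECK TABLE big_table_1, big_table_2;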
If you're using LVM on the server then taking an LVM snapshot could be another way of getting hold of a clean snapshot.
If you're going to be doing this regularly I'd recommend either using replication to keep the local server up to date, or setting up a slave which you can use for taking backups (as it's not the main database there's no problem with stopping it, dumping tables, etc.).
Yes, it's alright if you:
Shut the mysql server down cleanly before you copy the files, or at least take a global read lock
Take the .frm, .MYI and .MYD files at the same time.
Have an identical my.cnf and run the same MySQL version on each system
So it's basically ok, but not necessarily a good idea.
If you have different versions of MySQL (you can generally only move to a more recent version, not an older one), or are running with a significantly different my.cnf (e.g. any of the fulltext indexing parameters differ, and you have fulltext indexes), you may need to rebuild the table. Rebuilding the table can be done on the destination server with ALTER TABLE blah ENGINE=MyISAM; this might still be a bit faster than a mysqldump / restore.