Orphaned tables crashing MySQL

I have a database that keeps creating loads of orphaned tables, and the #sql- prefix at the start of their names is causing MySQL to crash. This started happening weekly, so I've created a cron job that removes the files every 5 minutes as a band-aid fix.
How can I find the root cause of this issue?
CMS: Drupal 7
Server Setup:
Apache: 2.4.34
PHP: 5.6.37
MySQL: 5.6.39
Perl: 5.26.0

This usually happens when InnoDB is interrupted while performing an ALTER TABLE. You should not remove the files themselves; instead, perform a DROP TABLE on the table(s) in question.
To determine the actual root cause we would need quite a bit more information, such as which app / software / framework you are using.
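Not the author's exact procedure, but as a sketch of how this cleanup is usually done on MySQL 5.6 (the table name below is only an example): the orphaned intermediate tables can be listed from the InnoDB data dictionary and then dropped by prefixing the literal name with #mysql50#, so the server does not re-encode the #sql- part.
-- list orphaned intermediate tables left behind by interrupted ALTERs
SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE NAME LIKE '%#sql%';
-- drop one of them by its literal on-disk name (example name only)
DROP TABLE `#mysql50##sql-ib87-123456789`;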

Related

Drop table times out for non-empty tables; already adjusted timeout interval

I'm having trouble deleting a table in MySQL v8.0 (on Windows 10), either from MySQL Workbench or via a Python script (using mysql-connector-python). In both cases, the DROP TABLE command times out with "Error Code: 2013. Lost connection to MySQL server during query".
I previously set the DBMS connection read timeout interval to 500 sec to try and work around this, but no luck.
The table in question has several hundred rows of data, and the entire .ibd file is 176 KB. I suppose deleting the .ibd file directly isn't the greatest database practice?
I can create a new table and delete it, no problem. I'm running MySQL server locally.
Any suggestions on what to try next?
@obe's suggestion to restart the server resolved the issue. So it seems that particular table got locked due to access from both Workbench and Python. The database itself was not locked, since I could create/drop other tables.
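For anyone hitting the same thing: a hung DROP TABLE like this is typically waiting on a metadata lock held by another open session (for example a transaction left open in Workbench). A sketch, assuming MySQL 8.0 with performance_schema available and my_table as a placeholder name, of how one might find the blocker instead of restarting:
-- who currently holds or is waiting for a metadata lock on the table?
SELECT OBJECT_SCHEMA, OBJECT_NAME, LOCK_TYPE, LOCK_STATUS, OWNER_THREAD_ID
FROM performance_schema.metadata_locks
WHERE OBJECT_NAME = 'my_table';
-- map that OWNER_THREAD_ID (123 is an example) to a connection id, then KILL it if appropriate
SELECT PROCESSLIST_ID FROM performance_schema.threads WHERE THREAD_ID = 123;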

MySQL insert/update slow after updating schema and loading via source

I have a J2EE web app with Tomcat/MySQL that I'm developing at home, deployed on a home server. I spent some time upgrading it and made some changes to the DB schema.
I re-wrote the java/jsp/javascript side of it, and I then dumped the database into a text file on my local desktop, copied it to the server, and then loaded that file via the source command, making it the production database.
When I did that, I immediately noticed that inserts/updates were extremely slow. I had never had an issue with that in the previous version of the database.
I tried dropping the database altogether and re-creating, again using the mysql source command. Writes still slow.
Both the production and test versions of the db are mysql running on ubuntu.
test : 5.7.22-0ubuntu18.04.1
server: 5.7.20-0ubuntu0.16.04.1
I don't know if the 16.04.1 makes a difference, but the previous version of the database had no problems.
I've done some searching, and most of the results are related to InnoDB settings. But since the previous version worked with no issues, I'm wondering if it's something obvious, like the text file importing some setting I'm not seeing.
All the tables in the mysqldump file have this at the top:
LOCK TABLES `address` WRITE;
/*!40000 ALTER TABLE `address` DISABLE KEYS */;
Not sure if this is part of the problem? My limited understanding of table locks is that they're related to a user and their current session. But again, previous versions used mysqldump files without this issue.
All the tables use smallint auto increment values for primary keys, and the db is small, most tables only have about 1000 rows and I am currently the only user.
Also, the test version of the database, which has an identical schema, runs with no problems.
Any ideas?
thanks!
I was able to resolve this by adding a variable in /etc/mysql/mysql.conf.d/mysqld.cnf:
innodb_flush_log_at_trx_commit=2
Found a couple of related questions here, re InnoDB settings, and here, for checking DB settings.
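For context, and not specific to this setup: innodb_flush_log_at_trx_commit=1 (the default) flushes the redo log to disk at every commit, which makes row-by-row inserts slow on modest hardware, while =2 writes at commit but only flushes about once per second, risking up to roughly a second of transactions on a crash. A sketch of checking and changing it at runtime; the line in mysqld.cnf under [mysqld] is what makes it persist across restarts:
-- compare the value on the fast test server and the slow production server
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- it is a dynamic global variable, so it can also be changed without a restart
SET GLOBAL innodb_flush_log_at_trx_commit = 2;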

MariaDB - online move/archive tables

We have a script that "rotates"/archives the Syslog tables in MySQL. This script:
1. at the Linux level, renames the MyISAM table files, then compresses them
2. inside MySQL, renames these tables
The two steps are "online"; no mysqld restart is required.
Now I have built a new Syslog database in MariaDB (Debian Stretch). The tables use InnoDB instead of MyISAM, and the script fails on its second execution, when it renames the table inside MySQL after moving the file:
ERROR 1050 (42S01): Table 'SystemEvents_1' already exists
A reference to the table is kept somewhere (in an internal system table of the tablespace?), which prevents this.
My question:
Would it work if I migrated my tables to the Aria engine with transactional=0?
Thanks, Vince
I think it is no longer possible.
I converted my tables to MyISAM (and even Aria with transactional=0) and got the same error message. I think the best option is to use mysqldump to export the tables instead of renaming the filesystem files directly. It is less convenient, but mysqldump will always work regardless of the chosen engine.
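If the only goal is to rotate the live table without a restart, a possible alternative (a sketch, not the original script; SystemEvents is the rsyslog table from the question and the archive suffix is a placeholder) is to do the swap entirely in SQL, which works the same way for InnoDB, MyISAM and Aria:
-- create an empty copy and swap it in with a single RENAME TABLE
CREATE TABLE SystemEvents_new LIKE SystemEvents;
RENAME TABLE SystemEvents TO SystemEvents_2018_01, SystemEvents_new TO SystemEvents;
-- the archived table can then be dumped with mysqldump, compressed, and dropped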

How to confirm mysql-mariadb database migration is OK?

I've recently migrated databases (from an Ubuntu server) to a MariaDB database (on a CentOS 7 server) using mysqldump and then importing with the mysql command. I have this set up in a phpMyAdmin environment, and although the migration appears to have been successful, I've noticed phpMyAdmin is reporting different disk space used and also showing slightly different row numbers for some of the tables.
Is there any way to determine if anything has been 'missed' or any way to confirm the data has all been copied across with the migration?
I've run a mysqlcheck on both servers to check db consistency but I don't think this really confirms the data is the same.
Cheers,
Tim
Probably not a problem.
InnoDB, when using SHOW TABLE STATUS, gives you only an approximation of the number of rows.
The dump and reload rebuilt the data and the indexes. This is very likely to lead to files of different sizes, even if the logical contents are identical.
Do you have any signs of discrepancies other than what you mentioned?
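If you want something stronger than those estimates, a sketch of a per-table spot check you could run on both servers (mydb.mytable is a placeholder): exact row counts must match, and CHECKSUM TABLE should match too as long as both sides use the same row format.
-- exact row count; SHOW TABLE STATUS only estimates this for InnoDB
SELECT COUNT(*) FROM mydb.mytable;
-- content checksum, comparable across servers with the same row format
CHECKSUM TABLE mydb.mytable EXTENDED;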

Innodb Tables Do Not Exist

System Type: 64-bit
Windows Edition: Windows Server 2008 R2 Enterprise
Microsoft Windows Server: 6.1
MySQL Workbench Version: 6.3
I manage a multi-site WordPress installation that has grown to 33,000 tables, so it's getting really slow and I'm trying to optimize our installation. I've been working on a DEV server and ended up deleting the whole site. Assuming that copying the live server is not an option at this point (and please trust me that it isn't), can you please help me with the following:
I highlighted and copied tables from the live server and pasted them into the DEV server folder. Workbench recognizes the tables in the Schemas area, but when I write a SELECT query against the InnoDB tables, it says they don't exist. The MyISAM tables, however, query successfully.
I'm just confused, because I know the tables are in the right folder but for some reason they won't query. I saw a solution that says to create the tables with a regular query and then overwrite them in the folder, but this isn't realistic for me because there are 33,000 tables. Do any of you have ideas as to how I can get these InnoDB tables working again?
You cannot copy individual InnoDB tables via the file system.
You can use "transportable tablespaces" to do this. See the documentation for the MySQL version you are using. (This is not the same as the Workbench version.)
It is not wise to do the above, but it is possible. Instead, you should use some dump/load mechanism, such as mysqldump or xtrabackup.
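For reference, a sketch of the transportable-tablespace route on MySQL 5.6+ with innodb_file_per_table=ON (wp_2_posts is a placeholder table name); it has to be repeated per table, which is one reason a dump/load is more realistic with 33,000 tables:
-- on the destination: create the table with the identical definition, then
ALTER TABLE wp_2_posts DISCARD TABLESPACE;
-- on the source: quiesce the table and copy wp_2_posts.ibd (and .cfg) out
FLUSH TABLES wp_2_posts FOR EXPORT;
-- ... copy the files into the destination schema directory ...
UNLOCK TABLES;
-- back on the destination, once the files are in place:
ALTER TABLE wp_2_posts IMPORT TABLESPACE;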
WordPress has the design flaw of letting you get to 33,000 tables. This puts a performance strain on the OS because of all the files involved.
In moving to InnoDB, I recommend you carefully think through the choice of innodb_file_per_table. The considerations include which MySQL version you are using, how big the tables are, and whether you will be using "transportable tablespaces".
I do have one strong recommendation for changing indexes in WP: See slide 63 of http://mysql.rjweb.org/slides/cook.pdf . It will improve performance on many of the queries.