Low performance in Mysql 5.6.12 - mysql

I have a table with about 30 columns, and I added one column to it. In MySQL 5.6.12 this operation takes about 0.7 seconds, whereas in MySQL 5.0.51 it takes far less time.
The whole database is about 2.7 MB. We use InnoDB. All settings are default.
Could anyone tell me why ALTER queries execute so slowly in MySQL 5.6.12? Is there any tweak related to this? Is there a known bug related to this issue?
Any help would be appreciated!

Please check the following points:
How many rows are there in the table?
Is any default value being assigned when the new column is added?
Check the space available in the data directory of the MySQL 5.6 installation and compare it to the MySQL 5.5 'data' directory.
Check the steps that were followed to migrate/upgrade from MySQL 5.5 to MySQL 5.6; maybe something went wrong there.
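As a quick check for the first two points, the row count and the ALTER behavior can be inspected directly. A rough sketch, where the table and column names are placeholders; MySQL 5.6 lets you request an in-place ALTER explicitly, so it fails with an error instead of silently falling back to a table copy:

```sql
-- Placeholder table name; substitute your own.
SELECT COUNT(*) FROM my_table;

-- Ask 5.6 for an in-place, non-locking ALTER explicitly:
ALTER TABLE my_table
  ADD COLUMN new_col INT NULL,
  ALGORITHM=INPLACE, LOCK=NONE;
```

If the ALTER errors out, the operation requires a table rebuild, which would explain the extra time.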

Related

Lost rows when upgrading from MySQL 5.6 to 8.0

I tried to migrate the database of an existing application from MySQL 5.6 to MySQL 8.0.
I created a backup .sql file from the old engine and deployed it to the new 8.0 server. I received no errors and no warnings; the process finished successfully.
BUT: my database size was 62 MB in MySQL 5.6, and when I moved it to MySQL 8.0 the size changed to 56 MB. I tried to check which tables were different and noticed that in some tables the number of rows decreased.
Can anyone tell me why such a strange thing happened? Why did the database size and row count decrease, even though the process finished without errors or warnings?
Are there some important things I need to know about the migration process that will allow me not to lose any data?
Make sure you have a few things listed out:
1) the total number of tables in 5.6
2) the tables with the highest row counts.
First try to restore the dump on the same 5.6 version, and double-check the information above.
A size decrease of a few MB is very common, because compression and restoration differ from one server to another.
All you need to make sure is that the total number of rows has not decreased.
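One way to compare per-table row counts on both servers is to query information_schema and diff the output. A sketch, assuming a placeholder schema name 'mydb'; note that TABLE_ROWS is only an estimate for InnoDB, so exact verification needs SELECT COUNT(*) per table:

```sql
-- Run on both the 5.6 and 8.0 servers, then compare the results.
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
ORDER BY TABLE_NAME;

-- For an exact count of a suspect table:
SELECT COUNT(*) FROM mydb.some_table;
```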

Orphaned tables crashing MySql

I have a database that is creating loads of orphaned tables, and the hash at the beginning (#sql-whatever) is causing MySQL to crash. This started happening weekly, so I've created a cron job to remove the files every 5 minutes as a band-aid fix.
How can I find the root cause of this issue?
CMS: Drupal 7
Server Setup:
Apache: 2.4.34
PHP: 5.6.37
MySQL: 5.6.39
Perl: 5.26.0
This usually happens when InnoDB is interrupted when performing an ALTER TABLE command. You should not remove the files themselves but rather perform a DROP TABLE on the table(s) in question.
To determine the actual root cause of the issue we would need quite a bit more information, such as what app / software / framework etc. you are using.
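For the cleanup itself, the MySQL 5.6 manual describes dropping orphaned intermediate tables via DROP TABLE with a #mysql50# prefix rather than deleting the files. A sketch, where the table name is an illustrative example, not one from this server:

```sql
-- List orphaned intermediate tables left behind by interrupted ALTERs:
SELECT * FROM information_schema.INNODB_SYS_TABLES
WHERE NAME LIKE '%#sql%';

-- Drop one; in the SQL statement the name must carry the
-- #mysql50# prefix (example name shown):
DROP TABLE `#mysql50##sql-ib87-856498050`;
```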

MySQL 5.5 to 5.6.35 upgrade - is the time/datetime/timestamp upgrade required?

We are getting ready to upgrade a fairly large MySQL 5.5 database to 5.6.35. The upgrade notes indicate an "incompatibility issue" associated with changes to the time/datetime/timestamp structures.
We know it's possible to run alter table ... force to upgrade the affected tables after upgrading to 5.6. However, given the size of this database we've confirmed it will take literally days to complete.
We can't use the online DDL feature [1] because according to the docs the time/datetime/timestamp alter won't work with the INPLACE algorithm.
We've also read that running a 5.5 database on 5.6 will cause problems when replicating to a 5.6 slave, which we need to do. But we can't confirm this issue without running an actual test.
Thus my question: are we required to alter the tables? We don't need the 5.6 microsecond feature, and never will. Can we just upgrade to 5.6 and be done with it provided we don't need the microsecond feature?
Thank you
Jason
[1] https://dev.mysql.com/doc/refman/5.6/en/innodb-online-ddl.html
We determined it is indeed possible to run a 5.6 replication slave in conjunction with a 5.5 master. The alter table ... force is not necessary either.
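If you do decide to rebuild later, the affected tables can be located and rebuilt one at a time. A rough sketch, with 'mydb' as a placeholder schema name; ALTER ... FORCE copies the table, so plan for the downtime mentioned above:

```sql
-- Find tables containing temporal columns (rebuild candidates):
SELECT DISTINCT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.COLUMNS
WHERE DATA_TYPE IN ('time', 'datetime', 'timestamp')
  AND TABLE_SCHEMA = 'mydb';

-- Rebuild one table to the new 5.6 temporal format:
ALTER TABLE mydb.some_table FORCE;
```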

MySQL innodb “The Table is Full” error

I would really appreciate it if someone could help me with this.
I have spent about 8 hours googling around and found no solution to the problem.
I have MySQL server version 5.7.7 on Windows Server 2008 R2.
The table engine is InnoDB.
innodb_file_per_table = 1
I get the error "Table is full" when the table reaches 4 GB.
The MySQL documentation says there is actually only one limit on table size: the filesystem.
(http://dev.mysql.com/doc/refman/5.7/en/table-size-limit.html)
The HDD where the data is stored uses NTFS; just to be sure, I created a 5 GB file without problems. And there is definitely more than 10 GB of free space.
I understand that the "innodb_data_file_path" setting is irrelevant if "innodb_file_per_table" is enabled, but I tried to set it anyway. No difference.
I have tried a clean install of MySQL. Same result.
EDIT
The person who installed the MySQL server before me had accidentally installed the 32-bit version. Migrating to 64-bit MySQL solved the problem.
About the only way for 4 GB to be a file limit is if you have a 32-bit version of MySQL. Also check for a 32-bit operating system. (Moved from comment, where it was verified.)
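A quick way to confirm which build is running is to check the compile-time variables from any client session:

```sql
-- version_compile_machine typically reports 'x86_64' for a
-- 64-bit build and 'x86' (or similar) for a 32-bit one.
SHOW VARIABLES LIKE 'version_compile%';
```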
I am also not sure, but read this; it may help you.
http://jeremy.zawodny.com/blog/archives/000796.html
One more thing: one guy had the same problem. He had made changes to the InnoDB settings innodb_log_file_size and innodb_log_buffer_size. The changes were:
1) shutdown mysql
2) cd /var/lib/mysql
3) mkdir oldIblog
4) mv ib_logfile* oldIblog
5) edit /etc/my.cnf, find the line innodb_log_file_size=, and increase it to an appropriate value (he went to 1000 MB, as he was dealing with a very large dataset: 250 million rows in one table). If you are not sure, I suggest doubling the number every time you get a "table is full" error. He set innodb_log_buffer_size to 1/4 of the size of his log file, and the problems went away.
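The my.cnf change in step 5 would look roughly like this; the values are the ones from that report, not general recommendations:

```ini
[mysqld]
innodb_log_file_size   = 1000M
# ~1/4 of the log file size, per the report above:
innodb_log_buffer_size = 250M
```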
I didn't find a solution to this; I have no idea why MySQL is unable to create a table larger than 4 GB.
As a workaround I moved only this table back to ibdata by setting "innodb_file_per_table" back to 0 and recreating the table.
Interestingly, even ibdata1 reported "table is full" when it reached 4 GB, even without a max size set and with autoextend enabled.
So I created ibdata2 and let it autoextend, and now I am able to write new data to that table.
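The workaround described above corresponds to a my.cnf setting along these lines; the file sizes are illustrative, and the ibdata1 size must match the file's actual current size:

```ini
[mysqld]
innodb_file_per_table = 0
# Keep ibdata1 at its existing size; add ibdata2 with autoextend:
innodb_data_file_path = ibdata1:4G;ibdata2:100M:autoextend
```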

Lost connection to mysql during query, mysql workbench

I have the same problem as this when I want to index a very large table on one of its non-unique columns, which is an integer. I tried all the solutions proposed in that post that have at least one upvote, but I still couldn't fix it. Any other ideas?
I have enough memory:
max_allowed_packet: 2G,
innodb_buffer_pool_size: 9G
All the timeout settings mentioned in this post and here are set to much higher values than the defaults.
While this is not necessarily an answer to losing the connection in MySQL Workbench, it is a workaround. When it comes to long-running queries in MySQL Workbench, even if one changes the Workbench parameters, there still seems to be a connection timeout issue. So, run the query from the mysql command line and see if it works there. If it works from the mysql command line and not from Workbench, you know it's just a MySQL Workbench issue and not some other problem.
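If you want to try keeping the session alive from SQL itself before falling back to the command line, the server-side timeouts can be raised for the session; the values and table/column names below are illustrative, and note that Workbench additionally has its own client-side read timeout under its SQL Editor preferences:

```sql
-- Raise server-side idle timeouts for this session (seconds):
SET SESSION wait_timeout = 28800;
SET SESSION interactive_timeout = 28800;

-- Then run the long index build in the same session:
ALTER TABLE big_table ADD INDEX idx_col (col);
```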