I'm working on an application that involves some dynamic creation of database schemas.
On my dev machine, the MySQL server takes something like five minutes to run a dozen CREATE TABLE and a few hundred ALTER TABLE queries. The tables are completely empty during this process, so indexes should not play a relevant role here. The strange thing is that on the production server (which is considerably less powerful than my PC!) the process takes no more than 10 seconds. How is that possible?
I'm using InnoDB both on dev and production. MySQL version is 5.6 on dev and 5.5 on production. Queries are run using the same exact PHP script.
I would like to speed up development by reducing this process's duration, since I need to run a lot of tests and debugging on it.
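As a diagnostic starting point, it may be worth comparing how aggressively InnoDB flushes to disk on the two servers, since every CREATE/ALTER is durably committed; the statements below are only a sketch, and the final SET is meant for a throwaway dev instance, never production:

SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_method';
SHOW GLOBAL VARIABLES LIKE 'innodb_doublewrite';
-- For a disposable dev server only: relax durability to speed up DDL-heavy runs
SET GLOBAL innodb_flush_log_at_trx_commit = 2;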
Edit: I found a workaround in the comments.
I am currently prepping to upgrade from MySQL 5.7 to MySQL 8.
I am using RDS on AWS with a master server and read replicas. The read replicas use MySQL replication but are read-only copies.
One of the issues I need to resolve prior to upgrade is that I have some tables on production databases with COMPACT row format which need updating to DYNAMIC.
I know I can do this with the following statement, and I have a script which will find and update all the tables that need it.
ALTER TABLE `tablename` ROW_FORMAT=DYNAMIC;
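For reference, a script like that typically finds the candidate tables with a query along these lines (a sketch; the excluded schemas are just the usual system ones):

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND ROW_FORMAT = 'Compact'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');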
There are a large number of potentially large tables (millions of rows) that need updating.
What does this change actually do in the background? Is it safe to run this on a production server whilst it is in use? Does it lock the tables whilst it makes the change?
I have run a test on a restored copy of the server. This takes a while, as I'd expect, and as such it's hard for me to be sure everything keeps working during the whole process. It does complete successfully eventually, though.
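If MySQL 5.7's online DDL applies here (worth confirming against the docs for your exact version), changing the row format rebuilds the table but normally permits concurrent DML. One way to make the server fail fast instead of silently falling back to a blocking table copy is to state the algorithm and lock level explicitly; this is only a sketch, with `tablename` as a placeholder:

ALTER TABLE `tablename` ROW_FORMAT=DYNAMIC, ALGORITHM=INPLACE, LOCK=NONE;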
System Type: 64-bit
Windows Edition: Windows Server 2008 R2 Enterprise
Microsoft Windows Server: 6.1
MySQL Workbench Version: 6.3
I manage a multi-site WordPress installation that has grown to 33,000 tables, so it's getting really slow, and I'm trying to optimize it. I've been working on a DEV server and ended up deleting the whole site. Assuming that copying the live server is not an option at this point (and please trust me that it isn't), can you please help me with the following:
I highlighted and copied tables from the live server and pasted them into the DEV server's data folder. Workbench recognizes the tables in the Schemas area, but when I write a SELECT query against the InnoDB tables, it says they don't exist. Queries against the MyISAM tables, however, run successfully.
I'm just confused, because I know the tables are in the right folder, but for some reason they can't be queried. I saw a solution that says to create the tables with a regular query and then overwrite them in the folder, but this isn't realistic for me because there are 33,000 tables. Does anyone have ideas as to how I can get these InnoDB tables working again?
You cannot copy individual InnoDB tables via the file system.
You can use "transportable tablespaces" to do that. See the documentation for the MySQL version you are using. (This is not the same as the Workbench version.)
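For a single table, the documented flow looks roughly like this (a sketch assuming innodb_file_per_table is ON and a hypothetical table wp_2_posts; the .ibd and .cfg files are copied at the file-system level between the FLUSH and UNLOCK steps):

-- On the source server:
FLUSH TABLES wp_2_posts FOR EXPORT;
-- copy wp_2_posts.ibd and wp_2_posts.cfg out of the database directory, then:
UNLOCK TABLES;
-- On the destination server, where the same CREATE TABLE definition already exists:
ALTER TABLE wp_2_posts DISCARD TABLESPACE;
-- copy the .ibd (and .cfg) files into the destination database directory, then:
ALTER TABLE wp_2_posts IMPORT TABLESPACE;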
It is not wise to do the above, but it is possible. Instead, you should use some dump/load mechanism, such as mysqldump or xtrabackup.
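A per-table dump and reload, for example, stays entirely inside MySQL (a sketch; the database and table names are placeholders):

mysqldump --single-transaction live_db wp_2_posts > wp_2_posts.sql
mysql dev_db < wp_2_posts.sql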
WordPress has the design flaw of letting you get to 33,000 tables. This puts a performance strain on the OS because of all the files involved.
In moving to InnoDB, I recommend you carefully think through the choice of innodb_file_per_table. The considerations include which MySQL version you are using, how big the tables are, and whether you will be using "transportable tablespaces".
I do have one strong recommendation for changing indexes in WP: See slide 63 of http://mysql.rjweb.org/slides/cook.pdf . It will improve performance on many of the queries.
I have a 3 GB MySQL dump (.sql) file of a table with 2M+ rows.
Quick info:
It takes 3.69 seconds to run a count query on a table on production with 950k rows, while on my local macbook it takes 1.45 seconds to run the same count query on a similar table with 2M+ rows.
I need to import it into a DB on a live production server while the server is doing the following:
1. Running crons throughout the night.
2. Numerous SELECT and CREATE queries are happening on other tables within the same DB.
3. DB backup will be happening at some point in the night.
Will my carrying out this command:
source tablename_dump.sql
cause the system to experience:
1. Memory shortage (assuming I do not have infinite memory and that these crons do take up a lot of memory)
2. Locking up of crons / backups.
3. Any other problems I may not have considered?
If there are issues that I should be aware of, how can I import this table into the production MySQL database.
I'm hoping that, since a dump file is a series of individual INSERT statements, MySQL will not spike and will carry out the process at a moderate pace until all the records have been loaded, without causing any of the above issues.
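Worth noting: by default mysqldump does not write one INSERT per row. A dump usually looks more like the following (illustrative only, with a trivial schema standing in for the real one), so each statement loads a large batch of rows while the table is locked for writes:

DROP TABLE IF EXISTS `tablename`;
CREATE TABLE `tablename` (id INT PRIMARY KEY, name VARCHAR(50)) ENGINE=InnoDB;
LOCK TABLES `tablename` WRITE;
-- extended inserts: real dumps pack thousands of rows into each INSERT statement
INSERT INTO `tablename` VALUES (1,'a'),(2,'b'),(3,'c');
UNLOCK TABLES;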
I have a development database and a production one. Both database servers, hosted on AWS RDS (MySQL engine), were working fine until the production one encountered major speed issues with SELECT statements on string columns (using either = or LIKE). The table I am searching on production is about 100k rows, but a simple string search in phpMyAdmin can take 1 min+. The development table is about 30k rows and results are returned in less than 0.1 s.
Is this an AWS issue? Or some MySQL config problem. Note that this has only started occurring recently.
Extra: After I do a string search on the table, my read/write latency and queue depth shoots up. The only way I can resolve it is by making a snapshot of the server (reboot does not help).
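One quick check that is independent of AWS is whether the slow searches can use an index at all; this is just a sketch with a hypothetical table and column:

EXPLAIN SELECT * FROM customers WHERE email = 'someone@example.com';
-- a leading-wildcard LIKE cannot use a BTREE index and forces a full scan:
EXPLAIN SELECT * FROM customers WHERE email LIKE '%example.com';
-- a prefix LIKE can still use an index on the column:
EXPLAIN SELECT * FROM customers WHERE email LIKE 'someone@%';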
I have a query which takes only 0.004s on my development machine (Windows 7 running WampServer on an HDD) but takes 0.057s on my production server (CentOS 6.5 running on an SSD) -- a difference of 14x. Both MySQL versions are the same.
The EXPLAIN results are identical on both servers, as are the databases (I exported the database from my production server and imported it into my development machine). I also ran OPTIMIZE TABLE on both servers and tried adding SQL_NO_CACHE, but neither made a difference.
Navicat's Profile tab shows the per-stage timings for the query on both servers (Production and Development profile listings omitted here).
The execution times for the queries are consistent on both servers.
The database versions are the same, the content is the same, and the explain results are the same. Is there any way to determine why the query is taking 14x longer on my production server?
EDIT: In an attempt to determine if the MySQL server is under load, I found the Process List area in Navicat and can see that there are only a few processes, all of which are for "Sleep" commands. So I don't think the production server is under any load.
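To see where the extra time goes without relying on Navicat, the server-side profiler can be run on both machines (a sketch; SHOW PROFILE is deprecated in recent versions in favour of the Performance Schema, but still works where available):

SET profiling = 1;
-- run the query in question here, then:
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;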
The production server seems to be slower in every parameter listed. There could be many factors involved, so you should check each one:
First of all, check if there is any other load on the production server. Is the server doing something else in the meantime? Use the Linux command top to see running processes and check whether any of them is using a lot of computing power. Use the MySQL command SHOW STATUS to get info about the MySQL server's state (memory, open tables, current connections, etc.).
Check the hardware: nowadays some desktop PCs are more powerful than cheap virtual servers (CPU, RAM frequency and access times, ...)
MySQL could be using different settings in the two environments.
Make sure you have the same indexes on both databases. A quick way to compare settings and indexes is sketched below.
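A rough way to do those last two checks, assuming the table involved is called `orders` (a placeholder name):

-- Run on both servers and diff the output:
SHOW GLOBAL VARIABLES;
-- Compare table and index definitions on both servers:
SHOW CREATE TABLE orders;
SHOW INDEX FROM orders;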