I have a dedicated server (Intel® Core™ i7-2600 quad-core with Hyper-Threading, 16 GB DDR3, 2 x 3 TB SATA 6 Gb/s HDD 7200 rpm in software RAID 1) and have installed nginx + Apache + MySQL from Debian stable.
I have a database with a table of 2+ million rows (around 400 MB of data). When I drop an index, the database is very slow. For example, I have now been dropping an index on a single column for around 8 minutes. In iotop I see MySQL doing around 8 MB/s. Isn't this too slow?
When you alter a table in InnoDB (including adding or dropping an index), the whole table is rewritten on disk: the data is copied and the indexes are regenerated. This does not happen if you use the InnoDB Plugin in MySQL 5.1, but by default MySQL 5.1 is set up not with the InnoDB Plugin but with the old built-in InnoDB.
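A quick way to check which InnoDB build is active (a sketch: the innodb_version variable was introduced by the plugin, so on the old built-in InnoDB the first statement returns an empty set):

SHOW VARIABLES LIKE 'innodb_version';  -- empty result = old built-in InnoDB
SHOW PLUGINS;                          -- shows InnoDB and how it was loaded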
Related
I ran OPTIMIZE TABLE `table_name`; on an InnoDB table around 54 GB in size, in order to reduce its size. But this increased the size of the table to 59 GB! Any explanation for this would be helpful.
Edit 1: This table is used for logging purposes.
Edit 2: This table was originally in MySQL, was then moved to MariaDB, and is now running in MySQL 5.7.
I am running MySQL 5.7 on an Ubuntu machine with around 50 GB of RAM.
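One thing worth checking before comparing the numbers: data_free in information_schema shows how much of the tablespace is preallocated free space rather than data, and a rebuild often leaves such space behind. A sketch ('your_db' and 'your_table' are placeholders):

SELECT table_name,
       ROUND(data_length /1024/1024/1024, 1) AS data_gb,
       ROUND(index_length/1024/1024/1024, 1) AS index_gb,
       ROUND(data_free  /1024/1024/1024, 1) AS free_gb
FROM information_schema.tables
WHERE table_schema = 'your_db' AND table_name = 'your_table';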
Why does MySQL Server take more time than MySQL inside WAMP?
Machine 1: MySQL 5.6.17 installed inside WAMP
Machine 2: MySQL 5.7.24 installed as a separate server
Both machines have the same configuration and the same OS.
I imported the same DB dump file to Machine 1 and Machine 2.
Now I execute a query (it fetches data from 6 joined tables) and it returns 400 rows.
Time taken:
Machine 1 (5.6.17, inside WAMP): under 30 seconds
Machine 2 (5.7.24): more than 230 seconds
Should I use MySQL (WAMP) instead of MySQL Server?
I think the MySQL server needs a larger innodb_buffer_pool_size in my.ini, which is located in C:\ProgramData (a hidden folder by default).
The default innodb_buffer_pool_size is 8M.
innodb_buffer_pool_size: This is a very important setting to look at immediately after installation when using InnoDB. The buffer pool is where InnoDB caches data and indexes; making it as large as possible ensures that most read/write operations use memory rather than disk. Typical values are 5-6 GB on a machine with 8 GB of RAM.
Fix: Increase innodb_buffer_pool_size
innodb_buffer_pool_size=356M
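In my.ini that would look like the snippet below (a sketch; 356M is just the value suggested above, and InnoDB rounds it up to a multiple of its chunk size):

[mysqld]
innodb_buffer_pool_size = 356M

On MySQL 5.7.5+ it can also be resized at runtime, without a restart (value in bytes):

SET GLOBAL innodb_buffer_pool_size = 373293056;  -- 356M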
I have a performance issue after upgrading to MySQL 5.6.
OS version : Solaris 10
Language : Perl script
MySQL version : upgraded from 5.1 to 5.6 (logical upgrade - installed 5.6 on the same server and restored the dump into it)
Memory : 64 GB
I upgraded MySQL from 5.1 to 5.6 on Solaris 10 and converted all tables from MyISAM to InnoDB, since InnoDB is the default storage engine in MySQL 5.6. My database size is 4.5 GB, and I added the following InnoDB parameters:
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=512M
innodb_buffer_pool_size=5G
My application creates temporary tables during transactions, so I set tmp_table_size and max_heap_table_size to 512M.
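A quick way to confirm whether those temporary tables still spill to disk despite the 512M limits (a diagnostic sketch; what matters is the ratio between the two counters):

SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';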
The application has 3 modules; 2 work fine, but the third is very slow compared to MySQL 5.1. It runs 20+ UPDATE statements and joins the temporary tables with master tables, some of which contain 2 million records.
I have the explain plans and have profiled the queries. In the profiling I observed that the 'Sending data' state takes a huge amount of time, and this is where performance degrades.
Can anyone suggest how to improve the performance?
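For reference, the per-state breakdown mentioned above can be reproduced like this in 5.6 (a sketch using SHOW PROFILE, which is deprecated there but still available):

SET profiling = 1;
-- run the slow UPDATE / join here
SHOW PROFILES;              -- query ids and total durations
SHOW PROFILE FOR QUERY 1;   -- per-state timings, including 'Sending data'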
I also had similar issues with Solaris 10 and the MySQL upgrade from 5.1 to 5.6.
I did the logical upgrade, and the restore of a 1 GB database took a long time. I also tried an in-place upgrade and found similar performance issues.
My Solaris 10 server had been running for the past 3.5 years, so I restarted it so that all the caches were cleared, and after the restart I started all my database services.
NOTE: Before doing the restart, stop all the services. It is also better to keep another copy of the 5.1 database and take a backup before starting the in-place upgrade. Going in-place is better than the logical upgrade: both end up in the same state, but the logical restore takes much more time.
I have two tables with millions of rows (20+ GB) in a MySQL database. Adding a foreign key through phpMyAdmin times out after 16 hours (current default). The server environment is as follows:
OS: Windows 2008 R2 Server
Stack: WAMP
CPU: Xeon X3464
RAM: 8 GB
(Virtualized)
Is this server setup/MySQL capable of handling a data set of this size?
Yes, MySQL can handle a table that size.
innodb_buffer_pool_size should be something like 5G on your 8 GB machine, but not so large that MySQL swaps.
You should probably increase the timeout that is causing it to fail.
Read the details at http://dev.mysql.com/doc/refman/5.6/en/alter-table.html -- what happens depends on exactly which version you are using, whether you use ALGORITHM=INPLACE, and whether you have other alterations in the same statement.
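A hedged sketch of the in-place route (table and column names here are hypothetical): in 5.6, ADD FOREIGN KEY is only done in place when foreign_key_checks is disabled, so the constraint is added without validating existing rows:

SET foreign_key_checks = 0;
ALTER TABLE child
  ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)
      REFERENCES parent (id),
  ALGORITHM=INPLACE;
SET foreign_key_checks = 1;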
We're in the middle of an upgrade from MySQL 5.1 to MySQL 5.5. Our only remaining holdout is the master, at 5.1, while all the slaves are at 5.5.
However, we have a stats process that runs on an admin slave, which rsyncs the MyISAM files to the master, which mounts them as a stats DB and then imports those stats into the main production DB.
All the tables use the default ROW_FORMAT for MyISAM, which is static. We only use simple column types like INT, VARCHAR, ENUM, and DATETIME.
We've already run this process, and it didn't fall down, which is encouraging. But I'm concerned about errors that might not have made it into the log.
So the question is: what are the complications of this process? The MyISAM files are created by a 5.5 instance, then moved to a 5.1 instance.
What's going to go wrong?
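One hedged way to surface problems that never reach the log, once the files land on the 5.1 master (the table name is a placeholder; CHECK TABLE ... FOR UPGRADE targets older-to-newer moves, but both checks still flag structural and index damage):

FLUSH TABLES stats_table;
CHECK TABLE stats_table FOR UPGRADE;
CHECK TABLE stats_table EXTENDED;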