innodb_buffer_pool_size performance - MySQL

I need a real and clear definition of innodb_buffer_pool_size. It is supposed to increase performance, but I'm not sure how.
Operating system: Debian 8
Memory: 1468 MB
MySQL 5.5
For my test, I filled a table with 750,001 rows using the default innodb_buffer_pool_size=134217728 (128 MB), which took 1 min 17.01 sec:
INSERT INTO luis_test3 SELECT * FROM luis_test;
Query OK, 750001 rows affected (1min 17.01 sec)
After that test, I changed innodb_buffer_pool_size to 402653184 (384 MB) and ran the same insert into another (empty) table:
INSERT INTO luis_test2 SELECT * FROM luis_test;
Query OK, 750001 rows affected (2min 3.17 sec)
So the buffer pool was tripled in size, yet performance did not improve; if anything, the insert got slower. I'm not sure where I should expect the improvement to show up. Note that I restarted the MySQL service after each change.
I have a whole server full of MyISAM tables to convert to InnoDB, and I need to follow the right conventions to set everything up on InnoDB. My goal is to prepare for hot backups using Percona XtraBackup (so MyISAM is not an option for my plan).
Here are my table sizes:
table                 size
luis_test           316.47
luis_test2          272.83
luis_test3          272.83
luis_test_myIsam    254.16
table_prueba          0.02
None of the tables are indexed.

There are many more aspects to tuning than what you have mentioned.
The bottom line is pretty simple:
Set innodb_buffer_pool_size to about 70% of available memory. (That is, after allowing for other applications.)
Set key_buffer_size to about 40M.
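As an illustration only (the numbers are assumptions, not prescriptions), on the ~1.5 GB machine from the question, and assuming roughly 1 GB is left for MySQL after other applications, those two rules might translate to a my.cnf along these lines:
[mysqld]
# ~70% of the memory available to MySQL (assumed ~1 GB here)
innodb_buffer_pool_size = 700M
# small key buffer, only needed for internal/temporary MyISAM use
key_buffer_size = 40M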
Changing from MyISAM to InnoDB is optimally just ALTER TABLE t ENGINE=InnoDB.
However, there are a lot of things that may cause degraded performance, especially around indexes. I have enumerated and discussed them in my blog; probably very few will be critical for you.
The ALTER TABLE does virtually the same things as CREATE TABLE ...; INSERT ... SELECT; RENAME TABLE ..., just a lot shorter to type.
If, after studying my blog, you see the need to change the indexes, etc, then doing all the steps in a single ALTER is optimal:
ALTER TABLE t ADD PRIMARY KEY(foo), MODIFY COLUMN ..., ... ENGINE=InnoDB;
Yes, you need to do one table at a time.
If you have a lot of large tables, it is best to set innodb_file_per_table=ON before doing the ALTER or INSERT...SELECT.
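For example, to check and enable it at runtime (a minimal sketch; innodb_file_per_table is dynamic in MySQL 5.5, and only affects tables created afterwards):
-- check the current setting
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- enable it for tables created from now on (existing tables are unaffected)
SET GLOBAL innodb_file_per_table = ON;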
Why did one test take so much longer than the other? Again, there are lots of things going on, so I can't give you a simple answer. One deadly possibility: was 402653184 (384 MB) so big that it led to swapping? With only 1468 MB of RAM, that buffer pool plus everything else on the machine could easily be too much.

Related

MySQL change engine big millions rows table

I want to change the engine of a 2-million-row table from MyISAM to InnoDB. I am afraid of this long operation, so I created an InnoDB table with a similar structure, and now I want to copy all the data from the old one into the new one. What is the fastest way? INSERT ... SELECT? What about START TRANSACTION? Please help; I don't want to hang my server.
Do yourself a favor: copy the whole setup to your local machine and try it all out there. You'll have a much better idea of what you are getting into. Just be aware of potential differences in hardware between your production server and your local machine.
The fastest way is probably the most straightforward way:
INSERT INTO table2 SELECT * FROM table1;
I suspect that you cannot do it any faster than what is built into the ALTER. And it does have to copy over all the data and rebuild all the indexes.
Be sure to raise innodb_buffer_pool_size to prepare for InnoDB, and lower key_buffer_size to make room. I suggest 35% and 12% of RAM, respectively, for the transition. After all tables are converted: 70% and a mere 20 MB.
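To make that concrete, here is a hedged my.cnf sketch for a hypothetical 4 GB server (the RAM figure is an assumption, purely for illustration):
[mysqld]
# during the MyISAM -> InnoDB transition (~35% / ~12% of 4 GB)
innodb_buffer_pool_size = 1400M
key_buffer_size = 500M
# after all tables are converted, switch to roughly:
# innodb_buffer_pool_size = 2800M  (~70% of RAM)
# key_buffer_size = 20M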
One slight speedup is to run some SELECTs beforehand that fetch the entire table and the entire PRIMARY KEY (if they can be cached). This does some of the I/O before the ALTER really starts. Example: SELECT AVG(id) FROM tbl, where id is the primary key, and SELECT AVG(foo) FROM tbl, where foo is not indexed but is numeric. These force a full scan of the PK index and of the data, thereby caching the pages that the ALTER will have to read.
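As a concrete sketch (tbl, id, and foo are the placeholder names from the paragraph above):
-- full scan of the PRIMARY KEY index
SELECT AVG(id) FROM tbl;
-- full scan of the data via an unindexed numeric column
SELECT AVG(foo) FROM tbl;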
Other tips on converting: http://mysql.rjweb.org/doc.php/myisam2innodb .

MySQL performance of adding a column to a large table

I have MySQL 5.5.37 with InnoDB, installed locally with apt-get on Ubuntu 13.10. My desktop machine has an i7-3770, 32 GB of memory, and an SSD. For a table "mytable" containing only 1.5 million records, the following DDL query takes more than 20 minutes (!):
ALTER TABLE mytable ADD some_column CHAR(1) NOT NULL DEFAULT 'N';
Is there a way to improve it?
I checked
show processlist;
and it showed that MySQL is copying my table for some reason.
This is disturbingly inconvenient. Is there a way to turn off this copying?
Are there other ways to improve performance of adding a column to a large table?
Other than that, my DB is relatively small, with only a 1.3 GB dump size. Therefore it should (in theory) fit 100% in memory.
Are there settings which can help?
Would migrating to Percona change anything for me?
Added: I have
innodb_buffer_pool_size = 134217728
Are there other ways to improve performance of adding a column to a large table?
Short answer: no. You may add ENUM and SET values instantly, and you may add secondary indexes while locking only for writes, but altering table structure always requires a table copy.
Long answer: your real problem isn't really performance, but the lock time. It doesn't matter if it's slow, it only matters that other clients can't perform queries until your ALTER TABLE is finished. There are some options in that case:
You may use pt-online-schema-change from Percona Toolkit. Back up your data first! This is the easiest solution, but it may not work in all cases; see the command sketch after this list.
If you don't use foreign keys and the ALTER is slow because you have a lot of indexes, it might be faster to create a copy of the table with the changes you need but no secondary indexes, populate it with the data, and create all the indexes with a single ALTER TABLE at the end.
If it's easy for you to create replicas, for example if you're hosted on Amazon RDS, you can create a master-master replica, run the ALTER TABLE there, let it get back in sync, and switch instances when finished.
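For the first option, a pt-online-schema-change invocation for this kind of column add might look roughly like the following (the database name mydb is an assumption; read the tool's documentation before running it against production):
pt-online-schema-change \
  --alter "ADD COLUMN some_column CHAR(1) NOT NULL DEFAULT 'N'" \
  D=mydb,t=mytable \
  --execute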
UPDATE
As others mentioned, MySQL 8.0 InnoDB added support for instant column adds. It's not a magical solution; it has limitations and side effects (the column can only be added as the last column, the table must not have a FULLTEXT index, etc.), but it should help in many cases.
You can specify explicit ALGORITHM and LOCK clauses; if the requested algorithm isn't possible, MySQL fails with an error instead of silently falling back to a more expensive one (INPLACE or COPY). Example:
ALTER TABLE mytable
ADD COLUMN mycolumn varchar(36) DEFAULT NULL,
ALGORITHM=INPLACE, LOCK=NONE;
https://mysqlserverteam.com/mysql-8-0-innodb-now-supports-instant-add-column/
MariaDB 10.3, MySQL 8.0, and probably other MySQL variants to follow have an "instant ADD COLUMN" feature whereby most columns (there are a few constraints; see the docs) can be added instantly, with no table rebuild.
MariaDB: https://mariadb.com/resources/blog/instant-add-column-innodb
MySQL: https://mysqlserverteam.com/mysql-8-0-innodb-now-supports-instant-add-column/
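For illustration, on MySQL 8.0+ or MariaDB 10.3+ the instant algorithm can be requested explicitly, so the statement fails instead of rebuilding the table if it doesn't qualify (a minimal sketch; the column name is made up):
ALTER TABLE mytable
  ADD COLUMN another_column VARCHAR(36) DEFAULT NULL,
  ALGORITHM=INSTANT;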
I know this is a rather old question, but today I encountered a similar problem. I decided to create a new table and import the old table into it. Something like:
CREATE TABLE New_mytable LIKE mytable;
ALTER TABLE New_mytable ADD some_column CHAR(1) NOT NULL DEFAULT 'N';
-- list the original columns explicitly so the new column takes its default
INSERT INTO New_mytable (old columns) SELECT * FROM mytable;
Then
START TRANSACTION;
-- catch up on rows added to mytable since the bulk copy
INSERT INTO New_mytable (old columns)
SELECT * FROM mytable WHERE id > (SELECT MAX(id) FROM New_mytable);
-- note: RENAME TABLE commits implicitly, so the transaction does not
-- actually protect the swap; a single combined RENAME is atomic
RENAME TABLE mytable TO Old_mytable, New_mytable TO mytable;
COMMIT;
This does not make the update process go any faster, but it does minimize downtime.
Hope this helps.
What about Online DDL?
http://www.tocker.ca/2013/11/05/a-closer-look-at-online-ddl-in-mysql-5-6.html
Maybe you could use TokuDB instead:
http://www.tokutek.com/products/tokudb-for-mysql/
There is no way to avoid copying the table when adding or removing columns because the structure changes. You can add or remove secondary indexes without a table copy.
With a 128 MB buffer pool, your table data doesn't reside in memory; at best, the indexes can.
1.5 million records is not a lot of rows, and 20 minutes seems quite long, but perhaps your rows are large and you have many indexes.
While the table is being copied, you can still select rows from the table. However, if you try to do any updates, they will be blocked until the ALTER is complete.

mysql speed, table index and select/update/insert

We have a MySQL table with more than 7,000,000 (yes, seven million) rows.
We are constantly running large numbers of SELECT / INSERT / UPDATE queries, roughly every 5 seconds.
Is it a good idea to create MySQL indexes for that table? Could there be bad consequences, like data corruption or losing the MySQL service?
Little info:
MySQL version 5.1.56
Server CentOS
Table engines are MyISAM
MySQL CPU load is always between 200% and 400%
In general, indexes will improve the speed of SELECT operations and will slow down INSERT/UPDATE/DELETE operations, as both the base table and the indexes must be modified when a change occurs.
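For example (a hypothetical table and column; the assumption is that customer_id appears frequently in WHERE clauses):
-- speeds up SELECT ... WHERE customer_id = ...
-- but adds work to every INSERT/UPDATE/DELETE on orders
CREATE INDEX idx_orders_customer ON orders (customer_id);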
It is very difficult to say. I would expect that building the indexes will itself take some time, but after that you should see some improvement. As said by @Joe and @Patrick, indexes might hurt your modification time, but selecting will be faster.
Of course, there are other ways of improving insert and update performance. You could, for example, batch updates if it is not important for changes to be visible immediately.
The indexes will help dramatically with selects, especially if they match up well with the commonly filtered fields and you have a good, simple primary key. They will help with both query time and processing cycles.
The drawback is if you very often update/alter/delete these records, especially the indexed fields. Even then, though, it is often worth it.
How much you will be reporting (SELECT statements) versus updating hugely affects both your initial design and your later adjustments once your DB is in the wild. Since you already have what you have, testing will give you the answers you need. If you really do run a lot of SELECT queries and a lot of updates, your solution might be to copy data out now and then to a reporting table. Then you can index like crazy with no ill effects.
You have actually asked a large question, and you should study up on this more. The general points above hold for almost all relational DBs, but each database (MySQL in your case) has its own particular behavior, mainly in how it decides when and where to use indexes.
If you are looking for performance, indexes are the way to go. Indexes speed up your queries. With 7 million records, your queries are probably taking many seconds, possibly a minute, depending on your memory size.
Generally speaking, I would create indexes that match the most frequent SELECT statements. Everyone talks about the negative impact of indexes on table size and write speed, but I would ignore those impacts unless you have a table where 95% of the activity is inserts and updates. Even then, if those inserts happen at night and you query during the day, create those indexes; your daytime users will appreciate it.
What is the actual time impact of an additional index on an INSERT or UPDATE statement? 0.001 seconds, maybe? If the index saves you many seconds on each query, the additional time required to maintain it is well worth it.
The only time I ever had an issue with creating an index (it actually broke the program logic) was when we added a primary key to a table that had been created (by someone else) without one, and the program expected SELECT statements to return records in the order they were created. Adding the primary key changed that: records selected without a WHERE clause came back in a different order.
That was obviously a wrong design in the first place. Nevertheless, if you have an older program and you encounter tables without a primary key, I suggest looking at the code that reads those tables before adding one, just in case.
One last thought about creating indexes: the choice of fields and the order in which they appear in the index affect the index's performance.
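A hypothetical sketch of why column order matters (table and column names are made up):
-- serves: WHERE status = ? AND created_at > ?
-- also serves: WHERE status = ?          (leftmost prefix)
-- does NOT serve: WHERE created_at > ?   (status is skipped)
CREATE INDEX idx_status_created ON orders (status, created_at);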
I had the same kind of problem that you describe.
I made a few changes, and one query went from 11 seconds to a few milliseconds:
1- Upgraded to MariaDB 10.1
2- Changed ALL my databases to the Aria engine
3- Changed my.cnf to the strict minimum
4- Upgraded to PHP 7.1 (but this one had little impact)
5- With CentOS: ran "yum update" in the terminal or via SSH (keeping everything up to date)
Why these help:
1- MariaDB is the open-source fork of MySQL
2- The Aria engine is the evolution of MyISAM
3- my.cnf files usually carry too many changed options that hurt performance
Here is an example:
[mysqld]
performance-schema=1
general_log=0
slow_query_log=0
max_allowed_packet=268435456
Removing all the extra options from my.cnf tells MySQL to use its default values.
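To verify which values are actually in effect after trimming my.cnf, you can inspect them at runtime, for example:
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';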
In MySQL 5 (5.1, 5.5, 5.6...), when I did that, I only noticed a small difference.
But in MariaDB, a small my.cnf like this made a BIG difference.
Throughout all of those changes, the server hardware remained the same.
Hope it helps.

Inserting a new column in MySQL taking too long

We have a huge database and inserting a new column is taking too long. Anyway to speed up things?
Unfortunately, there's probably not much you can do. When adding a new column, MySQL makes a copy of the table with the new structure and copies the data into it. You may find it faster to do:
CREATE TABLE new_table LIKE old_table;
ALTER TABLE new_table ADD COLUMN (column definition);
INSERT INTO new_table (old columns) SELECT * FROM old_table;
RENAME TABLE old_table TO tmp, new_table TO old_table;
DROP TABLE tmp;
This hasn't been my experience, but I've heard others have had success with it. You could also try disabling indexes on new_table before the insert and re-enabling them afterwards. Note that in this case you need to be careful not to lose any data inserted into old_table during the transition; see the sketch below.
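One hedged way to catch up, assuming an AUTO_INCREMENT primary key named id and append-only writes (both assumptions; fill in the column list as in the INSERT above):
-- copy rows that arrived in old_table after the bulk INSERT
INSERT INTO new_table (old columns)
SELECT * FROM old_table
WHERE id > (SELECT COALESCE(MAX(id), 0) FROM new_table);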
Alternatively, if your concern is impacting users during the change, check out pt-online-schema-change, which makes clever use of triggers to execute ALTER TABLE statements while keeping the table being modified available. (Note that this won't speed up the process, however.)
There are four main things that you can do to make this faster:
If using innodb_file_per_table the original table may be highly fragmented in the filesystem, so you can try defragmenting it first.
Make the buffer pool as big as sensible, so more of the data, particularly the secondary indexes, fits in it.
Make innodb_io_capacity high enough, perhaps higher than usual, so that insert buffer merging and flushing of modified pages will happen more quickly. Requires MySQL 5.1 with InnoDB plugin or 5.5 and later.
MySQL 5.1 with the InnoDB plugin and MySQL 5.5 and later support fast ALTER TABLE. One of the things it makes much faster is adding or rebuilding indexes that are neither unique nor part of a foreign key. So you can do this (see the sketch after these steps):
A. ALTER TABLE to ADD your column and DROP the non-unique indexes that aren't in FKs.
B. ALTER TABLE to ADD back your non-unique, non-FK indexes.
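A hedged sketch of those two steps (index and column names are made up for illustration):
-- Step A: add the column, drop non-unique / non-FK secondary indexes
ALTER TABLE mytable
  ADD COLUMN some_column CHAR(1) NOT NULL DEFAULT 'N',
  DROP INDEX idx_foo,
  DROP INDEX idx_bar;
-- Step B: rebuild the dropped indexes (built by sort, so faster and denser)
ALTER TABLE mytable
  ADD INDEX idx_foo (foo),
  ADD INDEX idx_bar (bar);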
This should provide these benefits:
a. Less use of the buffer pool during step A, because it only needs to hold the remaining indexes, the ones that are unique or in FKs. Indexes are updated in random order during this step, so performance becomes much worse if they don't fully fit in the buffer pool; dropping the others gives your rebuild a better chance of staying fast.
b. The fast ALTER TABLE rebuilds each index by sorting the entries and then building the index. This is faster and also produces an index with a higher page fill factor, so it will be smaller and faster to begin with.
The main disadvantage is that this takes two steps, and after the first one you won't have some indexes that may be required for good performance. If that is a problem, you can try the copy-to-a-new-table approach, using just the unique and FK indexes at first for the new table, then adding the non-unique ones later.
It's only in MySQL 5.6, but the feature request in http://bugs.mysql.com/bug.php?id=59214 increases the speed with which insert buffer changes are flushed to disk and limits how much space the insert buffer can take in the buffer pool. (The insert buffer is used to cache changes to secondary index pages.) This can be a performance limit for big jobs.
We know that this is still frustratingly slow sometimes, and that a true online ALTER TABLE is very highly desirable.
This is my personal opinion. For an official Oracle view, contact an Oracle public relations person.
James Day, MySQL Senior Principal Support Engineer, Oracle
Usually a slow new-column operation means that there are many indexes, so I would suggest reconsidering your indexing.
Michael's solution may speed things up a bit, but perhaps you should have a look at the database and try to break the big table into smaller ones. Take a look at this: link. Normalizing your database tables may save you loads of time in the future.

MySQL ALTER TABLE on very large table - is it safe to run it?

I have a MySQL database with a MyISAM table with 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I then alter the table like this:
ALTER TABLE x ORDER BY PK DESC
I order the table by the primary key field, in descending order. This has not given me any problems on my development machine (Windows with 3 GB of memory). I have run it successfully three times on the production Linux server (512 MB RAM), getting the sorted table in about 6 minutes each time, but the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup.
Can a 512 MB server cope with that ALTER statement on such a large table? I have read that a temporary table is created to perform the ALTER TABLE command.
Question: can this ALTER command be run safely? What should the expected time for the alteration be?
As I have just read, the ALTER TABLE ... ORDER BY ... query can improve performance in certain scenarios. I am surprised that the PK index does not already help with this; in fact, from the MySQL docs it seems that InnoDB does use the index. However, InnoDB tends to be slower than MyISAM. That said, with InnoDB you wouldn't need to re-order the table, though you would lose the blazing speed of MyISAM. It still may be worth a shot.
The way you describe the problems, it seems that too much data is being loaded into memory (maybe there is even swapping going on?). You could easily check that by monitoring your memory usage. It's hard to say, as I do not know MySQL all that well.
On the other hand, I think your problem lies somewhere else entirely: you are using a machine with only 512 MB of RAM as a database server, with a table containing more than 4 million rows, and you are performing a very memory-heavy operation on the whole table on that machine. 512 MB will be nowhere near enough for that.
A more fundamental issue: you are doing development (and quite likely testing as well) in an environment that is very different from production. The kind of problem you describe is to be expected; your development machine has six times as much memory as your production machine, and I can safely say the processor is much faster as well. I suggest you create a virtual machine mimicking your production site. That way you can easily test your project without disrupting the production site.
What you're asking it to do is rebuild the entire table and all its indexes. This is an expensive operation, and it becomes vastly slower if the data doesn't fit in RAM, particularly if you have lots of indexes. It will complete eventually.
I question your judgement when choosing to run a machine with such tiny memory in production. Anyway:
Is this ALTER TABLE really necessary? What specific query are you trying to speed up, and have you tried it without the reordering?
Have you considered making your development machine more like production? Using a dev box with MORE memory than production is never a good idea, and using a different OS definitely isn't either.
There is probably also some tuning you can do to try to help; it largely depends on your schema (indexes in particular). 4M rows is not very many (for a machine with normal amounts of ram).
Is the primary key AUTO_INCREMENT? If so, then ALTER TABLE ... ORDER BY isn't going to improve anything, since everything is inserted in order anyway (unless you have lots of deletes).
I'd probably create a view instead, ordered by the PK value, so that for one thing you don't need to lock up that huge table while the ALTER is performed.
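A minimal sketch of that idea (pk stands in for the question's primary key column; the view name is made up):
-- presents rows in descending PK order without rebuilding the table
CREATE VIEW x_desc AS
  SELECT * FROM x ORDER BY pk DESC;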
If you're using InnoDB, you shouldn't have to explicitly perform the ORDER BY either post-insert or at query time. According to the MySQL 5.0 manual, InnoDB already defaults to primary key ordering for query results:
http://dev.mysql.com/doc/refman/5.0/en/alter-table.html#id4052480
MyISAM tables, by contrast, return records in insertion order by default, which may work just as well if you only ever append to the table rather than modifying rows in place with UPDATE queries.