We're running a social networking site that logs every member's actions (including visits to other members' pages); this involves a lot of writes to the db. These actions are stored in a MyISAM table, and since something is starting to tax the CPU, my first thought was that it's MyISAM's table locking that is causing this stress on the CPU.
There are only reads and writes, no updates to this table. I think the balance between reads and writes is about 50/50 for this table; would InnoDB therefore be a better option?
If I want to change the table to InnoDB and we don't use foreign key constraints, transactions or fulltext indexes - do I need to worry about anything?
Notwithstanding any benefits/drawbacks of its use, which are discussed in other threads (MyISAM versus InnoDB), migration is a nontrivial process.
Consider:
Functionally testing all components which talk to the database if possible - different engines have different semantics
Running as much performance testing as you can - some things may improve, others may be much worse. A well-known example is SELECT COUNT(*) on a large table.
Checking that all your code will handle deadlocks gracefully - you can get them without explicit use of transactions
Estimating how much the space usage will change by converting - test this in a non-production environment.
You will doubtless need to change things in a large software platform; this is OK, but given that you (hopefully) have a lot of automated test coverage, change should be acceptable.
PS: If "something is starting to tax the CPU", then you should a) find out what, in a non-production environment, and b) try various options to reduce it, also in a non-production environment. You should not blindly start doing major things like changing database engines when you haven't fully analysed the problem.
All performance testing should be done in a non-production environment, with production-like data and on production-grade hardware. Otherwise it is difficult to interpret results correctly.
With regards to other potential migration problems:
1) Space - InnoDB tables often require more disk space, though the Barracuda file format in newer versions of InnoDB has narrowed the difference. You can get a sense for this by converting a recent backup of the tables and comparing the sizes. Use "show table status" to compare the data length (see the sketch after this list).
2) Full text search - only on MyISAM
3) GIS/Spatial datatypes - only on MyISAM
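A minimal sketch of that size check (the table name here is just an example), run against a converted copy in a test environment:

    -- Compare the MyISAM original and the InnoDB copy of the
    -- hypothetical 'member_actions' table.
    SHOW TABLE STATUS LIKE 'member_actions';
    -- Look at the Data_length and Index_length columns for each engine.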
On performance, as the other answers and the referenced answer indicate, it depends on your workload. MyISAM is much faster for full table scans. InnoDB tends to be much faster for highly concurrent access. InnoDB can also be much faster if your lookups are based on the primary key.
Another performance issue is that MyISAM can always keep a row count, since it only does table level locking. So, if you're frequently trying to get the row count for a very large table, it may be much slower with InnoDB. Search the Internet if you need a workaround for this, as I've seen several proposed.
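One commonly proposed workaround, sketched here with hypothetical table names, is to maintain the count yourself in a tiny InnoDB table; note that this serializes inserts on the counter row, so measure before adopting it:

    -- Seed a one-row counter table from the current count.
    CREATE TABLE actions_count (total BIGINT UNSIGNED NOT NULL) ENGINE=InnoDB;
    INSERT INTO actions_count SELECT COUNT(*) FROM actions;

    -- Keep it up to date on every insert (a matching DELETE trigger
    -- would be needed if rows are ever removed).
    CREATE TRIGGER actions_count_ai AFTER INSERT ON actions
    FOR EACH ROW UPDATE actions_count SET total = total + 1;

    -- Cheap "row count" from then on:
    SELECT total FROM actions_count;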
Depending on the size of the table(s), you may also need to update your MySQL config file. At the very least, you may want to shift bytes from key_buffer to innodb_buffer_pool_size. You won't get a fair comparison if you leave the database as being optimized for MyISAM. Read up on all the innodb_* configuration properties.
I think it's quite possible that switching to InnoDB would improve performance, but in my experience, you can't really be sure until you try it. If I were you, I would set up a test environment on the same server, convert to InnoDB and run a benchmark.
In my experience, MyISAM tables are only useful for full-text indexing, where you need good performance for searches on large text but still don't need a full-fledged search engine like Solr or ElasticSearch.
If you want to switch to InnoDB but want to keep indexing your text in a MyISAM table, I suggest you take a look at this: http://blog.lavoie.sl/2013/05/converting-myisam-to-innodb-keeping-fulltext.html
Also: InnoDB supports live atomic backups using innobackupex from Percona. This is a godsend when dealing with production servers.
Related
So, one of my tables in MySQL, which uses the InnoDB storage engine, will contain multiple billions of rows (with potentially no limit to how many will be inserted).
Can you tell me what sort of optimizations I can do to help speed things up?
Because with a few million rows already, it will start getting slow.
Of course, if you want to suggest something else: the only options I have are PostgreSQL and SQLite3, but I've been told that SQLite3 is not a good choice for this.
As for PostgreSQL, I have absolutely no idea what it's like, as I've never used it.
I expect, though, at least about 1,000-1,500 inserts per second into that table.
A simple answer to your question would be: yes, InnoDB would be the perfect choice for a multi-billion-row data set.
There is a host of optimizations that are possible.
The most obvious optimization would be setting a large buffer pool, as the buffer pool is the single most important thing when it comes to InnoDB: it buffers the data as well as the indexes. If you have a dedicated MySQL server with only InnoDB tables, then you should set up to 80% of the available RAM to be used by InnoDB.
Another very important optimization is having proper indexes on the table (keeping in mind the data access/update pattern), both primary and secondary. (Remember that the primary key columns are automatically appended to every secondary index.)
With InnoDB there are some extra goodies, such as protection from data corruption, auto-recovery etc.
As for increasing write performance, you should set up your transaction log files to total up to 4G.
One other thing that you can do is partition the table.
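For illustration only (table and column names are hypothetical), a range-partitioned log table so that inserts and recent-data queries touch only a small partition:

    CREATE TABLE events (
      id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
      created_at DATETIME NOT NULL,
      payload    VARBINARY(255),
      -- the partitioning column must be part of every unique key
      PRIMARY KEY (id, created_at)
    ) ENGINE=InnoDB
    PARTITION BY RANGE (TO_DAYS(created_at)) (
      PARTITION p201301 VALUES LESS THAN (TO_DAYS('2013-02-01')),
      PARTITION p201302 VALUES LESS THAN (TO_DAYS('2013-03-01')),
      PARTITION pmax    VALUES LESS THAN MAXVALUE
    );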
You can eke out more performance by setting binlog_format to "row" and setting innodb_autoinc_lock_mode to 2 (which ensures that InnoDB does not hold table-level locks when inserting into auto-increment columns).
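Putting those settings together, a rough my.cnf sketch for a hypothetical dedicated 16G InnoDB server (all figures are assumptions to adapt to your own hardware and MySQL version):

    [mysqld]
    innodb_buffer_pool_size   = 12G   # roughly 75-80% of RAM on a dedicated box
    innodb_log_file_size      = 1G    # two files...
    innodb_log_files_in_group = 2     # ...for a combined redo log of 2G (up to 4G total)
    binlog_format             = ROW
    innodb_autoinc_lock_mode  = 2     # interleaved mode: no table-level AUTO_INCREMENT locks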
If you need any specific advice you can contact me, I would be more than willing to help.
Optimizations:
Take care not to have too many indexes. They are expensive when inserting
Make your datatypes fit your data, as tightly as you can (so don't go saving IP addresses in a TEXT or a BLOB, if you know what I mean). Look into VARCHAR vs. CHAR. Don't forget that because VARCHAR is more flexible, you are trading away some things; if you know a lot about your data it might help to use CHAR, or it might clearly be better to use VARCHAR. There is a sketch at the end of this answer.
Do you read at all from this table? If so, you might want to do all the reading from a replicated slave, although your connection should be good enough for that amount of data.
If you have big inserts (aside from the number of inserts), make sure your IO is actually quick enough to handle the load.
I don't think there is any reason MySQL wouldn't support this. Things that can slow you down as you go from "thousands" to "millions" to "billions" are things like the aforementioned indexes. There is, as far as I know, no "MySQL is full" problem.
Look into partial indexes. From Wikipedia (quickest source I could find, didn't check the references, but I'm sure you can manage):
MySQL as of version 5.4 does not support partial indexes.[3] In MySQL, the term "partial index" is sometimes used to refer to prefix indexes, where only a truncated prefix of each value is stored in the index. This is another technique for reducing index size.[4]
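To tie the datatype and prefix-index points together, a sketch with a hypothetical table (the 32-character prefix length is an arbitrary example):

    -- Tight datatypes: store IPv4 addresses as INT UNSIGNED, not as text.
    CREATE TABLE visits (
      id  BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      ip  INT UNSIGNED NOT NULL,
      url VARCHAR(500) NOT NULL
    ) ENGINE=InnoDB;

    -- A prefix index: only the first 32 characters of url go into the index.
    CREATE INDEX idx_visits_url ON visits (url(32));

    INSERT INTO visits (ip, url)
    VALUES (INET_ATON('192.0.2.10'), 'http://example.com/some/page');

    SELECT INET_NTOA(ip), url FROM visits WHERE url LIKE 'http://example.com/%';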
No idea on the MySQL/InnoDB part (I'd assume it'll cope). But if you end up looking at alternatives, PostgreSQL can manage a DB of unlimited size on paper. (At least one 32TB database exists according to the FAQ.)
"Can you tell me what sort of optimizations I can do to help speed up things?"
Your mileage will vary depending on your application. But with billions of rows, you're at least looking into partitioning your data in order to work on smaller tables.
In the case of PostgreSQL, you'd also look into creating partial indexes where appropriate.
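A partial index in PostgreSQL, sketched with hypothetical names, indexes only the rows a hot query actually touches:

    -- Only unprocessed rows are indexed, keeping the index small
    -- even when the table holds billions of rows.
    CREATE INDEX idx_events_unprocessed
        ON events (created_at)
        WHERE processed = false;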
You may want to have a look at:
http://www.mysqlperformanceblog.com/2006/06/09/why-mysql-could-be-slow-with-large-tables/
http://forums.whirlpool.net.au/archive/954126
If you have a very large table (billions of records) and need to data mine the table (queries that read lots of data), MySQL can slow to a crawl.
Large databases (200+ GB) are fine, but they are bound by I/O, temp tables spilling to disk, and multiple other issues when attempting to read large groups of data that don't fit in memory.
I am creating an application which will store a (semi) real-time feed from a few different scales around a certain location. The weight from each scale will be put in a table with only as many rows as there are scales. The scale app feeds the MySQL database a new weight every second, which a PHP web app reads every 3 seconds. It doesn't seem like enough traffic to hit the disk very hard, and the difference may be negligible, but I'm wondering whether it would be more efficient or make more sense to use a Memory/HEAP table vs. a normal MyISAM table.
With anything from hundreds to thousands of concurrent read/write requests (think typical OLTP usage), InnoDB will outperform MyISAM hands down.
It's not about other people's observations, and it's not about transactional/ACID support; it's about the architecture of InnoDB, which is far superior to that of the legacy MyISAM engine.
For example, InnoDB supports clustered primary key indexes: http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html.
Additionally, InnoDB has row-level locking, which is far more performant under concurrent load than MyISAM's table-level locking.
I could keep going, but someone's already provided a really good summary of why InnoDB is a better choice for OLTP: http://tag1consulting.com/MySQL_Engines_MyISAM_vs_InnoDB
Well, if you're expecting a large amount of data, I think you almost have to go with MyISAM. You'll likely run out of memory if you store it all in a memory table. Not to mention that you'll lose all of your data upon power loss with the HEAP engine (keep in mind, you may actually want that, depending on your use case)...
I know that this question is getting dated and you've probably arrived at a very good solution by now, but I just wanted to point out to anyone who may be reading this that perhaps a relational database isn't the best way to solve this problem. To me this clearly looks like a case where a flat-file database is the ideal solution. You could have saved yourself a ton of overhead by just writing these values out to a binary file and then using simple mathematical operations to select rows and fields.
I ran a lookup test against an indexed MySQL table containing 20,000,000 records, and according to my results, it takes 0.004 seconds to retrieve a record given an id--even when joining against another table containing 4,000 records. This was on a 3GHz dual-core machine, with only one user (me) accessing the database. Writes were also fast, as this table took under ten minutes to create all 20,000,000 records.
Assuming my test was accurate, can I expect performance to be as snappy on a production server with, say, 200 users concurrently reading from and writing to this table?
I assume InnoDB would be best?
That depends on the storage engine you're going to use and what the read/write ratio is.
InnoDB will be better if there are a lot of writes. If it's mostly reads with the very occasional write, MyISAM might be faster. MyISAM uses table-level locking, so it locks up the whole table whenever you need to update. InnoDB uses row-level locking, so you can have concurrent updates on different rows.
InnoDB is definitely safer, so I'd stick with it anyhow.
BTW, remember that RAM is very cheap right now, so buy a lot.
Depends on any number of factors:
Server hardware (Especially RAM)
Server configuration
Data size
Number of indexes and index size
Storage engine
Writer/reader ratio
I wouldn't expect it to scale that well. More importantly, this kind of thing is too important to speculate about. Benchmark it and see for yourself.
Regarding the storage engine, I wouldn't dare to use anything but InnoDB for a table of that size that is both read from and written to. With MyISAM, any write query that isn't a primitive insert or a single-row update will end up locking the whole table, which yields terrible performance.
There's no reason that MySQL couldn't handle that kind of load without any significant issues. There are a number of other variables involved though (otherwise, it's a 'how long is a piece of string' question). Personally, I've had a number of tables in various databases that are well beyond that range.
How large is each record (on average)?
How much RAM does the database server have, and how much of it is allocated via the various MySQL/InnoDB configuration settings?
A default configuration may only allow for a default 8MB buffer between disk and client (which might work fine for a single user), but trying to fit a 6GB+ database through that is doomed to failure. That problem was real, by the way, and was causing several crashes a day of a database/website until I was brought in to troubleshoot it.
If you are likely to do a great deal more with that database, I'd recommend getting someone with a little more experience, or at least doing what you can to give it some optimisation. Reading 'High Performance MySQL, 2nd Edition' is a good start, as is looking at some tools like Maatkit.
As long as your schema design and DAL are constructed well enough, you understand query optimization inside out, can adjust all the server configuration settings at a professional level, and have "enough" hardware properly configured, yes (except for sufficiently pathological cases).
Same answer for both engines.
You should probably perform a load test to verify, but as long as the indexes were created properly (meaning they are optimized for your query statements), the SELECT queries should perform at an acceptable speed; the INSERTs and/or UPDATEs may be more of a speed issue, though, depending on how many indexes you have and how large the indexes get.
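Before the load test, it's cheap to confirm the plan with EXPLAIN (the query and names here are hypothetical):

    EXPLAIN SELECT * FROM records WHERE record_id = 12345;
    -- The "key" column should show the expected index, and "rows"
    -- should be close to 1 for a primary-key or unique lookup.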
It seems like I will be needing transactions with MySQL, and I have no idea how I should manage transactions in MySQL with mixed InnoDB/MyISAM tables. It all seems like a huge mess.
You might ask why I would ever want to mix the tables together... the answer is PERFORMANCE. As many developers have noticed, InnoDB tables generally have worse performance, but in return give a higher isolation level etc...
Does anyone have any advice regarding this issue?
I think you are overrating the performance difference between MyISAM and InnoDB. MyISAM is faster in data warehousing situations (such as full table scan reporting, etc..), but InnoDB can actually be faster in many cases with normal OLTP queries.
InnoDB is harder to tune since it has more knobs, but a properly tuned InnoDB system can often have higher throughput than MyISAM due to better locking and better I/O patterns.
Given that you can't have transactions in MyISAM tables, I am not sure what the actual problem is. Any data you need transactions for must be in an InnoDB table and you manage the transactions using whatever access library you are using or with manual SQL commands.
There are definite performance benefits of using exactly one engine.
A server tuned for one engine won't be tuned for the other - both require that you allocate a substantial amount of RAM for their exclusive use - therefore, you can't give them both an optimal amount.
Say you have 8G of RAM on your (obviously 64-bit, but still relatively small) database server: you might want to assign about 3/4 of it to your InnoDB buffer pool. Alternatively, if you're using MyISAM, you may want about half of it to be your key_buffer. You can't do both.
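As a rough sketch of that trade-off in my.cnf terms (the numbers are assumptions for the 8G box described above, not recommendations):

    # Mostly-InnoDB server:
    innodb_buffer_pool_size = 6G    # ~3/4 of RAM
    key_buffer_size         = 64M   # just enough for any leftover MyISAM tables

    # Mostly-MyISAM server:
    key_buffer_size         = 4G    # ~half of RAM; the OS cache holds the data files
    innodb_buffer_pool_size = 64M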
Pick an engine and use it exclusively. There are ways of getting around performance problems - most of them aren't easy though (i.e. they require redesigning your data structure or your application).
The short answer is that there is no transaction support in MyISAM. If you start a transaction, add or modify data in some InnoDB tables, add or modify data in a MyISAM table, and then have to roll back, your MyISAM change cannot be undone. To support mixed engines like that, your application has to know that changes to whatever data is stored in MyISAM happen "outside" of the transaction.
If you need transactions for some processes, then isolate the data that must be transactional and put all of that data in InnoDB.
One of my projects uses the MyISAM engine in MySQL, but I'm considering changing it to InnoDB as I need transaction support here and there.
What should I look at or consider before doing this?
Can I just change the engine, or should the data be prepared for it?
Yes, absolutely; there are many things, and you should test your application extremely thoroughly:
Transactions can deadlock and need to be repeated. This is the case (in some circumstances) even with an autocommitted transaction which only inserts one row. (See the sketch after this list.)
Disc usage will almost certainly increase
I/O load during writes will almost certainly increase
Behaviour of indexing will change because InnoDB uses clustered indexes - this may be a beneficial effect in some cases
Your backup strategy will be impacted. Consider this carefully.
The migration process itself will need to be carefully planned, as it will take a long time if you have a lot of data (during which time the data will be either readonly, or completely unavailable - do check!)
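On the first point, a minimal sketch of what "handling" a deadlock means (the table and values are hypothetical; 1213 is MySQL's ER_LOCK_DEADLOCK error code):

    START TRANSACTION;
    -- Either statement may fail with error 1213 (ER_LOCK_DEADLOCK);
    -- InnoDB then rolls the whole transaction back, and the application
    -- should catch that error and simply re-run the transaction.
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    UPDATE accounts SET balance = balance + 10 WHERE id = 2;
    COMMIT;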
There is one big caveat. If you get any kind of hardware failure (or similar) during a write, InnoDB will corrupt tables.
MyISAM will also, but a mysqlcheck --auto-repair will repair them. Trying this with InnoDB tables will fail. Yes, this is from experience.
This means you need to have a good regular data backup plan to use InnoDB.
Some other notes:
InnoDB does not give free space back to the filesystem after you drop a table/database or delete records; this can be solved by "dumping and importing" or by setting innodb_file_per_table=1 in my.cnf (see the sketch after these notes).
Adding/removing indexes on a large InnoDB table can be quite painful, because it locks the current table, creates a temporary one with your altered indexes and inserts the data row by row. There is a plugin from Innobase, but it works only with MySQL 5.1.
InnoDB is also MUCH more memory-intensive; I suggest you make the innodb_buffer_pool_size variable as large as your server memory allows (70-80% should be a safe bet). If your server is UNIX/Linux, consider reducing the sysctl variable vm.swappiness to 0 and using innodb_flush_method=O_DIRECT to avoid double buffering. Always test whether you hit swap when toggling those values. You can always read more on the Percona blog, which is great.
Also, you can run mysqlbackup with --single-transaction --skip-lock-tables and have no table locks while the backup is running.
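On the free-space point, a sketch (the table name is hypothetical; innodb_file_per_table only affects tables created or rebuilt after it is enabled):

    -- In my.cnf: innodb_file_per_table = 1
    -- Then rebuild an existing table into its own .ibd file; dropping or
    -- rebuilding it later will return the space to the filesystem.
    ALTER TABLE big_log ENGINE=InnoDB;
    -- Note: the shared ibdata file itself never shrinks; reclaiming that
    -- space requires the dump-and-import route mentioned above.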
In any case, InnoDB is great, do not let some pitfalls discourage you.
Just altering the table and setting the engine should be fine.
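In its simplest form (the table name is just an example), the conversion is a single statement, though it rebuilds the whole table and can take a long time on big tables:

    ALTER TABLE my_table ENGINE=InnoDB;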
One of the big ones to watch out for is that select count(*) from MyTable is much slower in InnoDB than MyISAM.
auto_increment values will reset to the highest value in the table +1 after a server restart -- this can cause funny problems if you have a messy db with some deletes.
Optimum server settings are going to be different from those for a mainly MyISAM db.
Make sure the InnoDB tablespace file is big enough to hold all your data, or you'll be crucified by constant reallocation when you change the engines of the tables.
If you are intending to use InnoDB as a way to get concurrent queries, then you will want to set innodb_flush_log_at_trx_commit=1 to get some performance back. OTOH, if you are looking to re-code your application to be transaction-aware, then deciding this setting will be part of the general performance review needed of the InnoDB settings.
The other major thing to watch out for is that InnoDB does not support FullText indices, nor INSERT DELAYED. But then, MyISAM doesn't support referential integrity. :-)
However, you can move over only the tables that need to be transaction-aware. I've done this. Incidentally, small tables (up to several thousand rows) can often be changed on the fly.
The performance characteristics can be different, so you may need to keep an eye on the load.
The data will be fine.