How do I make a MySQL database run completely in memory?

I noticed that my database server supports the MEMORY storage engine. I want to take a database I have already built on InnoDB and make it run completely in memory for performance.
How do I do that? I explored phpMyAdmin, and I can't find a "change engine" functionality.

Assuming you understand the consequences of using the MEMORY engine, as mentioned in the comments and in other discussions you'll find by searching (no transaction safety, locking issues, loss of data on restart, etc.) - you can proceed as follows:
MEMORY tables are stored differently from InnoDB tables, so you'll need to use an export/import strategy. First dump each table separately to a file using SELECT * FROM tablename INTO OUTFILE 'table_filename'. Then create the MEMORY database and recreate each table you'll be using with this syntax: CREATE TABLE tablename (...) ENGINE = MEMORY;. Finally, import your data using LOAD DATA INFILE 'table_filename' INTO TABLE tablename for each table.
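A minimal sketch of that round trip, assuming a hypothetical table mytable with two columns (the OUTFILE path must be writable by the server, and the file must not already exist):

SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';

-- recreate the table definition under the MEMORY engine
-- (note: MEMORY does not support TEXT/BLOB columns)
CREATE TABLE mytable_mem (
  id INT NOT NULL,
  name VARCHAR(255),
  PRIMARY KEY (id)
) ENGINE = MEMORY;

LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable_mem;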

It is also possible to place the MySQL data directory on a tmpfs, thus speeding up database reads and writes. It might not be the most efficient way to do this, but sometimes you can't just change the storage engine.
Here is my fstab entry for my MySQL data directory:
none /opt/mysql/server-5.6/data tmpfs defaults,size=1000M,uid=999,gid=1000,mode=0700 0 0
You may also want to take a look at the innodb_flush_log_at_trx_commit=2 setting. Maybe this will speed up your MySQL sufficiently.
innodb_flush_log_at_trx_commit controls InnoDB's disk-flush behaviour. When set to 2, the log is written at each commit but flushed to disk only about once per second. With the default value of 1, every transaction commit forces a flush to disk, which causes much more I/O load.
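For reference, a minimal my.cnf sketch of that setting (placement under [mysqld] is standard; with a value of 2 you can lose up to about a second of committed transactions on a crash):

[mysqld]
# write the InnoDB log at each commit, but flush it to disk only ~once per second
innodb_flush_log_at_trx_commit = 2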

Memory Engine is not the solution you're looking for. You lose everything that you went to a database for in the first place (i.e. ACID).
Here are some better alternatives:
Don't use joins - very few large apps do this (e.g. Google, Flickr, Netflix), because they suck on large data sets.
"A LEFT [OUTER] JOIN can be faster than an equivalent subquery because the server might be able to optimize it better—a fact that is not specific to MySQL Server alone."
- The MySQL Manual
Make sure the columns you're querying against have indexes, and use EXPLAIN to confirm they are actually being used (see the sketch after this list).
Use (and increase) your query cache, and give your indexes enough memory so they stay in RAM and frequent lookups are served from memory.
Denormalize your schema, especially for simple joins (e.g. get fooId from barMap).
The last point is key. I used to love joins, but then had to run joins on a few tables with 100M+ rows. No good. You're better off inserting the data you're joining against into the target table (if it's not too much) and querying against indexed columns - you'll get your query back in a few ms.
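As a sketch of the EXPLAIN check mentioned above (orders and customer_id are hypothetical names):

CREATE INDEX idx_customer ON orders (customer_id);

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- in the output, key should show idx_customer and type should be ref;
-- type = ALL means a full table scan, i.e. the index is not being used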
I hope those help.

If your database is small enough (or if you add enough memory), your database will effectively run in memory, since your data will be cached after the first request.
Changing the database table definitions to use the memory engine is probably more complicated than you need.
If you have enough memory to load the tables into memory with the MEMORY engine, you have enough to tune the innodb settings to cache everything anyway.
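A minimal sketch of that tuning in my.cnf (the 4G figure is an assumption; size it to hold your working set):

[mysqld]
# let InnoDB cache all data and indexes in memory
innodb_buffer_pool_size = 4G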

"How do I do that? I explored PHPMyAdmin, and I can't find a "change engine" functionality."
In direct response to this part of your question: you can issue an ALTER TABLE tbl ENGINE=InnoDB; (substituting MEMORY, or whichever engine you need) and it'll recreate the table in that engine.
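To check which engine each table currently uses before altering it, something like this works (mydb is a placeholder):

SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';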

In place of the MEMORY storage engine, one can consider MySQL Cluster. It is said to give similar performance while supporting disk-backed operation for durability. I've not tried it, but it looks promising (and it has been in development for a number of years).
You can find the official MySQL Cluster documentation here.

Additional thoughts :
Ramdisk - point the temp directory MySQL uses at a RAM disk; very easy to set up.
memcache - a memcache server is easy to set up; use it to store the results of your queries for X amount of time.

Related

How can I limit the size of temporary tables?

I have largish (InnoDB) tables in a database; apparently the users are capable of making SELECTs with JOINs that result in temporary, large (and thus on-disk) tables. Sometimes, those are so large that they exhaust disk space, leading to all sorts of weird issues.
Is there a way to limit temp table maximum size for an on-disk table, so that the table doesn't overgrow the disk? tmp_table_size only applies to in-memory tables, despite the name. I haven't found anything relevant in the documentation.
There's no option for this in MariaDB and MySQL.
I ran into the same issue as you some months ago. I searched a lot, and I finally partially solved it by creating a special storage area on the NAS for temporary datasets.
Create a folder on your NAS or a partition on an internal HDD - it will be limited in size by definition - then mount it and, in the MySQL ini, assign the temporary storage to that drive (choose the Linux or Windows form):
tmpdir="mnt/DBtmp/"
tmpdir="T:\"
The MySQL service must be restarted after this change.
With this approach, once the drive is full you still get "weird issues" from on-disk queries, but the other issues are gone.
There was a discussion about an option disk-tmp-table-size, but it looks like the commit did not make it through review or got lost for some other reason (at least the option does not exist in the current code base anymore).
I guess your next best try (besides increasing storage) is to tune MySQL so it doesn't create on-disk temp tables in the first place. There are some tips for this on the DBA Stack Exchange site. Another attempt could be to create a ramdisk for the storage of the "on-disk" temp tables, if you have enough RAM and only lack disk storage.
While it does not answer the question for MySQL, MariaDB has a tmp_disk_table_size setting and a potentially also useful max_join_size setting. However, tmp_disk_table_size only applies to MyISAM or Aria tables, not to InnoDB. Also, max_join_size works only on the estimated row count of the join, not the actual row count. On the bright side, the error is issued almost immediately.
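A sketch of how those MariaDB settings could be applied at runtime, assuming your MariaDB version has them (the values are arbitrary examples):

-- cap on-disk temporary tables (MyISAM/Aria only) at 1 GiB
SET GLOBAL tmp_disk_table_size = 1073741824;
-- reject joins whose *estimated* result exceeds 10 million rows
SET GLOBAL max_join_size = 10000000;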

Storage engine for high volume of selects

Background
I am creating an API utilizing the Bible where I would like to eliminate as much of the database bottleneck as possible. My data is fairly denormalised to eliminate most unnecessary joins.
Information
Seeing as the text of the Bible doesn't change, I will be doing hardly any INSERT statements. The only time I will insert data is when I add a new translation, which will happen periodically, but I don't care about the speed here.
I will, however, be doing tons of SELECT statements.
I do not need any transactional, ACID-compliant features. My primary concern is speed.
The Question
What would the ideal MySQL storage engine be to fit these conditions?
I am aware of the basics of each engine (my guess would be that MyISAM is ideal), so I am looking for an answer that can be backed up with statistics or further reasoning demonstrating a deep knowledge of some of these engines.
Although using NoSQL may be better than a RDBMS, that is not the information I'm looking for.
The Bible is small in terms of file size and, as you said, doesn't change.
For the best performance on reads, consider MEMORY. It has the limitation that you can't use TEXT/BLOB columns, but provided your data is split into 65,533-char chunks you will be fine.
http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html
Using MEMORY also means that if power is lost or the server is restarted, all data is lost, so periodically writing to disk will be useful, and on restart you will need to populate the table again.
You will need extra RAM for this method compared with the others, though, as all tables are stored entirely in RAM.
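A sketch of what such a chunked table could look like (all names and the chunk size are made up; MEMORY rows are fixed-length, so oversized VARCHARs waste RAM):

CREATE TABLE verse_chunks (
  verse_id INT NOT NULL,
  chunk_no TINYINT NOT NULL,
  -- VARCHAR instead of TEXT, since MEMORY does not support TEXT/BLOB
  chunk VARCHAR(8000) NOT NULL,
  PRIMARY KEY (verse_id, chunk_no)
) ENGINE = MEMORY;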
From the question in the comments.
The docs say
"To populate a MEMORY table when the MySQL server starts, you can use the --init-file option. For example, you can put statements such as INSERT INTO ... SELECT or LOAD DATA INFILE into this file to load the table from a persistent data source. See Section 5.1.3, “Server Command Options”, and Section 13.2.6, “LOAD DATA INFILE Syntax”."
http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html#idp82809968
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_init-file
Again, you will need to keep this file up to date with any changes (you can use a mysqldump to maintain it).
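A minimal sketch of that setup, with assumed paths: point the server at an init file in my.cnf, and have that file reload the MEMORY table from a dump kept on disk.

# my.cnf
[mysqld]
init-file = /etc/mysql/load_memory_tables.sql

-- /etc/mysql/load_memory_tables.sql
LOAD DATA INFILE '/var/lib/mysql-dumps/verse_chunks.txt'
INTO TABLE bible.verse_chunks;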
InnoDB with good indexes, and maybe even good partitioning.
InnoDB is designed to perform better with multiple concurrent clients (i.e. more clients reading at the same time), whereas MyISAM is not built for that.
If the server is correctly configured, InnoDB will really blast away MyISAM on performance.

Can MySQL fall back to another table type if a temp memory table fills up?

When creating a temp table, I don't have a good way to estimate how much space it'll take up, so sometimes running a query like
CREATE TEMPORARY TABLE t_temp ENGINE=MEMORY
SELECT t.*
FROM `table_name` t
WHERE t.`column` = 'a';
results in the error "The table 't_temp' is full". I realize you can adjust your max_heap_table_size and tmp_table_size to allow for bigger tables, but that's not a great option because these tables can get quite large.
Ideally, I'd like it to fall back to a MyISAM table instead of just erroring out. Is there some way to specify that in the query or in the server settings? Or is the best solution really just to watch for errors and then try running the query again with a different table type? That's the only solution I can think of, besides just never using MEMORY tables if there's any doubt, but it seems wasteful of database resources and is going to create more code complexity.
I'm running MySQL v5.5.27, if that affects the answer.
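For reference, the fallback I have in mind could be expressed inside MySQL itself with a handler for error 1114 (ER_RECORD_FILE_FULL); this is only a rough sketch of that pattern, reusing the query above:

DELIMITER //
CREATE PROCEDURE create_t_temp()
BEGIN
  -- if the MEMORY table fills up (error 1114), rebuild it as MyISAM
  DECLARE CONTINUE HANDLER FOR 1114
  BEGIN
    DROP TEMPORARY TABLE IF EXISTS t_temp;
    CREATE TEMPORARY TABLE t_temp ENGINE=MyISAM
    SELECT t.* FROM `table_name` t WHERE t.`column` = 'a';
  END;
  CREATE TEMPORARY TABLE t_temp ENGINE=MEMORY
  SELECT t.* FROM `table_name` t WHERE t.`column` = 'a';
END//
DELIMITER ;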
The memory engine is just that: if you run out of RAM, you're done, unless you want to develop your own storage engine as @eggyal proposed.
With respect, there are probably better ways to optimize your system than mucking about with conditional memory tables. If I were you I'd just take ENGINE=MEMORY out of your code and move on to the next problem. MySQL is pretty good about caching tables and using the RAM it has effectively with the other storage engines.
MySQL Cluster offers the same features as the MEMORY engine with higher performance levels, and provides additional features not available with MEMORY:
...Optional disk-backed operation for data durability.
Source: MySQL 5.5 manual. http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html
Not sure if Cluster can be combined with a temp table, though.

PostgreSQL equivalent of MySQL memory tables?

Does PostgreSQL have an equivalent of MySQL memory tables?
These MySQL memory tables can persist across sessions (i.e., different from temporary tables which drop at the end of the session). I haven't been able to find anything with PostgreSQL that can do the same.
No, at the moment they don't exist in PostgreSQL. If you truly need a memory table you can create a RAM disk, add a tablespace for it, and create tables on it.
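A sketch of that approach, assuming a tmpfs is already mounted at /mnt/ramdisk and writable by the postgres user (everything in it vanishes on reboot, so treat the tables as disposable):

CREATE TABLESPACE ramspace LOCATION '/mnt/ramdisk';

CREATE TABLE hot_cache (
  key   text PRIMARY KEY,
  value text
) TABLESPACE ramspace;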
If you only need a table that is visible across different sessions, you can use an UNLOGGED table. These are not true memory tables, but they'll behave surprisingly similarly when the table data is significantly smaller than the system RAM.
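For example (same hypothetical table; unlogged tables skip the WAL and are truncated after a crash):

CREATE UNLOGGED TABLE hot_cache (
  key   text PRIMARY KEY,
  value text
);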
Global temporary tables would be another option but are not supported in PostgreSQL as of 9.2 (see comments).
Answering a four-year-old question, but since it still comes up at the top of Google search results even now:
There is no built-in way to cache a full table in memory, but there is an extension that can do this.
In-Memory Column Store is a library that acts as a drop-in extension and also as a columnar storage and execution engine. You can refer here for the documentation. There is a load function that you can use to load an entire table into memory.
The advantage is that the table is stored inside Postgres shared_buffers, so when executing a query Postgres immediately senses that the pages are in memory and fetches them from there.
The downside is that shared_buffers is not really designed to operate in such a way, and instabilities might occur (usually they don't), but you can probably run this on a secondary cluster/machine with this configuration just to be safe.
All other usual caveats about postgres and shared_buffers still apply.

Are there any pitfalls / things you need to know when changing from MyISAM to InnoDB

One of my projects use the MyISAM engine in MySQL, but I'm considering changing it to InnoDB as I need transaction support here and there.
What should I look at or consider before doing this?
Can I just change the engine, or should the data be prepared for it?
Yes, absolutely - there are many things, and you should test your application extremely thoroughly:
Transactions can deadlock and need to be repeated. This is the case (in some circumstances) even with an autocommitted transaction which only inserts one row.
Disc usage will almost certainly increase
I/O load during writes will almost certainly increase
Behaviour of indexing will change because InnoDB uses clustered indexes - this may be a beneficial effect in some cases
Your backup strategy will be impacted. Consider this carefully.
The migration process itself will need to be carefully planned, as it will take a long time if you have a lot of data (during which time the data will be either readonly, or completely unavailable - do check!)
There is one big caveat. If you get any kind of hardware failure (or similar) during a write, InnoDB will corrupt tables.
MyISAM will also, but a mysqlcheck --auto-repair will repair them. Trying this with InnoDB tables will fail. Yes, this is from experience.
This means you need to have a good regular data backup plan to use InnoDB.
Some other notes:
InnoDB does not give free space back to the filesystem after you drop a table/database or delete records; this can be solved by "dumping and importing" or by setting innodb_file_per_table=1 in my.cnf.
Adding/removing indexes on a large InnoDB table can be quite painful, because it locks the current table, creates a temporary one with your altered indexes, and inserts the data row by row. There is a plugin from Innobase, but it works only for MySQL 5.1.
InnoDB is also MUCH more memory-intensive. I suggest having as large an innodb_buffer_pool_size as your server memory allows (70-80% should be a safe bet). If your server is UNIX/Linux, consider reducing the sysctl variable vm.swappiness to 0, and use innodb_flush_method=O_DIRECT to avoid double buffering. Always test whether you hit swap when toggling those values. You can always read more at the Percona blog, which is great.
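A sketch of the non-obvious parts of that advice as configuration (test on your own workload before relying on it):

# my.cnf
[mysqld]
# bypass the OS page cache so data isn't buffered twice
innodb_flush_method = O_DIRECT

# /etc/sysctl.conf - discourage the kernel from swapping out the buffer pool
vm.swappiness = 0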
Also, you can run mysqldump with --single-transaction --skip-lock-tables and have no table locks while the backup is running.
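For example (mydb and the output path are placeholders):

mysqldump --single-transaction --skip-lock-tables mydb > /backup/mydb.sql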
In any case, InnoDB is great, do not let some pitfalls discourage you.
Just altering the table and setting the engine should be fine.
One of the big ones to watch out for is that select count(*) from MyTable is much slower in InnoDB than MyISAM.
auto_increment values will reset to the highest value in the table +1 after a server restart -- this can cause funny problems if you have a messy db with some deletes.
Optimum server settings are going to be different to a mainly MyISAM db.
Make sure the InnoDB tablespace file is big enough to hold all your data, or you'll be crucified by constant reallocation when you change the engines of the tables.
If you are intending to use InnoDB as a way to get concurrent queries, then you will want to relax innodb_flush_log_at_trx_commit (for example, setting it to 2) to get some performance back. OTOH, if you were looking to re-code your application to be transaction-aware, then deciding this setting will be part of the general performance review needed of the InnoDB settings.
The other major thing to watch out for is that InnoDB does not support FullText indices, nor INSERT DELAYED. But then, MyISAM doesn't support referential integrity. :-)
However, you can move over only the tables that need to be transaction-aware. I've done this. Small tables (up to several thousand rows) can often be changed on the fly, incidentally.
The performance characteristics can be different, so you may need to keep an eye on the load.
The data will be fine.