Caching a MySQL table for performance

On one page, I run multiple queries that fetch data from the same table under different scenarios, and these queries are causing performance issues. So I am trying to cache the table and then run the different queries against the cached copy, so that I don't have to hit the database every time.
But I don't know how to cache the table and query it.
Can anyone help?
Is there any other way to improve the performance?

Caching the table is easy: SELECT * FROM myTable, and read the data into an array. You'll then have to search it yourself in your choice of language. For a small table and simple queries this could be faster; for a large table you could run into memory problems, and complex queries will become more difficult.
There are many potential ways to improve performance. Adding indexes to appropriate columns can make a world of difference, as can the exact order in which you perform queries and subqueries. Without any idea of the schema you're using or the queries you're applying, it's impossible to say more.
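As a minimal sketch (the table and column names here are hypothetical), you would index the column your WHERE clauses filter on and then confirm with EXPLAIN that the index is used:
-- Index the column the WHERE clauses filter on (hypothetical names)
CREATE INDEX idx_mytable_status ON myTable (status);
-- Verify the optimizer actually chooses the index
EXPLAIN SELECT * FROM myTable WHERE status = 'active';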

You have a few options:
If you have considerably more physical RAM than the size of your databases, set the innodb_buffer_pool_size variable to a value larger than your database; InnoDB automatically caches table data in RAM as it is accessed. (A sketch of checking and setting this follows these options.)
If you have considerably more RAM than the size of the table you're interested in but don't want to rely on InnoDB's cache, try the MEMORY storage engine.
MEMORY tables live entirely in RAM, so they're fast; they don't persist across restarts, but if you just want a cached copy with that caveat in mind, try this:
CREATE TABLE cachedcopy LIKE `table`;          -- clone the schema of the original
ALTER TABLE cachedcopy ENGINE=MEMORY;          -- switch the copy to the in-RAM engine
INSERT INTO cachedcopy SELECT * FROM `table`;  -- populate the copy
If your table is larger than available RAM (or you can't dedicate that memory to it), you'll have to use other techniques like creating indexes or trimming the data processed by each of your queries.
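For the first option above, a minimal sketch of inspecting and raising the buffer pool (the 2G value is purely illustrative; size it to your own data set):
-- Check the current buffer pool size (in bytes)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
-- In the MySQL 5.x versions discussed here this is not a dynamic variable:
-- set it in my.cnf under [mysqld] and restart the server, e.g.
--   innodb_buffer_pool_size = 2G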

Related

MySQL: using the MEMORY storage engine to improve the performance of these queries

I have this use case: I need to execute the same "logical" query a fixed number of times over the same table (same semantics, varying only the values the WHERE clauses compare against).
Query layout:
SELECT SUM(col_name), col_name, ...
FROM table_name
WHERE expr AND expr ...
I want to improve the performance of this task.
From reading articles I've found here on this issue, and some extra research, I can point out the following relevant facts:
Internal temporary tables are not used (verified with EXPLAIN on the query)
The query cache is not used (the queries are not identical)
If I create a temporary table in memory (RAM, ENGINE=MEMORY) mirroring the table in question, and then execute all the queries against this in-memory table, can I improve performance? That is:
CREATE TABLE tmp_table_name ENGINE=MEMORY SELECT * FROM table_name;
Perform the queries over tmp_table_name
DROP TABLE tmp_table_name;
see MySQL docs: The MEMORY (HEAP) Storage Engine
Thanks.
Will MEMORY improve performance? Mostly no.
No, you can't use MEMORY at all if your table hits one of the engine's limitations - for example, MEMORY tables cannot contain TEXT or BLOB columns.
No, MEMORY is a net loss if you spend more time building the copy than you save on the queries.
No because the copy won't be up to date. (OK, maybe that is not an issue.)
No because you are stealing from other caches, making other queries slower.
No -- If VARCHARs turn into CHARs in your version (MEMORY stores rows in fixed-length format), the MEMORY table could be so much bigger than the non-MEMORY table that other issues come into play. (A way to compare the two footprints follows this list.)
Maybe -- If everything is cached in RAM anyway, MEMORY will run at about the same speed.
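One way to check for the footprint bloat mentioned above before committing to MEMORY (sizes reported by information_schema are approximate):
-- Compare the footprint of the original table and the MEMORY copy
SELECT table_name, engine, data_length, index_length
FROM information_schema.tables
WHERE table_name IN ('table_name', 'tmp_table_name');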
Your particular example can possibly be sped up by
using a suitable "compound" index - a sketch follows below. (Please show us the details of your SELECT)
creating and maintaining Summary Tables -- if this is a "Data Warehouse" application. (Again, let's see details.)
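Without the actual SELECT this can only be a sketch, but for the query layout shown above (column names are hypothetical), a suitable compound index might look like this:
-- Hypothetical: the WHERE clause filters on col_a and col_b, and the query sums col_c.
-- Listing the filtered columns first and the summed column last makes the index
-- "covering", so the query can be answered from the index alone.
ALTER TABLE table_name ADD INDEX idx_a_b_c (col_a, col_b, col_c);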

Can MySQL fall back to another table type if a temp memory table fills up?

When creating a temp table, I don't have a good way to estimate how much space it'll take up so sometimes running a query like
CREATE TEMPORARY TABLE t_temp ENGINE=MEMORY
SELECT t.*
FROM `table_name` t
WHERE t.`column` = 'a';
results in the error "The table 't_temp' is full". I realize you can adjust max_heap_table_size and tmp_table_size to allow for bigger tables, but that's not a great option because these tables can get quite large.
Ideally, I'd like it to fall back to a MyISAM table instead of just erroring out. Is there some way to specify that in the query or in the server settings? Or is the best solution really just to watch for errors and then try running the query again with a different table type? That's the only solution I can think of, besides just never using MEMORY tables if there's any doubt, but it seems wasteful of database resources and is going to create more code complexity.
I'm running MySQL v5.5.27, if that affects the answer.
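For reference, the manual fallback described in the question would look roughly like this; the retry decision has to live in application code, since MySQL (5.5 included) has no automatic engine fallback for explicitly created tables:
-- First attempt (may fail with error 1114, "The table 't_temp' is full"):
CREATE TEMPORARY TABLE t_temp ENGINE=MEMORY
SELECT t.* FROM `table_name` t WHERE t.`column` = 'a';
-- On error 1114, drop whatever was created and retry with a disk-based engine:
DROP TEMPORARY TABLE IF EXISTS t_temp;
CREATE TEMPORARY TABLE t_temp ENGINE=MyISAM
SELECT t.* FROM `table_name` t WHERE t.`column` = 'a';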
The MEMORY engine is just that: memory. If you run out of RAM, you're done, unless you want to develop your own storage engine as @eggyal proposed.
With respect, there are probably better ways to optimize your system than mucking about with conditional memory tables. If I were you I'd just take ENGINE=MEMORY out of your code and move on to the next problem. MySQL is pretty good about caching tables and using the RAM it has effectively with the other storage engines.
MySQL Cluster offers the same features as the MEMORY engine with higher performance levels, and provides additional features not available with MEMORY:
...Optional disk-backed operation for data durability.
Source: MySQL 5.5 manual. http://dev.mysql.com/doc/refman/5.5/en/memory-storage-engine.html
Not sure whether Cluster can be combined with a temporary table, though.

MySQL individual table caching

I am hitting a fairly static table with bunch of simple SELECT queries.
In order to increase performance, I am considering writing my own in-memory cache for that data. But it feels like I'd be doing the database's dirty work for it.
Is there such a thing as a granular caching mechanism for a specific table?
If you use InnoDB, MySQL will automatically cache the table for you and build adaptive hash indexes for frequently used parts of its indexes.
I suggest you increase the amount of memory MySQL has at its disposal; it should then take care of the problem by itself.
By default, MySQL is set up to conserve space, not to run fast.
Here are a few links to get you going with tuning:
http://www.debianhelp.co.uk/mysqlperformance.htm
http://www.mysqlperformanceblog.com/2006/09/29/what-to-tune-in-mysql-server-after-installation/
Also use indexes and write smarter queries.
But I cannot help you there if you don't show us the query.
There is also a MEMORY storage engine (alongside InnoDB and MyISAM); you select it at table creation time.
You can try copying your static MyISAM table into a temporary MEMORY table, which is by definition RAM-resident. OTOH, it seems likely to me that the table is already cached, so that might not help much. How about showing us your query?

How to improve MySQL INSERT and UPDATE performance?

Performance of INSERT and UPDATE statements in our database seems to be degrading and causing poor performance in our web app.
Tables are InnoDB and the application uses transactions. Are there any easy tweaks that I can make to speed things up?
I think we might be seeing some locking issues, how can I find out?
You could change the settings to speed up InnoDB inserts.
And even more ways to speed up InnoDB
...and one more optimization article
INSERT and UPDATE get progressively slower as the number of rows increases on a table with an index. InnoDB tables are even slower than MyISAM tables for inserts, and the delayed key write option is not available to them.
The most effective way to speed things up would be to save the data to a flat file first and then use LOAD DATA INFILE; this is about 20x faster.
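A sketch of that approach (the file path and table name are illustrative; server-side LOAD DATA INFILE also requires the FILE privilege):
-- Bulk-load from a flat file instead of running many single-row INSERTs
LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';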
The second option would be to create a temporary in-memory table, load the data into it, and then do an INSERT INTO ... SELECT in batches. That is, once you have about 100 rows in your temp table, load them into the permanent one.
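And a sketch of the second option (names are hypothetical; the application fills the staging table, then flushes it in batches):
-- Empty MEMORY table with the same columns as the target (LIMIT 0 copies no rows)
CREATE TEMPORARY TABLE staging ENGINE=MEMORY SELECT * FROM my_table LIMIT 0;
-- ...the application inserts ~100 rows into staging, then flushes them:
INSERT INTO my_table SELECT * FROM staging;
TRUNCATE TABLE staging;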
Additionally, you can get a small improvement in speed by moving the index file onto a separate physical hard drive from the one where the data file is stored. Also try to move any binary logs onto a different device. The same applies to the temporary file location.
I would try setting your tables to delay index updates (note that DELAY_KEY_WRITE applies to MyISAM tables only).
ALTER TABLE {name} DELAY_KEY_WRITE = 1;
If your updates touch indexed columns, delaying the key writes can improve the performance of update queries.
I would not look at locking/blocking unless the number of concurrent users has been increasing over time.
If the performance has gradually degraded over time, I would look at the query plans with the EXPLAIN statement.
It would be helpful to have EXPLAIN results from the development or initial production environment, for comparison purposes.
Dropping or adding an index may be needed, or some other maintenance action mentioned in the other posts.

Alternatives to the MEMORY storage engine for MySQL

I'm currently running some intensive SELECT queries against a MyISAM table. The table is around 100 MiB (800,000 rows) and it never changes.
I need to increase the performance of my script, so I was thinking of moving the table from MyISAM to the MEMORY storage engine, so I could load it completely into memory.
Besides the MEMORY storage engine, what are my options to load a 100 MiB table into the memory?
A table with 800k rows shouldn't be any problem for MySQL, no matter which storage engine you're using. At 100 MB, the full table (data and keys) should fit in memory (the MySQL key cache, the OS file cache, or probably both).
First, check the indexes. In most cases, optimizing them gives you the best performance boost; don't do anything else unless you're fairly sure they are in shape. Run the queries through EXPLAIN and watch for cases where no index, or the wrong one, is used. This should be done with real-world data, not on a server with test data.
Once the indexes are optimized, the queries should finish in a fraction of a second. If they're still too slow, try to avoid running them at all by using a cache in your application (memcached, etc.). Given that the data in the table never changes, there shouldn't be any problems with stale cache entries.
Assuming the data rarely changes, you could potentially boost query performance significantly using MySQL query caching.
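A sketch, era-appropriate for this thread (the query cache exists in MySQL 5.x but was removed in 8.0; the table name is illustrative):
-- Check whether the query cache is enabled and how large it is
SHOW VARIABLES LIKE 'query_cache%';
-- With query_cache_type=ON, identical SELECTs are served from the cache;
-- the SQL_CACHE hint makes the intent explicit (required when type=DEMAND)
SELECT SQL_CACHE * FROM my_table WHERE id = 42;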
If your table is queried a lot it's probably already cached at the operating system level, depending on how much memory is in your server.
MyISAM also allows you to preload table indexes into memory using a mechanism called the MyISAM key cache. After you've created a key cache, you can assign and load an index into it using the CACHE INDEX and LOAD INDEX INTO CACHE statements.
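A sketch of that mechanism (the cache name and size are illustrative):
-- Create a named key cache of 128 MB
SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
-- Assign the table's indexes to it, then preload the index blocks
CACHE INDEX my_table IN hot_cache;
LOAD INDEX INTO CACHE my_table;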
I assume that you've analyzed your table and queries and optimized your indexes against the actual queries? Otherwise that's really something you should do before attempting to store the entire table in memory.
If you have enough memory allocated for MySQL's use - in the InnoDB buffer pool, or for use by MyISAM - you can read the table into memory (just a SELECT * FROM tablename) and, if there's no reason to evict it, it stays there.
You also get better key use: the MEMORY engine uses hash-based keys by default, rather than full B-tree access, which might be fast enough for smaller, non-unique keys, but not so much with such a large table.
As usual, the best thing to do is to benchmark it.
Another idea, if you are using v5.1, is the ARCHIVE table type, whose data is compressed; that may also speed access to the contents if they compress well. This trades CPU time for decompression against I/O and memory access.
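A sketch of that idea (hypothetical names; note that ARCHIVE supports no indexes apart from an AUTO_INCREMENT column, so it mainly suits full scans):
-- Compressed, read-mostly copy: trades CPU time for decompression against I/O
CREATE TABLE my_table_archive ENGINE=ARCHIVE
SELECT * FROM my_table;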
If the data never changes, you could easily duplicate the table across several database servers.
This way you could offload some queries to a different server, gaining some extra breathing room for the main one.
The speed improvement depends on the current database load; there will be no improvement if your database load is already very low.
PS: Be aware that MEMORY tables forget their contents when the database restarts!