Why not use MySQL like memcached?

For the same reason it makes sense to use both NoSQL and RDBMSs in one application, it makes sense to me to have an RDBMS cache alongside a NoSQL cache.
I was thinking about using a MySQL server with all tables using the memory engine.
Is there any caveat to this approach?
Just to clarify, I am suggesting here to use a MySQL server for caching purposes only, not for the actual data storage of my application.

Memory tables are stored entirely in memory, so they are very fast.
They use hash indexes, which are also very fast and great for temp-table purposes and lookups.
Memory tables have table-level locks, so if concurrency is required, this is a problem.
No transactions.
When the server shuts down or crashes, ALL ROWS ARE LOST:
the table definition stays the same, but the data is all gone.
You may want to check out the official documentation on the Memory engine.
EDIT:
The Memory Storage Engine is a good candidate for caching purposes.
The following are a few things that the Memory Engine is good for:
Lookup/mapping tables
Caching results of periodically added data
For data analysis purposes
Session management
Low-latency operations
Better than other strategies such as CREATE TEMPORARY TABLE, because the Memory table persists (if that is what you need)
There are a few negatives:
It does not support TEXT or BLOB columns; internal temporary tables that need such columns fall back to on-disk MyISAM tables.
The table should not hold too much data, as it consumes RAM that could otherwise be allocated to indexes and query caches.
All in all, the Memory engine should be the best choice for you if you need caching.
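As a rough sketch of a cache table on the Memory engine (the table and column names here are invented for illustration):
CREATE TABLE lookup_cache (
    cache_key   VARCHAR(64)  NOT NULL,
    cache_value VARCHAR(255) NOT NULL,
    PRIMARY KEY (cache_key)   -- MEMORY tables use a HASH index by default
) ENGINE=MEMORY;
-- upsert a cached value
INSERT INTO lookup_cache (cache_key, cache_value)
VALUES ('user:42:name', 'Alice')
ON DUPLICATE KEY UPDATE cache_value = VALUES(cache_value);
-- fast point lookup via the hash index
SELECT cache_value FROM lookup_cache WHERE cache_key = 'user:42:name';
Keep in mind that max_heap_table_size caps how large such a table may grow, and everything in it disappears on restart.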

Another reason: RAM is much more limited than disk space.
You can have disk drives of up to terabytes, but it is rare to have terabytes of memory.

If you have the money, the following article about MySQL and InnoDB's capabilities will be of high interest.
It outperforms any sort of cache, Memory engine, or memcached. The drawback is that it requires RAM, lots of it.

Related

Should unused tables be archived?

There is a table in our database that takes about 25GB. It is no longer used by the current code.
Will it give any performance improvements (for the rest of the tables) if we archive this table, even though it's not queried/used? Please provide an explanation.
We are using MySQL with AWS Aurora.
Archiving tables will not have any impact on Aurora. Unused pages are eventually evicted from the buffer pool [1], and after that they are never pulled back onto the DB instances unless you run a query that touches those pages.
You would continue to pay storage costs (and other in-direct costs like snapshots) by keeping them as unused. A better option would be to move the unused data to a new cluster, create a snapshot out of it, and remove the cluster. You can always recover the data when you need it by restoring a snapshot. The original database can then be cleaned by dropping these unused tables. This way you end up only paying for the snapshot, which is cheaper.
You could also export the data out of MySQL (as CSV, say) and store it in S3/Glacier. The only caveat is that when you need to access the data again, loading it back into an existing or new database cluster can be a much more time-consuming effort.
[1] The buffer pool uses LRU for eviction. When your workload runs for long enough, you will eventually evict all the pages associated with the unused table. Link: https://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html
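If you go the CSV export route, a minimal sketch for a self-managed MySQL server (big_unused_table and the file path are placeholders; Aurora instead offers its own export-to-S3 variant) could be:
SELECT * FROM big_unused_table
INTO OUTFILE '/var/lib/mysql-files/big_unused_table.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
-- after the file has been copied to S3/Glacier and verified:
DROP TABLE big_unused_table;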
Yes, archiving will improve performance, along with reducing the database's size and speeding up backup/recovery cycles.
I have tried it on different projects in my recent full-time job and the results are amazing. For those who disagree I would only say:
A smaller footprint reduces disk I/O and scans.
A smaller footprint reduces buffer requirements, and hence RAM requirements.
YES, archiving infrequently used data will ease the burden on faster, more frequently accessed data storage systems. Older data that is unlikely to be needed often is put on systems that don't need the speed and accessibility of systems containing data still in use.
Archived data is stored on a lower-cost tier of storage, serving as a way to reduce primary storage consumption and related costs. Typically, data deduplication is performed on data being moved to a lower storage tier, which reduces the overall storage footprint and lowers secondary storage costs.

MySQL: Speed over reliability config

For my development machine I need no data consistency in case of a crash. Is there a config for a Debian-like system, that optimizes MySQL for speed (even if it sacrifices reliability)?
So something like: Cache the last 1 GB in RAM. Don't touch the disk with data until the 1 GB is used.
What kind of queries are going on? One of my mantras: "You cannot configure your way out of a performance problem."
Here's one thing that speeds up InnoDB, wrt transactions:
innodb_flush_log_at_trx_commit = 2
There is a simple way to speed up single-row inserts by a factor of 10.
Some 'composite' indexes can speed up a SELECT by a factor of 100.
Reformulating a WHERE can sometimes speed up a query by a factor of 100.
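As a rough sketch of those last points (the events table and its columns are hypothetical):
-- batching rows into one INSERT avoids per-statement/transaction overhead
INSERT INTO events (user_id, created_at, payload)
VALUES (1, NOW(), 'a'), (2, NOW(), 'b'), (3, NOW(), 'c');
-- a composite index matching the WHERE and ORDER BY of a hot query
ALTER TABLE events ADD INDEX idx_user_created (user_id, created_at);
SELECT payload FROM events
WHERE user_id = 42 AND created_at >= '2020-01-01'
ORDER BY created_at;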
You can disable many of the InnoDB durability settings, at the increased risk of losing data. But sometimes you want to operate the database in "running with scissors" mode, because the original data is safely stored somewhere else and the copy in your test database is easily recreated.
This blog describes Reducing MySQL durability for testing. You aren't going to see any official MySQL recommendation to do this for any purpose other than testing!
Here's a summary of changes you can make in your /etc/my.cnf:
[mysqld]
# log_bin (comment this out to disable the binary log)
# sync_binlog=0 (irrelevant if you don't use the binary log)
sync_frm=0
innodb_flush_log_at_trx_commit=0
innodb_doublewrite=0
innodb_checksums=0
innodb_support_xa=0
innodb_log_file_size=2048M # or more
He also recommends increasing innodb_buffer_pool_size, but the right size depends on your available RAM.
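For example (1G is only a placeholder; size it to the RAM you can spare):
[mysqld]
innodb_buffer_pool_size = 1G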
For what it's worth, I recently tried to set innodb_flush_log_at_trx_commit=0 in the configuration in the default Vagrant box I built for developers on my team, but I had to back out that change because it was causing too much lost time for developers who were getting corrupted databases. Just food for thought. Sometimes it's not a good tradeoff.
This doesn't do exactly what you asked (keep the last 1GB of data in RAM), as it still operates InnoDB with transaction logging and the log flushes to disk once per second. There's no way to turn that off in MySQL.
You could try using MyISAM, which uses buffered writes for data and index, and relies on the filesystem buffer. Therefore it could cache some of your data (in practice I have found that the buffer flushes to disk pretty promptly, so you're unlikely to have a full 1GB in RAM at any time). MyISAM has other problems, like lack of support for transactions. Developing with MyISAM and then using InnoDB in production can set you up for some awkward surprises.
Here are a couple of other changes you could make in your MySQL sessions for the sake of performance, but I don't recommend these even for development, because they can change your application's behavior.
set session unique_checks=0;
set session foreign_key_checks=0;
Some people recommend using the MEMORY storage engine. That has its own problems, like size limits, table-locking, and lack of support for transactions.
I've also experimented with trying to put tables or tmpdir onto a tmpfs, but I found that didn't give nearly the performance boost you might expect. There's overhead in an RDBMS that is not directly related to disk I/O.
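If you want to try the tmpdir-on-tmpfs experiment anyway, a minimal sketch for a Debian-like system (paths are examples; /dev/shm is cleared on reboot, so the directory must be recreated at boot) looks like:
mkdir /dev/shm/mysql-tmp && chown mysql:mysql /dev/shm/mysql-tmp
# then in /etc/my.cnf
[mysqld]
tmpdir = /dev/shm/mysql-tmp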
You might also like to experiment with MyRocks, a version of MySQL including the RocksDB storage engine for MySQL. Facebook developed it and released it as open-source. See Facebook rocks an open source storage engine for MySQL (InfoWorld). They promise it reduces I/O, it compresses data, and does other neat things.
But again, it's a good rule of thumb to make your development environment as close as possible to your production environment. Using a different storage engine creates a risk of not discovering some bugs until your code reaches production.
Bottom line: Tuning MySQL isn't a magic bullet. Maybe you should consider designing your application to make more use of microservices, caches, and message queues, and less reliance on direct SQL queries.
Also, I'd recommend to always supply your developers the fastest SSD-based workstation you can afford. Go for the top of the line on CPU and RAM and disk speed.
Bill Karwin's answer has useful MySQL settings to improve performance. I have used them all and was able to achieve roughly a 2x performance improvement.
However, what gave me the biggest performance boost (nearly 15x faster) for my use case -- which was reloading a MySQL dump -- was to mount the underlying filesystem (ext4) with the nobarrier option.
mount -o remount,nobarrier /
More info here
You should only consider this if you have a separate partition (or logical volume) mounted at /var/lib/mysql, so that you can make this tradeoff only for MySQL, not your entire system.
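A hypothetical /etc/fstab entry for such a layout (the device name is only an example) would be:
/dev/mapper/vg0-mysql  /var/lib/mysql  ext4  defaults,nobarrier  0  2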
Although this answer may not address exactly the question you asked, consider creating your tables with the MEMORY engine, as documented here: http://dev.mysql.com/doc/refman/5.7/en/memory-storage-engine.html
A typical use case for the MEMORY engine involves these characteristics:
Operations involving transient, non-critical data such as session management or caching. When the MySQL server halts or restarts, the data in MEMORY tables is lost.
In-memory storage for fast access and low latency. Data volume can fit entirely in memory without causing the operating system to swap out virtual memory pages.
A read-only or read-mostly data access pattern (limited updates).
Give that a shot.
My recommendation, even for a development machine, would be to use the default InnoDB. If you choose to do transactions, InnoDB will be helpful.
This blog can help you run MySQL off of tmpfs: http://jotschi.de/2014/02/03/high-performance-mysql-testdatabase/. User Jotschi also speaks about that in a SO answer #10692398

Difference between in-memory databases and disk-based databases

Recently I heard about the concept of an in-memory database.
In any type of database we ultimately store the data on the computer, and from there our program gets the data. How are in-memory database operations fast compared to the others?
Will the in-memory database load all the data from the database into memory (RAM)?
Thanks in advance.
An in-memory database (IMDB; also main memory database system or MMDB or memory resident database) is a database management system that primarily relies on main memory for computer data storage. It is contrasted with database management systems that employ a disk storage mechanism. Main memory databases are faster than disk-optimized databases since the internal optimization algorithms are simpler and execute fewer CPU instructions. Accessing data in memory eliminates seek time when querying the data, which provides faster and more predictable performance than disk.
Applications where response time is critical, such as those running telecommunications network equipment and mobile advertising networks, often use main-memory databases.
In reply to your query: yes, it loads the data into the RAM of your computer.
On-Disk Databases
All data stored on disk; disk I/O needed to move data into main memory when needed.
Data is always persisted to disk.
Traditional data structures like B-Trees designed to store tables and indices efficiently on disk.
Virtually unlimited database size.
Support a very broad set of workloads, i.e. OLTP, data warehousing, mixed workloads, etc.
In-Memory Databases
All data stored in main memory; no need to perform disk I/O to query or update data.
Data is persistent or volatile depending on the in-memory database product.
Specialized data structures and index structures assume data is always in main memory.
Optimized for specialized workloads, i.e. communications industry-specific HLR/HSS workloads.
Database size limited by the amount of main memory.
MySQL offerings
MySQL has several "Engines". In all engines, actions are performed in RAM. The Engines differ significantly in how good they are at making sure the data "persists" on disk.
ENGINE=MEMORY -- This is not persistent; the data is found only in RAM. It is limited to some preset max size. On a power failure, all data (in a MEMORY table) is lost.
ENGINE=MyISAM -- This is an old engine; it persists data to disk, but in the case of a power failure, the indexes are sometimes corrupted and need 'repairing'.
ENGINE=InnoDB -- This is the preferred engine. It not only persists to disk but 'guarantees' consistency even across power failures.
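The engine is chosen per table. For example (table names are illustrative only):
CREATE TABLE hot_cache (k INT PRIMARY KEY, v VARCHAR(100)) ENGINE=MEMORY;   -- volatile, lost on restart
CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2)) ENGINE=InnoDB; -- durable, the default
SHOW ENGINES;  -- lists the engines your server supports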
In-memory DBs usually keep the whole database in memory (like the MySQL MEMORY engine).
This is a huge performance boost, but RAM is expensive and often not persistent, so you would lose data on restart.
There are some ways to reduce the last issue, e.g. timed snapshots or replication to a disk database.
There are also some hybrid types, with just a part of the DB in memory.
There are also in-memory databases like Tarantool that can work with data sets larger than available RAM. Tarantool is able to work with these sets because it is optimized for fast random writes, the main bottleneck that arises.

Which database for a web crawler, and how do I use MySQL in a distributed environment?

Which database engine should I use for a web crawler, InnoDB or MyISAM? I have two PCs, each with 1TB hard drives. If one fills up, I'd like the data to save to the other PC automatically, but reads should go to the correct PC; how do I do that?
As for the first part of your question, it rather depends on your precise implementation. If you are going to have a single crawler limited by network bandwidth, then MyISAM can be quicker. If you are using multiple crawlers then InnoDB will give you advantages such as transactions, which may help.
AFAIK MySQL doesn't support the hardware configuration you are suggesting. If you need large storage you may want to look at MySQL Cluster.
MyISAM is the first choice, because you will have write-only operations, and the crawlers -- even when run in parallel -- will be configured -- I suppose -- to crawl different domains/URLs. So you do not need to take care of access conflicts.
When writing a lot of data, especially text, to MySQL, avoid transactions, indexes, etc., because they will slow down MySQL drastically.
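A sketch of that kind of setup (the schema is invented for illustration, deliberately without indexes per the advice above):
CREATE TABLE crawled_pages (
    url        VARCHAR(2000) NOT NULL,
    fetched_at DATETIME      NOT NULL,
    body       MEDIUMTEXT
) ENGINE=MyISAM;
-- append rows in batches; no indexes or transactions to maintain
INSERT INTO crawled_pages (url, fetched_at, body) VALUES
    ('http://example.com/a', NOW(), '...'),
    ('http://example.com/b', NOW(), '...');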

Best storage engine for constantly changing data

I currently have an application that is using 130 MySQL tables, all with the MyISAM storage engine. Every table has multiple queries every second, including select/insert/update/delete queries, so the data and the indexes are constantly changing.
The problem I am facing is that the hard drive is unable to cope, with waiting times up to 6+ seconds for I/O access with so many read/writes being done by MySQL.
I was thinking of changing to just 1 table and making it memory based. I've never used a memory table for something with so many queries though, so I am wondering if anyone can give me any feedback on whether it would be the right thing to do?
One possibility is that there may be other issues causing performance problems - 6 seconds seems excessive for CRUD operations, even on a complex database. Bear in mind that (back in the day) ArsDigita could handle 30 hits per second on a two-way Sun Ultra 2 (IIRC) with fairly modest disk configuration. A modern low-mid range server with a sensible disk layout and appropriate tuning should be able to cope with quite a substantial workload.
Are you missing an index? - check the query plans of the slow queries for table scans where they shouldn't be (see the EXPLAIN sketch after this list).
What is the disk layout on the server? - do you need to upgrade your hardware or fix some disk configuration issues (e.g. not enough disks, logs on the same volume as data).
As the other poster suggests, you might want to use InnoDB on the heavily written tables.
Check the setup for memory usage on the database server. You may want to configure more cache.
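For the index check in the first point, a quick sketch (the query and table are placeholders):
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
-- type: ALL with a large rows estimate means a full table scan;
-- an index on the filtered column usually fixes it:
ALTER TABLE orders ADD INDEX idx_customer (customer_id);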
Edit: Database logs should live on quiet disks of their own. They use a sequential access pattern with many small sequential writes. Where they share disks with a random access work load like data files the random disk access creates a big system performance bottleneck on the logs. Note that this is write traffic that needs to be completed (i.e. written to physical disk), so caching does not help with this.
I've now changed to a MEMORY table and everything is much better. In fact I now have extra spare resources on the server allowing for further expansion of operations.
Is there a specific reason you aren't using innodb? It may yield better performance due to caching and a different concurrency model. It likely will require more tuning, but may yield much better results.
should-you-move-from-myisam-to-innodb
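If you do try InnoDB, a minimal sketch of the conversion (the schema and table names are placeholders):
-- find the tables still on MyISAM
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'myapp' AND engine = 'MyISAM';
-- convert the most heavily written ones first
ALTER TABLE orders ENGINE=InnoDB;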
I think that your database structure is very wrong and needs to be optimised; it has nothing to do with the storage engine.