Is it possible to load a database into RAM? - mysql

I want to load a MySQL database into the RAM of my computer; is there a way to do this? I am running this database under Linux. Also, if this is possible, is there a good way to make backups? If the computer is unexpectedly shut down, I would lose all my data.

If your buffer pool is big enough, your data is -- effectively -- an in-memory database with a disk backup copy. Don't fool around with RAM databases; simply make the buffer pool size as large as you can.
Read this:
http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
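A minimal sketch of checking and raising the value (the 1 GB figure is just a placeholder; online resizing only exists in MySQL 5.7.5+, and older versions need a my.cnf change plus a restart):

-- See how large the buffer pool currently is (in bytes)
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- MySQL 5.7.5+ only: resize the pool online, no restart needed
SET GLOBAL innodb_buffer_pool_size = 1024 * 1024 * 1024;

On older versions, put innodb_buffer_pool_size under the [mysqld] section of my.cnf instead and restart the server.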

Yes, you can use the MEMORY engine. As for backups, it's your call. You never said, e.g., how often you want to store to disk. But you can use traditional MySQL replication or your own solution.
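A minimal sketch of a MEMORY table (the table and column names are made up for illustration); the table definition survives a restart but the rows do not, which is exactly why you need a backup strategy:

-- All rows live in RAM and vanish on server restart
CREATE TABLE session_cache (
    id INT NOT NULL PRIMARY KEY,
    payload VARCHAR(255)
) ENGINE=MEMORY;

-- One crude backup approach: periodically copy the rows into an
-- on-disk table (session_cache_backup is assumed to be an identical
-- InnoDB table created beforehand)
INSERT INTO session_cache_backup SELECT * FROM session_cache;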

Absolutely. Under Linux, for example, you can put the database's data directory on a tmpfs mount, which keeps everything in RAM (and loses it on shutdown).

If you're using InnoDB tables, then I recommend adjusting the buffer pool size as S.Lott suggested above. Make it 110% or so of your database size if you have the RAM.
If your database is > 50 MB, you'll also want to look at increasing innodb_log_file_size. Perhaps size it to around 25-50% of your buffer pool size, but 1 GB max; see http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_log_file_size
innodb_log_file_size is a bit tricky to adjust. You need to shut the DB down, move the current log files to a backup location, and let MySQL recreate them when it's restarted (i.e. after you've changed the values in my.cnf). Google it and you'll find some answers.
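One way to sanity-check the log size is to measure how fast InnoDB writes redo. A rough sketch using the Innodb_os_log_written counter (sample it twice under typical load and take the difference):

-- Bytes written to the redo log since server start
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- ...wait 60 seconds, then sample again...
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- A common heuristic: size the log files to hold about an hour of redo,
-- i.e. (bytes per minute * 60) / innodb_log_files_in_group, then set
-- innodb_log_file_size in my.cnf and restart as described above.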

Related

How to reduce the startup time for MySQL 5.7 with many databases?

I have MySQL 5.7.24 running on a Windows VM. It has a few thousand databases (7000). I understand this is not the recommended setup for MySQL, but some business requirements have necessitated this multi-tenant DB structure and unfortunately I cannot change that.
The server works fine when it is running, but the startup time can get pretty long: almost 20-30 minutes after a clean shutdown of the MySQL service, and 1+ hours after a restart of the Windows VM.
Is there any way to reduce the startup time?
In my configuration, I observed that innodb_file_per_table = ON (which I believe is the default for MySQL 5.7), so I think that at startup it is scanning every .ibd file.
Would changing to innodb_file_per_table = OFF and then altering each table to get rid of the .ibd files be a viable option? One thing to note is that in general every database is pretty small; even with 7000 databases, the total size of the data is only about 60 GB. So to my understanding, innodb_file_per_table = ON is more beneficial when there are single tables that can get pretty large, which is not the case for my server.
Question: Is my logic reasonable, and could innodb_file_per_table be the reason for the slow startup? Or is there some other config variable I can change so that each .ibd file is not scanned before the server starts accepting connections?
Any help to guide me in the right direction would be much appreciated. Thanks in advance!
You should upgrade to MySQL 8.0.
I was working on a system with the same problem as yours. In our case, we had about 1500 schemas per MySQL instance, and a little over 100 tables per schema. So it was about 160,000+ tables per instance. It caused lots of problems trying to use innodb_file_per_table, because the mysqld process couldn't work with that many open file descriptors efficiently. The only way to make the system work was to abandon file-per-table, and move all the tables into the central tablespace.
But that causes a different problem. Tablespaces never shrink, they only grow. The only way to shrink a tablespace is to move the tables to another tablespace, and drop the big one.
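For what it's worth, the move itself is just a table rebuild. A hedged sketch (mydb.mytable is a placeholder name), assuming innodb_file_per_table has been switched back on first:

-- With file-per-table enabled, rebuilding a table moves it out of the
-- central tablespace into its own .ibd file; ibdata1 itself still
-- never shrinks on disk.
SET GLOBAL innodb_file_per_table = ON;
ALTER TABLE mydb.mytable ENGINE=InnoDB;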
One day one of the developers added some code that used a table like a log, inserting a vast number of rows very rapidly. I got him to stop logging that data, but by then it was too late. MySQL's central tablespace had expanded to 95% of the size of the database storage, leaving too little space for binlogs and other files. And I could never shrink it without incurring downtime for our business.
I asked him, "Why were you writing to that table so much? What are you doing with the data you're storing?" He shrugged and said casually, "I dunno, I thought the data might be interesting sometime, but I had no specific use for them." I felt like strangling him.
The point of this story is that one naïve developer can cause a lot of inconvenience if you disable innodb_file_per_table.
When MySQL 8.0 was being planned, the MySQL Product Manager solicited ideas for scalability criteria. I told him about the need to support instances with a lot of tables, like 160k or more. MySQL 8.0 included an all-new implementation of internal code for handling metadata about tables, and he asked the engineers to test the scalability with up to 1 million tables (with file-per-table enabled).
So the best solution to your problem is not to turn off innodb_file_per_table. That will just lead to another kind of crisis. The best solution is to upgrade to 8.0.
Re your comment:
As far as I know, InnoDB does not open tables at startup time. It opens tables when they are first queried.
Make sure you have table_open_cache and innodb_open_files tuned for your scale (a quick check is sketched after the links below). Here is some reading:
https://dev.mysql.com/doc/refman/5.7/en/table-cache.html
https://www.percona.com/blog/2009/11/18/how-innodb_open_files-affects-performance/
https://www.percona.com/blog/2018/11/28/what-happens-if-you-set-innodb_open_files-higher-than-open_files_limit/
https://www.percona.com/blog/2017/10/01/one-million-tables-mysql-8-0/
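A quick sketch of that check (the 16384 value is an arbitrary example, not a recommendation):

-- If Opened_tables keeps climbing under steady load, the cache is too small
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'innodb_open_files';

-- table_open_cache can be raised at runtime...
SET GLOBAL table_open_cache = 16384;
-- ...but in 5.7, innodb_open_files can only be set in my.cnf and
-- requires a restart.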
I hope you are using an SSD for storage, not a spinning disk. This makes a huge difference when doing a lot of small I/O operations. SSD storage devices have been a standard recommendation for database servers for about 10 years.
Also, this probably doesn't help you, but I gave up on using Windows around 2007, both as a server and as a desktop.

When configuring the innodb_buffer_pool_size should the value be different if you have a lot of small websites vs. a single one?

The general rule of thumb I've observed is to configure this property to use 70% of available RAM for dedicated SQL servers with over 4 GB of RAM. However, I'm working on what basically amounts to a shared hosting environment that has been experiencing a ton of traffic lately, and I want to optimize this: these are dedicated MySQL servers, but they have databases for 200-1000 different sites. Should I still configure using this rule?
You may have many tables and schemas in your MySQL instance, but a single buffer pool is used for all of them. It makes no difference what they're used for — one website or many websites or some database that is not for a website at all. Basically everything that is stored in a page of an InnoDB tablespace on that MySQL instance must be loaded into the buffer pool before it can be read or updated.
The recommendation of 70% of available RAM is not a magic number.
For example, it assumes you have much more data on storage than can fit in RAM. If you had 100 GB of RAM and 2 GB of data on storage, it would be overkill to make a 70 GB buffer pool. Pages from storage are only copied into the buffer pool once, so with 2 GB of data, your 70 GB buffer pool would be mostly empty.
It also assumes that the remaining 30% of RAM is enough to support your operating system and other processes besides MySQL.
70% is just a starting suggestion. You need to understand your memory needs to size it properly.
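One way to tell whether 70% is overkill for your data set is to watch how full the pool actually is once the server has warmed up. A rough sketch:

-- Pages are 16 KB by default. Many free pages long after warm-up
-- means the pool is larger than your working set needs.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';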

best practical way to backup/restore a too large db with size of ~2TB

The ibdata1 file has grown to ~2 TB, leaving only a few GB of free space on the same disk. What's worse, I forgot to turn on innodb_file_per_table.
I've read on SO that the only way to shrink ibdata1 is to backup -> delete ibdata1 -> restore.
Now, since ibdata1 is so large, what's a good way to do this? And how much time (days?) will it take?
I have two other free 2 TB disks available which can be used for backup.
Create a replication slave with innodb_file_per_table enabled. Wait for it to populate. Then promote it to master (change the IP on either the server or the application).
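A rough sketch of the cut-over for a classic 5.x-style setup (host, user, password, and binlog coordinates below are placeholders, and the replica's my.cnf is assumed to have innodb_file_per_table=ON before the data loads):

-- On the replica, point it at the current master and start replicating
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

-- Watch Seconds_Behind_Master; once it reaches 0, stop writes on the
-- old master and repoint the application (or swap IPs)
SHOW SLAVE STATUS\G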

At what point does MySQL INNODB fine tuning become a requirement?

I had a look at this:
http://www.mysqlperformanceblog.com/2009/01/12/should-you-move-from-myisam-to-innodb/
and:
http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
These answer a lot of my questions regarding InnoDB vs. MyISAM. There is no doubt in my mind that InnoDB is the way I should go. However, I am working on my own, and for development I have created a LAMP (Ubuntu 10.10 x64) VM server. At present the server has 2 GB of memory and a single 20 GB SATA drive. I can increase both without too much trouble, to about 3-3.5 GB of memory and a 200 GB drive.
The reasons I hesitate to switch over to InnoDB are:
A) The above articles mention that InnoDB will vastly increase the size of the tables, and the author recommends much larger amounts of RAM and drive space. While in a production environment I don't mind this increase, in a development environment I fear I cannot accommodate it.
B) I don't really see any point in fine-tuning the InnoDB engine on my VM. This is likely something I won't even be allowed to do in my production environment. The articles make it sound like InnoDB is doomed to fail without fine-tuning.
My question is this: at what point does InnoDB become viable? How much RAM would I need to run InnoDB on my server (with just my own data for testing; this server is not open to anyone but me)? And is it safe to assume that a production environment that won't let me fine-tune the DB has likely already been tuned by its administrators?
Also, am I overthinking/over-worrying about things?
IMHO, it becomes a requirement when you have tens of thousands of rows, or when you can forecast the rate of growth of your data.
You need to focus on tuning the InnoDB buffer pool and the log file size. Also, make sure you have innodb_file_per_table enabled.
To get an idea of how big to make the InnoDB buffer pool in KB, run this query:
SELECT SUM(data_length+index_length)/POWER(1024,1) AS IBPSize_KB
FROM information_schema.tables WHERE engine='InnoDB';
Here it is in MB:
SELECT SUM(data_length+index_length)/POWER(1024,2) AS IBPSize_MB
FROM information_schema.tables WHERE engine='InnoDB';
And here it is in GB:
SELECT SUM(data_length+index_length)/POWER(1024,3) AS IBPSize_GB
FROM information_schema.tables WHERE engine='InnoDB';
I wrote articles about this kind of tuning:
First Article
Second Article
Third Article
Fourth Article
If you are limited by the amount of RAM on your server, do not let the buffer pool surpass 25% of installed RAM, for the sake of the OS.
I think you may be overthinking things. It's true that InnoDB loves RAM, but if your database is small I don't think you'll have many problems. The only issue I have had with MySQL, or any other database, is that as the data grows, so do the requirements for accessing it quickly. You can also use compression on the tables to keep them smaller, but InnoDB is vastly better than MyISAM at data integrity.
I also wouldn't worry about tuning until you run into a bottleneck. Writing efficient queries and good database design matter more than memory unless you're working with very large data sets.

Prevent filesystem caching for MySQL queries

When I disable the query cache in MySQL, queries are still cached. As I understand it, this is because of the OS filesystem cache. How can I prevent the filesystem from caching this data? I am working on Windows 7, but it might also be Linux.
There is no query filesystem cache in MySQL.
When I disable the query cache in MySQL, queries are still cached
How do you disable it, and how do you know queries are still cached? Why don't you want them to be cached?
SET SESSION query_cache_type = OFF;
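To confirm whether the query cache is actually serving results, you can watch its hit counter while repeating the same query. A quick sketch:

-- Qcache_hits increments when a result comes from the query cache;
-- if it stays flat while a repeated query is still fast, the speed-up
-- is coming from the buffer pool or the OS cache instead.
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Qcache_inserts';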
Well, now I can answer my own question. To prevent the caching of second and subsequent queries, you need to set the innodb_buffer_pool_size config option to its minimum (a value of 0 is simply adjusted up to the smallest size the server allows). This buffer is what MySQL uses to keep data in memory, so subsequent queries are served from memory instead of from disk.
You need the buffer pool to be a bit (say 10%) larger than your data (the total size of your InnoDB tablespaces), because it does not only contain data pages – it also contains adaptive hash indexes, the insert buffer, and locks, which take some space as well. Though it is not as critical – for most workloads, if your InnoDB buffer pool is 10% smaller than your database size, you would not lose much anyway.