I have an Ubuntu 16.04 server (VPS) and installed MariaDB 10.1 on it (following the documentation). I would like to configure the database server so InnoDB can use 2-4GB of memory, so I tried to find the sample my.cnf files in /usr/share/mysql/, but not a single one is there (only these). Where can I find the sample configurations, and how can I make one of them the active config? I read that the /etc/mysql/mysql.conf.d/ directory should contain a mysqld.cnf, but for me there are only these files:
50-client.cnf 50-mysql-clients.cnf 50-mysqld_safe.cnf 50-server.cnf 50-server.cnf.backup
Which one is active? My guess is none.
I don't need anything special, which is why I wanted to just adapt the my-huge.cnf and be done with it.
You need to copy one of those to /etc/my.cnf. Without the file being there, you (probably) have no config other than built-in defaults.
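If you go that route, the copy is roughly this sketch (the file name, paths, and restart command are assumptions for a default Ubuntu 16.04 / MariaDB 10.1 package install; on MariaDB the directory is usually /etc/mysql/mariadb.conf.d/ rather than mysql.conf.d, so adjust to whatever your server actually has):

sudo cp /etc/mysql/mysql.conf.d/50-server.cnf /etc/my.cnf   # or whichever 50-*.cnf you want as your base
sudo nano /etc/my.cnf                                       # add your InnoDB settings under [mysqld]
sudo systemctl restart mysql                                 # service name may be mysql or mariadb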
Show us the contents for further critique.
How much RAM do you have?
The most important setting for the server my.cnf is innodb_buffer_pool_size. If you have more than 4GB of RAM, set it to about 70% of available RAM. If you have less than 4GB, use some smaller percentage. Swapping is terribly bad for performance.
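For the original question (letting InnoDB use 2-4GB on the VPS), a minimal sketch of the relevant section could be as small as this; the 3G figure is only an illustration, size it to the RAM you actually have free:

[mysqld]
# roughly 70% of RAM on a dedicated DB box; less if Apache/PHP share the machine
innodb_buffer_pool_size = 3G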
Good morning.
I have a MariaDB environment with a very basic my.cnf, the one that comes by default with the product installation:
We've been working with this configuration from the beginning, so we have a year's worth of real production data in our DB.
If I substitute this my.cnf with this one here:
https://gist.github.com/fevangelou/fb72f36bbe333e059b66
which is close to our instance specs (VM, quad-core 2.20GHz, 8GB RAM), adapting it accordingly, do I risk corrupting, breaking, losing, or otherwise dirtying the existing data in the DB in any way? I mean block size changes, buffer changes, etc. Is it dangerous?
In my case, we have this MariaDB on a Windows Server 2016 instance, so I would have to adapt this optimized config file. So I should also ask: do you have a basic/recommended optimized my.cnf file for Windows, like the one above?
Thanks and regards.
I currently have a cloud-based server with the following config.
CentOS 7 64-Bit
CPU:8 vCore
RAM:16 GB
MariaDB/MySQL 5.5.5
Unfortunately, I've inherited a MyISAM database with tables that I have no authority to convert to InnoDB, even though the application performs many writes from many connections. The data is WordPress posts, with the typical large text and photos.
I'm experimenting with my.cnf config changes and was wondering if the config I've developed here is making use of the resources in the most efficient way. Is there anything glaring I could increase or decrease to squeeze out more performance?
key_buffer_size=4G
thread_cache_size = 128
bulk_insert_buffer_size=256M
join_buffer_size=64M
max_allowed_packet=128M
query_cache_limit=128M
read_buffer_size=16M
read_rnd_buffer_size=16M
sort_buffer_size=16M
table_cache=128
tmp_table_size=128M
This will depend entirely on the type of data you are storing, the structure and size of your tables and the type of usage your database has. Not to mention the amount of available RAM and the type of disks your server has.
The best recommendation, if you have shell access to the server (which I assume you must, otherwise you couldn't change my.cnf), is to download the mysqltuner script from major.io.
Run this script as a user with privileges to access your database, and preferably with root privileges on MySQL too (ideally run it under sudo or as root). It will analyse your database activity since MySQL's last restart and then give you recommendations for changing the options in my.cnf.
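For reference, downloading and running it usually looks something like this (URLs change over time, so double-check against the project page before piping anything into perl; the credentials are obviously placeholders):

wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl --user root --pass 'your_mysql_root_password'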
It isn't perfect, but it'll get you much further, and more quickly, than anyone on here trying to guess what values would be appropriate for your use case.
And, while not trying to pre-empt the results, I wouldn't be surprised if mysqltuner recommends that you drastically increase the size of your join buffer, table_cache and query_cache_limit.
I will really appreciate it if someone can help me with this.
I have spent something like 8 hours googling around and found no solution to the problem.
I have MySQL server version 5.7.7 on Windows Server 2008 R2.
The table engine is InnoDB.
innodb_file_per_table = 1
I get the error "Table is full" when the table reaches 4GB.
The MySQL documentation says there is actually only one limit on table size: the filesystem.
(http://dev.mysql.com/doc/refman/5.7/en/table-size-limit.html)
The HDD where the data is stored uses NTFS; just to be sure, I created a 5GB file without problems. And yes, there is more than 10GB of free space.
I understand the "innodb_data_file_path" setting is irrelevant if "innodb_file_per_table" is enabled, but I tried setting it anyway. No difference.
I have also tried a clean install of MySQL. Same result.
EDIT
The guy who installed the MySQL server before me had accidentally installed the 32-bit version. Migrating to 64-bit MySQL solved the problem.
About the only way for 4GB to be a file limit is if you have a 32-bit version of MySQL. Also check for 32-bit Operating system. (Moved from comment, where it was verified.)
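A quick way to confirm which build is actually running, from the mysql client (version_compile_machine is a standard system variable):

SHOW VARIABLES LIKE 'version_compile_machine';
-- x86_64 / winx64 means a 64-bit build; x86 / Win32 means 32-bit, which fits the limit described above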
I am also not sure, but read this; it may help you:
http://jeremy.zawodny.com/blog/archives/000796.html
One more thing: one guy had the same problem. He had made changes to the InnoDB settings innodb_log_file_size and innodb_log_buffer_size. The changes were:
1) shutdown mysql
2) cd /var/lib/mysql
3) mkdir oldIblog
4) mv ib_logfile* oldIblog
5) edit /etc/my.cnf, find the line innodb_log_file_size=, and increase it to an appropriate value (he went to 1000MB as he was dealing with a very large dataset... 250 million rows in one table). If you are not sure, I suggest doubling the number every time you get a "table is full" error. He set innodb_log_buffer_size to 1/4 of the size of his log file and the problems went away (see the example lines just below).
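For completeness, the my.cnf lines he ended up with would have looked roughly like this; these are his numbers for his particular dataset, not a general recommendation:

[mysqld]
innodb_log_file_size   = 1000M
innodb_log_buffer_size = 250M   # about 1/4 of the log file size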
I didn't find a solution to this; I have no idea why MySQL is unable to create a table larger than 4GB.
As a workaround, I moved just this table back into ibdata by setting "innodb_file_per_table" back to 0 and recreating the table.
Interestingly, even ibdata1 reported "table is full" when it reached 4GB, even without a max size set and with autoextend enabled.
So I created ibdata2 and let it autoextend; now I am able to write new data to that table.
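If anyone needs to reproduce that workaround, the rough sequence is sketched below: change the settings in my.cnf (the ibdata1 size must match the file's real current size or the server won't start), restart, then rebuild the affected table so it moves into the shared tablespace. The table name and the 4096M figure are just placeholders.

[mysqld]
innodb_file_per_table = 0
# ibdata1 size below must match the existing file's actual size
innodb_data_file_path = ibdata1:4096M;ibdata2:10M:autoextend

ALTER TABLE my_big_table ENGINE=InnoDB;  -- rebuilds the table inside the shared tablespace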
We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access it at the same time, the site goes down because of a RAM issue: httpd (Apache) takes the maximum memory. How do I overcome this? Is there any configuration I need to do in the httpd.conf file, or any network/traffic blocking tool I need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number; you might try 4. When this number is low, you also won't have many simultaneous clients connecting from your Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
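A sketch of what that looks like in httpd.conf, assuming the prefork MPM that a default Apache + mod_php setup usually runs (directive names are for Apache 2.4; on 2.2 the equivalent of MaxRequestWorkers is MaxClients, and the exact values here are just illustrative):

<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        1
    MaxSpareServers        2
    MaxRequestWorkers      4
    MaxConnectionsPerChild 1000
</IfModule>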
Then you probably should look at the my.cnf file for MySQL, and see what you can play around with.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find the memory_limit in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M. That will make each php instance use less RAM. But, if your WordPress installation is complex (lots of plugins, fancy theme), it may make some pages fail to load. WordPress will announce the problem as memory running out. http://php.net/memory-limit
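That's a one-line change in php.ini (64M shown here as the first step down; restart Apache afterwards so mod_php picks it up):

memory_limit = 64M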
Then, jump into MySQL's my.ini. The standard MySQL install comes with a file called my-small.ini, which contains the configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousands of rows, not hundreds of thousands. Save your old my.ini and then copy the contents of my-small.ini into my.ini. Restart your MySQL server after doing that.
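On a Linux instance the swap would look roughly like this; the paths and service name are assumptions for a typical install (the .ini names above are the Windows equivalents; on Linux the files are my.cnf and my-small.cnf), and note that newer MySQL versions no longer ship the my-small sample at all:

sudo cp /etc/my.cnf /etc/my.cnf.bak
sudo cp /usr/share/mysql/my-small.cnf /etc/my.cnf
sudo service mysqld restart    # service name may be mysql or mariadb depending on the package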
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.
I have very limited resources (RAM) on my server (Debian Lenny) and I need to install a MySQL server; it will not be used extensively. I installed it before with apt-get install mysql-server, but it was taking about 150MB of RAM, so I am looking for alternative servers. Are there any? I couldn't find anything.
Thank you in advance!
It can certainly be tuned to use less RAM than the default. In particular, Debian may ship it with a configuration that is more suitable for a typical server-grade machine.
If you feel the need to run MySQL on a very memory-constrained platform, consider tuning its memory usage as described here: How MySQL Uses Memory
You probably want to use InnoDB; the most important thing to tune is to give your InnoDB buffer pool (innodb_buffer_pool_size) a sensible size (there are other InnoDB buffers you may want to tune too; read the documentation).
If you aren't using MyISAM, reduce its key_buffer_size to a small value (say 4M). MyISAM can't be disabled as it's used internally.
If you aren't using InnoDB, turn it off entirely.
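Pulling those suggestions together, a low-memory my.cnf sketch might look like this; the numbers are illustrative starting points rather than tuned recommendations, and skip-innodb only applies on the old MySQL versions Lenny shipped, and only if you truly have no InnoDB tables:

[mysqld]
key_buffer_size         = 4M    # MyISAM is only used internally, so keep this tiny
innodb_buffer_pool_size = 32M   # keep modest if you do use InnoDB
# skip-innodb                   # uncomment only if InnoDB is not used at all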