Recommended Values for my.cnf - VPS: 2.33GHz, 1GB RAM - mysql

I was wondering if these my.cnf values would be ok for a Xeon 2.33GHz VPS with 1GB of RAM.
There are about 5 low-traffic sites on the server, which runs Apache/PHP/MySQL... I also want to enable the MySQL query cache and would appreciate any advice on how much RAM to allocate to it...
MySQL Settings:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Default to using old password format for compatibility with mysql 3.x
# clients (those using the mysqlclient10 compatibility package).
old_passwords=1
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
# symbolic-links=0
key_buffer = 16K
max_allowed_packet = 1M
thread_stack = 64K
table_cache = 4
sort_buffer = 64K
net_buffer_length = 2K
bind-address = 127.0.0.1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Thanks for the help :)

Some of those buffers are set very small compared to 1GB of RAM, even allowing for the fact that you are on a virtual machine.
Performance should improve if you raise a few of these options, and that won't eat too many resources.
If you are not sure what to change, take a look at the sample configurations MySQL ships with.
These files are located in /usr/share/doc/mysql-server-x.x/examples.
Your current configuration is closest to my-small.cnf, but my-medium.cnf is a better fit for your server.

I recommend you use mytop to analyze the real-time activity of your database.
Also use mysqltuner to get suggestions for your my.cnf values.
In addition, enable logging of slow queries:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
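If you prefer not to edit the file and restart, the same thing can be switched on at runtime on MySQL 5.1 and later, where these are dynamic variables (log_slow_queries is just the older name for the same feature); a quick sketch:
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 3;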
Here is the configuration of my dedicated server, tuned for a high transaction rate, in case it helps:
key_buffer_size = 16M
max_allowed_packet = 32M
innodb_additional_mem_pool_size = 10M
innodb_buffer_pool_size = 512M
join_buffer_size = 40M
table_open_cache = 1024
table_definition_cache = 256
query_cache_size = 40M
query_cache_limit = 10M
max_connections = 300
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
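Once the query cache is enabled, it is worth checking whether it actually pays off before giving it more RAM; the standard status counters give a rough picture (a quick check, nothing version-specific):
SHOW GLOBAL STATUS LIKE 'Qcache%';
SHOW GLOBAL STATUS LIKE 'Com_select';
-- rough hit rate = Qcache_hits / (Qcache_hits + Com_select)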
I hope it helps somewhat. If you want the details on these (and many other) server parameters, refer to the official MySQL documentation.
Regards.

Related

MySQL missing key_buffer_size in my.cnf

I have never really played around with MySQL settings before, but on our new Linux cloud server MySQL appears to eat up all the memory until it crashes; it then cannot restart because there is no memory left for the service, and I have to reboot the cloud server.
So I was looking at how I could tame the memory usage. After reading about key_buffer_size (and another setting I cannot recall off the top of my head) I had a look at the my.cnf file, and there is nothing with this setting in it. My my.cnf is as follows...
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysql.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
datadir = /var/lib/mysql
[mysql]
!includedir /etc/mysql/conf.d
Without the key_buffer_size set... will it just keep eating up memory till it runs out? Shouldn't this setting be set?
Cheers
The default for key_buffer_size is 8M per http://dev.mysql.com/doc/refman/5.0/en/server-parameters.html
However, I would ask that you set these first (and restart MySQL) and see if the problem persists.
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
query_cache_limit = 1M
query_cache_size = 16M
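To confirm what the server actually picked up after the restart, you can compare the running values against the file; a simple check using standard 5.x variable names:
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('key_buffer_size', 'query_cache_size',
                        'query_cache_limit', 'thread_cache_size',
                        'innodb_buffer_pool_size', 'max_connections');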
If the problem persists after those settings are applied, then it's likely not a configuration issue, but something deeper.
Additionally, see these:
https://dba.stackexchange.com/questions/31182/mysql-slowly-uses-memory-until-it-starts-to-use-swap
https://dba.stackexchange.com/questions/7400/why-does-mysql-use-all-of-memory-and-goes-into-swap-when-doing-lots-of-delayed-i
Thanks kindly for the advice!
It turns out it was innodb_buffer_pool_size; it was set too high for my machine. I have adjusted it and also optimized how much memory Apache uses. Everything seems to be running better and there is a bit of memory headroom now. Hopefully that fixes it for good.

Cannot change the mysql connection limit

I am on a Linux box with 8 cores and 16GB of RAM running only MySQL. All connections come from a web server on another machine on the same network, running PHP with CodeIgniter.
I cannot get more than 150 connections on MySQL.
My my.cnf is:
[mysqld]
user=mysql
port = 3306
socket = /tmp/mysql.sock
datadir = /usr/local/mysql/var/
skip-external-locking
max_connections=500
max_user_connections=500
open-files-limit = 500
key_buffer_size = 2048M
max_allowed_packet = 32M
table_open_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 128M
thread_cache_size = 250
table_definition_cache = 1024
query_cache_size = 32M
query_cache_limit = 32M
table_cache=1024
max_heap_table_size=1024M
key_buffer=2048M
wait_timeout=60
thread_concurrency = 16
long_query_time = 1
tmp_table_size=256M
show status reports that max_connections and max_user_connections are both 500.
Since MySQL says the connection limit is 500, I thought some other setting in PHP, Apache or CodeIgniter was limiting the requests to MySQL, but I cannot find any. I've searched Google for a few days trying to find answers, without any luck.
Are there limits set in any of the above-mentioned software? I will post configs if necessary.
Thank you.
Check the Max number of Apache worker processes (ServerLimit and MaxClients) in httpd.conf. Assuming a fixed number of connections per worker, you might be maxing out your number of workers, so nothing is requesting new MySQL connections.
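It is also worth checking whether MySQL itself has ever actually been asked for more than 150 simultaneous connections; the standard status counters will tell you:
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
If Max_used_connections never gets near 150, nothing on the web-server side is asking for more connections in the first place.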
Sorry, just found this on Google, but your config is foobar
max_connections=500
open-files-limit = 500
table_open_cache = 512
table_definition_cache = 1024
open-files-limit: 500 is not good!
table_open_cache: 512 tables * 2+ open files per table = 1024 open files used at a minimum.
max_connections: 500, and each open connection takes a new file handle for its buffer.
Your MySQL will open some 130 tables just to start (core tables), which leaves you a mere 240 file handles to share between data queries and connections. For each connection running a table query, 3+ file handles are consumed (connection, data table, index file(s)). That exhausts your open files long before you reach 150 connections. open-files-limit needs to start at more than 2048 for that kind of DB usage.
more help:
show global status like '%open%';
show global status like '%onnect%';
See for yourself how many files/connections are actually in use. Some operating systems (Windows XP, for example) hard-limit the number of files/network connections. Try googling "mysql 'YOUR-OS' 150 connections" and see if that is the limiting factor.

mySQL running 5x slower after optimization?

I have a Xeon 2.0GHz server (12 cores) with 16GB of memory, running Apache and MySQL for a website with around 50,000 records in InnoDB (Percona). My queries used to return in about 0.17 to 0.25 seconds; then I ran the Percona Tools MySQL optimizer, uploaded the new my.cnf file, and suddenly the same queries take 1.20 to 1.30 seconds, about 5x longer.
What did I do wrong? Here are my old and new my.cnf files:
NEW:
[mysqld]
default_storage_engine = InnoDB
key_buffer_size = 32M
myisam_recover = FORCE,BACKUP
max_allowed_packet = 16M
max_connect_errors = 1000000
log_bin = /var/lib/mysql/mysql-bin
expire_logs_days = 14
sync_binlog = 1
tmp_table_size = 32M
max_heap_table_size = 32M
query_cache_type = 0
query_cache_size = 0
max_connections = 200
thread_cache_size = 50
open_files_limit = 65535
table_definition_cache = 1024
table_open_cache = 2048
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
innodb_buffer_pool_size = 12G
log_error = /var/lib/mysql/mysql-error.log
log_queries_not_using_indexes = 1
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/mysql-slow.log
OLD:
[mysqld]
innodb_buffer_pool_size = 12000M
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_additional_mem_pool_size = 20M
innodb_thread_concurrency = 20
read_rnd_buffer_size=50M
query_cache_size=128M
query_cache_type=1
tmp_table_size=512M
wait_timeout=90
query_cache_limit=64M
key_buffer_size=128M
max_heap_table_size=512M
max_allowed_packet=32M
log_slow_queries
log-queries-not-using-indexes
long_query_time = 1
Are you swapping at all after running for a while?
You might try turning down your innodb_buffer_pool_size since you say the server is also running Apache. At the moment it looks like MySQL has the potential to use up all the server's memory for itself and leave nothing for the OS and Apache.
Try setting innodb_buffer_pool_size to 8G and then set innodb_log_file_size to 2G.
You can probably up your innodb_thread_concurrency as well, but since it isn't a dedicated MySQL server it may be fine at the default of 8. It depends on what CPU you have but the docs say:
The correct value for this variable is dependent on environment and
workload. You will need to try a range of different values to
determine what value works for your applications. A recommended value
is 2 times the number of CPUs plus the number of disks.
So play around with that and see what works best.
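Since innodb_thread_concurrency is a dynamic variable, you can experiment without restarting; for a 12-core box with, say, a couple of disks, the rule of thumb above works out to roughly 26 (purely an illustrative starting point, not a recommendation for your exact workload):
SET GLOBAL innodb_thread_concurrency = 26;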
Also, is your database larger than the amount of RAM you have or could your entire DB fit in memory?
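One way to answer that from information_schema (the figures there are estimates, so treat the result as approximate):
SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
FROM information_schema.TABLES;
If that comes out well under the buffer pool size, the whole data set already fits in memory.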
Just keep in mind that since you are running Apache on the same server, Apache is going to want to create a bunch of its own threads and consume as much memory as required for all the server processes and if you're running something like PHP that's going to take up memory as well.
You're going to have to find a good balance where Apache and MySQL can both perform at maximum capacity on the same system, but where neither one uses so much memory that the other has to swap.
Additional ways to troubleshoot or profile performance: check your slow query log and run EXPLAIN on the slow queries. You can also install the Percona Toolkit and run pt-query-digest to analyze your query performance. Read the docs here.
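For example, you can take a statement straight out of the slow log and prefix it with EXPLAIN to see which indexes (if any) it uses; the table and filter below are only placeholders:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 ORDER BY created_at DESC LIMIT 10;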

Optimizing MySQL database for FAST count

I'm currently trying to optimize my database. The problem is the following:
I have a table which currently stores over 83 million time-dependent values, indexed by a high-resolution (millisecond) timestamp. What I need to do is count how many times a certain value appears in a given interval of time; for example, how many times the value 1.56787 appeared between timestamp x and timestamp y. Right now this takes almost forever.
I'm using InnoDB and I have already put a lot of time into optimizing the config file, which increased the speed immensely.
I'm thankful for any input, as I'm pretty much running out of ideas how to pull this off. The only workaround I can think of is to create tables containing pre-counted values for fixed intervals, which would not really be satisfying since the whole thing should also stay fully updateable (new values arrive every few milliseconds). Would another database system be better suited for my problem?
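For concreteness, the query is essentially of this shape (the table name is just a placeholder, the columns match the structure below):
SELECT COUNT(*)
FROM ticks
WHERE ask = 1.56787
  AND timestamp BETWEEN 1362096000000 AND 1362182400000;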
Here is the table structure (EXPLAIN output):
Field      Type          Null  Key  Default  Extra
timestamp  bigint(20)    NO    PRI  NULL
ask        decimal(6,5)  NO         NULL
bid        decimal(6,5)  NO         NULL
askvolume  decimal(6,5)  NO         NULL
bidvolume  decimal(6,5)  NO         NULL
# The MySQL server
[mysqld]
port= 3306
socket= "C:/xampp/mysql/mysql.sock"
basedir="C:/xampp/mysql"
tmpdir="C:/xampp/tmp"
datadir="C:/xampp/mysql/data"
pid_file="mysql.pid"
skip-external-locking
key_buffer = 16M
max_allowed_packet = 61M
table_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error="mysql_error.log"
bind-address="192.168.1.2"
# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
# commented in by lampp security
#skip-networking
skip-federated
# Replication Master Server (default)
# binary logging is required for replication
# log-bin deactivated by default since XAMPP 1.4.11
#log-bin=mysql-bin
# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1
# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
# the syntax is:
#
# CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
# MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
# where you replace <host>, <user>, <password> by quoted strings and
# <port> by the master's port number (3306 by default).
#
# Example:
#
# CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
# MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
# start replication for the first time (even unsuccessfully, for example
# if you mistyped the password in master-password and the slave fails to
# connect), the slave will create a master.info file, and any later
# change in this file to the variables' values below will be ignored and
# overridden by the content of the master.info file, unless you shutdown
# the slave server, delete master.info and restart the slave server.
# For that reason, you may want to leave the lines below untouched
# (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin
# Point the following paths to different dedicated disks
#tmpdir = "C:/xampp/tmp"
#log-update = /path-to-dedicated-directory/hostname
# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock = 10000
# Comment the following if you are using InnoDB tables
#skip-innodb
innodb_data_home_dir = "C:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "C:/xampp/mysql/data"
#innodb_log_arch_dir = "C:/xampp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1024M
innodb_additional_mem_pool_size = 20M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 5M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 0
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
Oh, the machine is an i7-950 with 6GB of RAM, and the system plus the database are on an SSD. So I think that should not be the problem?
Thanks for your help, it will be highly appreciated!
I don't have a feel for the range of values in your indexed timestamp column, but it seems to me that partitioning your table could help you out here, specifically RANGE partitioning or HASH partitioning.
This should give you a significant performance boost.
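A minimal sketch of the RANGE variant, assuming the table is called ticks and that yearly partitions are coarse enough (MySQL 5.1+; it works here because the millisecond timestamp is already the primary key, which satisfies the partitioning-key rule):
ALTER TABLE ticks
PARTITION BY RANGE (timestamp) (
    PARTITION p2011 VALUES LESS THAN (1325376000000),  -- everything before 2012-01-01
    PARTITION p2012 VALUES LESS THAN (1356998400000),  -- everything before 2013-01-01
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
A COUNT over a timestamp interval then only has to scan the partitions that overlap that interval.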
If the time ranges can be expressed as a series of ranges (months, days, weeks, etc.), you might introduce something like a date-prefix column; together with an IN() expression, that will significantly reduce the number of examined rows.
Here is an article that explains the idea: http://www.mysqlperformanceblog.com/2010/01/09/getting-around-optimizer-limitations-with-an-in-list/
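A minimal sketch of that idea, with a hypothetical trade_day column maintained alongside the millisecond timestamp and indexed together with it (all names and dates here are made up for illustration):
ALTER TABLE ticks
    ADD COLUMN trade_day DATE,
    ADD INDEX idx_day_ts (trade_day, timestamp);
SELECT COUNT(*)
FROM ticks
WHERE trade_day IN ('2013-03-01', '2013-03-02')
  AND timestamp BETWEEN 1362096000000 AND 1362268800000
  AND ask = 1.56787;
The short IN() list on the prefix column lets the optimizer run a range scan on timestamp for each listed day instead of examining the whole interval.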
First step: if you haven't done so already, use EXPLAIN to see where exactly the bottleneck of your query is, and whether the engine is using the index(es) correctly.
Second step: partition your table by range on the timestamp. I'm not sure whether MySQL/InnoDB has that capability, but if it doesn't, you'd better change DBMS.
In any case, MySQL is not really a good choice for high performance: depending on your needs you may be better off with Oracle or PostgreSQL, or even an in-memory store (especially if you don't care too much about safety as opposed to performance).

how can I export 4.5 GB table from mysql?

I have a table with 38,406,168 rows which, according to phpMyAdmin, is 4.5GB in size. I want to see the last row of the table. Unfortunately I can't run select * from ... limit 38406166,1, or even select count(*) from ....
I changed my.ini in the WAMP server, but I still get a 'MySQL server has gone away' error while attempting to execute either of these queries. BTW, I couldn't even add an index on ID to make these operations quicker.
My last try was to export the table and look at the last row there, but the export only gives me 123MB of the file.
What should I do? Please help me. The machine is a 2.93 GHz CPU with 3.50GB of RAM.
Here is my my.ini file:
# The MySQL server
[wampmysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 2000M
table_cache = 4096
sort_buffer_size = 2000M
net_buffer_length = 8K
read_buffer_size = 2000M
read_rnd_buffer_size = 2000M
myisam_sort_buffer_size = 2000M
basedir=c:/wamp/bin/mysql/mysql5.1.36
log-error=c:/wamp/logs/mysql.log
datadir=c:/wamp/bin/mysql/mysql5.1.36/data
(.. these parts are deleted, since there is nothing to set as value)
# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = C:\mysql\data/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = C:\mysql\data/
#innodb_log_arch_dir = C:\mysql\data/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 384M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 10M
#innodb_log_buffer_size = 64M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 180
[mysqldump]
quick
max_allowed_packet = 160M
Thank you so much for your help
I tried a lot of things and ended up with these two working:
Simply mirror the database via MySQL's built-in master-slave replication (try Google, you'll find good tutorials) onto a simple backup server (most cheap hosting packages will work if they have SSH access).
Try http://www.mysqldumper.net/, the best tool to copy and split huge databases into 100MB parts. This simple open source tool did everything that "professional" backup scripts couldn't do.
You will want to use the mysqldump command to do this. Here is what I do on Linux, but I think it will translate to Windows (I see that you're running WAMP).
mysqldump --opt --force -Q --user=[your_user] -p [database_name] > dump.sql
You may need to change directory to where the mysqldump executable is located:
cd c:\path\to\mysql\bin