Change MySQL variables without a server restart

When I restart MySQL, the website becomes very slow for about 10 minutes. I think this is because of the InnoDB buffer pool and the query cache.
innodb_buffer_pool_size = 8G and query_cache_size = 2G.
I am changing my .cnf to optimize these settings, but I have 200 online users, and slowing the server down for 10 minutes makes them angry.
Is there any way I can change MySQL variables without restarting MySQL?

Those aren't the variables making you slow after a restart; those are the variables that make you fast again a few minutes later, once everything is back in memory.
Some global variables can be changed with SET GLOBAL varname = ..., and some can't, so it all depends. Just a reminder: if you're satisfied with the new values, don't forget to add them to the config file for when you DO need to restart. You wouldn't be the first to accidentally lose a vital piece of configuration that way.
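For example, a minimal sketch (the variable and values here are just illustrations; check the documentation for which variables are dynamic in your version):

    -- check the current value first
    SHOW GLOBAL VARIABLES LIKE 'query_cache_size';

    -- query_cache_size is dynamic, so this takes effect without a restart
    SET GLOBAL query_cache_size = 2147483648;  -- 2G

    -- innodb_buffer_pool_size, by contrast, is not dynamic until MySQL 5.7,
    -- so changing it on older versions still requires a restart

Note that SET GLOBAL only affects the running server; nothing is written to my.cnf, which is exactly why the config-file reminder above matters.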
Having multiple MySQL servers and taking one offline for a reload (the other one gets all the queries) can be a blessing, but may be pricey of course.

Related

What is happening as my Sphinx search server warms up?

I have Sphinx Search running on a Linux server with 38GB of RAM. The Sphinx index contains 35M full-text documents plus metadata indexed from a MySQL table. When I launch a fresh server, I run a script that "warms up the Sphinx cache" by sending my 10,000 most common queries through it. It takes about an hour to run the warm-up script the first time, but the same script completes in just a few minutes if I run it again.
My confusion arises from the fact that Sphinx doesn't have any documented caching, other than a file-based cache that I am not using. The index is loaded into memory when Sphinx starts, but individual queries take the same length of time each time they are run after the system has been "warmed up".
There is a clear warm-up period when I run my scripts. What is going on? Is Linux caching something that helps Sphinx run faster? Does the underlying MySQL system cache queries (I believe Sphinx is basically a custom MySQL storage engine)? How are new queries that have never been run being made faster by whatever is going on?
I realize there is likely a very complex explanation for this, but even a little direction should help me dig deeper.
(I believe Sphinx is basically a custom MySQL storage engine)
SphinxSE is a 'fake' storage engine: fake because it doesn't store any data. It takes requests for data through its 'table', but really it just proxies them to a running searchd instance in the background.
searchd itself doesn't have any caching, but, as mentioned, as the indexes are read the OS may well start caching the files, so reads don't have to go all the way back to disk.
If you are using SphinxSE, then queries may be cached by the normal MySQL query cache, so whole result sets are cached. But in addition, the usual way to use SphinxSE is to join the search results back with the original dataset, so you get both returned to the app in one go. So your queries also depend on the real MySQL data tables, and those are subject to the same OS caching: as MySQL reads data, it will be cached.
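For context, a typical SphinxSE setup looks something like this (the table, column, and index names here are just illustrations); the join back to the real data table is what makes the query depend on MySQL's own tables:

    -- the SphinxSE 'table' stores nothing; it proxies queries to searchd
    CREATE TABLE documents_search (
        id     BIGINT UNSIGNED NOT NULL,
        weight INT NOT NULL,
        query  VARCHAR(3072) NOT NULL,
        INDEX(query)
    ) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/documents";

    -- search via searchd, then join the matches back to the real table
    SELECT d.*
    FROM documents_search s
    JOIN documents d ON d.id = s.id
    WHERE s.query = 'warm cache;mode=any';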
When I launch a fresh server
That suggests you are using a VM? If so, the virtual disk might actually be located on a remote SAN (or EBS on Amazon EC2),
which means loading a large Sphinx index via that route might well be slow.
Depending on where your VM is hosted, you might be able to get some special high-performance disks, ideally local to the host, maybe even SSD, which may well help.
Anyway, to trace the issue you should almost certainly enable the Sphinx query log. Look into that to see if queries are executing slowly there. There is also a startup option to searchd where you can enable iostats; this will log additional information about I/O statistics to the query log as queries are run. This can give you additional insights.
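A sketch of what that might look like (the log path is illustrative):

    # sphinx.conf: enable the query log in the searchd section
    searchd
    {
        query_log = /var/log/sphinx/query.log
    }

    # start searchd with per-query I/O statistics written to the query log
    searchd --iostats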
Sphinx doesn't cache your queries, but the file system does. So yes, second-time queries execute faster than first-time ones.

Flush InnoDB cache

I have some reporting queries that are rarely run, which I need to be performant without relying on them being cached anywhere in the system. In testing various schema and sproc changes I'll typically see the first run be very slow and subsequent runs fast, so I know there's some caching going on that's making it cumbersome to test changes. Restarting mysqld or running several other large queries are the only reliable ways to reproduce it. I'm wondering if there's a better way.
The MySQL Query Cache is turned OFF.
Monitoring the disk, I don't see any reads happening except on the first run. I'm not that familiar with the disk cache, but I would expect that if that's where the caching were happening, I'd still see disk reads; they'd just be very fast.
MONyog gives me what I think is the definitive proof: the InnoDB cache hit ratio. Monitoring it, I see that when the query is fast it's hitting the InnoDB buffer pool, and when it's slow it's hitting disk.
On a live system I'll gladly let InnoDB do this, but for development and test purposes I'm interested in worst case scenarios.
I'm using MySQL 5.5 on Windows Server 2008 R2.
I found a post on the Percona blog that says:
For MySQL Caches you can restart MySQL and this is the only way to clean all of the caches. You can do FLUSH TABLES to clean MySQL table cache (but not Innodb table meta data) or you can do “set global key_buffer_size=0; set global key_buffer_size=DEFAULT” to zero out key buffer but there is no way to clean Innodb Buffer Pool without restart.
In the comments he goes on to say:
Practically everything has caches. To do real profiling you need to profile real query mix which will have each query having appropriate cache/hit ratio not running one query in the loop and assuming results will be fine.
I guess that sums it up. It does make it hard to test individual queries. My case is that I want to try forcing different indices to make sure the query planner is picking the right one, and apparently I'll have to restart MySQL between tests to take the cache out of the equation!
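For reference, the index-forcing part of such a test looks something like this (the table and index names are hypothetical; SQL_NO_CACHE just guards against the query cache, should it ever be turned back on):

    -- check which index the planner picks on its own
    EXPLAIN SELECT report_date, total
    FROM sales_report
    WHERE report_date >= '2013-01-01';

    -- then force an alternative and compare
    SELECT SQL_NO_CACHE report_date, total
    FROM sales_report FORCE INDEX (idx_report_date)
    WHERE report_date >= '2013-01-01';

The warm-up problem described above still applies: timings of the actual runs will differ between cold and warm buffer pools.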

Necessity of static cache for MySQL queries?

This seems like a clear-cut issue, but I was unable to find an explicit answer. Consider a simple MySQL database with an indexed ID and no complicated processing: just reading a row with a WHERE clause. Does it really need to be cached? Reducing MySQL queries apparently satisfies everyone. But I tested reading a text both from a flat cache file and via a MySQL query, in a loop of 1 to 100,000 cycles. Reading from the flat file was only 1-2 times faster (but needed double the memory). The CPU usage (roughly estimated from top over SSH) was almost the same.
Now I do not see any reason to use a flat-file cache. Am I right, or is the case different in the long term? What could make queries slow in such a simple system? Is it still useful to reduce MySQL queries?
P.S. I am not discussing the internal query cache or systems like memcached.
It depends on how you look at the problem.
There is a limit on the number of MySQL connections that can be established at any one time (see the snippet after this list).
Holding MySQL connection resources on a busy site could lead to a max-connections error.
Establishing a connection to MySQL via TCP is a resource eater (if your database sits on a different server); in that case, accessing a local disk file will be much faster.
If your database server is located outside the local network, the cost of the physical distance weighs even heavier.
If records are updated only once daily, a cached copy really is requested once and then reused for the rest of the day.
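On the connection-limit point, MySQL will show you both the configured cap and the observed high-water mark:

    -- the configured limit on simultaneous connections
    SHOW GLOBAL VARIABLES LIKE 'max_connections';

    -- the most connections that have actually been in use at once
    SHOW GLOBAL STATUS LIKE 'Max_used_connections';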

Using a MySQL database is slow

We have a dedicated MySQL server, with about 2000 small databases on it. (It's a Drupal multi-site install - each database is one site).
When you load each site for the first time in a while, it can take up to 30s to return the first page. After that, pages return at an acceptable speed. I've traced this through the stack to MySQL. Also, when you connect with the command-line mysql client, the connection is fast, then "use dbname" is slow, and then queries are fast.
My hunch is that this is due to the server not being configured correctly, and the unused dbs falling out of a cache, or something like that, but I'm not sure which cache or setting applies in this case.
One thing I have tried is innodb_buffer_pool_size. This was set to the default 8M. I tried raising it to 512MB (the machine has ~2GB of RAM, and the additional RAM was available), as the reading I did indicated that more should give better performance, but this made the system run slower, so it's back at 8M now.
Thanks for reading.
With 2000 databases you should adjust the table cache setting. You certainly have a lot of misses in this cache.
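To check that, compare how often tables have had to be (re)opened against the size of the cache; an Opened_tables counter that keeps climbing alongside a small cache is the classic symptom:

    -- tables opened since startup; if this keeps growing, the cache is too small
    SHOW GLOBAL STATUS LIKE 'Opened_tables';

    -- the cache size itself (table_cache before MySQL 5.1, table_open_cache since)
    SHOW GLOBAL VARIABLES LIKE 'table_open_cache';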
Try using mysqltuner and/or tuning-primer.sh to get more information on potential issues with your settings.
Now, Drupal does database-intensive work. Check your Drupal installations; you may be generating a lot (too many) of requests.
About innodb_buffer_pool_size: you certainly have a lot of buffer pool page misses with such a small buffer (8M). The ideal size is when all your data and indexes fit in the buffer, and with 2000 databases... well, the current size is certainly far too small, but it will be hard for you to grow it much. Tuning a MySQL server is hard: if MySQL takes too much RAM, your Apache won't get enough.
Solutions are:
check that you make the connection with an IP address rather than a DNS name (just in case)
buy more RAM
put MySQL on a separate server
adjust your settings
For Drupal, try to store sessions not in the database but in memcache (you'll need RAM for that, but it will be better for MySQL); modules for that are available. If you have Drupal 7 you can even try to put some of the cache tables in memcache instead of MySQL (do not do that with the big cache tables).
Edit: one last thing. I hope you have not modified Drupal to use persistent database connections; some modules allow that (and old Drupal 5 tries to do it automatically). With 2000 databases that would kill your server. Try checking the MySQL error log for "too many connections" errors.
Hello Rupertj. As I read it, you are using InnoDB tables, right?
InnoDB tables are a bit slower than MyISAM tables, but I don't think that is a major problem. As you said, you are using Drupal: is that a multi-site system, like WordPress?
If so, sorry, but with this kind of system, each time you install a plugin or something else, your database grows in tables and of course in data, and it can turn into something very, very slow. I have experienced this myself, not with Drupal but with the WordPress blog system, and it was a nightmare for me and my friends.
Since then, I have abandoned the project... and my only advice to you is: don't install a lot of plugins in your Drupal system.
I hope this advice helps you, because it helped me a lot with WordPress.
This sounds like a caching issue in Drupal, not MySQL. It seems there are a few very heavy queries, or many, many small ones, or both, that hammer the database server. Once that is done, Drupal caches the result in several caching layers. After that, only one (or very few) queries are needed to build a page. Slow in the beginning, fast after that.
You will have to profile it to determine what the cause is, but the table cache seems like a likely suspect.
However, you should also be mindful of persistent connections, which should absolutely, definitely, always be turned off (yes, for everyone, not just you). Apache/PHP persistent connections are a pessimisation that you and everyone else can generally do without.

MySQL active connections at once, Windows Server

I have read every answer I could find and searched via Google, but I am rather a novice and don't seem to get a clear understanding of the following question.
A lot of what I've read has to do with web servers, but I don't have a web server; I have an intranet database.
I have a MySQL database on a Windows server at work.
I will have many users accessing this database constantly to perform simple queries and write new records back to it.
The read/write load will not be that heavy (chances are 50-100 users will do so at exactly the same time, even if thousands could be connected).
The GUI will be via Excel forms and/or Access.
What I need to know is the maximum number of active connections I can have at any given time to the database.
I know I can change the number in MySQL Admin; however, I really need to know what will actually work...
I don't want to allow 1000 users if the system will really only handle 100 correctly (beyond that point, although still connected, users would find performance too slow, for example).
Any ideas or first-hand experiences will be appreciated.
This depends mainly on your server hardware (RAM, CPU, networking) and on the load from other processes if the server is not dedicated to the database. I don't think there is an absolute answer; the best way is testing.
I think something like 1000 should work OK, as long as you use a 64-bit MySQL server. With 32 bits, too many connections may create virtual memory pressure: each connection has its own thread, and every thread needs a stack, so the stack memory reduces the possible size of the buffer pool and other buffers.
MySQL generally does not slow down if you have many idle connections; however, special commands that enumerate every connection, e.g. SHOW PROCESSLIST or KILL, will be somewhat slower.
If an idle connection stays idle for too long (its idle time exceeds the wait_timeout parameter), it is dropped by the server. If this could happen in your scenario, you might want to increase wait_timeout (its default value is 8 hours).
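A quick sketch (the new value is just an example):

    -- the default is 28800 seconds (8 hours)
    SHOW GLOBAL VARIABLES LIKE 'wait_timeout';

    -- raise it for new connections; existing sessions keep their old value
    SET GLOBAL wait_timeout = 86400;  -- 24 hours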