I have a Couchbase bucket consisting of ~110 million documents occupying ~58 GB of disk space. The bucket's allocated Dynamic RAM Quota is 48.8 GB, and the Index RAM Quota for the cluster is ~36 GB. I'm trying to build a secondary index on the bucket using GSI.
The query to create the index runs for ~2 minutes and then returns the error GSI CreateIndex() - cause: Request Timeout. I'm also getting the following warning from the web UI: Approaching full Indexer RAM warning. Usage of Indexer RAM on node "127.0.0.1" is around 2669%. This is above the threshold of 75%.
Is there some way I can increase the timeout period for the query? Also, the query only runs for about 2 minutes before timing out; does that have something to do with the RAM warning, i.e. does it point to an increased hardware requirement?
I created the Query Workbench in the Couchbase 4.5 UI. Are you using the 4.5 DP (Developer Preview) version?
There is indeed a timeout on queries issued from the UI, but it should be set to 5 minutes. Are you sure about the 2 minutes you are reporting, or could it be 5? If it really is 2 minutes, there could well be a bug.
Please note, however, that index creation continues after this timeout. If you go to the indexes tab, you should see that the index continues to build. So it shouldn't be a problem that the Query Workbench timed out. (I believe we fixed the error message to indicate this in a later version.)
If the Indexes tab does not show the index continuing to build, that is very possibly a bug; if so, please provide more details about which version you are using.
In general, the entire UI will log you out after 10 minutes of inactivity, so the Workbench isn't the right place for long-running queries; for those, use the 'cbq' command-line tool, which does not have a time limit.
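If you do kick off the build from cbq, one option worth knowing about (a sketch only; the bucket, field, and index names below are placeholders, and this assumes the 4.x N1QL syntax) is to create the index with a deferred build, then trigger the build and watch its state go from pending to building to online:
CREATE INDEX idx_docs_field ON `mybucket`(fieldName) USING GSI WITH {"defer_build": true};
BUILD INDEX ON `mybucket`(idx_docs_field) USING GSI;
SELECT name, state FROM system:indexes WHERE keyspace_id = "mybucket";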
As for the "Indexer RAM Warning" messages, they are completely unrelated to the timeout in the Query Workbench. You can stop them by increasing the amount of RAM given to the indexer in the Settings -> Cluster tab.
We are running MySQL 5.6 on Windows Server 2008 R2.
Every 30 minutes it runs very slowly for around 40 seconds and then goes back to normal for another 30 minutes. It is happening like clockwork with each ‘hang’ being 30 minutes after the last one finished.
Any ideas? We are stumped and don’t know where next to look.
Background / things we have ruled out below.
Thanks.
• Our initial thought was a locking query, but we have eliminated this.
• The slow query log shows affected queries but with zero lock time.
• General logs show nothing. (As an aside, is there a way to increase the logging level to get it to log when it is flushing caches etc.? What does MySQL run every 30 minutes? See the sketch after this list.)
• When it is running slowly, it is still responding, but even trivial queries like SELECT 'Hello World'; take over a second to run.
• All MySQL operations run slowly at the time in question including monitoring tools and especially making new connections. InnoDB and MyISAM are equally affected.
• We have switched from the SAN array to local SSD and it made no difference, ruling out disk/spindles.
• The machine has Sophos Endpoint Protection but this is not scanning anything on the database drives.
• It is as if the machine is maxed out, but local performance monitoring does not show any unusual system metrics: CPU, disk queue, disk throughput, memory, network activity etc. are all flat.
• The machine is a VM running on VMware. Hypervisor monitoring is not showing any performance issues – but I am not convinced it is granular enough to pick up a 30 second spike.
• We have tried adjusting MySQL settings like the InnoDB cache size, log size, etc., and this has made no difference.
• The server runs nothing other than a couple of MySQL instances.
• The other instances are unaffected - as far as we can tell.
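To capture more detail during the next slow window, here is a minimal set of checks to run from the mysql prompt while the stall is happening, and again during a normal window for comparison (nothing here is specific to the setup above, just standard MySQL commands and status counters):
SHOW FULL PROCESSLIST;                                      -- what every connection is doing right now
SHOW ENGINE INNODB STATUS;                                  -- pending I/O, checkpoint activity, semaphore waits
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';   -- a sharp drop suggests a heavy dirty-page flush
SHOW GLOBAL STATUS LIKE 'Threads_connected';                -- are connections piling up during the stall?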
There's some decent advice here on Server Fault:
https://serverfault.com/questions/733590/mysql-stops-responding-periodically
Have you monitored Disk I/O? Is there an increase in I/O wait times or queued transactions? It's possible that requests are queueing up at the storage level due to an I/O limitation put on by your host. Also, have you checked if you're hitting your max allowable mysql clients? If these queries are suddenly taking a lot longer to complete, it's also possible that it's not leaving enough available connections for normal site traffic because the other connections aren't closing fast enough.
I'd recommend using iostat to see whether you're saturating your disks; it will show if any of your disks are at 100% utilization, etc. (On Windows, the equivalent counters are available in Performance Monitor / Resource Monitor.)
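To check the "max allowable mysql clients" part of that advice without any extra tooling, the standard status counters are enough:
SHOW VARIABLES LIKE 'max_connections';            -- the configured ceiling
SHOW GLOBAL STATUS LIKE 'Max_used_connections';   -- high-water mark since startup
SHOW GLOBAL STATUS LIKE 'Threads_connected';      -- connections open right now
If Max_used_connections is sitting at or near max_connections, new clients will queue or be refused during the slow windows.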
My Rails application takes a JSON blob of ~500 entries (from an API endpoint) and throws it into a Sidekiq/Redis background queue. The background job parses the blob, then loops through the entries to perform a basic Rails Model.find_or_initialize_by_field_and_field() and model.update_attributes().
If this job were in the foreground, it would take a matter of seconds (if that long). I'm seeing these jobs remain in the sidekiq queue for 8 hours. Obviously, something's not right.
I've recently re-tuned the MySQL database to use 75% of available RAM as the buffer_pool_size, divided amongst 3 buffer pools. I originally thought that might be part of the deadlock, but the load average on the box is still well below any problematic level (5 CPUs and a load of ~2.5). At this point I'm not convinced the DB is the problem, though of course I can't rule it out.
I'm sure at this point that I need to scale back the Sidekiq worker instances. In anticipation of the added load I increased the concurrency to 300 per worker (I have 2 active workers on different servers). Under a relatively small amount of load the queues operate as expected; even the problematic jobs complete in ~1 minute. That said, per the Sidekiq documentation, a concurrency above 50 is a bad idea, although I wasn't having any stability issues at 150 per instance. The problem has been this newly introduced job that performs ~500 MySQL finds and updates.
If this were a database timeout issue, the background job should have failed and been moved from the active (busy) queue to the failed queue. That's not the case. They're just getting stuck in the queue.
What other MySQL or Rails/Sidekiq tuning parameters should I be examining to ensure these jobs succeed, fail, or properly time out?
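One concrete thing worth checking, purely as a suggestion since the question doesn't say how the connection pool is sized: whether 2 workers x 300 threads can actually get database connections. The Rails database.yml pool should be at least as large as the Sidekiq concurrency, and the stock MySQL max_connections default (151 in recent versions) is far below 600:
SHOW VARIABLES LIKE 'max_connections';            -- can the server accept ~600 concurrent clients?
SHOW GLOBAL STATUS LIKE 'Max_used_connections';   -- high-water mark since startup
SHOW GLOBAL STATUS LIKE 'Threads_running';        -- executing vs. merely connected and waiting
SHOW FULL PROCESSLIST;                            -- look for many threads stuck in the same state
If threads are mostly waiting on connections or on MySQL itself, that would look a lot like jobs sitting in the queue rather than failing outright.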
I've got a query that is running 5x slower on my staging server as opposed to my local dev machine.
Stack Overflow doesn't want to play nicely with the formatting; the query, DESCRIBEs, and EXPLAINs are located here.
Looking at the describe statements, I can't see any difference between the local and remote schemas.
The record counts for the 2 machines are in the same order of magnitude (500k vs 600k).
Edit In Response to Comments
It was my highly unscientific approach of throwing the queries into MySQL Workbench and looking at the query time. The local query time was on the order of 1.3 seconds and the remote query time was on the order of 5.2 seconds (so it's 4x as slow). I'm sure there's a better way to test this query time.
The machines are different. My dev machine is a MacBook Pro with 8 GB of RAM. The staging server is a Linode VPS with 512 MB of RAM. There shouldn't be much load on the staging server (I'm the only one that uses it). I've noticed most queries run in approximately the same time frame on the local machine and the staging server, so I was confused as to why this one had such a drastically different time frame.
RAM Issue
Since a temporary table isn't being used (no mention in the EXPLAINS), is the amount of RAM still an issue?
Output from free
             total       used       free     shared    buffers     cached
Mem:        508576     453880      54696          0       4428     254200
-/+ buffers/cache:      195252     313324
Swap:       262140      19500     242640
Profiling Added to Gist
It looks like the remote is taking 2.5 seconds 'sending data' whereas the local is only taking 0.5 seconds. Is this an I/O issue? (Complete profiling info in gist)
Your staging server has one sixteenth of the RAM that your MacBook Pro has.
Without knowing how much RAM is available to your two instances of MySQL, it's hard to be definitive, but that's the first place I'd look.
Also, if you run these queries from the MySQL command line, locally, how do the times compare?
It could be that the increase in time is in network transfer and not query processing.
Actually... network transfer time is the first place I'd look... then MySQL memory usage.
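A quick way to get numbers that exclude the client and the network on both machines (SHOW PROFILE exists in MySQL 5.x, though it is deprecated in later releases; the SELECT below is just a placeholder for your actual query):
SET profiling = 1;
SELECT SQL_NO_CACHE COUNT(*) FROM some_table;   -- placeholder; run the real query here
SHOW PROFILES;                                  -- wall-clock duration per statement
SHOW PROFILE FOR QUERY 1;                       -- stage breakdown; use the Query_ID reported by SHOW PROFILES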
EDIT following question updates
The 'Sending data' phase is the phase where the server is sending data to the client (ref). I don't know exactly how large your result set is, but 2.5 s seems pretty high for what's probably 50 kB of data or so.
Having looked at the profiling data, nearly all the time is spent sending data, so I'd strongly suspect the network here.
EDIT 2
Some research led me to this page, which indicates that 'Sending data' is misleading and that it is actually the time spent executing your query.
Thus, I really think you need to be looking at CPU and memory usage on your server since it's specced at a level so much lower than your MacBook.
I have enabled the MySQL slow query log on my server to log queries that take over 5 seconds. When I run the same queries from the mysql> prompt, queries that were reported as taking over 10 seconds complete in less than 1 second. Why is that?
TIA,
-peter
It's likely due to caching.
Databases cache some data in memory while leaving most of it on disk. When data is fetched by a SQL query, it is loaded from disk and then kept in memory in case it is requested again in the near future. Eventually, if that data is not requested again, it will be evicted from the cache to make room for data from later queries.
On some systems there is additional caching on the hard disk itself -- data fetched from disk once may be kept in cache by the disk controllers. This is because data fetched from disk recently is likely to be fetched again.
So once data has been requested from the query prompt it will be cached -- potentially by the disk controller and by the database itself.
In your application, this is less likely to happen, since new users come on at different times and access their own data (which is unlikely to have been cached before they arrive).
One of the most common ways to speed up a database is to increase the amount of memory the server has available so more data can be cached.
For more information, see the MySQL documentation on the MySQL Query Cache.
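If you want to see how much of this is the query cache versus the InnoDB buffer pool simply staying warm, a couple of quick checks (the table and column names in the last statement are hypothetical; substitute your own):
SHOW VARIABLES LIKE 'query_cache%';                    -- is the query cache on, and how big is it?
SHOW GLOBAL STATUS LIKE 'Qcache%';                     -- hits vs. inserts
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';    -- logical reads vs. reads that had to hit disk
SELECT SQL_NO_CACHE id FROM orders WHERE customer_id = 42;   -- re-run your statement while bypassing the query cache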
My site started dragging lately, with queries taking far longer than I would expect given properly tuned indexes. I just restarted the MySQL server after 31 days of uptime, and now every query is substantially faster and the whole site renders 3-4 times faster.
Does anything jump out at you as to why this may have happened? Improper settings in my.cnf perhaps? Any ideas on where I can start looking to pinpoint the cause?
thanks
Updated note: I have a 16 GB dedicated DB box, and MySQL runs at about 71% of memory after a week or so.
Try executing show processlist;. Maybe there are some long-lasting threads that were not killed for some reason.
Similarly, execute SHOW SESSION STATUS LIKE 'Created%'; to check whether MySQL has created too many temporary tables.
A server restart automatically closes all open temporary tables and kills threads, which is why the application might run quicker afterwards.
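To put numbers on the temporary-table theory before the next restart, a rough sketch of what to compare (standard status and variable names, nothing specific to this server):
SHOW GLOBAL STATUS LIKE 'Created_tmp%';     -- compare Created_tmp_disk_tables with Created_tmp_tables;
                                            -- a high ratio means temp tables are spilling to disk
SHOW VARIABLES LIKE 'tmp_table_size';       -- an internal temp table spills to disk once it exceeds
SHOW VARIABLES LIKE 'max_heap_table_size';  -- the smaller of these two limits
SHOW PROCESSLIST;                           -- any long-running threads holding on to resources?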
Do you have temporary table(s) that might not be getting cleared/collected?
I would suggest using MySQL Enterprise for analysis purposes. It comes with a 30-day trial. We just used it and got alerts such as:
CRITICAL Alert - Table Scans Excessive
The target server does not appear to be using indexes efficiently.
CRITICAL Alert - Connection Usage Excessive
CRITICAL Alert - CPU Usage Excessive
WARNING Alert - MyISAM Key Cache Has Sub-Optimal Hit Rate
Just something to explore!
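If you'd rather not install the Enterprise tooling, a rough equivalent of the "Table Scans Excessive" check can be pulled from plain status counters:
SHOW GLOBAL STATUS LIKE 'Select_scan';             -- SELECTs that did a full scan of the first table
SHOW GLOBAL STATUS LIKE 'Select_full_join';        -- joins performed without usable indexes
SHOW GLOBAL STATUS LIKE 'Handler_read_rnd_next';   -- rows read sequentially; grows quickly with table scans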