Is there a way to reset the memory used by SQL Server 2008 R2 to what it would be if I restarted the service? (but I don't want to restart the service)
I tried using
CHECKPOINT;            -- write dirty pages to disk
DBCC FREEPROCCACHE;    -- clear the entire proc (plan) cache
DBCC DROPCLEANBUFFERS; -- clear the entire data cache
but I always free up more memory by restarting the service.
You can use the sp_configure procedure to change the max server memory (MB) configuration setting. SQL Server will adjust to the new setting without a restart.
If you find that SQL Server is hogging memory needed by other processes, set this option to the desired value and then leave it alone.
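For example, a minimal sketch of capping the instance at 4 GB (the 4096 value is only an illustration; pick a limit that suits your server):
EXEC sp_configure 'show advanced options', 1; -- 'max server memory (MB)' is an advanced option
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096; -- illustrative value, not a recommendation
RECONFIGURE;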
I found the excellent webcast Mission-Critical SQLCLR, which explains that the max server memory setting in SQL Server does not include memory used by SQLCLR.
To find out how much memory is actually used by SQLCLR, you can run:
SELECT *
FROM sys.dm_os_memory_objects
WHERE [type] LIKE '%CLR%';
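To get a single total instead of one row per memory object, something like this should work on 2008 R2 (note: the pages_allocated_count and page_size_in_bytes columns were replaced by a single pages_in_bytes column in SQL Server 2012, so adjust for your version):
SELECT SUM(pages_allocated_count * page_size_in_bytes) / 1024.0 / 1024.0 AS clr_memory_mb
FROM sys.dm_os_memory_objects
WHERE [type] LIKE '%CLR%';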
Related
I have built the MySQL server from source and am able to run mysqld in gdb while running a MySQL shell in another terminal window. When I connect to the MySQL server I can interrupt the server in gdb and see that an extra thread has been created that is monitoring the created connection. I then continue the server in gdb. If I send the server a query such as:
SELECT * FROM table;
how can I set a breakpoint in order to see the parsing, planning, and access methods required to run this query?
As a follow-up, I see that the server and the InnoDB storage engine both have their own SQL parsers. Why are there two?
I have a setup running Debian Jessie (Linux) with MySQL 5.7.13 installed.
I have set the following settings in my.cnf:
default_storage_engine = innodb
innodb_buffer_pool_size = 44G
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
Then I trigger my loadtest that sends a lot of traffic to the DB server which mostly consists of slow/bad queries.
The result I expected was that I would get close to 1000 connections, but somehow MySQL limits it to 462 connections, and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea or could point me in a direction where you think the error might be it would be really helpful.
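One way to see which limit is actually in effect is to check the standard variables and status counters:
SHOW VARIABLES LIKE 'max_connections';           -- the limit the server is currently enforcing
SHOW VARIABLES LIKE 'open_files_limit';          -- a low OS file-descriptor limit can silently cap connections at startup
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- the high-water mark actually reached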
What load test did you use? Are you sure it can actually open that many connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB buffer pool is large, the server still needs to read data from disk into the cache first, and if your database is much larger than the pool, the pool alone will not help you.
I recommend performing the test one more time while tracking your disk performance with the iostat or iotop utility.
Look here for more examples of server performance troubleshooting.
I found the issue: it was due to a limitation of the Apache server. There is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
Thank you!
I run a service that needs to be able to support about 4000+ IOPS and keep replica lag <=1 second to function properly.
I am using AWS RDS MySQL instances and have 2 read replicas. My service was experiencing giant replica lag spikes on the read replicas, so I was in contact with AWS support for a week trying to understand why: I had 6000 IOPS provisioned and my instances were very powerful. They gave me all kinds of reasons.
After changing instance types, upgrading from MySQL 5.5 to 5.6 to take advantage of multi-threading, and having AWS replace the underlying hardware, I was still seeing significant replica lag at random times.
Eventually I decided to start tinkering with the parameter groups, changing the configuration of just the read replicas for anything I could find that was involved in the replication process, and I am now finally seeing <= 1 second of replica lag.
Here are the settings I changed, with the values that appear to be successful (I copied the default MySQL 5.6 parameter group, changed these values, and applied the updated parameter group to just the read replicas):
innodb_flush_log_at_trx_commit=0
sync_binlog=0
sync_master_info=0
sync_relay_log=0
sync_relay_log_info=0
Please read about each of these to understand the impact of the modifications: http://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html. All five trade durability (how aggressively the InnoDB log, binlog, and relay-log state are flushed to disk) for speed, which is usually acceptable on a replica that can be rebuilt from the master.
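For reference, on a self-managed MySQL 5.6 server all five are dynamic variables and could be applied at runtime as below; on RDS they must go through a parameter group, since SET GLOBAL requires the SUPER privilege:
SET GLOBAL innodb_flush_log_at_trx_commit = 0; -- flush the InnoDB log roughly once per second instead of per commit
SET GLOBAL sync_binlog = 0;                    -- let the OS decide when to sync the binary log
SET GLOBAL sync_master_info = 0;               -- do not fsync master.info after every event
SET GLOBAL sync_relay_log = 0;                 -- do not fsync the relay log after every event
SET GLOBAL sync_relay_log_info = 0;            -- do not fsync relay-log.info after every event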
Other things to make sure you take care of:
Convert any MyISAM tables to InnoDB (a query to find them is sketched after this list)
Upgrade from MySQL < 5.6 to MySQL >= 5.6
Ensure that your provisioned IOPS are > the combined read/write IOPS you require
Ensure that your read replica instances are at least as powerful as the master instance
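A query to locate any remaining MyISAM tables (information_schema is standard; the exclusions are just the built-in system schemas):
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');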
If anyone else has additional parameters that could be modified on the read replicas or the master DB to get the best replication performance, I'd love to hear about them.
UPDATE 7-8-2014
To take advantage of MySQL 5.6 multi-threaded replication I've set:
slave_parallel_workers=5
(In 5.6 the slave parallelizes per database/schema, so set this to roughly the number of databases that receive writes.)
I found this here:
https://blogs.oracle.com/MySQL/entry/benchmarking_mysql_replication_with_multi
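On a server where you can run SET GLOBAL (on RDS this again has to go through the parameter group), a hedged sketch of enabling it; the slave SQL thread must be restarted for the workers to start:
STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_workers = 5;
START SLAVE SQL_THREAD;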
MySQL replication executes all the transactions for a single database in order, while the master can execute those transactions in parallel.
You probably have most of your updates hitting a single database, and that is what is preventing you from taking advantage of multi-threaded replication.
Check iostat on your replica server. Most of the time these problems occur because of high I/O on the machine.
To decrease the I/O on the machine, there are several additional changes you can make:
Increase innodb_buffer_pool_size: this is the first thing you should change from its default. If the instance runs only MySQL, you can allocate about 80% of the available memory here.
Verify also the following parameters:
log_slave_updates = false
binlog_format = STATEMENT
(If you have MIXED or ROW binlog_format configured, verify that you understand what that means: http://dev.mysql.com/doc/refman/5.6/en/binary-log-setting.html)
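A quick way to verify these settings, together with the buffer pool size mentioned above (all are readable as system variables):
SELECT @@log_slave_updates AS log_slave_updates,
       @@binlog_format AS binlog_format,
       @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;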
If you have a lot of data that is modified several times before it is flushed, increasing innodb_max_dirty_pages_pct to 90 or 95 can be worth checking.
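innodb_max_dirty_pages_pct is dynamic, so this is easy to try without a restart (90 is simply the value suggested above):
SET GLOBAL innodb_max_dirty_pages_pct = 90;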
I have a service that is running and connected to a SQL Server 2008 database. The problem is that I have queries that take a long time when run for the first time, but once cached they finish very fast. Does SQL Server 2008 automatically clear its cache periodically?
SQL Server will not release memory unless there is memory pressure on the server or you explicitly tell it to.
See Microsoft support:
http://support.microsoft.com/kb/321363
Another cause could be that other database objects which need to be put in memory are pushing the ones you are using out of the buffer. In this case more memory allocated to the instance or more efficient queries will help.
So either there is memory pressure from other applications on the server, or you do not have enough memory allocated to the instance for your current workload; there is no regularly scheduled process that cleans out SQL Server's memory buffers.
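If you want to see what is actually occupying the buffer pool, and therefore what might be pushing your pages out, here is a sketch using the sys.dm_os_buffer_descriptors DMV (each buffer page is 8 KB):
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb -- 8 KB pages converted to MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;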
I have a SQL Server 2008 instance in a production environment (Windows 2003, 64-bit) and it is consuming 10 GB of the installed 20 GB of memory. Is this normal behavior, or is there anything wrong with the configuration?
P.S. I host one web application that is used by hundreds of concurrent users every day.
SQL Server reserves memory, which is why you are seeing high peaks. It might show up as using 10 GB in Task Manager, but the real memory usage can be checked from within Management Studio.
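If you prefer a query to the GUI, a minimal sketch using a DMV that is available from SQL Server 2008 onwards:
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_pages_mb
FROM sys.dm_os_process_memory;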
Also, you can establish upper and lower limits to the amount of memory (buffer pool) used by the SQL Server database engine with the min server memory and max server memory configuration options.
Check this article out http://support.microsoft.com/kb/321363
Microsoft has adopted the memory-management strategy that any unused memory is wasted memory. Microsoft's newer OSes and SQL Server versions will allocate more memory for caching, until the system requests it for other purposes.
So, what you are seeing is probably normal.
Much of that allocated memory can be released to other applications as needed. As distressing as that memory usage may seem, it is not as dire a situation as it may appear.
There is nothing wrong with that behavior; SQL Server is just caching your data. If there is something else you'd like to use that memory for, you can configure SQL Server to use less, but doing so may make queries slower.