SQL Server 2008 in production using 10GB memory. Is this normal? - sql-server-2008

I have a SQL Server 2008 instance in a production environment (Windows 2003, 64-bit) and
it is consuming 10 GB of the installed 20 GB of memory. Is this normal behavior, or is there anything wrong with the configuration?
P.S. I host one web application that is used by hundreds of users concurrently every day.

SQL Server reserves memory, which is why you are seeing such a high figure. It might show up as using 10 GB in Task Manager, but the real memory usage can be checked from within Management Studio.
Also, you can set upper and lower limits on the amount of memory (buffer pool) used by the SQL Server database engine with the min server memory and max server memory configuration options.
Check out this article: http://support.microsoft.com/kb/321363
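For reference, a minimal T-SQL sketch of both checks mentioned above; the 4096/12288 MB values are just illustrative, not a recommendation for your box:

    -- Actual memory in use by this instance (DMV available in SQL Server 2008)
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb
    FROM sys.dm_os_process_memory;

    -- Cap the buffer pool; the values below (in MB) are examples only
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 4096;
    EXEC sp_configure 'max server memory (MB)', 12288;
    RECONFIGURE;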

Microsoft's memory-management strategy is that any unused memory is wasted memory. Its newer operating systems and SQL Server versions will allocate more memory for caching until the system requests it for other purposes.
So what you are seeing is probably normal.
Much of that allocated memory can be released to other applications as needed. As distressing as the memory usage may seem, the situation is not as dire as it appears.

There is nothing wrong with that behavior; SQL Server is just caching your data. If there is something else you'd like to use that memory for, you can configure SQL Server to use less, but doing so may make queries slower.

Related

MySQL automatic restart after memory runs full

We've been using MySQL on Cloud SQL for quite some time now.
We started with MySQL 5, but after a long wait and the final release of MySQL 8 we decided to upgrade our database server.
As the title suggests, we now see strange behavior in our memory utilization.
As you can see here, it constantly fills up until the server's maximum resources are reached, then the server restarts and memory starts filling up again.
There could be an issue with one of our services, but before the upgrade our memory consumption looked like this:
So you can see, memory consumption was more or less constant.
Furthermore, we increased resources when we upgraded to MySQL 8 and switched from db-n1-standard-1 to db-n1-standard-2, to have more headroom as the data grows.
Does anyone recognize this behavior? Is there a change from MySQL 5 to 8? I didn't find any information about it, just some notes that it's normal for MySQL to take as much memory as it can get. But I'm still wondering why it didn't on MySQL 5.
Some more details on the configuration:
We're using a read replica for HA.
Binary logs are enabled.
The slow query log is enabled with FILE output.
Everything else is the default Cloud SQL configuration.
Any help is much appreciated.
Best regards,
Chris
Indeed, it seems that MySQL 8 consumes more memory than MySQL 5. As shown in the tests performed by the author of the article MySQL 8 and MySQL 5.7 Memory Consumption on Small Devices, the memory used by version 8 on the same VM settings is considerably higher than on version 5, for both resident and virtual memory. Even though these are tests on small VMs, it's a good indication that the same happens in bigger configurations as well.
So, yes, as you mentioned, it's normal that MySQL takes as much memory as it can get, but MySQL 8 does indeed consume more of it than MySQL 5.
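If you want to see where that memory is going inside MySQL 8, a minimal sketch using the sys schema that ships with 8.0 (memory instrumentation is enabled by default there):

    -- Buffer pool size (Cloud SQL derives this from the machine tier)
    SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

    -- Total memory currently attributed to instrumented allocations
    SELECT * FROM sys.memory_global_total;

    -- Top allocation sources, to see whether the growth is buffer pool,
    -- connection buffers, or something else
    SELECT * FROM sys.memory_global_by_current_bytes LIMIT 10;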

Applications go down due to heavy MySQL server load

We have a 2 GB Digital Ocean server dedicated to a MySQL server for two other PHP servers. We are using Percona MySQL Server 5.6 on it. We configured MySQL replication, and that configuration is working fine.
Our issue is that our site monitoring tools sometimes report that some of the URLs hosted on this server are down (maybe once every week or two). When I check, I can see that the MySQL master server load is far too high (maybe 35-40), so the MySQL server is not responding. After that I usually do a MySQL service restart; this brings the server load back to normal and the sites start working again after the restart.
This is the back-end MySQL database server for 20-25 PHP applications (WordPress, Drupal and some custom applications).
Here are my questions:
Why does the server load go back down on its own after a spike happens?
Is there any way to tell which database is causing the issue, so that I can identify the application as well?
How can I identify the root cause of these issues?
Depending upon your working dataset, a 2 GB server providing access for 20-25 PHP applications (WordPress, Drupal and some custom applications) could itself be the issue.
For example, if you have a 1.4 GB buffer pool (assuming all tables are InnoDB) and 10 GB of data, then your various applications could end up competing for resources such as I/O, buffer pool pages, the Adaptive Hash Index, and the query cache. They could also, assuming caching is used, be invalidating their caches within a similar timeframe, thus sending expensive queries to the database.
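As a rough sanity check on the master, you can compare the configured buffer pool with the amount of InnoDB data it is expected to serve (a sketch; the 1.4 GB / 10 GB figures above are only examples):

    -- Configured buffer pool, in MB
    SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

    -- Approximate InnoDB data + index size, in MB
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_data_mb
    FROM information_schema.tables
    WHERE engine = 'InnoDB';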
Whilst a load of 50 is something that you would normally want to avoid, the load average is not something that you should concern yourself with when viewed in isolation.
The use of the uninterruptible state has since grown in the Linux
kernel, and nowadays includes uninterruptible lock primitives. If the
load average is a measure of demand in terms of running and waiting
threads (and not strictly threads wanting hardware resources), then
they are still working the way we want them to.
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
If the issue is happening once per week then it is starting to sound like a batch process, or cache expiration issue - too much happening at once for the resources available.
The best thing to do is to monitor and look for the cause. Since you are already using Percona Server, PMM should give you the insight needed to find the cause, although it also works with Oracle MySQL, MariaDB, Aurora, etc. You can try the demo to see the insights that you can gain:
https://pmmdemo.percona.com. The software is open source and free to use.
You can look in QAN to find the most expensive queries, whilst looking at the Prometheus data to gain insight into the host itself. There are some recommendations for getting the most from PMM, depending upon your flavour of MySQL.
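If you want a quick look before setting up PMM, MySQL/Percona Server 5.6 already aggregates statement digests in performance_schema (assuming statement instrumentation is enabled); a sketch of finding the most expensive queries:

    -- Top 10 statements by total execution time (timer values are in picoseconds)
    SELECT digest_text,
           count_star,
           ROUND(sum_timer_wait / 1e12, 2) AS total_exec_seconds
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY sum_timer_wait DESC
    LIMIT 10;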

SQL Server 2008 memory caching

I have a service that is running and connected to a SQL Server 2008 database. The problem is that I have queries that take a long time the first time they run, but once the data is cached they finish very fast. Does SQL Server 2008 automatically clear its cache every so often?
SQL Server will not release memory unless there is memory pressure on the server or you explicitly tell it to.
See Microsoft support:
http://support.microsoft.com/kb/321363
Another cause could be that other database objects which need to be put in memory are pushing the ones you are using out of the buffer pool. In this case, more memory allocated to the instance or more efficient queries will help.
So either there is memory pressure from other applications on the server, or you do not have enough memory allocated to the instance for your current workload; but there is no regularly scheduled process, per se, that cleans out SQL Server's memory buffers.
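To illustrate the point: outside of memory pressure, the caches only empty when you explicitly ask for it, and pressure itself can be watched via the Buffer Manager counters. A sketch (don't run the DBCC commands on production just to test; they will slow down the next queries):

    -- Explicit cache clearing (the "explicitly tell it to" case)
    DBCC FREEPROCCACHE;      -- clears the plan cache
    DBCC DROPCLEANBUFFERS;   -- clears clean pages from the buffer pool

    -- Low or steadily falling Page Life Expectancy suggests memory pressure
    SELECT cntr_value AS page_life_expectancy_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy'
      AND object_name LIKE '%Buffer Manager%';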

MS SQL 2008 CPU usage

My application uses MS SQL Server 2008 hosted on a Windows 2003 Enterprise Server SP2 (32-bit) VM with 2 CPUs and 8 GB of RAM. The application has two or more Windows services, one of which accesses the DB frequently. When the load on the DB reaches 65k or so, CPU usage climbs to 75-95% and doesn't seem to drop unless the service is stopped.
We did not face this issue on Oracle 10g with the same application and the same load.
How do I reduce the CPU usage?
Is there something I need to do in the application code or in SQL Server?
Any help will be appreciated.
Thanks,
Priya.
When it accesses the database, is it logging in, doing its work, and then logging out? If so, see if you can keep the same connection open rather than tearing it down each time.
To see if the issue is with the work it is doing, run SQL Profiler against the server and look for high read counts, high CPU counts, or long-duration queries.
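As an alternative to Profiler, the plan-cache DMVs available in SQL Server 2008 can list the top CPU consumers directly; a minimal sketch:

    -- Top 10 cached statements by total CPU (total_worker_time is in microseconds)
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           qs.total_logical_reads,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE WHEN qs.statement_end_offset = -1
                           THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset END
                      - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;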

ASP.NET with MySQL or SQL Server Express?

I already have a website and am moving it to a new VPS. I have the option of either a MySQL database or a SQL Server 2008 Express edition database (I don't want to pay for full SQL Server, as I can't afford it on top of my other expenses). I get around 10K hits per day on my knowledge-base website.
My questions are:
1. If I go with ASP.NET and MySQL, will it be able to handle the current load as well as future growth?
2. If I go with ASP.NET and SQL Server 2008 Express, will Express be able to handle that much load? The sample application I tried against a SQL Express database seems slower, and I am aware that SQL Server Express is limited in terms of CPU and memory (1 GB RAM).
3. My current server has Windows Server 2008 R2 and 1 GB RAM. Will increasing the RAM help with SQL Server Express?
Any suggestions please.
1. I don't know MySQL, but 10K hits/day is nothing (unless, of course, they all happen within 10 minutes and the rest of the day has no traffic).
2. Same as 1.
3. SQL Server Express will use 1 GB, but RAM is also used for other things in the system. Upgrading to 2 GB would certainly help; above that is uncertain, but RAM is cheap, so buy 4 GB to be safe.
Edit: 10K hits/day, evenly distributed, is about 0.12 hits/second, or roughly one hit every 8.6 seconds.
I would go for SQL Server Express so you can easily take advantage of the ASP.NET Membership features. It will handle loads higher than you mention, but like Erik says, give it some more RAM.
Kindness,
Dan