Our Windows Servers are crashing and we can't understand why. Microsoft support advised us to enable a Memory Dump for troubleshooting. What are the performance implications of enabling a Memory Dump? And what are the performance implications of enabling a Complete Memory Dump, which requires increasing the Paging File to exceed the amount of RAM?
If your servers are already crashing, enabling a dump adds no performance impact during normal operation; the only cost is the loss of service while the dump is written. For a Complete Memory Dump, the paging file must be at least the size of your physical RAM plus 257 MB.
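For reference, the dump type is selected via the CrashControl registry key (values below are from Microsoft's documentation; verify them against your Windows version), and the Complete dump sizing works out as follows for a hypothetical 32 GB server:

    ; HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl
    ; "CrashDumpEnabled" (REG_DWORD):
    ;   0 = none, 1 = Complete, 2 = Kernel, 3 = Small (256 KB), 7 = Automatic
    ;
    ; Paging file sizing for a hypothetical 32 GB server:
    ;   32768 MB (RAM) + 257 MB = 33025 MB minimum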
Related
I'm trying to optimise the Digital Ocean droplet that my Laravel web app is running on, and have noticed that MySQL is constantly using ~50% of its 1GB RAM. By far the most common and well-attested method for decreasing MySQL's memory footprint is to disable its Performance Schema feature by setting performance_schema = 0 in /etc/mysql/my.cnf.
However, no answer I've seen yet makes any mention of what exactly this feature does, why it's enabled by default, and the implications of disabling it. To me it seems too good to be true, and while I'm all for optimisation, I also don't want to compromise the integrity of my web app's server.
The performance_schema is for monitoring and instrumenting the MySQL Server. Many types of monitoring tools may depend on it. I won't describe the specific events it monitors, because that's in the manual.
You can run MySQL Server without the performance_schema enabled, but monitoring will be compromised. If you disable monitoring, you will not be able to diagnose performance problems or resource usage.
The IT industry is becoming increasingly aware that monitoring is an important feature of servers and infrastructure. I don't think it's a good tradeoff to disable the performance_schema in MySQL Server to gain a mere 512 MB of memory. If you are that constrained on memory, then you should reconsider whether MySQL Server is the right technology choice for your platform.
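Before disabling it, it may be worth measuring what the Performance Schema actually costs on your droplet. A minimal check, using a standard statement (output format varies by MySQL version):

    -- Report the Performance Schema's internal memory allocations;
    -- the final row, performance_schema.memory, gives the total bytes used.
    SHOW ENGINE PERFORMANCE_SCHEMA STATUS;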
I was wondering whether a MySQL database runs faster on Windows Server 2019 or on Windows 10.
Is performance determined by the operating system, or just by the installed hardware?
The only significant differences are:
HDD (slower) or SSD (faster)
Network latency (how close the client and server are)
Other things:
CPU -- Today's CPUs don't have much range in speed, and most queries are fast enough that a CPU speed difference is usually not noticeable.
CPU cores -- See CPU.
RAID with hardware write cache -- Beneficial for heavy-write situations.
OS -- Only when you have very little I/O and network latency might you even be able to measure an OS speed difference.
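If you want to compare two hosts empirically rather than reason about it, MySQL's built-in BENCHMARK() function gives a rough CPU-only comparison (it exercises expression evaluation, not I/O or the network):

    -- Run on each host and compare the elapsed time the client reports;
    -- BENCHMARK() evaluates the expression N times and always returns 0.
    SELECT BENCHMARK(10000000, MD5('test'));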
I have set up an NDB Cluster at my office. There are two physical machines with 128 GB each. The database size is around 2 GB. We are an ISP and we keep the RADIUS database in the cluster.
What worries me at the moment is that on both systems the process is consuming 122 GB of the 128 GB, which I find shocking.
I am quite new to databases, so I am having trouble debugging the issue.
The memory used by NDB data nodes is defined by your cluster configuration. So even if the database is only 2 GB in size, if you have configured the nodes to run with up to 64 GB of memory, this memory is preallocated to ensure that it is there when it is needed. So look into your config.ini file to see how you configured the NDB data nodes.
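For illustration, the parameters that preallocate data-node memory live in the [ndbd default] section of config.ini (the parameter names are standard NDB settings; the values here are hypothetical, not a recommendation):

    # config.ini -- memory each NDB data node preallocates at startup
    [ndbd default]
    DataMemory  = 100G    # row storage, allocated up-front regardless of actual data size
    IndexMemory = 10G     # hash-index storage (deprecated in NDB 7.6+, folded into DataMemory)

You can compare configured versus actually used memory from the management client with ndb_mgm -e "ALL REPORT MemoryUsage".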
I have SQL Server 2008 in a production environment (Windows 2003, 64-bit), and it is consuming 10 GB of the 20 GB of installed memory. Is this normal behavior, or is there something wrong with the configuration?
P.S. I host one web application that is used by hundreds of concurrent users every day.
SQL Server reserves memory, which is why you are seeing high peaks. It might show up as using 10 GB in Task Manager, but the real memory usage can be checked from within Management Studio.
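For example, a standard DMV available since SQL Server 2008 reports the process's true physical memory use:

    -- Physical memory actually in use by this SQL Server instance
    SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_mb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;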
Also, you can establish upper and lower limits to the amount of memory (buffer pool) used by the SQL Server database engine with the min server memory and max server memory configuration options.
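A sketch of capping the buffer pool with sp_configure (the 16 GB figure is purely illustrative; size it to your workload):

    -- Cap SQL Server's memory at 16 GB (illustrative value)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 16384;
    RECONFIGURE;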
Check this article out http://support.microsoft.com/kb/321363
Microsoft has adopted the memory-management philosophy that any unused memory is wasted memory. Microsoft's newer OSes and SQL Server versions will allocate more memory for caching, until the system requests it for other purposes.
So, what you are seeing is probably normal.
Much of that allocated memory can be released to other applications as needed. As distressing as that memory usage may seem, it is not as dire a situation as it may appear.
There is nothing wrong with that behavior; SQL Server is just caching your data. If you'd like to use that memory for something else, you can configure SQL Server to use less, but doing so may make queries slower.
How much better would it be to have 64-bit MySQL on 64-bit Linux?
Presently I have 32-bit MySQL and a 32-bit OS, but 64-bit hardware.
Should I consider upgrading? What advantages would I gain?
You will get the main advantage of 64-bit software: you will be able to address more RAM.
What advantages do I have?
Potentially bigger in-memory caches. And big caches can really help database performance ... in some cases. However, I've also seen evidence suggesting that there is a limit to how much application caches help, and that in some situations OS-level file caches work better at reducing disk I/O delays.
For some hints on how 64bit MySQL (with appropriate tuning) might help, read How MySQL Uses Memory from the MySQL manual.
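As a concrete illustration (the value is hypothetical; tune it to your RAM and workload), a 64-bit mysqld can give InnoDB a buffer pool far beyond the roughly 2-4 GB of address space a 32-bit process is limited to:

    # /etc/mysql/my.cnf -- only feasible on 64-bit MySQL
    [mysqld]
    innodb_buffer_pool_size = 12G   # hypothetical; often ~70% of RAM on a dedicated DB server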