Memory overloading, very high memory swap amount - sql-server-2008

We run a server at work using a program called Shipworks; we are an ecommerce company, and it aids us in shipping our orders.
We have been having intermittent latency issues with shipping-label printing and searches through the program (which uses a SQL database) when all our users are on. We have between 8 and 12 users actively on Shipworks.
Our server has 8 GB of RAM and a quad-core processor. I was using New Relic to monitor the server to determine the issue, and it looks like memory amounts are going beyond where they should be.
Screenshot: http://tinypic.com/r/2j5bga0/5
My memory is staying at a constant 8,600 MB of system swap and 5,400 MB of used RAM. The server only has 8 GB of RAM, but this adds up to around 14 GB. I know there is virtual memory, but there has to be something wrong here. If anyone can help, it'd be much appreciated.

It turns out that what we needed was the upgrade from SQL Server Express to Standard. We just got it up yesterday and everything is going great now. Thanks, guys.
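For reference, SQL Server 2008 Express caps the buffer pool at 1 GB and database size at 4 GB, which forces constant paging on a busy instance no matter how much RAM the machine has. A minimal T-SQL sketch for checking the edition and the instance's actual memory use (sys.dm_os_process_memory exists from SQL Server 2008 on):

    -- Confirm the edition; Express caps the buffer pool at 1 GB in 2008
    SELECT SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS version;

    -- How much memory the instance is actually holding right now
    SELECT physical_memory_in_use_kb / 1024 AS physical_mb,
           memory_utilization_percentage
    FROM   sys.dm_os_process_memory;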

Related

Node.js high memory usage

I'm currently running a Node.js server that communicates with a remote MySQL database and performs web requests to various APIs. When the server is idle, CPU usage ranges from 0-5% and RAM usage sits at around 300 MB. Yet when the server is under load, RAM usage climbs linearly and CPU usage jumps all around, even up to 100% at times.
I set up a snapshot solution that would take a snapshot of the heap when a leak was detected, using node-memwatch. I downloaded 3 different snapshots, when the server was at 1 GB, 1.5 GB, and 2.5 GB of RAM usage, and attempted to analyze them, yet I have no idea where the problem is because the totals in the analysis seem to add up to something much lower.
Here is one of the snapshots, taken when the server had a memory usage of 1,107 MB.
https://i.gyazo.com/e3dadeb727be3bdb4eeb833094291ebf.png
Does that match up? From what I see, there is only a maximum of 500 MB allocated to objects there. Also, would anyone have any ideas about the erratic CPU usage I'm getting? Thanks.
What you need is a better tool to properly diagnose that leak. It looks like you can get some help from N|Solid (https://nodesource.com/products/nsolid); it will help you visualize and monitor your app, and it is free to use in a development environment.

ASP.NET 2, Classic Pipeline on IIS 8 64-bit scalability issues

Apologies for the fairly generic nature of the question - I'm simply hoping someone can contribute some suggestions and/or ideas as I'm out of both!
The background:
We run a fairly large site (35M hits/month, peaking around 170 connections/sec) which offers free software downloads (strictly legal) and which is written in ASP.NET 2 (VB.NET :( ). We have 2 web servers sat behind a dedicated hardware load balancer; both are fairly chunky machines running Windows Server 2012 Pro 64-bit and IIS 8. We serve extensionless URLs by using a custom 404 page which parses out the requested URL and Server.Transfers appropriately. Because of this particular component, we have to run in classic pipeline mode.
DB-wise we use MySQL, with two replicated DBs; reads are mainly done from the slave. DB access is via a DevArt library and is extensively cached.
The Problem:
We recently (past few months) moved from older servers running Windows Server 2003 and IIS 6. In the process, we also upgraded the DevArt component and MySQL (5.1). Since then, we have suffered intermittent scalability issues, which have become significantly worse as we have added more content. We recently increased the number of programs from 2,000 to 4,000, and this caused response times to increase from <300ms to over 3,000ms (measured with New Relic). To my mind this points to either a bottleneck in the DB (relatively unlikely, given the extensive caching and DB monitoring) or a badly written query or code problem.
We also regularly see spikes which seem to coincide with cache refreshes which could support the badly written query argument - unfortunately all caching is done for x minutes from retrieval so it can't always be pinpointed accurately.
All our caching uses locks (like this What is the best way to lock cache in asp.net?), so it could be that one specific operation is taking a while and backing up requests behind it.
The problem is... I can't find it!! Can anyone suggest, from experience, some tools or methods? I've tried load testing, I've profiled the code, I've read through it line by line... New Relic Pro was doing a good job for us, but the trial expired and for political reasons we haven't purchased a full licence yet. Maybe WinDbg is the way forward?
Looking forward to any insight anyone can add :)
It is not a good idea to guess on a solution. Things could get painful or expensive quickly. You really should start with some standard/common triage techniques and make an educated decision.
The standard process for troubleshooting performance problems on a data-driven app goes like this:
Review DB indexes (unlikely to be the issue, but cheap to check) and tune as needed; see the sketch after this list.
Check resource utilization: CPU, RAM. If your CPU is maxed out, consider adding/upgrading CPU, optimizing code, or splitting your tiers. If your RAM is maxed out, consider adding RAM or splitting your tiers. I realize you just bought new hardware, but you also changed OS and IIS, so all bets are off. Take the 10 minutes to confirm that you have enough CPU and RAM, so you can confidently eliminate those from the list.
Check HDD usage: if your disk queue length goes above 1 very often (more than once per 10 seconds), upgrade disk bandwidth or scale out your disk (RAID, multiple MDF/LDFs, DB partitioning). Check this on each MySQL box.
Check network bandwidth (very unlikely, but check it anyway)
Code: a) Consider upgrading to .NET 3.5 (or above); it was designed for better scalability and has much better options for caching. b) Use newer/improved caching. c) Pick through the code for harsh queries and DB usage. I have had really good experiences with Red Gate ANTS, but equivalent products work well too.
And then things get more specific to your architecture, code and platform.
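As a concrete starting point for the index review in step 1, MySQL's slow query log plus EXPLAIN will surface the worst offenders. A minimal sketch (the table and column names are hypothetical):

    -- Log anything slower than 1 second; SET GLOBAL needs the SUPER
    -- privilege, and slow_query_log is dynamic from MySQL 5.1 on
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;

    -- Take a logged query and inspect its plan: a type of ALL with a
    -- large rows estimate is a full table scan an index could avoid
    EXPLAIN
    SELECT * FROM downloads WHERE program_id = 42 ORDER BY created_at DESC;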
There are also some locking mechanisms for the Application variable, but they are rarely the cause of lockups.
You might want to keep an eye on your app pool recycle statistics. If you have a memory leak (or connection leak, etc.), IIS might seem to freeze when the pool tops out and restarts.

MySQL query runs 5x slower on staging server than local dev machine

I've got a query that is running 5x slower on my staging server as opposed to my local dev machine.
Stack Overflow doesn't want to play nicely with the formatting; the query, describes, and explains are located here.
Looking at the describe statements, I can't see any difference between the local and remote schemas.
The record counts for the 2 machines are in the same order of magnitude (500k vs 600k).
Edit In Response to Comments
It was my highly unscientific approach of throwing the queries into MySQL Workbench and looking at the query time. The local query time was on the order of 1.3 seconds and the remote query time on the order of 5.2 seconds (so it's 4x as slow). I'm sure there's a better way to test this query time.
The machines are different. My dev machine is a MacBook Pro with 8 GB of RAM. The staging server is a Linode VPS with 512 MB of RAM. There shouldn't be much load on the staging server (I'm the only one who uses it). I've noticed most queries run in approximately the same time frame on the local machine and the staging server, so I was confused as to why this one had such a drastically different time frame.
RAM Issue
Since a temporary table isn't being used (no mention in the EXPLAINs), is the amount of RAM still an issue?
Output from free:

                 total       used       free     shared    buffers     cached
    Mem:        508576     453880      54696          0       4428     254200
    -/+ buffers/cache:     195252     313324
    Swap:       262140      19500     242640
Profiling Added to Gist
It looks like the remote is taking 2.5 seconds in 'sending data' whereas the local is only taking 0.5 seconds. Is this an I/O issue? (Complete profiling info in the gist.)
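For anyone reproducing this, the per-phase numbers above come from MySQL's session profiler, which is off by default. A minimal sketch (the SELECT is a placeholder for the real query):

    -- Profiling is per-session and off by default
    SET profiling = 1;

    -- Run the suspect query (placeholder shown)
    SELECT * FROM orders WHERE customer_id = 42;

    -- List recent statements with their ids and total durations
    SHOW PROFILES;

    -- Per-phase breakdown for statement #1; 'Sending data' shows up here
    SHOW PROFILE FOR QUERY 1;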
Your staging server has one sixteenth of the RAM that your MacBook Pro has.
Without knowing how much RAM is available to your two instances of MySQL, it's hard to be definitive, but that's the first place I'd look.
Also, if you run these queries from the MySQL command line, locally, how do the times compare?
It could be that the increase in time is in network transfer and not query processing.
Actually... network transfer time is the first place I'd look... then MySQL memory usage.
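On a 512 MB VPS it's also worth checking how much memory MySQL is even allowed to use. A minimal sketch; which variable matters depends on whether the tables are InnoDB or MyISAM:

    -- InnoDB: the main cache for data and indexes
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    -- MyISAM: the index cache
    SHOW VARIABLES LIKE 'key_buffer_size';

    -- Per-connection buffers that multiply under concurrent load
    SHOW VARIABLES LIKE 'sort_buffer_size';
    SHOW VARIABLES LIKE 'read_buffer_size';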
EDIT following question updates
The 'sending data' phase is the phase where the server is sending data to the client (ref). I don't know exactly how large your dataset is, but 2.5s seems pretty high for what's probably 50 kB of data or so.
Having looked at the profiling data, nearly all the time is spent sending data, so I'd strongly suspect the network here.
EDIT 2
Some research led me to this page, which indicates that 'Sending data' is misleading and that this is actually the time spent executing your query.
Thus, I really think you need to be looking at CPU and memory usage on your server, since it's specced at a level so much lower than your MacBook.

CPU usage PostgreSQL vs MySQL on Windows

Currently I have this server:

    processor       : 3
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 2
    model name      : Intel(R) Xeon(TM) CPU 2.40GHz
    stepping        : 9
    cpu MHz         : 2392.149
    cache size      : 512 KB
My application causes MySQL to use more than 96% CPU at 200-300 transactions per second.
Can anyone assist or provide links on the following:
how to benchmark PostgreSQL
whether PostgreSQL could improve CPU utilization compared to MySQL
links or wikis that simply present a benchmark comparison
A common misconception for database users is that high CPU use is bad.
It isn't.
A database has exactly one speed: as fast as possible. It will always use up every resource it can, within administrator-set limits, to execute your queries quickly.
Most queries require lots more of one particular resource than others. For most queries on bigger databases that resource is disk I/O, so the database will be thrashing your storage as fast as it can. While it is waiting for the hard drive it usually can't do any other work, so that thread/process will go to sleep and stop using the CPU.
Smaller databases, or queries on small datasets within big databases, often fit entirely in RAM. The operating system will cache the data from disk and have it sitting in RAM and ready to return when the database asks for it. This means the database isn't waiting for the disk and being forced to sleep, so it goes all-out processing the data with the CPU to get you your answers quickly.
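If you want to see whether a PostgreSQL database is being served from RAM in this way, the statistics collector exposes cache hits versus disk reads. A minimal sketch ('mydb' is a placeholder database name):

    -- Blocks served from shared buffers vs. read from disk, per database.
    -- A ratio near 1.0 means the working set fits in cache, so the
    -- workload tends to be CPU-bound rather than I/O-bound.
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
    FROM pg_stat_database
    WHERE datname = 'mydb';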
There are two reasons you might care about CPU use:
You have something else running on that machine that isn't getting enough CPU time; or
You think that, given the 100% CPU use, you aren't getting enough performance from your database.
For the first point, don't blame the database. It's an admin issue. Set operating system scheduler controls like nice levels to re-prioritize the workload - or get a bigger server that can do all the work you require of it without falling behind.
For the second point, you need to look at your database tuning, your queries, etc. It's not a "database uses 100% CPU" problem; it's an "I'm not getting enough throughput and seem to be CPU-bound" problem. Database and query tuning is a big topic and not one I'll get into here, especially since I don't generally use MySQL.

MySQL, why would my site be hitting 100% CPU when pages load quickly?

I'm trying to figure out possible reasons why my database could be causing 100% CPU usage.
It's been like this for a while, even though I've recently made changes so that pages/queries run much faster.
Here's a video my ISP produced of my site, showing the CPU usage.
Here are some questions I asked my ISP:
Me: would you say it's a fast server?
ISP: yeah it has 4 cpu cores lol and 3.5gb ram
ISP: 4 x intel xeon's 3.4ghz it has
ISP: its also running raid 5 on ultra scsi 320 drives
Me: what are the mysql caching settings?
ISP: which handles 320 mb/s
ISP: hmm maybe the mysql cache is low
ISP: have emailed it to you.
Me: was it low?
ISP: if you do post for advice one thing to mention is that this is not a dedicated mysql server
ISP: so it can't be setup to use the server's maximum resources
Here's the my.ini file he sent me a copy of...
Also, here's my phpMyAdmin status page. I think I'm right in saying that there's nothing in the slow log; I think the slow queries counted are from before my fixes.
Considering that your long_query_time is set to 2, and it looks like there are multiple queries firing per page but returning reasonably quickly, it's no wonder you don't know what's slow. (NB: you can override this within your code to record more detailed information for the session; see the sketch below.)
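For example, lowering the threshold for just your own session makes the slow log catch queries that are merely sluggish. A minimal sketch (fractional values for long_query_time need MySQL 5.1.21 or later):

    -- Only affects the current connection; the global threshold stays at 2s
    SET SESSION long_query_time = 0.1;

    -- From here on, statements in this session taking over 100 ms are
    -- written to the slow query log (assuming slow_query_log is ON)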
You've not said if this is a dedicated database server or if it's running other stuff. Nor is there anything in the video to suggest that the CPU usage is a direct consequence of MySQL (compared, say, with a badly configured AV scanner).
There are a whole lot of potential causes, but on an MS Windows platform it's very difficult to diagnose most of them, and there's even less scope for actually fixing a lot of them.
But if you're happy with the time it takes for the pages to be generated, why do you care about CPU usage?
It's also interesting to note that you've got approximately twice as many change db ops as select ops, which suggests your data has been split across 2 databases?
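Those per-command counts come from the server's cumulative status counters, which the phpMyAdmin status page is summarizing. A minimal sketch of reading them directly:

    -- Cumulative since server start: Com_change_db counts database
    -- switches (USE db), Com_select counts SELECT statements
    SHOW GLOBAL STATUS LIKE 'Com_change_db';
    SHOW GLOBAL STATUS LIKE 'Com_select';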
Maybe you'll find something useful for you here: mysql-high-cpu-usage