I'm currently running a Node.js server that communicates with a remote MySQL database and performs web requests to various APIs. When the server is idle, CPU usage ranges from 0-5% and RAM usage sits at around 300MB. Yet when the server is under load, RAM usage climbs linearly and CPU usage jumps all around, even hitting 100% at times.
I set up a snapshot solution using node-memwatch that would take a snapshot of the heap whenever a leak was detected. I downloaded three different snapshots, taken when the server was at 1GB, 1.5GB, and 2.5GB of RAM usage, and attempted to analyze them, yet I have no idea where the problem is because the total amount of memory shown in the analysis seems to add up to something much lower.
Here is one of the snapshots, when the server had a memory usage of 1107MB.
https://i.gyazo.com/e3dadeb727be3bdb4eeb833094291ebf.png
Does that match up? From what I see, there is only a maximum of 500 MB allocated to objects there. Also, does anyone have any ideas about the erratic CPU usage I'm getting? Thanks.
What you need is a better tool to properly diagnose that leak. It looks like you can get some help using N|Solid (https://nodesource.com/products/nsolid); it will help you visualize and monitor your app, and it is free to use in a development environment.
I have an application hosted in AWS ECS, with its database in AWS RDS. I'm using a microservice-based container architecture for my application. The frontend is in Angular and the backends are in Java and Python. Right now, the database size is ~1GB, and it will grow day by day as scraped data is inserted daily.
Right now, some queries take 4-6 seconds to execute. We need to open the application to the public, and a lot of users will be using it. When we load-tested the application with 50 users, I found that the RDS CPU reached 100% and some queries took more than 60 seconds to execute and then timed out. Meanwhile, the CPU and memory of the other microservices (frontend and backend) stayed normal. I have tried vertically scaling up to 64GB RAM and 4 vCPUs, but the condition remains.
Is this an issue with the query or can I do anything with the database server configuration?
The RDS storage I'm using is 100GB of general-purpose SSD, so I guess there will be only 300 IOPS, right? I'm planning to use RDS read replicas, but before that I need to know whether there is anything else I should do to improve performance. Any database configuration changes, etc.?
I also don't have a good understanding of the MySQL connection count. Right now it is using a total of 24 connections. Do I need to change the connection count as well?
Query Optimisation
As Tim pointed out, try to optimise the queries. Since more data is inserted every day, consider indexing the table and rewriting the queries to use indexed columns where possible. Also consider archiving old, unused data.
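The reason indexed columns matter is the difference between scanning every row and a keyed lookup. A toy sketch of the two access patterns, in JavaScript to match the rest of the thread (this models the idea only, it is not MySQL itself):

```javascript
// Toy illustration of why indexes help: a full table scan touches
// every row, while an index is a precomputed lookup structure that
// trades extra work at insert time for fast reads.
const rows = Array.from({ length: 100000 }, (_, i) => ({ id: i, val: `v${i}` }));

// "Full table scan": O(n) -- examine each row until a match is found.
function scanById(id) {
  for (const row of rows) if (row.id === id) return row;
  return null;
}

// "Index": O(1) lookup after a one-time build (the cost an index
// pays whenever data is inserted or updated).
const byId = new Map(rows.map((r) => [r.id, r]));
function lookupById(id) {
  return byId.get(id) ?? null;
}

console.log(scanById(99999).val);   // same answer...
console.log(lookupById(99999).val); // ...far fewer rows touched
```

In MySQL, `EXPLAIN` on a slow query shows whether it is doing the scan or the lookup.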
Number of connections
If you have control over the code, you can make use of database pools to control the number of connections that your applications can use.
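To make the pooling idea concrete, here is a minimal generic pool sketch in JavaScript. It is illustrative only; in practice you would use your driver's built-in pooling (e.g. `mysql2`'s `createPool` with a `connectionLimit`):

```javascript
// Minimal connection-pool sketch: caps the number of concurrent
// "connections" and queues callers once the cap is reached, so the
// database never sees more than `limit` connections at once.
class Pool {
  constructor(createConn, limit) {
    this.createConn = createConn; // factory for new connections
    this.limit = limit;           // hard cap on total connections
    this.created = 0;
    this.idle = [];               // released connections ready for reuse
    this.waiters = [];            // callers waiting at the cap
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop();
    if (this.created < this.limit) {
      this.created++;
      return this.createConn();
    }
    // At the cap: wait until another caller releases a connection.
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release(conn) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(conn); // hand straight to a queued caller
    else this.idle.push(conn);
  }
}

// Usage: at most 5 connections, no matter how many requests arrive.
// The factory here is a stand-in for a real driver connection.
const pool = new Pool(() => ({ query: async (sql) => `ran: ${sql}` }), 5);
```

The point of the cap: each MySQL connection costs server memory, and an unbounded number of them is often what drives RDS CPU and memory through the roof under load.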
CPU usage
CPU usage is closely tied to query performance; once you optimise the queries, CPU usage should come down.
Disk usage
Use the CloudWatch metrics to monitor disk usage; based on that, you can decide whether to move to a provisioned-IOPS disk.
Hope this helps.
I currently have a cloud server with 2 CPUs and 2GB of RAM. I've noticed that MySQL performs SELECT queries quite slowly on any table larger than 250,000 rows unless the queries are cached. The table has all the correct indexes and my queries are optimized.
I have run the same MySQL setup on my personal computer and the queries are much faster, so I am thinking it's more to do with the server itself than with the MySQL queries or table setup.
I am considering upgrading the cloud server to 4GB RAM and 4 CPUs, or switching to a dedicated server with the same spec.
Will changing from a cloud server to a dedicated server improve MySQL performance?
Though cloud computing utilizes SAN storage, you will be sharing resources like CPU, RAM, and storage among a number of VMs. This may cause slowness in I/O operations, and hence in your MySQL queries.
On a dedicated server, all resources are allocated to your applications alone, so switching to a dedicated server will certainly improve MySQL performance. Also, for a database-driven application, SSD drives are highly recommended.
2GB is tiny by today's norms. Are you using InnoDB or MyISAM? What is the cache size (innodb_buffer_pool_size or key_buffer_size, respectively)? Does the server have SSDs?
It sounds like the table is bigger than can be cached, and that the slow query is scanning the entire table. That says that the query and/or indexes are not well optimized. Let's see the SELECT and SHOW CREATE TABLE; more tuning may be possible.
Does the cloud provider limit your IOPS? That might explain the difference, and it would further indicate that we should study the query.
Given that your query takes 20 seconds to run in the cloud environment, we can rule out network latency problems. To find the bottleneck, try running "top" while the large query executes on the cloud server.
If CPU utilization is high, change to a higher-CPU instance type, and possibly more memory as well (you can see memory utilization with "top" too).
If the idle or I/O-wait CPU percentage is high, it suggests the underlying disk is the bottleneck. You can verify this by running an "fio" test, or by using "iostat" to monitor device utilization during the query. The solution is to add more or faster disks, or to use an SSD device in the cloud.
I would always prefer cloud. With a dedicated server you can't get elasticity in server usage. Choose a good cloud architecture to enhance application performance: set up proper auto-scaling and DevOps practices.
You can also use a remote database for your application in order to keep database I/O requests off the application server.
Apologies for the fairly generic nature of the question - I'm simply hoping someone can contribute some suggestions and/or ideas as I'm out of both!
The background:
We run a fairly large site (35M hits/month, peaking around 170 connections/sec) which offers free software downloads (strictly legal) and which is written in ASP.NET 2 (VB.NET :( ). We have two web servers sat behind a dedicated hardware load balancer; both are fairly chunky machines running Windows Server 2012 Pro 64-bit and IIS 8. We serve extensionless URLs by using a custom 404 page which parses out the requested URL and Server.Transfers appropriately. Because of this particular component, we have to run in classic pipeline mode.
DB-wise we use MySQL and have two replicated DBs; reads are mainly done from the slave. DB access is via a DevArt library and is extensively cached.
The Problem:
We recently (in the past few months) moved from older servers running Windows Server 2003 and IIS 6. In the process we also upgraded the DevArt component and MySQL (5.1). Since then we have suffered intermittent scalability issues, which have become significantly worse as we have added more content. We recently increased the number of programs from 2,000 to 4,000, and this caused response times to increase from under 300ms to over 3,000ms (measured with New Relic). To my mind this points either to a bottleneck in the DB (relatively unlikely, given the extensive caching and our DB monitoring) or to a badly written query or code problem.
We also regularly see spikes which seem to coincide with cache refreshes, which would support the badly-written-query argument - unfortunately all caching expires x minutes from retrieval, so the spikes can't always be pinpointed accurately.
All our caching uses locks (like this: What is the best way to lock cache in asp.net?), so it could be that one specific operation is taking a while and backing up requests behind it.
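The lock-on-cache pattern referenced above boils down to "single-flight" refreshes: the first caller triggers the expensive load, and everyone arriving during the refresh waits on that same operation instead of re-running it. Sketched in JavaScript rather than VB.NET for brevity (the names here are illustrative, not from the question's codebase):

```javascript
// Single-flight cache: concurrent callers for the same key share one
// in-flight refresh, so a slow loader backs up exactly one request
// worth of work instead of N.
const cache = new Map();    // key -> { value, expires }
const inFlight = new Map(); // key -> Promise of the ongoing refresh

async function getCached(key, loader, ttlMs) {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;

  // Someone is already refreshing this key: join their refresh.
  if (inFlight.has(key)) return inFlight.get(key);

  const p = loader(key)
    .then((value) => {
      cache.set(key, { value, expires: Date.now() + ttlMs });
      return value;
    })
    .finally(() => inFlight.delete(key)); // allow future refreshes
  inFlight.set(key, p);
  return p;
}
```

If one cached operation really is the culprit, instrumenting the loader here (timing each refresh per key) is a direct way to find it.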
The problem is... I can't find it!! Can anyone suggest, from experience, some tools or methods? I've load tested, I've profiled the code, I've read through it line by line... New Relic Pro was doing a good job for us, but the trial expired and for political reasons we haven't purchased a full licence yet. Maybe WinDbg is the way forward?
Looking forward to any insight anyone can add :)
It is not a good idea to guess at a solution. Things could get painful or expensive quickly. You really should start with some standard/common triage techniques and make an educated decision.
The standard process for troubleshooting performance problems in a data-driven app goes like this:
Review DB indexes (unlikely) and tune as needed.
Check resource utilization: CPU, RAM. If your CPU is maxed-out, then consider adding/upgrading CPU or optimize code or split your tiers. If your RAM is maxed-out, then consider adding RAM or split your tiers. I realize that you just bought new hardware, but you also changed OS and IIS. So, all bets are off. Take the 10 minutes to confirm that you have enough CPU and RAM, so you can confidently eliminate those from the list.
Check HDD usage: if your disk queue length goes above 1 very often (more than once per 10 seconds), upgrade disk bandwidth or scale out your disk (RAID, multiple data/log files, DB partitioning). Check this on each MySQL box.
Check network bandwidth (very unlikely, but check it anyway)
Code: (a) Consider upgrading to .NET 3.5 or above; it was designed for better scalability and has much better caching options. (b) Use newer/improved caching. (c) Pick through the code for expensive queries and heavy DB usage. I have had really good experiences with RedGate ANTS, but equivalent products work well too.
And then things get more specific to your architecture, code and platform.
There are also some locking mechanisms for the Application variable, but they are rarely the cause of lockups.
You might want to keep an eye on your pool recycle statistics. If you have a memory leak (or connection leak, etc.), IIS might seem to freeze when the pool tops out and restarts.
We run a server at work using a program called ShipWorks; we are an ecommerce company, so it aids us in shipping our orders.
We have been having intermittent latency issues when printing shipping labels and searching through the program (which uses a SQL database) when all our users are on. We have between 8 and 12 users actively using ShipWorks.
Our server has 8 GB of RAM and a quad-core processor. I was using New Relic to monitor the server to determine the issue, and it looks like memory usage is going beyond where it should be.
Screenshot: http://tinypic.com/r/2j5bga0/5
Memory is staying at a constant 8600 MB of system swap and 5400 MB of used RAM. The server only has 8 GB of RAM, but this sounds like it is using around 14 GB. I know there is virtual memory, but there has to be something wrong here. If anyone can help, it'd be much appreciated.
It turns out that what we needed was the upgrade from SQL Server Express to Standard. (Express caps the database buffer pool at around 1 GB, which is likely why we were hitting memory pressure.) We got it up yesterday and everything is going great now. Thanks, guys.
Currently I have this server:
processor : 3
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Xeon(TM) CPU 2.40GHz
stepping : 9
cpu MHz : 2392.149
cache size : 512 KB
My application causes MySQL to use more than 96% CPU at 200-300 transactions per second.
Can anyone assist or provide links on the following:
how to benchmark PostgreSQL
do you think PostgreSQL would improve CPU utilization compared to MySQL?
links or wikis that simply present a benchmark comparison
A common misconception for database users is that high CPU use is bad.
It isn't.
A database has exactly one speed: as fast as possible. It will always use up every resource it can, within administrator set limits, to execute your queries quickly.
Most queries require lots more of one particular resource than others. For most queries on bigger databases that resource is disk I/O, so the database will be thrashing your storage as fast as it can. While it is waiting for the hard drive it usually can't do any other work, so that thread/process will go to sleep and stop using the CPU.
Smaller databases, or queries on small datasets within big databases, often fit entirely in RAM. The operating system will cache the data from disk and have it sitting in RAM and ready to return when the database asks for it. This means the database isn't waiting for the disk and being forced to sleep, so it goes all-out processing the data with the CPU to get you your answers quickly.
There are two reasons you might care about CPU use:
You have something else running on that machine that isn't getting enough CPU time; or
You think that, given the 100% CPU use, you aren't getting enough performance from your database.
For the first point, don't blame the database. It's an admin issue. Set operating system scheduler controls like nice levels to re-prioritize the workload - or get a bigger server that can do all the work you require of it without falling behind.
For the second point you need to look at your database tuning, at your queries, etc. It's not a "database uses 100% cpu" problem, it's a "I'm not getting enough throughput and seem to be CPU-bound" problem. Database and query tuning is a big topic and not one I'll get into here, especially since I don't generally use MySQL.