CPU usage PostgreSQL vs MySQL on Windows - mysql

Currently I have this server:
processor : 3
vendor_id : GenuineIntel
cpu family : 15
model : 2
model name : Intel(R) Xeon(TM) CPU 2.40GHz
stepping : 9
cpu MHz : 2392.149
cache size : 512 KB
My application drives MySQL to more than 96% CPU usage at 200-300 transactions per second.
Can anyone assist or provide links on the following:
how to benchmark PostgreSQL
do you think PostgreSQL can improve CPU utilization compared with MySQL?
links or wikis that simply present a benchmark comparison

A common misconception for database users is that high CPU use is bad.
It isn't.
A database has exactly one speed: as fast as possible. It will always use up every resource it can, within administrator-set limits, to execute your queries quickly.
Most queries require lots more of one particular resource than others. For most queries on bigger databases that resource is disk I/O, so the database will be thrashing your storage as fast as it can. While it is waiting for the hard drive it usually can't do any other work, so that thread/process will go to sleep and stop using the CPU.
Smaller databases, or queries on small datasets within big databases, often fit entirely in RAM. The operating system will cache the data from disk and have it sitting in RAM and ready to return when the database asks for it. This means the database isn't waiting for the disk and being forced to sleep, so it goes all-out processing the data with the CPU to get you your answers quickly.
There are two reasons you might care about CPU use:
You have something else running on that machine that isn't getting enough CPU time; or
You think that, given the 100% CPU use, you aren't getting enough performance from your database.
For the first point, don't blame the database. It's an admin issue. Set operating system scheduler controls like nice levels to re-prioritize the workload - or get a bigger server that can do all the work you require of it without falling behind.
For the second point you need to look at your database tuning, at your queries, etc. It's not a "database uses 100% CPU" problem, it's an "I'm not getting enough throughput and seem to be CPU-bound" problem. Database and query tuning is a big topic and not one I'll get into here, especially since I don't generally use MySQL.
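If you do want to see where the CPU time is going before deciding whether a PostgreSQL benchmark is worth the effort, MySQL can show you directly. A minimal sketch (the statements are standard MySQL; the orders table and customer_id column are invented for illustration):

    -- See what every connection is doing right now; statements that sit
    -- in states like "Sending data" for a long time are usually the ones
    -- burning the CPU.
    SHOW FULL PROCESSLIST;

    -- Ask the optimizer how a suspect query is executed; a full table
    -- scan ("type: ALL" over many rows) is a common cause of CPU-bound
    -- workloads at a few hundred transactions per second.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

Whatever those show will also tell you whether a PostgreSQL benchmark would be comparing like with like, since an unindexed query will be slow on either engine.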

Related

"Read Only Database" Vs "Read and Write database" Configuration in mysql

We are using two databases: one is read-only and the second one is for read and write, and with this setup we are able to achieve what we need.
But sometimes our read-only database takes more time to execute the same query, and it looks like queries are being queued.
Is this because the "read and write" database has a higher-spec configuration than the "read-only" database? (Amazon RDS)
We tried to find an article or post about this but couldn't. Can you help me understand, please? My theory is that it is like pouring water from a big pipe into a small pipe: at some point it will create a problem.
The server is on Heroku and the databases are an M4 Large (read & write) and a T2 Medium (read-only). – Arvind
Your databases are on different "hardware", they'll have different performance.
The most significant difference I see is memory: 4 vs 8 GB. This will affect how much caching each database can do. Your leader (read & write) has more memory and can cache more. Your follower (read only), with less memory, might have things pushed out of cache that your leader retains.
There is also network performance. t2.medium is listed at "low to moderate" while m4.large is "moderate". What that actually means I have no idea except that the T2 has less.
Finally, a T2 instance is "burstable" meaning it's normally running at about 20% CPU capacity with bursts to maximum performance using CPU credits. If you start to run out of CPU credits in standard mode (the default for T2) CPU performance will drop. It's possible your T2 follower is in standard mode and periodically running low on CPU credits.
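If you want to confirm the caching theory rather than guess, compare how often each instance actually has to read from disk. A rough sketch, assuming both databases are MySQL as the question title says (the status and variable names are standard InnoDB ones; how you judge the ratio is up to you):

    -- Logical read requests vs. reads that had to go to disk. A follower
    -- whose buffer pool is too small will show a much higher ratio of
    -- Innodb_buffer_pool_reads to Innodb_buffer_pool_read_requests than
    -- the leader does.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

    -- How much memory the buffer pool actually gets on each instance.
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

Run the same statements on the leader and the follower while the slow queries are happening; if memory is the culprit, the difference should be obvious.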

Resize Amazon RDS storage

We are currently working with a 200 GB database and we are running out of space, so we would like to increment the allocated storage.
We are using General Purpose (SSD) and a MySQL 5.5.53 database (without Multi-AZ deployment).
If I go to the Amazon RDS menu and change the Allocated storage to a bit more (from 200 to 500) I get the following "warnings":
Deplete the initial General Purpose (SSD) I/O credits, leading to longer conversion times: What does this mean?
Impact instance performance until operation completes: And this is the most important question for me. Can I resize the instance with zero downtime? I mean, I don't care if the queries are a bit slower as long as they keep working while it's resizing, but what I don't want is to stop all my production websites, resize the instance, and open them again (i.e. have downtime).
Thanks in advance.
You can expect degraded performance but you should really test the impact in a dev environment before running this on production so you're not caught off guard. If you perform this operation during off-peak hours you should be fine though.
To answer your questions:
RDS instances can burst with something called I/O credits. Burst means the instance's performance can go above the baseline performance to meet spikes in demand. It shouldn't be a big deal if you burn through them unless your instance relies on them (you can determine this from the RDS instance metrics). Have a read through I/O Credits and Burst Performance.
Changing the disk size will not result in a complete RDS instance outage, just performance degradation, so it's better to do it during off-peak hours to minimise the impact as much as possible.
First, according to the RDS FAQs, there should be no downtime at all as long as you are only increasing storage size and not upgrading the instance tier.
Q: Will my DB instance remain available during scaling?
The storage capacity allocated to your DB Instance can be increased
while maintaining DB Instance availability.
Second, according to RDS documentation:
Baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB, which means that larger volumes have better performance.... Volumes below 1 TiB in size also have ability to burst to 3,000 IOPS for extended periods of time (burst is not relevant for volumes above 1 TiB). Instance I/O credit balance determines burst performance.
I cannot say for certain why, but I guess that when RDS increases the disk size it may defragment the data or rearrange data blocks, which causes heavy I/O. If your server is under heavy usage during the resizing, it may fully consume the I/O credits and end up with less I/O and longer conversion times. However, given that you started with 200 GB, I suppose it should be fine.
Finally, I would suggest using a Multi-AZ deployment if you are worried about downtime or performance impact. During maintenance windows or snapshots there is a brief I/O suspension of a few seconds, which can be avoided with a standby or read replicas.
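To put the quoted numbers against this particular resize (the 3 IOPS per GiB baseline and the 3,000 IOPS burst ceiling come from the documentation quoted above; the GB/GiB difference makes the figures approximate):

    baseline before: 3 IOPS/GiB x 200 GiB ≈   600 IOPS
    baseline after:  3 IOPS/GiB x 500 GiB ≈ 1,500 IOPS
    burst ceiling:   3,000 IOPS (both sizes are well below 1 TiB)

So the resize more than doubles the baseline once it completes, and while it is running the volume can still burst as long as it has I/O credits left.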
The technical answer is that AWS supports no downtime when scaling storage.
However, in the real world you need to factor how busy your current database is and how the "slowdown" will affect users. Consider the possibility that connections might timeout or the site may appear slower than usual for the duration of the scaling event.
In my experience, RDS storage resizing has been smooth and without problems. However, we pick the best time of day (least busy) to implement it, and we also go through a backup procedure: we snapshot and bring up a standby server to switch over to manually, just in case.

Node.js high memory usage

I'm currently running a node.js server that communicates with a remote MySQL database as well as performs webrequests to various APIs. When the server is idle, the CPU usage ranges from 0-5% and RAM usage at around 300MB. Yet when the server is under load, the RAM usage linearly goes up and CPU usage jumps all around and even up to 100% at times.
I set up a snapshot solution that would take a snapshot of the heap whenever a leak was detected, using node-memwatch. I downloaded 3 different snapshots when the server was at 1 GB, 1.5 GB and 2.5 GB of RAM usage and attempted to analyze them, yet I have no idea where the problem is, because the total amount of storage in the analysis seems to add up to something much lower.
Here is one of the snapshots, when the server had a memory usage of 1107MB.
https://i.gyazo.com/e3dadeb727be3bdb4eeb833094291ebf.png
Does that match up? From what I can see there is only a maximum of 500 MB allocated to objects there. Also, would anyone have any ideas about the crazy CPU usage that I'm getting? Thanks.
What you need is a better tool to properly diagnose that leak. It looks like you can get some help using N|Solid (https://nodesource.com/products/nsolid); it will help you visualize and monitor your app, and it is free to use in a development environment.

Does MySQL scale on a single multi-processor machine?

My application's typical DB usage is to read/update one large table. I wonder whether MySQL scales read operations on a single multi-processor machine? How about write operations - can they utilize multiple processors?
By the way - unfortunately I am not able to optimize the table schema.
Thank you.
Setup details:
x64, quad core
Single hard disk (no RAID)
Plenty of memory (4GB+)
Linux 2.6
MySQL 5.5
If you're using conventional hard disks, you'll often find you run out of IO bandwidth before you run out of CPU cores. The only way to pin a four core machine is to have a very high performance SSD striped RAID array.
If you're not able to optimize the schema you have very limited options. This is like asking to tune a car without lifting the hood. Maybe you can change the tires or use better gasoline, but fundamental performance gains come from several factors, including, most notably, additional indexes and strategically de-normalizing data.
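If "not able to optimize the table schema" rules out changing columns but not adding indexes, an index on the column the hot query filters by is usually the cheapest of those gains. A hypothetical sketch (big_table and account_id are invented names; the statements are standard MySQL):

    -- See how the hot query is currently executed; "type: ALL" means a
    -- full scan of the large table on every read.
    EXPLAIN SELECT * FROM big_table WHERE account_id = 123;

    -- Add an index on the filter column so reads (and the updates that
    -- locate rows by it) touch far fewer rows and far less I/O.
    CREATE INDEX idx_big_table_account_id ON big_table (account_id);

If even indexes are off limits, then the point above stands: you are largely left with buying faster storage and more memory.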
In database land, 4GB of memory is almost nothing, 8GB is the absolute minimum for a system with any loading, and a single disk is a very bad idea. At the very least you should have some form of mirroring for data integrity reasons.

MySQL, why would my site be hitting 100% CPU when pages load quickly?

I'm trying to figure out possible reasons why my database could be causing 100% CPU time.
It's been like this for a while, even though I've recently made changes so that pages / queries run much faster.
Here's a video my ISP produced of my site, showing the CPU usage.
Here are some questions I asked my ISP:
Me : would you say its a fast server ?
ISP: yeah it has 4 cpu cores lol and 3.5gb ram
ISP: 4 x intel xeon's 3.4ghz it has
ISP: its also running raid 5 on ultra scsi 320 drivers
Me : what the mysql caching settings ?
ISP: which handles 320 mb/s
ISP: hmm maybe the mysql cache is low
ISP: have emailed it to you.
Me : was it low ?
ISP: if you do post for advice one thing to mention is that this
is not a dedicated mysql server
ISP: so it can't be setup to use the server maximum resources
Here's the my.ini copy he sent me...
Also, here's my phpMyAdmin status page. I think I'm right in saying that there's nothing in the slow log; I think the slow query count is from before my fixes.
Considering that your long_query_time is set to 2, and it looks like there are multiple queries firing per page but returning reasonably quickly, no wonder you don't know what's slow. (NB you can override this within your code to record more detailed information for the session - see the sketch below.)
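For completeness, that override is only a couple of statements. A sketch, assuming MySQL 5.1 or later, where long_query_time accepts fractional seconds and has session scope (the 0.5-second threshold is just an example):

    -- Log anything in this connection that takes longer than half a
    -- second, instead of the server-wide 2-second default.
    SET SESSION long_query_time = 0.5;

    -- Or lower the threshold server-wide (needs the SUPER privilege)
    -- and make sure the slow query log is actually switched on.
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 0.5;

With that in place over a day of normal traffic, the slow log will name the statements that are eating the CPU instead of leaving you to guess.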
You've not said whether this is a dedicated database server or whether it's running other stuff. Nor is there anything in the video to suggest that the CPU load is a direct consequence of MySQL (compared, say, with a badly configured AV scanner).
There are a whole lot of potential causes, but on an MS Windows platform it's very difficult to diagnose most of them, and there's even less scope for actually fixing a lot of them.
But if you're happy with the time it takes for the pages to be generated, why do you care about CPU usage?
It's also interesting to note that you've got approximately twice as many change-db operations as select operations - does that suggest your data has been split across 2 databases?
Maybe you'll find something useful for you here: mysql-high-cpu-usage