Heavy MySQL usage: CPU or memory?

I have an Amazon EC2 instance, and the project we run on the server does a lot of INSERTs and UPDATEs and a few complex SELECTs.
We are finding that MySQL will quite often take up a lot of the CPU.
I am trying to establish whether more memory or more CPU would be better, given the setups below.
Below is the output of cat /proc/meminfo:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: 2202544 kB
SwapCached: 0 kB
Active: 6483248 kB
Inactive: 415888 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 168264 kB
Writeback: 0 kB
AnonPages: 4617848 kB
Mapped: 21212 kB
Slab: 129444 kB
SReclaimable: 86076 kB
SUnreclaim: 43368 kB
PageTables: 54104 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 3673876 kB
Committed_AS: 5384852 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 180 kB
VmallocChunk: 34359738187 kB
Current Setup:
High-CPU Extra Large Instance
7 GB of memory
20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)
1690 GB of instance storage
64-bit platform
I/O Performance: High
API name: c1.xlarge
Possible Setup:
High-Memory Double Extra Large Instance
34.2 GB of memory
13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each)
850 GB of instance storage
64-bit platform
I/O Performance: High
API name: m2.2xlarge

I would go for 32 GB of memory and maybe more hard disks in RAID. CPU won't help that much - you have enough CPU power. You also need to configure MySQL correctly.
Leave 1-2 GB for the OS cache and for temp tables.
Increase tmp_table_size.
Remove swap.
Optimize query_cache_size (don't make it too big - see the MySQL documentation about it).
Periodically run FLUSH QUERY CACHE. If your query cache is under 512 MB, run it every 5 minutes. This doesn't clear the cache, it defragments it (a crontab sketch follows the quote below). From the MySQL docs:
Defragment the query cache to better utilize its memory. FLUSH QUERY CACHE does not remove any queries from the cache, unlike FLUSH TABLES or RESET QUERY CACHE.
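A minimal sketch of scheduling that defragmentation with cron (this assumes the mysql client can authenticate non-interactively, e.g. via ~/.my.cnf; adjust to your environment):
# crontab entry (not a shell command): defragment the query cache every 5 minutes
*/5 * * * * mysql -e "FLUSH QUERY CACHE;"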
However, I noticed that the other setup has half the disk space: 850 GB, which might mean fewer hard disks. That's generally a bad idea. The biggest problem in databases is the hard disks. If you use RAID 5, make sure you don't end up with fewer disks. If you don't use RAID at all, I would suggest RAID 0.

Use vmstat and iostat to find out whether CPU or I/O is the bottleneck (if it's I/O, add more RAM and load the data into memory). Run them from a shell and check the results:
vmstat 5
iostat -dx 5
If CPU is the problem, vmstat will show high values in the us column, and iostat will show low disk utilization (%util).
If I/O is the problem, vmstat will show low values in the us column and iostat will show high disk utilization (%util); by high I mean >50%.

It depends on the application.
You could use memcached to cache MySQL query results. This would ease CPU usage a bit, but with this approach you would want more RAM for storing the cached results.
On the other hand, if that's not feasible given the type of application, then I would recommend more CPU.

There are not many reasons for MySQL to use a lot of CPU: it is either the processing of stored routines (stored procedures or stored functions) or sorting that eats CPU.
If you are using a lot of CPU due to stored routines, you are doing it wrong and your soul cannot be saved anyway.
If you are using a lot of CPU due to sorting, some things can be done, depending on the nature of your queries: you can extend indexes to include the ORDER BY columns at the end (as sketched below), or you can drop the ORDER BY clauses and sort in the client.
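A hypothetical illustration of the first option; the table, column, and index names here are made up for the example, not taken from the question:
# If a hot query looks like:
#   SELECT ... FROM orders WHERE customer_id = ? ORDER BY created_at;
# an index ending in the ORDER BY column lets MySQL read rows already in order
# and skip the filesort:
mysql -e "ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);"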
Which approach to choose depends on the actual cause of the CPU usage (is it queries and sorting?) and on the actual queries, so in any case you will need better monitoring first.
Without monitoring information, the general advice is always: buy more memory, not more CPU, for a database.

Doesn't the on-demand nature of EC2 make it rather straightforward to rent the possible setup for a day, and do some load testing? Measurements speak louder than words.

Use "High-CPU Extra Large Instance".
In your current setup, MySQL is not constrained by memory:
MemTotal: 7347752 kB
MemFree: 94408 kB
Buffers: 71932 kB
Cached: **2202544 kB**
Out of 7 GB of memory, about 2 GB is not needed by MySQL and is being used by the OS as I/O cache.
In this case, increasing the CPU count would give you more bang for the buck.

Related

MySQL default RAM consumption

I have a MySQL server with 32 GB of RAM. It's still completely new, with no databases attached except the default ones. However, when I run the free -m command, I get the following:
total used free shared buff/cache available
Mem: 32768 2972 29718 10 76 29692
Swap: 16384 0 16384
When I contacted the host, they told me that MySQL consumes 10% of the main memory by default, and they advised me to configure the following parameters:
key_buffer_size = 8192M
myisam_sort_buffer_size = 10922M
innodb_buffer_pool_size = 16384M
I think those values represent the maximum consumption that could be allocated, not what's consumed by default, and they are the values recommended by MySQL. For example, 8192M / 32768M (total memory) = 25%, which is the recommended value. Can anyone explain this memory consumption?
"I think those values represent the maximum consumption that could be allocated, not what's consumed by default"
The entire InnoDB buffer pool is allocated at server startup, so reducing the size of innodb_buffer_pool_size will reduce the initial memory footprint used by MySQL.
I believe the same is also true of other MySQL buffers such as key_buffer_size and myisam_sort_buffer_size.
However, you should consider the actual server workload when tuning these parameters. The amount of memory used at startup is irrelevant; the interesting thing is how the memory usage looks when the server is in use with real databases.
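One rough way to watch that in practice (a sketch; it assumes a Linux host where the server process is named mysqld):
# Print MySQL's resident (RSS) and virtual (VSZ) memory, in kB, every 5 seconds
# while the server handles its real workload:
watch -n 5 'ps -C mysqld -o pid,rss,vsz,cmd'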
Since you mentioned (elsewhere) that you're using Jelastic, you should delete the #Jelastic autoconfiguration mark. line from your my.cnf (usually at/near line 1) if you want to manually tune these settings; otherwise they are scaled automatically to suit your cloudlet scaling limit (i.e. your changes will be overwritten each time you adjust cloudlet limits or restart MySQL).
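If you do remove it, something along these lines should work (a sketch; the my.cnf path varies by installation, and -i.bak keeps a backup copy):
# delete the Jelastic marker line so manual tuning isn't auto-overwritten (path is an assumption)
sed -i.bak '/Jelastic autoconfiguration mark/d' /etc/my.cnf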
key_buffer_size
The maximum size of the key_buffer_size variable is 4 GB on 32-bit machines, and larger on 64-bit machines. MySQL recommends that you keep key_buffer_size at or below 25% of the RAM on your machine.
innodb_buffer_pool_size
Recommended range: 60-80% of RAM.
MySQL 5.7 and its online buffer pool resize feature should make this an easier principle to follow.
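For example, on 5.7+ the pool can be resized without a restart. A sketch (the 20 GB value is only an illustration of the 60-80% guideline on a 32 GB machine, not a recommendation for your server):
# Resize the InnoDB buffer pool online; MySQL rounds the value to a multiple of
# innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances.
mysql -e "SET GLOBAL innodb_buffer_pool_size = 20 * 1024 * 1024 * 1024;"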

Google Compute Engine VM disk is very slow

We just switched over to Google Compute Engine and are having major issues with disk speed. It's been about 5% of Linode's, or worse. It has never exceeded 20 MB/s for writing and 10 MB/s for reading; most of the time it's 15 MB/s for writing and 5 MB/s for reading.
We're currently running an n1-highmem-4 (4 vCPU, 26 GB memory) machine. CPU & memory aren't the bottleneck. We're just running a script that reads rows from a PostgreSQL database, processes them, then writes back to PostgreSQL; it's a routine job to update database rows in batches. We tried running 20 processes to take advantage of the multiple cores, but overall progress is still slow.
We're thinking the disk may be the bottleneck because traffic is abnormally low.
Finally, we decided to do some benchmarking. We found it's not only slow but also seems to have a major, reproducible problem:
create & connect to instance
run the benchmark at least three times:
dd if=/dev/zero bs=1024 count=5000000 of=~/5Gb.file
We found it becomes extremely slow, and we aren't able to finish the benchmark at all.
Persistent Disk performance is proportional to the size of the disk itself and of the VM that it is attached to. The larger the disk (or the VM), the higher the performance, so in essence what you pay for the disk or the VM covers not only the disk/CPU/RAM but also the IOPS and throughput.
Quoting the Persistent Disk documentation:
Persistent disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges as the cost of the I/O capability is included in the price of the persistent disk.
Persistent disk performance can be described as follows:
IOPS performance limits grow linearly with the size of the persistent disk volume.
Throughput limits also grow linearly, up to the maximum bandwidth for the virtual machine that the persistent disk is attached to.
Larger virtual machines have higher bandwidth limits than smaller virtual machines.
There's also a more detailed pricing chart on the page which shows what you get per GB of space that you buy (data below is current as of August 2014):
                                         Standard disks    SSD persistent disks
Price (USD/GB per month)                 $0.04             $0.025
Maximum Sustained IOPS
  Read IOPS/GB                           0.3               30
  Write IOPS/GB                          1.5               30
  Read IOPS/volume per VM                3,000             10,000
  Write IOPS/volume per VM               15,000            15,000
Maximum Sustained Throughput
  Read throughput/GB (MB/s)              0.12              0.48
  Write throughput/GB (MB/s)             0.09              0.48
  Read throughput/volume per VM (MB/s)   180               240
  Write throughput/volume per VM (MB/s)  120               240
There's also a concrete example on the page of what a particular disk size will give you:
As an example of how you can use the performance chart to determine the disk volume you want, consider that a 500GB standard persistent disk will give you:
(0.3 × 500) = 150 small random reads
(1.5 × 500) = 750 small random writes
(0.12 × 500) = 60 MB/s of large sequential reads
(0.09 × 500) = 45 MB/s of large sequential writes

Improving MySQL I/O Performance (Hardware & Partitioning)

I need to improve I/O performance for my database. I'm using the "2xlarge" HW described below & considering upgrading to the "4xlarge" HW (http://aws.amazon.com/ec2/instance-types/). Thanks for the help!
Details:
CPU usage is fine (usually under 30%), and uptime load averages range anywhere from 0.5 to 2.0 (but I believe I'm supposed to divide that by the number of CPUs), so that looks okay as well. However, the I/O is bad: iostat shows favorable service times, but the time spent in queue (I suppose this means waiting to access the disk) is far too high. I've configured MySQL to flush to disk every 1 second instead of on every write, which helps, but not enough. Profiling shows there are a handful of tables that are the culprits for most of the load (both read and write operations). Queries are already indexed and optimized, but not partitioned. Average MySQL states are: Sending data @ 45%, Statistics @ 20%, Updating @ 15%, Sorting result @ 8%.
Questions:
How much performance will I get by upgrading HW?
Same question, but if I partition the high-load tables?
Machines:
m2.2xlarge
64-bit
4 vCPU
13 ECU
34.2 Gb Mem
EBS-Optimized
Network Performance: "Moderate"
m2.4xlarge
64-bit
8 vCPU
26 ECU
68.4 Gb Mem
EBS-Optimized
Network Performance: "High"
In my experience, the biggest boost in MySQL performance comes from I/O. You have a lot of RAM. Try setting up a RAM drive and pointing tmpdir at it.
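A minimal sketch of that RAM-drive setup (the mount point and size are examples of mine, not from this answer):
# Create a RAM-backed filesystem and let the mysql user write to it:
mkdir -p /mnt/mysqltmp
mount -t tmpfs -o size=2G tmpfs /mnt/mysqltmp
chown mysql:mysql /mnt/mysqltmp
# then set "tmpdir = /mnt/mysqltmp" in my.cnf and restart MySQL
Keep in mind that anything in tmpfs is lost on reboot, which is fine for tmpdir.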
I have several MySQL servers that are very busy. My settings are below - maybe this can help you tweak your settings.
My Setup is:
-Dual 2.66 GHz CPUs, 8 cores, with a 6-drive RAID-1E array - 1.3TB.
-InnoDB logs on separate SSD drives.
-tmpdir is on a 2GB tmpfs partition.
-32GB of RAM
InnoDB settings:
innodb_thread_concurrency=16
innodb_buffer_pool_size = 22G
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_log_files_in_group=8
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2 (This is a slave machine - 1 is not required for my purposes)
innodb_flush_method=O_DIRECT
Current Queries per second avg: 5185.650
I am using Percona Server, which is quite a bit faster than other MySQL builds in my testing.

What does mysqltuner's "MySQL's maximum memory usage is dangerously high" warning mean?

I am trying to optimize MySQL running on a VPS with 2 GB of memory. I used mysqltuner, and I don't quite understand how to deal with the following recommendations, especially the one that says "MySQL's maximum memory usage is dangerously high". How should I deal with that one? Can someone help explain? Thanks.
-------- Performance Metrics -------------------------------------------------
[--] Up for: 3h 17m 7s (49K q [4.190 qps], 1K conn, TX: 70M, RX: 7M)
[--] Reads / Writes: 60% / 40%
[--] Total buffers: 314.0M global + 6.4M per thread (300 max threads)
[!!] Maximum possible memory usage: 2.2G (119% of installed RAM)
[OK] Slow queries: 1% (785/49K)
[OK] Highest usage of available connections: 85% (256/300)
[!!] Cannot calculate MyISAM index size - re-run script as root user
[OK] Query cache efficiency: 92.4% (38K cached / 41K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 633 sorts)
[!!] Temporary tables created on disk: 45% (315 on disk / 699 total)
[OK] Thread cache hit rate: 74% (359 created / 1K connections)
[OK] Table cache hit rate: 95% (141 open / 148 opened)
[OK] Open file limit used: 12% (189/1K)
[OK] Table locks acquired immediately: 99% (6K immediate / 6K locks)
-------- Recommendations -----------------------------------------------------
General recommendations:
Add skip-innodb to MySQL configuration to disable InnoDB
MySQL started within last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
tmp_table_size (> 32M)
max_heap_table_size (> 32M)
"[!!] Maximum possible memory usage: 2.2G (119% of installed RAM)"
This means you essentially lied to MySQL, telling it you have more memory available than you really have, 2.2G > 2G. This might work for weeks or months, but it's a bad idea. If MySQL doesn't have the memory you told it to use, MySQL will crash randomly at the worst possible time.
If you add "skip-innodb" to your /etc/my.cnf file, that might save you some memory. I assume you're not using InnoDB. This is a tangent, but I strongly advise you to convert your data from MyISAM to InnoDB. MyISAM is old technology; InnoDB is the more modern engine.
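Converting is a one-statement job per table; a hypothetical example (the database and table names are placeholders):
mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"
If you do convert, don't add skip-innodb, and budget memory for a reasonably sized innodb_buffer_pool_size instead.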
Look for anything in your my.cnf that you can lower to save memory. The first thing I typically look at is unused connections. 15% of your connections aren't being used, but heed the "started within last 24 hours" warning. Typically, lowering (unused) connections in my.cnf will save a lot of memory. I don't know what your application does, but 256 connections sounds high to me, so I'd make sure your application really needs that many. Maybe you have 256 PHP children on your server and that could be cut way down to 12 children. More children != faster response. If you have 12 PHP children, maybe you only need 13 database connections.
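Before lowering max_connections, it's worth checking the high-water mark of connections actually used since the server started; a quick sketch:
mysql -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"
If that number stays far below your configured limit, lowering max_connections (and with it the worst-case per-thread memory) is an easy win.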
119% is obviously too high, but I think 96% is too high also (which is why I'm here looking for the best percentage to use). Obviously the operating system needs some memory too. How much memory should you leave unused for the operating system? I would like to know! I would ask this as a separate question here, if it hasn't been asked already. (Please post the link here if you do.) Or you can just listen to mysqltuner's recommendation.
Just testing here:
"[!!] Maximum possible memory usage: 3.4G (88% of installed RAM)"
Lower my.cnf settings again.
"[!!] Maximum possible memory usage: 3.3G (86% of installed RAM)"
Still too high?
"[OK] Maximum possible memory usage: 3.2G (83% of installed RAM)"
Take the advice from mysqltuner with a grain of salt. It's making incorrect estimates about the maximum possible memory usage. And it can't make a correct estimate.
See http://www.percona.com/blog/2009/02/12/how-much-memory-can-mysql-use-in-the-worst-case/ for an explanation.
It's true that each connection uses some memory, but how much varies. You won't always have 300 connections in use; even when you do, they won't all be running queries simultaneously, and even if they are, the queries won't always be using every possible buffer at its maximum size.
Mysqltuner is warning about a theoretical maximum memory usage that will never happen.
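For reference, the figure in the report above is just arithmetic over the configured limits, which you can reproduce yourself:
# global buffers + per-thread buffers * max_connections, using the numbers from
# the "Performance Metrics" output above (result is in MB):
awk 'BEGIN { print 314 + 6.4 * 300 }'    # 2234 MB, i.e. roughly the reported 2.2G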
Another way of looking at it: I've analyzed hundreds of MySQL configurations, and every one of them could in theory allocate more memory than the physical RAM on the server.

Why is there this difference in INSERT performance?

Hi, I recently conducted a test on two different Ubuntu servers.
Here are the results:
Staging server, innodb_flush_log_at_trx_commit = 1: 10,000 inserts ----> 81 seconds
Staging server, innodb_flush_log_at_trx_commit = 2: 10,000 inserts ----> 61 seconds
Dev server, innodb_flush_log_at_trx_commit = 1: 10,000 inserts ----> 5 seconds
Dev server, innodb_flush_log_at_trx_commit = 2: 10,000 inserts ----> 2 seconds
I am clear that performance varies with the innodb_flush_log_at_trx_commit setting.
But why is there such a huge difference in performance from server to server?
What are the things to consider here?
Here are some of the details considered, but nothing significant to suspect:
Staging server: Intel(R) Xeon(R) CPU X5355 @ 2.66GHz
processor 0, 1
mysql 5.1.61
innodb_buffer_pool : 8MB
RAM: 4GB
Dev server: AMD Opteron(tm) Processor 4130 @ 2.60GHz
processor 0, 1
mysql 5.0.67
innodb_buffer_pool : 8MB
RAM: 4GB
Please help me understand what exactly has led to this huge difference in performance on the different servers.
NOTE: the same script was used in the same way on both servers, and not from remote servers.
Thanks in advance.
Regards,
UDAY
Some questions I'd go through...
Is binary logging turned on with one instance and not the other?
Is the staging server using a networked drive to access the mysql data?
Is the filesystem type the same on both servers (ext3, ext2, etc.)?
Disk activity seems to be the culprit here.
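For what it's worth, 10,000 inserts in 81 seconds is only about 123 commits per second, versus roughly 2,000 per second on the dev box, which is the kind of gap a difference in flush/sync latency produces. A rough way to compare that between the two servers (a sketch of mine; the path and sizes are arbitrary, and oflag=dsync forces a sync after every write, loosely mimicking innodb_flush_log_at_trx_commit = 1):
dd if=/dev/zero of=/var/lib/mysql/flushtest bs=8k count=1000 oflag=dsync
rm /var/lib/mysql/flushtest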
Is your staging server a production server? In that case it is probably a concurrency issue. If many users are working on that server, inserts may be slower, especially if the table(s) you are inserting into are being used by others too.
The dev server probably has much less load, with only a few developers using it simultaneously.
Leaving the hardware differences aside (which is not an easy assumption), the factors that influence query performance are:
Version of the DBMS engine
Query plan and db statistics used
Network latency
Memory allocated to the DBMS engine (transaction log buffer included)
Process/thread priority of the DBMS engine
I would also suggest checking the disk utilization. To get a rough idea of how your disks perform while you are running your test, you can use iostat (on Red Hat based systems it is in the sysstat package).
For example you could try the following:
iostat -xd 1
which would lead to an output similar to this:
Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.31 0.00 0.05 0.16 2.89 56.67 0.00 2.99 0.62 0.00
Compare the await time on both machines. This metric shows the average time, in milliseconds, that the disk needs to serve I/O requests. It's a quick way to check whether there are big differences. The %util metric is also interesting: it gives you the percentage of CPU time during which I/O requests were issued to the device; the higher this value gets, the closer you get to full saturation of the device.
For the other options check man iostat.
Of course, the above only gives you a very basic overview of how your disks perform, so treat it as an additional check alongside the previously mentioned steps when tracking down your problem.